id | source | version | text | added | created | metadata
|---|---|---|---|---|---|---|
118975581 | pes2o/s2orc | v3-fos-license | Electroweak baryogenesis via chiral gravitational waves
We propose a new mechanism for electroweak baryogenesis based on gravitational waves generated by helical magnetic fields present during a first-order electroweak phase transition. We generate a net lepton number through the gravitational chiral anomaly, which appears due to the chiral gravitational waves produced by these magnetic fields. The observed value of the baryon asymmetry can be obtained in our mechanism within the parameter space of scenarios with an inverse cascade evolution for magnetic fields, which are also candidates for the origin of large-scale magnetic fields.
Introduction
Cosmological evidence implies an excess of matter over antimatter in the Universe. This asymmetry is characterized by $\eta_B \equiv n_B/s$, where $n_B$ is the net baryon number density and $s$ is the entropy density of the Universe. Based on Big Bang nucleosynthesis and the cosmological abundances of light nuclei, this ratio is determined to be $\eta_B = (0.84 \pm 0.07) \times 10^{-10}$, in agreement with CMB observations [1]. To explain the baryon asymmetry of the Universe, any scenario of interest should satisfy the three conditions proposed by Sakharov [2]: (1) baryon number violation, (2) C and CP violation, and (3) departure from thermal equilibrium. Baryon production scenarios operating during the Electroweak Phase Transition (EWPT), which is one of the cosmological PTs manifestly containing the third condition, are known as EW baryogenesis. In the Standard Model (SM), the first Sakharov condition can be achieved by the triangle anomaly, where $J^\mu_B$ and $J^\mu_L$ are the baryon and lepton currents, respectively, $W^a_{\mu\nu}$ is the SU(2) field strength, and $F_{\mu\nu}$ is the U(1) field strength. The second term can contribute to baryon and lepton number violation in the case of a helical gauge field [3]. Moreover, in the SM there is a gravitational chiral anomaly which can lead to lepton number violation [4] (see the displayed forms after this paragraph), where $N = 3$ in the SM due to the different numbers of left- and right-handed degrees of freedom in the leptonic sector, whereas beyond the SM it can be less than 3 [5]. Also, $R_{\mu\nu\rho\sigma}$ denotes the curvature tensor of space-time. The value of the quantity $\langle R\tilde{R} \rangle$ does not change from its initial value, which is supposedly zero, unless the chiral components of the metric evolve differently. This can be achieved if there is a CP-violating source in the system. Due to this gravitational anomaly, chiral leptons and antileptons can be generated. Sphalerons act on left-handed leptons and convert them into antiquarks, and also act on right-handed antileptons and convert them into quarks. These rival processes do not lead to a net matter asymmetry without a CP-violating source. In the case of a strong first-order EWPT, sphaleron-mediated processes are suppressed in the broken phase and the produced matter asymmetry is preserved [6]. However, the usual scenarios for EW baryogenesis within the SM cannot account for the observed baryon asymmetry, since a strongly first-order PT and sufficient CP violation cannot be provided. As a consequence, many extensions beyond the SM, including supersymmetric SMs and SMs with an extended Higgs sector, have been proposed to solve this puzzle [7]. In these models, due to electroweak symmetry breaking, a first-order PT at which two thermodynamical states are separated through bubble walls is realized. During the history of the Universe, cosmic first-order PTs are important from yet another aspect: they are sources of Gravitational Wave (GW) radiation, which can not only be a powerful probe of the early Universe, but also impact its evolution (see [8] for GW production at the EWPT and [9] for the QCD PT with a holographic approach). Three different mechanisms have been proposed for the production of these GWs: the collision of bubbles nucleated during a first-order PT [10], sound waves [11], and Magnetohydrodynamic (MHD) turbulence [12] produced by the turbulent fluid and magnetic field in the plasma.
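For reference, the triangle anomaly takes the standard form below; the gravitational anomaly is shown schematically, with its numerical prefactor left as a constant $c$, since its precise normalization depends on conventions and should be taken from [4]:

```latex
\partial_\mu J^\mu_B \;=\; \partial_\mu J^\mu_L
  \;=\; \frac{N_f}{32\pi^2}\left(-g^2\, W^a_{\mu\nu}\widetilde{W}^{a\,\mu\nu}
        \;+\; g'^2\, F_{\mu\nu}\widetilde{F}^{\mu\nu}\right),
\qquad
\nabla_\mu J^\mu_L \;=\; c\,\frac{N}{16\pi^2}\,
  R_{\mu\nu\rho\sigma}\widetilde{R}^{\mu\nu\rho\sigma},
\quad
\widetilde{R}^{\mu\nu\rho\sigma} \equiv \tfrac{1}{2}\,\epsilon^{\rho\sigma\alpha\beta}R^{\mu\nu}{}_{\alpha\beta}.
```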
In this work, we propose a novel mechanism for baryon production during a first-order EWPT based on the gravitational anomaly. We show that this effect can be as important as other conventional mechanisms proposed for the electroweak baryogenesis.
In fact, during a first-order EWPT the GWs produced in the presence of helical (chiral) magnetic fields are also chiral, so that the left- and right-handed fluctuations of the metric components have different dispersion relations. In addition, the helical component of the magnetic field provides a CP-violating source in the model. We demonstrate that this mechanism leads to a non-zero gravitational anomaly and find that, for magnetic field values compatible with the large-scale magnetic fields observed today, the baryon asymmetry relying on sphaleron processes can be explained. Although it is possible to choose a specific model, our mechanism works for any extension of the SM which provides a first-order EWPT and generates chiral GWs. Henceforth, we shall assume that such a strong first-order EWPT is provided by an extension of the SM.
In Section 2, we express the gravitational anomaly in terms of FRW metric perturbations, and then study the magnetic field generated in a first-order EWPT and derive the energy-momentum tensor for such magnetic fields, which contributes to the generation of chiral gravitational waves. Subsequently, we solve the equation of motion for these GWs and calculate the gravitational anomaly term and the lepton number density. Then, the numerical results for the baryon asymmetry are presented. Finally, we present our conclusions in Section 3.
2 The electroweak baryogenesis mechanism
2.1 Metric perturbations
A homogeneous and isotropic universe is described by the FRW metric, which has no contribution to $\langle R\tilde{R} \rangle$, where $\langle \cdot \rangle$ denotes the quantum expectation value. The generic perturbed form of this background may be parametrized in terms of the scale factor $a(\tau)$ and the conformal time $d\tau = a^{-1}dt$, where $\phi$, $\varphi$, $v_i$, and $h_{ij}$ are the scalar, vector, and tensor perturbations of the metric, respectively. Among these perturbations only $h_{ij}$ has a non-vanishing contribution to $\langle R\tilde{R} \rangle$. Hence, we consider the tensor modes as GW polarizations and neglect the other fluctuations. Furthermore, we restrict to the transverse and traceless gauge for $h_{ij}$. In this gauge, one can write $R\tilde{R}$ in terms of $\dot{h}_{ij} \equiv \partial_\tau h_{ij}$ [13]. We assume the GWs propagate in the $z$ direction. The left- and right-handed polarizations can be defined as $h_{R,L} \equiv (h_{11} \pm i h_{12})/2$. The term $R\tilde{R}$ is odd under the parity operation, which exchanges the left and right components, and can be generated only if these components have different dispersion relations. In the next part, we investigate the helical magnetic field as the source of these GW components.
Energy momentum tensor of helical magnetic field
During a first-order EWPT with a high Reynolds number, bubble collision processes give rise to turbulence and charge separation in the plasma. This process leads to the generation of magnetic fields, which can be helical due to the chiral anomaly during this era, as pointed out by many investigations [14]. In addition, in order to account for the coupling between the turbulent fluid and the magnetic fields, MHD effects must be taken into account. On the other hand, these helical magnetic fields can create chiral GWs and, as we will see in the following, these birefringent GWs bring about a non-zero $R\tilde{R}$. The equation of motion for the GWs in the radiation-dominated epoch of the Universe is given by [15], where $\mathcal{H} = aH$, $H$ is the Hubble expansion rate, $M_p = 2.44 \times 10^{18}$ GeV is the reduced Planck mass, and $\Pi^{\pm}$ are the sources for $h_{R,L}$, respectively. Moreover, $a(\tau) \sim H_0 \tau \sqrt{\Omega_{\rm rad}}$, where $H_0 \simeq 10^{-43}$ GeV is the Hubble rate today and $\Omega_{\rm rad} \simeq 10^{-4}$ is the radiation density parameter today. $\Pi^{\pm}$ are the components of $\Pi_{ij}$ in the helical basis, such that $\Pi_{ij}(k) = e^+_{ij}\Pi^+(k) + e^-_{ij}\Pi^-(k)$, where $\Pi_{ij}(k) = (P_{im}P_{jn} - \frac{1}{2}P_{ij}P_{mn})T_{mn}(k)$ is the transverse and traceless part of the anisotropic energy-momentum tensor [16], $P_{ij} = \delta_{ij} - \hat{k}_i\hat{k}_j$ is the transverse projector, and $B_i$ is the magnetic field. Assuming that the helical magnetic field in momentum space is a stochastic quantity, it can be described by a Gaussian profile with a UV cutoff in the spectrum, determined by the dissipation length $\lambda_d$. Therefore, the required properties are characterized by the two-point correlation function of the magnetic field. By defining the helical component, $A(k)$, and the symmetric part, $S(k)$, of the magnetic field spectrum, its two-point correlation function can be written in terms of these two functions. To express the tensor source in terms of the magnetic field two-point correlation function, one can parametrize the two-point function of the tensor source through the tensors $M_{ijmn} \equiv P_{im}P_{jn} + P_{in}P_{jm} - P_{ij}P_{mn}$ and $A_{ijmn} \equiv \frac{\hat{k}_l}{2}(P_{jn}\epsilon_{iml} + P_{im}\epsilon_{jnl} + P_{in}\epsilon_{jml} + P_{jm}\epsilon_{inl})$.
Moreover, in the helical basis the two-point function can be written in terms of two scalar functions. Finally, $f(k)$ and $g(k)$ are obtained from $S(k)$ and $A(k)$ through convolution integrals in which $\gamma = \hat{k} \cdot \hat{p}$ and $\beta = \hat{k} \cdot \widehat{(k-p)}$. We assume a power-law approximation for the magnetic field produced by a causal process, $S(k) = S_0 k^2$ and $A(k) = A_0 k^3$ up to a UV cutoff, where $S_0$ and $A_0$ can be obtained from the magnetic field energy density, $\langle B^2 \rangle$, and the averaged helicity. Then $f(k)$ and $g(k)$ can be expressed as in [16], $f(k) \simeq \frac{\lambda^3}{14}(\cdots)$, where $\lambda$ is the length scale on which coherent magnetic fields exist, and $\lambda_d$ denotes the dissipation scale below which the magnetic power spectrum is exponentially suppressed. Moreover, in the maximally helical case the helical spectrum attains the maximal value allowed by the realizability condition.
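As a schematic illustration of how the UV cutoff controls such spectral integrals, the toy computation below (Python, with an assumed $k^8$ integrand standing in for the true one built from $f$ and $g$) exhibits the $(\lambda/\lambda_d)^9$ growth referred to in the next subsection:

```python
import numpy as np
from scipy.integrate import quad

lam = 1.0           # correlation length (arbitrary units)
lam_d = 1e-3        # dissipation length; the paper quotes lam_d/lam = 1e-10 at the EWPT
k_d = 1.0 / lam_d   # UV cutoff k_d = 1/lambda_d

# Toy integrand ~ k^8 (schematic; the physical integrand follows from Eq. (19))
integral, _ = quad(lambda k: k**8, 0.0, k_d)
print("cutoff integral  :", integral)            # = k_d^9 / 9
print("(lam/lam_d)^9    :", (lam / lam_d) ** 9)  # same power of the cutoff, up to 1/9
```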
Baryon asymmetry calculations
Now we can write the GW equation of motion for the right-handed component with the derivative taken with respect to the new variable $u = k\tau$; for the left-handed component the same equation holds, with $(f(k) + g(k))/3$ as the source. The general solution can be obtained by the Wronskian (variation of parameters) method. The second term diverges at small $u$, hence we consider the first term as the relevant solution for the GWs. We find $c_1(u)$ as follows: the Wronskian determinant is calculated to be $W(u) = 1/u^2$, and we keep only the largest part of $c_1(u)$, which gives the dominant contribution to the final result for the matter asymmetry. Therefore, the GW solution becomes a function of $\tau_i \simeq 10^4\,\mathrm{s} \simeq 10^{29}\,\mathrm{GeV}^{-1}$, the conformal time at the EWPT. One can define the GW components in terms of creation and annihilation operators as $\hat{h}_R(\tau, \mathbf{x})$, with an analogous relation for $\hat{h}_L(\tau, \mathbf{x})$. From this, $\langle R\tilde{R} \rangle$ is given by [13] $\langle R\tilde{R} \rangle = \frac{2}{\pi^2 a^4}\int k^3\,dk\,(\cdots)$. Since there is a UV cutoff in the helical magnetic field power spectrum, the integral runs over all physical momentum space up to the UV cutoff, $k_d = 1/\lambda_d$. To find the lepton number density, $n \equiv L/(\int a^3 d^3x)$, we should integrate over the time interval of the EWPT. As can be seen clearly from Eqs. (12), (16), and (18), a non-zero $\langle R\tilde{R} \rangle$ is produced by chiral GW components sourced by helical magnetic fields. Furthermore, to obtain a net lepton number, CP-violating processes should exist. The presence of helical magnetic fields with non-vanishing helicity induces a CP-violating source in the system. Besides, the magnetic field affects the scattering of fermions from bubble walls [18], which provides an additional source of CP violation, and hence the required amount of CP violation can be fulfilled in the system. Putting Eq. (16) into Eq. (18), we arrive at Eq. (19). Note that only the helical part, $g(k)$, contributes to $\langle R\tilde{R} \rangle$, and the dominant term is kept at each step of the calculation. This term generates a net lepton number which is subsequently converted to a net baryon number via sphaleron processes. The produced baryon asymmetry is preserved, provided that the EWPT is first-order, so that sphaleron processes are suppressed in the broken phase. Taking into account some necessary conditions, including hypercharge neutrality and Yukawa interactions, the ratio of the baryon number to the lepton number can be calculated as $n_B/n_L \simeq 0.3$ [19]. Using this ratio and integrating Eq. (19) over time, the net baryon number density is obtained as in Eq. (20), where we take $N = 3$ and the duration of the phase transition to be $0.01 H^{-1}$ [10], in which $H \simeq T^2/M_p = 10^{-14}$ GeV and $T = 100$ GeV at the EWPT. Finally, to compute the baryon asymmetry, we need to divide $n_B$ by the entropy density, $s = 2\pi^2 g_* T^3/45$, where $g_*$ is the effective number of massless degrees of freedom, which at the EWPT is $g_* = 106.75$. The dissipation length to correlation length ratio is of order $\lambda_d/\lambda = 10^{-10}$ at the EWPT [17]. Notice that $1/\lambda_d$ provides a natural cutoff for the otherwise divergent integral in Eq. (19). Subsequently, the factor $(\lambda/\lambda_d)^9$ counteracts the usual suppression factor for processes involving GWs, i.e. the factor $M_p^4$ in the denominator. The remaining factors in Eq. (20) work out, as we shall see below, to yield reasonable values for $B$ and $\lambda$. We now solve the integral numerically. To obtain a baryon asymmetry consistent with observations, $B$ and $\lambda$ should be properly determined.
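To make the Wronskian step concrete, the sketch below builds the particular solution of the tensor-mode equation $h'' + (2/u)h' + h = S(u)$ by variation of parameters, using the homogeneous solutions $j_0(u) = \sin u/u$ and $y_0(u) = -\cos u/u$, whose Wronskian is $1/u^2$ as quoted above. The source profile is a placeholder, since the physical source amplitude is fixed by $f(k)$ and $g(k)$:

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Homogeneous solutions of h'' + (2/u) h' + h = 0 (spherical Bessel functions of order 0)
def j0(u):
    return np.sin(u) / u

def y0(u):
    return -np.cos(u) / u

# Placeholder source: in the paper S(u) is set by the magnetic spectra f(k), g(k).
def source(u):
    return 1.0 / u**2  # assumed toy profile, not the paper's expression

u = np.linspace(1e-3, 50.0, 20000)
W = 1.0 / u**2  # Wronskian W(u) = j0*y0' - j0'*y0 = 1/u^2

# Variation of parameters: h_p = c1(u)*j0 + c2(u)*y0
S = source(u)
c1 = -cumulative_trapezoid(y0(u) * S / W, u, initial=0.0)
c2 = cumulative_trapezoid(j0(u) * S / W, u, initial=0.0)
h_p = c1 * j0(u) + c2 * y0(u)

# Residual check: h'' + (2/u) h' + h should approximate the source away from the endpoints.
hp = np.gradient(h_p, u)
hpp = np.gradient(hp, u)
residual = hpp + 2.0 * hp / u + h_p - S
print("max interior residual:", np.max(np.abs(residual[200:-200])))
```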
For a helical magnetic field with inverse cascade evolution, these two parameters can be related to each other. Inverse cascade evolution allows an energy shift from small to large scales and stretches the correlation length. Indeed, cosmological magnetic fields that have undergone an inverse cascade are interesting candidates for the large-scale magnetic fields observed in galaxies and intergalactic space [20]. We can relate the magnetic field magnitude and its correlation length scale to their present values through evolution relations of the form [21] $B \simeq 9.3 \times 10^{19}\,\mathrm{G}\,(T/10^2\,\mathrm{GeV})^{\cdots}$, where $G_B(T)$ and $G_\lambda(T)$ are $\mathcal{O}(1)$ factors at the EWPT. Using Eq. (21), $\lambda$ can be obtained in terms of $B$. Finally, taking a helical magnetic field of the order of $B \simeq 10^4\,\mathrm{GeV}^2$, corresponding to $\lambda \simeq 10^{-3}\,\mathrm{m} \simeq 10^{13}\,\mathrm{GeV}^{-1}$, the observed value of the baryon asymmetry can be obtained. Moreover, according to Eq. (21) the present values of the required quantities are $B_0 \simeq 10^{-10}$ G and $\lambda_0 \simeq 10$ kpc, which are in agreement with the observed large-scale magnetic fields.
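A quick natural-units check of these numbers (a sketch; the Gauss conversion factor $1\,\mathrm{GeV}^2 \simeq 5\times10^{19}\,\mathrm{G}$ depends mildly on the electromagnetic convention adopted):

```python
# Natural-units sanity check of the quoted field strength and correlation length.
HBARC_GEV_M = 1.9733e-16    # hbar*c in GeV*m
GEV2_TO_GAUSS = 5.12e19     # 1 GeV^2 in Gauss (Heaviside-Lorentz; convention-dependent)

B_ewpt_gev2 = 1e4           # B ~ 10^4 GeV^2 at the EWPT (from the text)
lam_ewpt_invgev = 1e13      # lambda ~ 10^13 GeV^-1 (from the text)

print("B at EWPT ~ %.1e G" % (B_ewpt_gev2 * GEV2_TO_GAUSS))    # ~5e23 G, i.e. of order 10^24 G
print("lambda    ~ %.1e m" % (lam_ewpt_invgev * HBARC_GEV_M))  # ~2e-3 m, i.e. of order 10^-3 m
print("lambda_d  ~ %.1e GeV^-1" % (1e-10 * lam_ewpt_invgev))   # dissipation scale, lambda_d/lambda = 1e-10
```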
Conclusion
We have presented a new mechanism for EW baryogenesis which relies on the gravitational anomaly sourced by chiral GWs. We assume the existence of a first-order EWPT and of helical magnetic fields generating chiral GWs. We solve the GW equation during the PT and find that the gravitational anomaly violates lepton number. The lepton number can be transformed into baryon number by sphaleron processes. Furthermore, the magnetic helicity is a CP-odd quantity, which provides Sakharov's second condition and hence the net baryon number. Thus, it is interesting to note that in our work the three Sakharov criteria are interdependent. The baryon asymmetry produced can be compatible with the observed value if the magnetic field and its correlation length scale are of the order of $B \simeq 10^{24}$ G and $\lambda \simeq 10^{-3}$ m, respectively, at the EW scale. Through an inverse cascade evolution, these magnetic fields can be considered as a primordial source for the observed large-scale magnetic fields. Moreover, another important advantage of this idea is that it is not constrained to any specific model, and its necessary ingredients might be found in a wide variety of models for electroweak physics. | 2018-09-23T07:55:27.000Z | 2018-05-27T00:00:00.000 | {
"year": 2018,
"sha1": "beabfe10d34748035d8e8dd6dc78290c533ec966",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.physletb.2018.07.065",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "beabfe10d34748035d8e8dd6dc78290c533ec966",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
124860766 | pes2o/s2orc | v3-fos-license | PSEUDO-SYMMETRY ON UNIT TANGENT SPHERE BUNDLES
In this paper, we study the pseudo-symmetry of the unit tangent sphere bundle. We prove that if the unit tangent sphere bundle $T_1M$ with the standard contact metric structure over an $n(\geq 3)$-dimensional locally symmetric space $M$ is pseudo-symmetric, then $M$ is of constant curvature.
Introduction
It is interesting to study the geometric interplay between a given Riemannian manifold $(M, g)$ and its unit tangent sphere bundle $T_1M$. Many authors have studied the standard contact metric structure of the unit tangent sphere bundle. For example, D. E. Blair [2] proved that the unit tangent sphere bundle of a Riemannian manifold is locally symmetric if and only if the base manifold is of constant curvature 0, or is 2-dimensional and of constant curvature 1. This says that local symmetry is too strong a restriction for the unit tangent sphere bundle. In this context, E. Boeckx and G. Calvaruso [5] studied the so-called semi-symmetry of $T_1M$ as a natural generalization of local symmetry. A Riemannian manifold $(\bar{M}, \bar{g})$ is said to be semi-symmetric if its curvature tensor $\bar{R}$ satisfies the condition $\bar{R}(\bar{X}, \bar{Y}) \cdot \bar{R} = 0$ for any vector fields $\bar{X}$ and $\bar{Y}$ on $\bar{M}$, where $\bar{R}(\bar{X}, \bar{Y})$ acts as a derivation on $\bar{R}$. They obtained that if the unit tangent sphere bundle of a Riemannian manifold is semi-symmetric, then it is already locally symmetric.
As a generalization of semi-symmetry, we may consider the notion of pseudo-symmetry, which was introduced by R. Deszcz [9]. A Riemannian manifold $(\bar{M}, \bar{g})$ is said to be pseudo-symmetric if there exists a function $L$ such that the condition below holds for any vector fields $\bar{X}$ and $\bar{Y}$ on $\bar{M}$. Here $\bar{X} \wedge \bar{Y}$ is the endomorphism field defined by $(\bar{X} \wedge \bar{Y})\bar{Z} = \bar{g}(\bar{Y}, \bar{Z})\bar{X} - \bar{g}(\bar{Z}, \bar{X})\bar{Y}$. In particular, a pseudo-symmetric space is said to be of constant type if $L$ is constant. Then a semi-symmetric space is a pseudo-symmetric space of constant type (with $L = 0$).
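In Deszcz's convention, the pseudo-symmetry condition referred to above reads:

```latex
\bar{R}(\bar{X},\bar{Y}) \cdot \bar{R} \;=\; L\,\big[(\bar{X}\wedge\bar{Y}) \cdot \bar{R}\big],
\qquad
(\bar{X}\wedge\bar{Y})\bar{Z} \;=\; \bar{g}(\bar{Y},\bar{Z})\bar{X} - \bar{g}(\bar{Z},\bar{X})\bar{Y}.
```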
In [8], the first author and J. Inoguchi proved that the unit tangent sphere bundle over a 2-dimensional Riemannian manifold $M$ is pseudo-symmetric if and only if $M$ is of constant curvature. Then the following question naturally arises: "Can we extend the above result to higher-dimensional cases?" In the present paper, we obtain a partial answer to this question. The main theorem is the following.
Main Theorem. Let $(M, g)$ be an $n(\geq 3)$-dimensional locally symmetric space and $T_1M$ the unit tangent sphere bundle with the standard contact metric structure over $M$. If $T_1M$ is pseudo-symmetric, then $M$ is of constant curvature.
Preliminaries
All manifolds in the present paper are assumed to be connected and of class $C^\infty$. We start by collecting some fundamental material of contact metric geometry. We refer to [1] for further details. A $(2n+1)$-dimensional manifold $\bar{M}^{2n+1}$ is said to be a contact manifold if it admits a global 1-form $\eta$ such that $\eta \wedge (d\eta)^n \neq 0$ everywhere. Given a contact form $\eta$, we have a unique vector field $\xi$, the characteristic vector field, satisfying $\eta(\xi) = 1$ and $d\eta(\xi, \bar{X}) = 0$ for any vector field $\bar{X}$ on $\bar{M}$. It is well known that there exist a Riemannian metric $\bar{g}$ on $\bar{M}$ and a $(1,1)$-tensor field $\phi$ satisfying the compatibility conditions (1), where $\bar{X}$ and $\bar{Y}$ are vector fields on $\bar{M}$; further identities follow from (1). A Riemannian manifold $\bar{M}$ equipped with structure tensors $(\eta, \bar{g}, \phi, \xi)$ satisfying (1) is said to be a contact metric manifold and is denoted by $\bar{M} = (\bar{M}; \eta, \bar{g}, \phi, \xi)$. Given a contact metric manifold $\bar{M}$, we define the structural operator $h$ by $h = \frac{1}{2}L_\xi\phi$, where $L$ denotes Lie differentiation. Then we may observe that $h$ is symmetric and satisfies relations (3) and (4), where $\nabla$ is the Levi-Civita connection. From (3) and (4) we see that each trajectory of $\xi$ is a geodesic. We denote by $\bar{R}$ the Riemannian curvature tensor, defined for all vector fields $\bar{X}$, $\bar{Y}$ and $\bar{Z}$. Along a trajectory of $\xi$, the Jacobi operator $\ell = \bar{R}(\cdot, \xi)\xi$ is a symmetric $(1,1)$-tensor field. We call it the characteristic Jacobi operator. A contact metric manifold for which $\xi$ is Killing is called a K-contact manifold. It is easy to see that a contact metric manifold is K-contact if and only if $h = 0$ or, equivalently, $\ell = I - \eta \otimes \xi$.
The unit tangent sphere bundle
The basic facts and fundamental formulae about tangent bundles are well known (cf. [10], [11], [13]). We only briefly review some notations and definitions. Let $M = (M, g)$ be an $n$-dimensional Riemannian manifold and let $TM$ denote its tangent bundle with the projection $\pi : TM \to M$, $\pi(p, u) = p$. For a vector field $X$ on $M$, its vertical lift $X^v$ on $TM$ is the vector field defined by $X^v\omega = \omega(X) \circ \pi$, where $\omega$ is a 1-form on $M$. For the Levi-Civita connection $\nabla$ on $M$, the horizontal lift $X^h$ of $X$ is defined by $X^h\omega = \nabla_X\omega$. The tangent bundle $TM$ can be endowed in a natural way with a Riemannian metric $\tilde{g}$, the so-called Sasaki metric, depending only on the Riemannian metric $g$ on $M$. It is determined by its values on horizontal and vertical lifts of vector fields $X$ and $Y$ on $M$ (see the relations after this paragraph). Also, $TM$ admits an almost complex structure tensor $J$ defined by $JX^h = X^v$ and $JX^v = -X^h$. Then $\tilde{g}$ is a Hermitian metric for the almost complex structure $J$.
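For reference, on horizontal and vertical lifts the Sasaki metric is determined by the standard relations:

```latex
\tilde{g}(X^h, Y^h) = g(X, Y)\circ\pi, \qquad
\tilde{g}(X^h, Y^v) = 0, \qquad
\tilde{g}(X^v, Y^v) = g(X, Y)\circ\pi.
```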
The unit tangent sphere bundle $\bar{\pi} : T_1M \to M$ is the hypersurface of $TM$ given by $g_p(u, u) = 1$. Note that $\bar{\pi} = \pi \circ i$, where $i$ is the immersion of $T_1M$ into $TM$. A unit normal vector field $N = u^v$ to $T_1M$ is given by the vertical lift of $u$ at $(p, u)$. The horizontal lift of a vector is tangent to $T_1M$, but the vertical lift of a vector is not tangent to $T_1M$ in general. So, we define the tangential lift of $X$ to $(p, u) \in T_1M$ by $X^t = X^v - g(X, u)N$. Clearly, the tangent space $T_{(p,u)}T_1M$ is spanned by vectors of the form $X^h$ and $X^t$, where $X \in T_pM$.
We now define the standard contact metric structure on the unit tangent sphere bundle $T_1M$ over a Riemannian manifold $(M, g)$. The metric $g'$ on $T_1M$ is induced from the Sasaki metric $\tilde{g}$ on $TM$. Using the almost complex structure $J$ on $TM$, we define a unit vector field $\xi'$, a 1-form $\eta'$ and a $(1,1)$-tensor field $\phi'$ on $T_1M$. Since $g'(\bar{X}, \phi'\bar{Y}) = 2d\eta'(\bar{X}, \bar{Y})$, $(\eta', g', \phi', \xi')$ is not a contact metric structure. If we rescale this structure by $\eta = \frac{1}{2}\eta'$, $\xi = 2\xi'$, $\phi = \phi'$, $\bar{g} = \frac{1}{4}g'$, we get the standard contact metric structure $(\eta, \bar{g}, \phi, \xi)$. The tensors $\xi$ and $\phi$ are explicitly given in terms of lifts of vector fields $X$ and $Y$ on $M$. From now on, we consider $T_1M = (T_1M, \eta, \bar{g}, \phi, \xi)$ with the standard contact metric structure. Then we have the following formulas (cf. [2], [3], [4], [7], [12]) describing the Levi-Civita connection $\bar{\nabla}$ of $T_1M$ for all vector fields $X$ and $Y$ on $M$.
Also, the Riemann curvature tensor $\bar{R}$ of $T_1M$ is given, for all vector fields $X$, $Y$ and $Z$ on $M$, by the formulas in (7). Using the formulae (7), we obtain the expressions below.
Proof of Main Theorem
Suppose that $T_1M$ is pseudo-symmetric; then condition (9) holds. We put $\bar{Y} = \bar{V} = \bar{W} = \xi$. Then from (9) we obtain equation (10). Setting $\bar{X} = X^h$, $\bar{Z} = Z^h$ in (10) and applying the Riemannian metric $\bar{g}$ with respect to $Y^h$ on both sides, we obtain one relation; setting $\bar{X} = X^h$, $\bar{Z} = Z^t$ in (10) and applying the Riemannian metric $\bar{g}$ with respect to $Y^t$ on both sides, we obtain another. Since $\lambda_i \neq \lambda_j$, from (21) and (28) we have (29) $\lambda_i + \lambda_j - 1 = 0$.
However, we see that $\lambda_i$ and $\lambda_j$ satisfying (29) and (30) do not exist. Consequently, we find that $M$ must satisfy $g(R(e_i, e_j)u, e_j) = 0$ or $\lambda_i = \lambda_j$. If $\lambda_i = \lambda_j$, the Jacobi operator $R_u$ for an arbitrary $u \in T_1M$ has only one eigenvalue, and thus $M$ is of constant curvature. And if $M$ satisfies $g(R(e_i, e_j)u, e_j) = 0$, then $M$ must have constant sectional curvature when $\dim M \geq 3$, by Cartan's theorem [6]. Therefore, we have completed the proof of the Main Theorem. | 2019-04-21T13:13:31.078Z | 2016-06-25T00:00:00.000 | {
"year": 2016,
"sha1": "afb9b87be21ade1861a2ee0d9c487551bb81aa19",
"oa_license": null,
"oa_url": "http://society.kisti.re.kr/sv/SV_svpsbs03V.do?cn1=JAKO201620039076241&method=download",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "51af213a2b07f27da914540abc206af84ebabafa",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Physics"
]
} |
46909045 | pes2o/s2orc | v3-fos-license | Universal imbedding of a Hom-Lie Triple System
In this article we will build a universal imbedding of a regular Hom-Lie triple system into a Lie algebra and show that the category of regular Hom-Lie triple systems is equivalent to a full subcategory of pairs of $\mathbb{Z}/2\mathbb{Z}$-graded Lie algebras and Lie algebra automorphisms, and finally give some characterizations of this subcategory.
Introduction
Ternary algebras, that is, algebras with a ternary multiplication, are studied heavily in Lie and Jordan theories, geometry, analysis and physics. For example, Jordan triple systems ([7], [10], [11], [13]) can be realized as 3-graded Lie algebras through the TKK construction ([8]), from which special Lie algebras can be obtained. Also, more to the heart of this study are Lie triple systems, which give rise to $\mathbb{Z}_2$-graded Lie algebras (see [6], [9]), which are Lie algebras associated to symmetric spaces.
Extending the theory of deformations into the realm of ternary algebras has been done with the use of ternary Hom-Nambu Lie algebras in papers like [4] and [5]. Most recently it has been attempted by Yau in [2] with Hom-Lie triple systems (in the form we present them here). Here we will expand this theory by constructing a universal imbedding of a regular Hom-Lie triple system into a $\mathbb{Z}_2$-graded Lie algebra.
At first inspection the regular assumption may seem inordinately restrictive, especially since research in this field most commonly uses only the multiplicative assumption. Yet it is immediately apparent that in the finite-dimensional case any simple multiplicative Hom-Lie triple system (or multiplicative Hom-Lie algebra) is regular. In addition, any finite-dimensional multiplicative Hom-Lie triple system (resp. Hom-Lie algebra) is an extension of a regular Hom-Lie triple system (resp. regular Hom-Lie algebra) by a Hom-Lie triple system (resp. Hom-Lie algebra) with the trivial twisting map.
Next we will discuss the organization of this paper. In the first section basic definitions of Hom-Lie algebras, and Hom-Lie triple systems and a few consequences are considered. Furthermore, we review homotopes and isotopes of algebras, which we find as a natural setting for Hom-objects. In the second section we recall some results from [2] which show some methods of constructing Hom-Lie triple systems from Lie algebras, Hom-Lie algebras from Lie algebras, Lie algebras from Hom-Lie algebras, and even constructing Lie algebras from Hom-Lie triple systems. In the third section we introduce the concept of universal imbedding and build an imbedding for a Hom-Lie triple system into a Lie algebra, and a category isomorphism between regular Hom-Lie algebras and Lie algebras. While in the fourth section we construct the universal imbedding of a Hom-Lie triple system and finally in section five we describe the category of Lie algebras that is equivalent to the Hom-Lie triple systems.
Basic Definitions
In this section we recall a few definitions. Throughout this article $k$ will be a commutative ring with a multiplicative identity and $\mathrm{Char}(k) \neq 2$, and it will commonly be omitted from the notation.
Our first definition is that of a Hom-Lie algebra, which we will see is equivalent to a Lie algebra under a certain condition, yet it is all we need for our treatment. Its defining identities hold for all $a, b \in L$: identity (1) is known as the skew-symmetry identity, while identity (2) is known as the twisted Jacobi identity (both displayed below).
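Explicitly, in the standard formulation going back to Hartwig, Larsson and Silvestrov, the two identities read:

```latex
[a, b] = -[b, a], \qquad
[\alpha(a), [b, c]] + [\alpha(b), [c, a]] + [\alpha(c), [a, b]] = 0
\quad \text{for all } a, b, c \in L.
```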
Remark 2.
Notice that in the above definition, when $\alpha = \mathrm{Id}_L$, this definition is just the definition of a Lie algebra. A linear map $f : L \to L'$ is a Hom-Lie algebra homomorphism whenever $f([a, b]) = [f(a), f(b)]'$ and $f \circ \alpha = \alpha' \circ f$ for all $a, b \in L$. Definition 3. A Hom-Lie algebra $(L, [\,,\,], \alpha)$ is called • multiplicative when $\alpha$ is a Hom-Lie algebra homomorphism.
• regular when α is a Hom-Lie algebra automorphism.
Hom-Lie Triple Systems
Our next definition was first introduced by Yau in [2].
Remark 3. Notice again that under our assumptions identity (3) is equivalent to the left skew-symmetry identity $[a, b, c] = -[b, a, c]$ for all $a, b, c \in T$, while (5) is known as the ternary Hom-Nambu identity.
Remark 4. For brevity we will refer to Hom-Lie Triple System as simply a Hom-LTS.
• regular when α is a Hom-LTS automorphism.
Remark 5. For $(T, [\,,\,,\,], \alpha)$ a regular Hom-Lie triple system and $a, b, c, d, e \in T$ we have the following identities.
Homotope Theory
Here we review some definitions of homotope theory that will be useful to us.
Definition 7.
Given an associative unital $k$-algebra $A$ and an element $a \in A$, the algebra $A^a$ will be called the homotope of $A$. The homotope retains the module structure of $A$, but we replace the composition of $A$ by $x \cdot_a y = xay$. When $a$ is invertible we call the homotope an isotope.
Remark 6. It is immediately clear from the associativity of $A$ that $A^a$ is also associative.
Definition 8.
From the preceding remark we may then form the homotope Lie algebra with the standard commutator; that is, $(A^a, [\,,\,])$ is a Lie algebra with the commutator defined as $[x, y]_a = x \cdot_a y - y \cdot_a x = xay - yax$.
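A quick numerical sanity check of this construction, a sketch using random matrices over $\mathbb{R}$ (the element $a$ need not be invertible for the bracket to satisfy the Lie axioms):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
a = rng.standard_normal((n, n))  # the homotope element a in A = M_n(R)

def bracket(x, y):
    """Homotope commutator [x, y]_a = x a y - y a x."""
    return x @ a @ y - y @ a @ x

x, y, z = (rng.standard_normal((n, n)) for _ in range(3))

# Antisymmetry: [x, y]_a + [y, x]_a = 0
print(np.allclose(bracket(x, y) + bracket(y, x), 0))

# Jacobi identity: [x,[y,z]] + [y,[z,x]] + [z,[x,y]] = 0
jac = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
print(np.allclose(jac, 0))
```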
Inducing Lie algebras and Hom-LTS
There are many constructions using Hom-Lie algebras for making Hom-LTS and ternary Hom-Nambu algebras, in papers like [2] and [4]. Yet they both rely on the trivial case of the twisting map to draw the connection between Lie algebras and Hom-LTS. We will use this section to show the strong connection between Hom-LTS and Lie algebras proper.
Hom-Lie algebra induced by a Lie algebra
The following is Lemma 2.8 from [2], in the case that is of most interest to us.
Lie algebra induced by a Hom-Lie algebra
The next Lemma is a rewording of Theorem 2.5 in [2], for the case we will be first examining.
Hom-LTS induced by a Lie algebra
In [2], Corollary 3.14, Yau uses a standard construction of a Jordan algebra to make a multiplicative Hom-Jordan triple system. The following Lemma is the analogous result for Lie algebras and Hom-LTS.
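In the untwisted special case $\alpha = \mathrm{Id}$, the classical version of this construction equips a Lie algebra with the triple product $[a, b, c] = [[a, b], c]$; the sketch below checks the Lie triple system axioms numerically on matrices (the twisted construction in [2] inserts $\alpha$ into this product and is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3

def lie(x, y):
    return x @ y - y @ x

def triple(x, y, z):
    """Classical LTS product on a Lie algebra: [x, y, z] = [[x, y], z]."""
    return lie(lie(x, y), z)

a, b, c, d, e = (rng.standard_normal((n, n)) for _ in range(5))

# Left skew-symmetry: [a, a, c] = 0
print(np.allclose(triple(a, a, c), 0))

# Ternary Jacobi: [a,b,c] + [b,c,a] + [c,a,b] = 0 (follows from the Jacobi identity)
print(np.allclose(triple(a, b, c) + triple(b, c, a) + triple(c, a, b), 0))

# Derivation property: D_{ab} = [a, b, .] is a derivation of the triple product
lhs = triple(a, b, triple(c, d, e))
rhs = (triple(triple(a, b, c), d, e)
       + triple(c, triple(a, b, d), e)
       + triple(c, d, triple(a, b, e)))
print(np.allclose(lhs, rhs))
```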
Lie Algebra induced by a Hom-LTS
Before diving into this next construction we will make the assumption for the rest of the paper that α is an algebra automorphism.
Definition 9. For a regular Hom-Lie triple system $(T, [\,,\,,\,], \alpha)$ we will define the space $\mathrm{HDR}(T)$. It is immediately obvious that this is a closed sub-vector space of $\mathrm{End}(T)$.
Remark 7. It turns out that $\mathrm{HDR}(T)$ is a sub-Lie algebra of the isotope Lie algebra of $\mathrm{End}(T)$.
Definition 10. Furthermore, we will define $\mathrm{IHD}(T)$.
Remark 8. Notice that $\mathrm{IHD}(T)$ is an ideal of $\mathrm{HDR}(T)$, since for $X \in \mathrm{HDR}(T)$ and $D_{ab} \in \mathrm{IHD}(T)$ we have the following.
This makes $\mathrm{IHD}(T)$ a Lie algebra.
The Category of Lie algebras, Hom-LTS and Hom-Lie algebras
For our next construction we will need to define the category of Lie algebras we will be concerned with.
Definition 11. We will denote by $\mathcal{TL}$ the category whose objects are pairs $(L, \alpha)$, where $L$ is a Lie algebra and $\alpha : L \to L$ is a Lie algebra automorphism, and whose morphisms are Lie algebra homomorphisms $f : L \to L'$ satisfying $f \circ \alpha = \alpha' \circ f$. Definition 12. Furthermore, we will denote the category of regular Hom-Lie algebras by $\mathcal{RHLA}$.
Remark 9. From Lemma 1 and Lemma 2 we can build a category isomorphism $L : \mathcal{TL} \to \mathcal{RHLA}$.
Definition 13. We will also denote the category of all regular Hom-LTS by $\mathcal{RLTS}$. So far no one has attempted a theory for this (to the author's knowledge). So we will be making some very reasonable assumptions, ones which align with researchers in the area of interest, and developing a theory similar to that found in [6] and [9] for Lie triple systems. This theory will be equivalent to the functor $T : \mathcal{TL} \to \mathcal{RLTS}$ having a left adjoint.
Imbedding a Hom-LTS into a Lie Algebra
Before moving on to our universal structure we will extend Lemma 4 to build an imbedding of a regular Hom-LTS into a Lie algebra. Proof. The verification that $\mathrm{GHE}(T)$ is a Lie algebra is a straightforward calculation and is left to the reader. Furthermore, we have the following identity.
Definition 17. Let $(T, [\,,\,,\,], \alpha)$ be a regular Hom-LTS; then define $\mathrm{GHE}(T)$ as follows.
for all $a, b, c, d, e, f \in T$; thus $\hat{\alpha}$ is a Lie algebra homomorphism. To see that it is an automorphism, notice that we may build a similar extension of $\alpha^{-1}$, and that extension is indeed the inverse of $\hat{\alpha}$.
$\mathbb{Z}_2$-gradings
Recall that a Lie algebra $L$ is said to be $\mathbb{Z}_2$-graded if $L$ is a direct sum of a pair of $k$-submodules $L_0$ and $L_1$ such that $[L_i, L_j] \subseteq L_{i+j \pmod 2}$. It follows immediately from Definition 17 and Lemma 5 that the direct sum decomposition of the algebra $\mathrm{GHE}(T) = \mathrm{IHD}(T) \oplus T$ is a $\mathbb{Z}_2$-grading and that $\hat{\alpha}$ is a graded automorphism.
The Universal Imbedding
In this section it is our intention to construct a universal imbedding. The model for our treatment is the universal imbedding of a Lie triple system (see [9]).
Construction of U(T )
Throughout this section (T, [, , ], α) will be a regular Hom-LTS. Our first objective is to build a central extension of IHD(T ).
Since $T$ is an $\mathrm{HDR}(T)$-module, we can consider the exterior product $T \wedge T$ as an $\mathrm{HDR}(T)$-module under the unique action, motivated by (9), defined
for $D \in \mathrm{HDR}(T)$ and $a, b \in T$. Furthermore, formula (8) and identity (10) imply that the map $\lambda : T \wedge T \to \mathrm{HDR}(T)$ defined by $\lambda(a \wedge b) = D_{ab}$ is a module homomorphism from $T \wedge T$ to the adjoint module $\mathrm{HDR}(T)$ and that $\mathrm{Im}(\lambda) = \mathrm{IHD}(T)$.
It is easy to see that $A(T \wedge T)$ is spanned by the elements (11) and (12) for $a, b, c, d \in T$. From Lemma 3.1 in [9] we have that $A(T \wedge T)$ is a submodule of $T \wedge T$. Next we define $\langle T, T \rangle$ as the quotient module $(T \wedge T)/A(T \wedge T)$ and let $\langle a, b \rangle$ denote the coset containing $a \wedge b$. We then obtain from Corollary 3.2 in [9] the relation (14). As well, the map $\mu : \langle T, T \rangle \to \mathrm{IHD}(T)$, defined by $\mu(\langle a, b \rangle) = D_{ab}$, is a central extension.
The following Lemma is verified by a straightforward calculation and is thus omitted.
where $X, Y \in \langle T, T \rangle$ and $a, b \in T$, is a $\mathbb{Z}_2$-graded Lie algebra. Moreover, the map $\nu : U(T) \to \mathrm{IHD}(T)$, defined by $\nu(X + a) = \mu(X) + a$, is a central extension.
Remark 11.
For what follows we will need a canonical realization of $U(T)$ inside the category $\mathcal{TL}$. That is, we will need to define an $\alpha_U : U(T) \to U(T)$ which is an automorphism. The next Lemma defines this automorphism: setting $\alpha_U|_T \doteq \alpha$ and defining its extension by $\alpha_U(\langle a, b \rangle) \doteq \langle \alpha(a), \alpha(b) \rangle$ for all $a, b \in T$, the map $\alpha_U$ is an automorphism.
Proof. First we need to verify that $\alpha_U$ is well defined on $\langle T, T \rangle$. It will therefore suffice to show that the induced map $\gamma$ on $T \wedge T$ satisfies $\gamma(X) \in A(T \wedge T)$ for all $X \in A(T \wedge T)$. We verify this by checking that the elements which span $A(T \wedge T)$, that is, (11) and (12), are invariant; this follows by direct calculation for all $a, b, c, d \in T$. Next we check that $\alpha_U$ is a Lie algebra homomorphism. Finally, $\alpha_U$ is an automorphism since we can build a similar extension of $\alpha^{-1}$, and that extension is indeed the inverse of $\alpha_U$.
Universal Property of U(T )
Next we consider $T$ as a subspace of $U(T)$. It follows from (14) that $T$ generates $U(T)$. Proof. First, it is a straightforward consequence of Lemma 8 that $\iota_T \circ \alpha = \alpha_U \circ \iota_T$, and the required identity holds for $a, b, c \in T$; thus $\iota_T$ is indeed an imbedding. Now let $(L, [\,,\,])$ be a Lie algebra and let $\epsilon : T \to TL$ be an imbedding, i.e. there exists a Lie algebra automorphism $\alpha_L : L \to L$ such that the compatibility conditions hold and $\epsilon \circ \alpha = \alpha_L \circ \epsilon$; since both $\alpha$ and $\alpha_L$ are invertible (being automorphisms), the corresponding relation holds for the inverses. Thus, by combining (15) and (2), we arrive at the key identity. To prove the universality of the canonical inclusion we will construct a Lie algebra homomorphism $\varphi : U \to L$ extending $\epsilon$, prove that it is unique, and then show that it is indeed a morphism in the category $\mathcal{TL}$. To build this we first claim that there is a well-defined map $\langle \epsilon, \epsilon \rangle : \langle T, T \rangle \to L$ for every $a, b \in T$. To show that this map does not depend on the choice of representative (and is thus well defined), it will suffice to show that $A(T \wedge T)$ is in the kernel of the induced map $\zeta$ for all $a, b \in T$. We verify this by checking that the elements which span $A(T \wedge T)$, that is, (11) and (12), are sent to zero by $\zeta$; a direct calculation for all $a, b, c, d \in T$ confirms this. Therefore (15) and (17) imply that the map $\varphi$, defined by $\varphi(X + a) \doteq \langle \epsilon, \epsilon \rangle(X) + \epsilon(a)$ for $X \in \langle T, T \rangle$ and $a \in T$, is a Lie algebra homomorphism. The next condition we need to show is that $T\varphi$ is a Hom-LTS homomorphism, that is, that $\varphi$ is a morphism of the category $\mathcal{TL}$; so we need to show (19). Yet for $a \in T$, $\epsilon(\alpha(a)) = \alpha_L(\epsilon(a))$, thus (19) follows from Lemma 8 and the fact that $T$ generates $U(T)$. Finally, by construction we have $\varphi|_T = \epsilon$, and since $T$ generates $U(T)$, there is only one such homomorphism $\varphi$.
A category of Lie algebras equivalent to $\mathcal{RLTS}$
In this section we determine a sub-category of $\mathcal{TL}$ equivalent to $\mathcal{RLTS}$.
The functor A
The universal property of the algebra $U(T)$ gives rise to the existence of a left adjoint to the forgetful functor $T : \mathcal{TL} \to \mathcal{RLTS}$ discussed in Section 3. That adjoint is the functor $A : \mathcal{RLTS} \to \mathcal{TL}$ which sends every regular Hom-LTS $(T, [\,,\,,\,], \alpha_T)$ to the algebra $U(T)$ and every Hom-LTS homomorphism $\vartheta : T \to S$ (to a Hom-LTS $(S, [\,,\,,\,], \alpha_S)$) to the morphism $\varphi : U(T) \to U(S)$ defined by (18) for the imbedding $\epsilon = \iota_S \circ \vartheta$. These formulas also imply that the functor | 2017-09-25T20:29:22.000Z | 2017-09-25T00:00:00.000 | {
"year": 2017,
"sha1": "a23a42cc7d84ee928b72051c49e2eb916593eed0",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "a23a42cc7d84ee928b72051c49e2eb916593eed0",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
265469782 | pes2o/s2orc | v3-fos-license | Preoperative patient’s expectations and clinical outcomes after rheumatoid forefoot deformity reconstruction by joint sacrificing surgery
Objective To study the clinical and radiologic factors related to overall patient satisfaction with joint sacrificing reconstruction for severe rheumatoid forefoot deformity (RFD). Methods Forty cases of RFD were retrospectively enrolled. A questionnaire on the factors underlying patients' expectations and satisfactions for the greater and lesser toes was administered, covering repression of relapse in deformity (D), pain reduction (P), improvement in shoe wearing (S), barefoot activity (B), and appearance (A). Overall satisfaction was assessed using the 5-digit-scale. The hallux valgus angle, the 1,2 intermetatarsal angle, and other radiologic parameters were measured. Pearson's correlation and multiple linear regression analyses were used to evaluate the relationships between these factors and overall satisfaction. Results Overall satisfaction was 4.0±0.82. Postoperative radiologic parameters were corrected to within an adequate range. The visual analog scale (VAS) score was reduced from 7.2±2.1 to 2.2±1.8. For the greater toe, patients' expectations (D, P, S, B, and A) were 4.2, 4.1, 3.0, 2.5, and 2.7, and satisfactions were 4.2, 4.0, 3.4, 3.5, and 3.3, respectively. For the lesser toes, patients' expectations (D, P, S, B, and A) were 3.9, 4.1, 3.4, 3.0, and 2.8, and satisfactions were 3.4, 4.0, 3.4, 3.6, and 2.9, respectively. Satisfactions with P and B, and the reduction in VAS, were significantly correlated with overall satisfaction. Conclusion Although forefoot reconstruction with a joint sacrificing procedure is non-physiological, it could be a good surgical option for severe RFD. Each patient's expectations of and satisfaction with this procedure can vary. Thus, it seems important to inform patients preoperatively that some expectations may be fulfilled better than others.
INTRODUCTION
Foot discomfort is a major concern for patients with rheumatoid arthritis (RA), and nearly 90% of patients complain of foot pain during the course of the disease [1,2]. These discomforts can occur during all periods of RA and have a huge impact on the patient's daily life [3]. Inflammation of the forefoot joints and soft tissue causes forefoot deformity despite proper medical treatment [1,4], and pathologic features around the metatarsophalangeal joint (MTPJ) and interphalangeal joint (IPJ) of the digits, such as hallux valgus, lesser toe deformity and metatarsalgia, are observed in many RA patients [5]. Forefoot joint damage in RA changes pressure distribution patterns due to changes in the anatomical and biomechanical aspects of the MTPJ and increases peak pressure under the forefoot, leading to increased pain during barefoot walking [6]. There have been many advances in pharmacotherapy, including disease-modifying antirheumatic drugs (DMARDs) and biological agents, but it is not yet known whether these drugs can actually prevent the deformity [7]. Even when pharmacological treatment is performed to alleviate symptoms and maintain function of the feet, 5% to 22% of patients eventually undergo surgical treatment [8][9][10].
Several surgical techniques have been described for RA forefoot deformity surgery, and they aim to relieve pain, correct deformity, increase footwear options and restore walking function [11]. They vary by the type of procedure performed on the first MTPJ and the lesser toes. Joint preserving surgery can be performed only in cases with mild to moderate deformity without significant arthritic changes of the MTPJ [11,12], thus preserving the MT heads, which are important weight-bearing structures. However, for RA patients with severe joint destruction, deformity and bone loss, joint sacrificing procedures are required to relieve symptoms. Regardless of surgical technique, patients' preoperative expectations have been shown to relate strongly to their ultimate satisfaction [12]. Discussion between physicians and patients about expectations from surgery is important to prevent or reduce postoperative dissatisfaction. Currently, there do not appear to be any studies on patient satisfaction with rheumatoid forefoot deformity (RFD) correction. While there are previous studies on hallux valgus deformity alone, there are no studies of joint sacrificing surgery including the lesser toes for RFD.
So we designed this study to find out how the surgery affects patients' expectations and satisfactions. This study aimed to investigate the clinical expectations, satisfactions, and radiologic factors related to overall satisfaction. We performed a perioperative assessment of all patients before surgery and consulted with the rheumatology department to ensure that preoperative disease activity would not affect the patients' perioperative status.
MATERIALS AND METHODS
Surgery was performed when preoperative disease activity did not significantly affect the patient's systemic condition.
Research on expectations and satisfactions was conducted through a questionnaire.
One patient was male, and 34 patients were female. D and P were the highest patient expectations, followed by S, B and A in descending order (Table 2). P and D had the highest satisfaction, followed by B, S and A in descending order. The number of cases at each level of the 5-digit-scale for each of the patient expectation and satisfaction items was counted (Table 3).
The mean overall satisfaction score postoperatively was 3.95 (range 2 to 5), meaning that most patients scored average to high on the questionnaire. The number of cases at each level of the 5-digit-scale was counted (Table 4). The mean reduction in overall VAS was 5.1 (from 7.3 to 2.2; p<0.001) (Table 1). Radiologic results showed a significant decrease of the HVA after surgery, from 47.7±9.7 (30 to 70) to 14.0±4.2 (6 to 21; p<0.001), and of the IMA, from 16.3±4.5 (10 to 25) to 9.7±3.6 (4 to 17; p<0.001) (Table 1). During routine outpatient visits, the 1st MTPJs were all united in the greater toe, and the 2nd to 5th MTP joints were all properly resected without any significant remnants in the lesser toes. In two cases, postoperative wound dehiscence occurred, which was treated with periodic wound dressings in the outpatient clinic. Within two weeks, their wounds had healed cleanly without further surgical intervention. No other complications were observed.
The mean DAS-28-ESR score was 4.56 (3.09 to 6.09) (Table 1). The average time from RA diagnosis to surgery was 13 years and 9 months. Regarding medications taken for RA before surgery, 28 patients were taking NSAIDs, 30 patients were taking DMARDs, and 18 patients were taking corticosteroids.
Satisfactions with P and B for the lesser toes, together with the difference between preoperative and postoperative VAS (ΔVAS) (p=0.003), were significantly related to overall satisfaction (Table 5).
Factors correlating significantly with overall satisfaction were satisfactions with P and B for both the greater and lesser toes, and the reduction in VAS scores following surgery. Additionally, we performed multiple linear regression analyses to identify the independent factors associated with overall satisfaction. Satisfaction with P for the greater toe was significantly associated with overall satisfaction (p=0.042; Table 6).
DISCUSSION
It is evident that the foot, in particular the forefoot, plays a major part in the surgery of inflammatory joint disease. Synovitis of the MTPJ of the foot is a common finding in RA and results in forefoot pain; it is often the initial symptom of RA. It is reported that within the first three years of RA, approximately 65% of patients have MTPJ involvement [1,13,14]. Furthermore, it is estimated that, with disease progression, two thirds of patients have MTPJ pain of the hallux, and subluxation or dislocation of the lesser MTP joints. Pain, deformity, and dysfunction of the forefoot not responding to conservative treatment eventually lead these patients to undergo surgical treatment [8,9].
It is well known that there is a difference between the expectations of patients undergoing surgery and the surgical goals of surgeons in the orthopedic field [15][16][17]. Before surgery, physicians should discuss the expected results of the surgery with the patient and understand the patient's expectations as accurately as possible. If patients and surgeons have similar expectations, or if the difference between postoperative satisfaction and preoperative expectations is explained, patient satisfaction may be higher than if they have different expectations or no explanation is given. A previous study examined preoperative expectations in hallux valgus patients without RA and reported that patients most expected improvement in foot function, followed by relief of pain at the bunion site, comfortable shoe wearing, and decreased pain at the lesser toes [18].
In our study, patients had a mean DAS-28-ESR score of 4.58, indicating that they had, on average, moderate RA activity.
There are several studies on the relationship between disease activity and surgical treatment in patients with RA. However, most of these studies have been conducted in patients who underwent large joint surgery or artificial joint replacement surgery. By contrast, our procedure was a small joint surgery and did not constitute an artificial joint replacement. There are no studies showing that high RA disease activity is associated with postoperative complications or infections in foot and ankle surgery. We believed that the patients' symptoms were the result of deformity of the forefoot caused by the chronic course of the disease rather than an inflammatory response, so we did not use the preoperative DAS-28-ESR score as a criterion for making a surgical decision [19].
We analyzed the correlation between the preoperative DAS-28-ESR score and patients' overall satisfaction and found no significant correlation. We believe this is because their symptoms resulted from deformity of the forefoot caused by the chronic course of the disease rather than from an inflammatory response.
For the great toe, repression of relapse in deformity was the patients' most expected factor of RA forefoot deformity surgery, followed by pain, comfortable shoe wearing, barefoot activity and appearance. For the lesser toes, pain was the most expected factor, followed by repression of relapse in deformity, comfortable shoe wearing, barefoot activity and appearance (Table 2). This showed that the factors for which patients most expected improvement, for both the greater toe and the lesser toes, were pain and repression of relapse in deformity.
Traditionally, joint sacrificing procedures such as 1st MTPJ fusion or resection arthroplasty of the 1st MTPJ and resection arthroplasty of the lesser toes are performed for patients with severe RFD. However, with recent advances in RA drug medication, many surgeons have conducted joint preserving surgeries in patients with minimal erosion of the MTP joint and have reported satisfactory results [20]. In the greater toe, 50% to 95% good correction with osteotomy has been reported in rheumatoid hallux valgus [12,21]. In the lesser toes, the Weil or other metatarsal head preserving osteotomies may have the advantage of preserving the plantar attachment and MTPJ function to bear weight [22]. Niki et al. [11] evaluated a combination of joint preserving procedures for rheumatoid forefoot deformities in 30 patients. They reported no cases of nonunion, deformity recurrence, or callosity. However, joint preserving procedures could not be performed to correct deformities as severe as those in our study.
In contrast, in the greater toe, many procedures are performed for severe rheumatoid forefoot deformities, such as resection arthroplasty or arthrodesis as joint sacrificing procedures. Torikai et al. [23] found that significant improvement in the HVA could be achieved not only through resection but also with arthrodesis. In addition, Horita et al. [24] reported that arthrodesis of the MTP joint should be indicated for severe hallux valgus with an HVA >50°. In the lesser toes, resection arthroplasty has been commonly used in patients with RA and is essentially unchanged from the original description by Hoffman in 1911.
Dai et al. [25] reported that resection arthroplasty of the lesser metatarsals, combined with arthrodesis of the first MTP joint, achieved significant improvements in pain relief, deformity correction, and footwear comfort. In our study, the joint sacrificing modified Dwyer procedure was performed in all patients with severe rheumatoid forefoot deformities, with improved clinical and radiological results and good overall satisfaction.
In the Pearson correlation analysis, the statistically significant clinical factors were satisfactions with P and B for the greater toe, P and B for the lesser toes, and ΔVAS. Among these, satisfaction with P for the greater toe significantly affected overall satisfaction in the multiple linear regression analysis. We found that all satisfaction items were positively correlated with overall postoperative satisfaction to a moderate or low degree. Although postoperative overall satisfaction is influenced by a variety of factors, some factors correlated significantly with overall satisfaction.
Satisfactions with P and B for both the greater and lesser toes, and the reduction in VAS scores following surgery, were significantly correlated with overall satisfaction.
Clinical factors, particularly satisfactions with S and A for the greater and lesser toes, were not significantly related to overall satisfaction. We believe the reason is that the joint sacrificing procedure has inherent disadvantages with respect to S and A for the greater and lesser toes.
Piqué-Vidal and Vila [26] studied the severity of hallux valgus deformities according to angular measurements in 301 radiographs. In the present study, the preoperative radiologic parameters of all patients indicated mostly severe deformities, and the postoperative radiologic parameters of all patients were mostly corrected to within an acceptable range. Nonunion, malunion, or other complications on radiologic examination may affect clinical outcomes.
However, in this study, the 1st MTP joints all united in the greater toe, the 2nd–5th MTP joints were all resected without any significant remnants in the lesser toes, and no complications were found. We therefore believe that these consistently good results in each patient could not account for significant differences in overall satisfaction.
The limitations of this study are as follows. First, this was a retrospective study. Second, the number of patients who underwent the surgery was rather small. However, RA generally has a low prevalence of approximately 1%, and it is difficult to recruit severe RA patients who underwent joint sacrificing surgery with long-term follow-up. Nevertheless, this is a long-term study of the joint sacrificing procedure. This study analyzes not only patients' preoperative expectations but also their satisfaction after surgery. Our results show that the joint sacrificing procedure in RA is one of the surgical methods that can yield acceptable patient satisfaction. These results will be useful for orthopedic surgeons and rheumatologists when discussing surgical treatment with patients for severe RFD.
CONCLUSION
Although forefoot reconstruction with a joint sacrificing procedure is non-physiological, it could be a good surgical option for severe RFD, because clinical outcomes and patients' overall satisfaction were improved in most cases.
FUNDING
None.
A retrospective study of orthopaedic surgery was performed to identify patients who underwent surgical treatment with complete preoperative and minimum 2-year postoperative patient-reported outcome measures. The study was approved by the Board of Hanyang University Medical Center (study number: HYUH 2018-07-015). Patients who underwent RA forefoot deformity reconstruction by joint sacrificing surgery, performed by a single surgeon (I.H.S.) in the foot and ankle department of a single institution between January 2000 and April 2016, were enrolled. All the patients were diagnosed with RA, confirmed by rheumatologists according to the American College of Rheumatology RA criteria, and the indications for surgery included metatarsalgia associated with severe RFD. All patients had severe hallux valgus deformity with or without arthritic change of the first MTPJ and severe dislocation of more than three lesser MTPJs along with claw toe deformity (Figure 1). Patients with a history of foot or ankle procedures were excluded. All patients provided written informed consent. Surgical treatment was conducted using the modified Dwyer procedure, a joint sacrificing technique. The original Dwyer procedure comprises arthrodesis of the first MTPJ and of the proximal IPJs of the lateral four toes, resection of the proximal phalanx and metatarsal head of the 2nd, 3rd, 4th and 5th rays, with the respective tendons interposed to provide a tenodesis effect. The modified version is the same as the conventional Dwyer procedure except that the IPJs of the lateral four toes and the proximal phalanges are preserved if possible, and manual reduction of the IPJ along with extensor digitorum longus lengthening and extensor digitorum brevis tenotomy is performed concomitantly.
The questionnaire asked patients about their preoperative and postoperative concerns and interests, and the five most frequently identified items, which patients commonly ask the surgeon about before surgery, were selected. Questionnaires were administered at preoperative and postoperative outpatient visits or by telephone. A questionnaire of preoperative patient expectations and postoperative satisfactions for the greater toe and lesser toes was given, based on a 5-digit-scale (5: very high, 4: high, 3: average, 2: low, 1: very low) covering common factors of patient expectation and satisfaction in forefoot surgery: repression of relapse in deformity (D), pain reduction (P), improvement in shoe wearing (S), barefoot activity (B) and appearance (A). Overall satisfaction, also based on a 5-digit-scale (5: very high, 4: high, 3: average, 2: low, 1: very low), and pain, assessed using the visual analog scale (VAS), were recorded. Radiologic evaluation was performed using standing anterior-posterior and lateral views of the foot. For evaluation of forefoot deformity, the hallux valgus angle (HVA) and the 1,2 intermetatarsal angle (IMA) were measured preoperatively and postoperatively. During routine outpatient visits, the degree of fusion of the first MTPJ, maintenance of the metatarsal parabola and other aspects of anatomical alignment were measured. Frequency analysis was performed on preoperative patient expectations and postoperative satisfactions. To compare changes in VAS scores and the radiologic parameters HVA and IMA, preoperative and postoperative values were compared using paired t-tests. Pearson's correlation analysis and multiple linear regression analysis were used to evaluate the relationships between clinical and radiologic factors, including VAS, IMA, HVA, the satisfaction items, the preoperative Disease Activity Score 28-erythrocyte sedimentation rate (DAS-28-ESR) score, and overall satisfaction. The preoperative DAS-28-ESR score, based on 28 joints, is calculated from four components: the number of tender joints, the number of swollen joints, the VAS score of the patient's global health, and the ESR. The DAS-28-ESR score was measured to validate RA disease activity. Statistical analysis was performed using the SPSS Statistics software (version 24.0; IBM Corp., Armonk, NY, USA). Statistical significance was set at p<0.05.
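A minimal sketch of this statistical pipeline in Python on synthetic placeholder data (the variable names mirror the study's measures, the statsmodels dependency is assumed, and none of the numbers are the study's data):

```python
import numpy as np
from scipy import stats
import statsmodels.api as sm  # assumed available; stands in for SPSS's linear regression

rng = np.random.default_rng(42)
n = 40  # number of cases, as in the study

# Synthetic placeholder data -- NOT the study's data.
vas_pre = np.clip(rng.normal(7.2, 2.1, n), 0, 10)
vas_post = np.clip(rng.normal(2.2, 1.8, n), 0, 10)
satisfaction_P_great = rng.integers(2, 6, n)   # 5-digit-scale item (2..5)
overall_satisfaction = rng.integers(2, 6, n)

# Paired t-test on pre/post VAS
t, p = stats.ttest_rel(vas_pre, vas_post)
print(f"paired t-test: t={t:.2f}, p={p:.4f}")

# Pearson correlation between an item satisfaction and overall satisfaction
r, p = stats.pearsonr(satisfaction_P_great, overall_satisfaction)
print(f"Pearson r={r:.2f}, p={p:.4f}")

# Multiple linear regression: overall satisfaction ~ item satisfaction + delta-VAS
X = sm.add_constant(np.column_stack([satisfaction_P_great, vas_pre - vas_post]))
print(sm.OLS(overall_satisfaction, X).fit().summary())
```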
In the Pearson correlation analysis, all clinical factors and radiologic factors except the preoperative DAS-28-ESR score were positively correlated with overall postoperative satisfaction to moderate or low degrees. Satisfactions with P (p=0.002) and B (p=0.023) for the greater toe, and P (p=0.035) and B (p=0.002) for the lesser toes, were significantly correlated with overall satisfaction. When comparing patients' expectations and postoperative satisfactions with the joint sacrificing procedure, both the greater and lesser toes showed better satisfaction on the items 'reduction in pain' and 'barefoot activity' than on the items 'shoe wearing' and 'appearance'. Since many clinical factors can affect overall satisfaction, and individual patients' expectations are diverse, it seems important to inform patients preoperatively, with a detailed explanation, of what can and cannot be expected to improve much after surgery.
Table 1. Demographic data, radiologic measurements and clinical outcomes of patients
Table 3. Frequency analysis of patient's expectation and satisfaction
Table 6. Multilinear regression analysis between clinical factors and overall satisfaction. VAS: visual analog scale; ΔVAS: difference between preoperative VAS and postoperative VAS. Statistically significant at p<0.05. | 2023-11-29T16:13:17.932Z | 2023-11-27T00:00:00.000 | {
"year": 2023,
"sha1": "03dc2a6afa4c5017ac047bbd25d3b832c87bd78b",
"oa_license": "CCBYNC",
"oa_url": "https://www.jrd.or.kr/journal/download_pdf.php?doi=10.4078/jrd.2023.0044",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cd2b6e0ada17871036ba5ef3f7e6820eec2aad56",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
236750966 | pes2o/s2orc | v3-fos-license | The Process of Students' Mathematical Connection in Solving Mathematical Problems in terms of Learning Styles
The purpose of this study was to describe the process of students' mathematical connections in solving mathematical problems in terms of learning styles. The type of research used is descriptive research with a qualitative approach. The research subjects were six students: two with a visual learning style, two with an auditory learning style, and two with a kinesthetic learning style. Student learning style data were collected through learning style questionnaires, while data on students' connection processes were collected through mathematical connection test sheets and interviews. Data credibility was established by triangulating sources and methods. Data from the six subjects, two for each learning style, were described and categorized from the same, different, and specific views. The subjects obtained from the results of the learning style questionnaire were compared using the results of the mathematical connection test and interviews. Data analysis was guided by Polya's four problem-solving steps and was carried out in three stages, namely data condensation, data presentation, and drawing conclusions. The results showed that there are differences in the mathematical connection process carried out by students who have visual, auditory, and kinesthetic learning styles in the step of understanding the problem. However, there are similarities in the mathematical connection process carried out by students who have visual and kinesthetic learning styles at the steps of compiling a completion plan and checking again, while students who have an auditory learning style differ from students who have visual and kinesthetic learning styles in implementing the completion plan and rechecking. The process of mathematical connection of students who have an auditory learning style at the step of checking back cannot be seen, because these students do not take this step when solving problems. Researchers suggest that teachers need to accustom students to connecting mathematics both internally and externally and to pay attention to the emphasis of the material given to students so that students' mathematical connections are more developed.
Introduction
Mathematical connections are very important for students, especially in learning mathematics, because they make students understand that mathematics is a science that relates to other disciplines such as science, economics, social studies, and culture, and to problems in everyday life (Romli, 2016). The mathematical connection acts as a basis for thinking mathematically, so that students are able to connect the concepts they have acquired for use in real contexts (Latipah & Afriansyah, 2018; Sari, Sudirman, & Chandra, 2018). Based on research results, the level of mathematical connection of students in Indonesia is in the low category (Anita, 2014). Mathematical connection refers to the ability to connect mathematical facts, concepts, principles, operations, and procedures both internally and externally, so as to solve problems (Hidayah, Kurniaasih, & Rohmad, 2019; Latipah & Afriansyah, 2018). Internally, mathematical connection ability is seen as the ability to connect mathematical concepts, while externally it is seen as the ability to connect mathematics with other disciplines or everyday life (NCTM, 2000; Siagian, 2016). The object of study in mathematics consists of facts, concepts, operations, principles, and procedures (Apriyono, 2018; Fauzi & Priatna, 2019). The process of mathematical connection is a process of thinking in identifying and connecting mathematical ideas (internally), and connecting mathematics with other disciplines or daily life (externally) (Diana et al., 2017; Poladian & Zheng, 2016). This research focuses on the connection between ideas in mathematics and mathematical connections in everyday life. The connection between ideas in mathematics consists of three aspects, namely identifying the relationships between facts, concepts, and mathematical principles in the problem to be solved; using these relationships to create the models or formulas needed to solve problems; and making relationships between one concept and another in solving problems (Aini et al., 2016; Apriyono, 2018; Romli, 2016; Sari et al., 2018). These two components of mathematical connection are used as its indicators. The researcher focuses on these two components in the hope of revealing students' mathematical connection activities more thoroughly.
The problem solving used in this study follows the Polya stages. According to Polya (1973), problem solving in mathematics consists of four main steps, namely understanding the problem, compiling a plan, implementing the plan, and checking again. Polya's steps are used in this study because each of Polya's problem-solving activities is systematic and simple, making it easier for students to solve problems. Solving problems is one way to see and build students' mathematical connections (Ariati & Hartati, 2017; Hadi & Radiyatul, 2014). Students can use all known information and relate that information to obtain relevant answers (Tasni & Susanti, 2017). Through the connection process, students can translate questions into a mathematical form and connect mathematical concepts and procedures (Sari et al., 2018). Based on Polya's four steps and on the understanding, aspects, and indicators of mathematical connection, the researcher prepared indicators for students' mathematical connection process in solving mathematical problems, consisting of four stages, among them understanding the problem (writing down the information known from the problem by translating it into mathematical language, and writing down what is asked in the problem by representing the problem posed) and compiling a solution plan (identifying the relationships of the known and questioned information with mathematical facts, concepts, and principles; using these relationships to write the mathematical model or formula needed to solve the problem; and proposing the mathematical operations and procedures that will be used to solve the problem) (Aini et al., 2016; Diana et al., 2017; Romli, 2016; Sari et al., 2018).
Each student has a different way of thinking and creativity in observing and processing information to solve problems (Ariati & Hartati, 2017; Maftuh, 2018; Sundayana, 2016). The way a person observes and processes information to solve a problem through a particular perspective is called a learning style (Argarini, 2018; Ghufron & Risnawita, 2012; Suparman, 2010; Zulyanty et al., 2017). There are three types of learning styles, namely the visual, auditory, and kinesthetic learning styles (DePorter & Hernacki, 2013). The visual learning style relies on the visual senses to seek and process information; the auditory learning style relies on the sense of hearing; and the kinesthetic learning style relies on physical involvement such as touch or movement. Learning styles are one of the factors that differentiate the way students solve problems (Argarini, 2018; Budiarti & Jabar, 2016; Maftuh, 2018; Sundayana, 2016; Zulyanty et al., 2017). Differences in learning styles affect the way problems are solved; that is, each of the visual, auditory, and kinesthetic learning styles involves a different problem-solving process (Richardo et al., 2014). The differences in how students with these three learning styles solve problems have also been used as a reference for examining the effect of mathematical connection ability on student achievement, so learning styles are one of the internal factors of students' mathematical connections (Baiduri et al., 2020). A related study examined the mathematical connection skills of grade VIII junior high school students in algebra and geometry in terms of student gender, namely male and female students (Apriyono, 2018). The difference between that study and this research is that there, the selection of male and female subjects was based on equal ability levels as seen from a mathematics ability test and report card scores, whereas in this study subject selection from the three learning styles was based on the results of the student learning style questionnaire. Another study examined the effect of mathematics anxiety on the mathematical connection ability of junior high school students, showing that math anxiety has a negative effect on mathematical connection ability (Anita, 2014). The difference between that study and this research is that it used a quantitative approach with data processed by the multiple regression-correlation method from a mathematics anxiety questionnaire and a mathematical connection ability test in the form of description questions, whereas this study uses a qualitative approach with triangulation of methods and sources. Other research on mathematical connection and self-confidence showed that contextual learning with manipulative mathematics is better at improving students' mathematical connection skills, and that there is a moderate correlation between mathematical connections and self-confidence (Hendriana et al., 2014).
The difference between that research and this research is that it was a quasi-experimental study with a pre-test post-test control group design involving 67 ninth grade students, while this study is a descriptive qualitative study using triangulation of sources and methods for data credibility, involving two subjects from each of the three categories of learning styles. As for previous research linking connection with problem solving, a study on the mathematical connection process of students with high and low abilities in solving flat-shape problems showed that students with high mathematical ability have a more complete mathematical connection process across the problem-solving steps than students with low mathematical ability, who do not take the looking-back step (Aini et al., 2016). The difference between that study and this research is that subject selection there was carried out by means of a preliminary test, whereas in this study subject selection was carried out using a learning style questionnaire. Another study concerned the mathematical connection process of junior high school students in solving story problems, showing that students' mathematical connection process is demonstrated by the ability to translate questions into mathematical form and the ability to connect mathematical concepts and procedures (Sari et al., 2018). The difference between that research and this research is that it was a case study selecting one student with a unique solution process as the subject, with a connection test sheet on the application of algebra, whereas in this study two subjects were selected from each of the three categories of learning styles, and the connection test sheet deals with the areas of the trapezoid and the triangle and the perimeter of rectangles. Other research examined the mathematical connection process of students with a reflective cognitive style in solving algebraic problems based on the SOLO taxonomy, showing that students of the reflective cognitive type range from the relational to the extended abstract level of the connection process in the SOLO taxonomy (Diana et al., 2017). The difference between that study and this research is that its subjects were three students from one category, the reflective cognitive style, whereas in this study two subjects came from each of the three categories of learning styles. Thus, this study focuses on the mathematical connection process carried out by students who have visual, auditory, and kinesthetic learning styles in solving mathematical problems; therefore, the purpose of this study is to describe the process of students' mathematical connections in solving mathematical problems in terms of three learning styles, namely visual, auditory, and kinesthetic learning styles.
Types and Research Approaches
The type of research used is descriptive with a qualitative approach, in accordance with the research objective, namely to describe the process of students' mathematical connections in solving mathematical problems in terms of learning styles. A qualitative approach is an approach that leads to a detailed description of events, activities, beliefs, problems, and individual or group understanding of something, resulting in generalizations (Anggito & Setiawan, 2018).
Research Subjects
The subjects of this study were six students taken from a class VIII of 21 students. The research subjects were selected based on the three student learning styles: two students with a visual learning style, two with an auditory learning style, and two with a kinesthetic learning style. Six subjects were taken in order to obtain more in-depth data about the process of students' mathematical connections in solving math problems based on learning styles.
Data Collection
The data needed in this study were obtained using three data collection techniques, namely questionnaires, tests, and interviews. The questionnaire used was a learning style questionnaire, aimed at obtaining data about students' learning styles so that the researcher could determine the research subjects. The test conducted by the researcher aims to determine the process of students' mathematical connection in solving math problems in terms of their learning styles. The test given to students consists of description (essay) questions intended to capture students' written answers, and was given after the subjects were determined. The interview used by the researcher was a guided free interview with sub-questions related to the mathematical connection test (Arikunto, 2014). This interview aims to strengthen information about students' mathematical connection process from the results of the mathematical connection tests.
Data Credibility
The credibility of the data in this study was established through triangulation of sources and methods (Denzin & Yvonna, 2009; Sugiyono, 2015). Triangulation of sources was carried out by examining the data obtained from the sources, namely two students from each of the three learning styles: two students with a visual learning style, two with an auditory learning style, and two with a kinesthetic learning style. Data from the two students of each learning style were described and categorized from the same, different, and specific views. Method triangulation was conducted by comparing the data obtained: the subjects, determined from the learning style questionnaire results, were compared using the results of the mathematical connection test and interviews.
Instruments
There are three instruments used to collect data in this study, namely a learning style questionnaire, a mathematical connection test sheet, and an interview guide. The learning style questionnaire is a closed questionnaire consisting of 42 statements with five answer choices (Budiarti & Jabar, 2016; Maftuh, 2018; Sundayana, 2016). The researcher adapted the questionnaire from those studies because the number of items was sufficient to collect the data and the language used in the statements was easy to understand; a minimal sketch of how such a questionnaire can be scored is given after this paragraph. The test sheet contains mathematical connection questions in description form on the areas of the trapezoid and the triangle and the perimeter of the rectangle. The researcher developed the mathematical connection test questions with reference to the 2014 PISA (Programme for International Student Assessment) model questions (Kogure, 2013), because such questions require students to use their connection skills to solve math problems. The researcher used interview guidelines as a reference for conducting interviews to determine the process of students' mathematical connections in solving mathematical problems.
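The study does not publish its scoring rule; the following is a minimal, hypothetical sketch of how a tagged VAK questionnaire of this kind could be scored, assuming each statement is tagged with one style ('V', 'A', or 'K') and answered on the 1-5 scale, with the dominant style taken as the highest total. The function and data names are illustrative only.

```python
# Hypothetical scorer for a VAK learning-style questionnaire.
# Assumption (not from the study): each item is tagged 'V', 'A', or 'K'
# and answered on a 1-5 scale; the dominant style is the tag with the
# highest summed score.
from collections import defaultdict

def score_learning_style(items):
    """items: list of (tag, answer) pairs, tag in {'V','A','K'}, answer in 1..5."""
    totals = defaultdict(int)
    for tag, answer in items:
        if tag not in ("V", "A", "K") or not 1 <= answer <= 5:
            raise ValueError(f"invalid item: {(tag, answer)}")
        totals[tag] += answer
    best = max(totals, key=totals.get)          # style with the highest total
    ties = [t for t in totals if totals[t] == totals[best]]
    return best if len(ties) == 1 else "ambiguous"

# Example with a shortened response set:
responses = [("V", 5), ("A", 2), ("K", 3), ("V", 4), ("A", 1), ("K", 3)]
print(score_learning_style(responses))  # -> 'V'
```

In the case of a tie, a real instrument would typically break it with follow-up items or an interview, which is consistent with this study's use of interviews alongside the questionnaire.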
Data Analysis
Data analysis used in this research consists of data condensation, data presentation, and conclusion drawing (Sugiyono, 2015). Data condensation refers to the process of selecting important things from the data, summarizing, and removing unnecessary ones. The presentation of the data uses narrative text obtained from the test results, referring to Polya's four problem-solving steps, and from the interviews, to describe the mathematical connection processes of students who have visual, auditory, and kinesthetic learning styles in solving math problems. Drawing conclusions is the final stage of data analysis, and in this study it aims to describe the process of students' mathematical connections in solving the given mathematical problems in terms of student learning styles. The indicators of the mathematical connection process used at each Polya step were as follows:
1 Understand the problem • Write down the information known from the problem by translating it into mathematical language • Write down what is asked in the problem by representing the problem posed
2 Compile a solution plan • Identify known and questioned information relationships with mathematical facts, concepts, and principles • Use relationships of facts, concepts and principles to write the mathematical models or formulas needed to solve problems • Prescribe the operations and mathematical procedures that will be used to solve the problem
3 Carry out the completion plan • Make connections between concepts and procedures for solving problems • Apply the relationship between concepts, procedures, and mathematical operations to solve problems • Use procedures by performing arithmetic operations according to the planned strategy
4 Look back at the completion • Re-check the suitability of facts, concepts, principles, operations, and procedures used in solving problems • Re-check the accuracy of the procedure or steps used • Re-check the accuracy of the calculation results obtained
Findings
As explained in the introduction, the purpose of this study is to describe the process of students' mathematical connections in solving mathematical problems in terms of three learning styles, namely visual learning styles, auditory learning styles, and kinesthetic learning styles.
The Process of Mathematical Connections of Students Who Have Visual Learning Styles in Solving Problems
The process of mathematical connection of students in understanding the problem is to write down the information that is known from the question in the form of a parallelogram image along with the information related to the problem in the question, namely the area of land for oranges and banana land. The connections that students make in the step of understanding the problem lead to connections between ideas in mathematics. In Figure 1, it can be seen that in understanding the problem, students write down the information that is known to the problem in the form of a parallelogram picture along with the length of each side of the parallelogram, as well as a picture of the position of the bounding line and the length. The results of the following interview also show that in understanding the problem, students write down the information that is known to the question in the form of a parallelogram picture along with the length of each side of the parallelogram, as well as a picture of the position of the bounding line and its length. Students also raised the problems that were asked in the questions, namely the area of land for oranges and land for bananas. The process of mathematical connection of students in preparing a settlement plan is by using the Pythagorean formula to first find the length of the base of the right triangle. The connections made in the planning step lead to connections between ideas in mathematics. The process of connecting students in carrying out the settlement plan is by using the triangle area formula to calculate the area of the banana land and the trapezoid formula to calculate the area of land for oranges. The connections made in the step of executing the plan of completion are connections between ideas in mathematics and mathematical connections in everyday life. The mathematical connection process of students who have a visual learning style in checking again is by checking the suitability between the results of the answers from the pictures, the formulas used, and the results of the calculations obtained. The connections made by students who have a visual learning style at the reexamination step lead to connections between ideas in mathematics and mathematical connections in everyday life. The connection between ideas in mathematics can be seen from the use of relationships between facts, concepts, mathematical principles to create models or formulas needed to solve problems. The connection of mathematics in everyday life can be seen from the application of the relationship between concepts and procedures and operations of mathematical calculations to solve problems related to everyday life. The process of mathematical connection of students who have a visual learning style in solving problems is shown in Figure 2.
The Process of Mathematical Connections of Students Who Have an Auditorial Learning Style in Solving Problems
The process of mathematical connection of students in understanding the problem is only by presenting known information, namely the length of the base and the height of the triangle and parallelogram, as well as problems with the questions verbally, because of the difficulty in translating the questions in the form of pictures. The connections that students make in the step of understanding the problem lead to connections between ideas in mathematics. The results of the following interviews show that in understanding the problem, students provide known information, namely the length of the base and the height of the triangle and parallelogram as well as the problems in the questions.
P : Why don't you write down the information that is known and what is asked in the question? A : Because I have a hard time, ma'am, reading long text, let alone drawing. I'm also not used to writing down what is known and what is asked in the question, ma'am. P : Try to state what is asked in the question.
A : The area of land for oranges and land for bananas, ma'am.
The process of mathematical connection of students in preparing a settlement plan is by using the formula for the area of a triangle and the area of a parallelogram. The connections that students make in the step of developing a completion plan lead to connections between ideas in mathematics. The process of mathematical connection of students in implementing the settlement plan is to calculate the area of land for bananas and land for oranges using the formulas for the area of a triangle and the area of a parallelogram by substituting the base length and height. The connections that students make in the step of implementing the completion plan lead to connections between ideas in mathematics and mathematical connections in everyday life. The connection between ideas in mathematics can be seen from the use of relationships between facts, concepts, and mathematical principles to create the models or formulas needed to solve problems. The connection of mathematics in everyday life can be seen from the application of the relationships between concepts, procedures, and operations of mathematical calculations to solve problems related to everyday life. In Figure 3, it can be seen that in implementing the completion plan, students first calculate the area of a right triangle using the triangle area formula, then calculate the area of a parallelogram using the parallelogram area formula; students only substitute the base length and height of the triangle and parallelogram to operate these formulas. The mathematical connection process at the step of checking back cannot be seen, because the students do not re-check the results obtained. The connections made by students who have an auditory learning style are thus carried out in only three problem-solving steps, namely understanding the problem, compiling a resolution plan, and carrying out the solution plan. The process of mathematical connection of students who have an auditory learning style in solving problems is shown in Figure 4.
The Process of Mathematical Connections of Students Who Have Kinesthetic Learning Styles in Solving Problems
The process of mathematical connection of students in understanding the problem is to write down the information known from the question in the form of a parallelogram image along with the information related to the problem in the question, namely the area of land for oranges and banana land. The connections that students make in the step of understanding the problem lead to connections between ideas in mathematics. In Figure 5, it can be seen that the students mentioned the information they knew by drawing Pak Ahmad's land, accompanied by the length of each side of the land, as well as the position of the bounding rope with its length. The following interview results also show that the students understood the problem by drawing Pak Ahmad's land, accompanied by the length of each side of the land, and by drawing the position of the bounding rope with its length. Students also raised the problems asked in the questions. The process of mathematical connection of students in preparing a settlement plan is by using the Pythagorean formula to first find the base length of the right triangle. The connections that students make in the step of developing a completion plan lead to connections between ideas in mathematics. The process of mathematical connection of students in implementing the settlement plan is by calculating the area of a right-angled trapezoid to find the area of land for oranges, by subtracting the area of a right triangle from the area of a parallelogram. The connections that students make in the step of implementing the completion plan lead to connections between ideas in mathematics and mathematical connections in everyday life. The process of mathematical connection of students in rechecking is by checking the suitability between the answer results from the picture, the formula used, and the calculation results obtained. The connections that students make in the reexamination step lead to connections between ideas in mathematics and mathematical connections in everyday life. The connection between ideas in mathematics can be seen from the use of relationships between facts, concepts, and mathematical principles to create the models or formulas needed to solve problems. The connection of mathematics in everyday life can be seen from the application of the relationships between concepts, procedures, and operations of mathematical calculations to solve problems related to everyday life. The process of mathematical connection of students who have a kinesthetic learning style in solving problems is shown in Figure 6.
Discussion
The process of mathematical connection of students who have visual and kinesthetic learning styles in understanding the problem is to write down the information known from the question in the form of a parallelogram image, along with its information relating to the problem, namely the area of land for oranges and banana land. The connection process of students who have an auditory learning style in understanding the problem is only to present the known information, namely the base length and height of the triangle and parallelogram, and the problems in the questions, verbally, because of difficulty translating the questions into pictures. The connections made by students who have visual, auditory, and kinesthetic learning styles lead to connections between ideas in mathematics, namely identifying the relationships between facts, concepts, and mathematical principles in the problem to be solved. There are similarities in the mathematical connection process in understanding problems between students who have visual and kinesthetic learning styles, and there are differences between these students and students who have an auditory learning style. This means that individuals receive, organize, and analyze information through different perspectives, so that students have their own ways of expressing mathematical concepts (Fauzi & Priatna, 2019; Ghufron & Risnawita, 2012).
The process of mathematical connection of students who have visual and kinesthetic learning styles in preparing the completion plan is by using the Pythagorean formula to first find the base length of the right triangle. The process of connecting students who have an auditory learning style in preparing a settlement plan is by using the formula for the area of a triangle and the area of a parallelogram. The connections made by students who have visual, auditory, and kinesthetic learning styles lead to connections between ideas in mathematics, namely using relationships between facts, concepts, and mathematical principles to create the models or formulas needed to solve problems. There is a similarity in the process of mathematical connections in preparing the completion plan between students who have visual and kinesthetic learning styles (Argarini, 2018; Richardo et al., 2014).
The connection process of students who have a visual learning style in carrying out the completion plan is by using the triangle area formula to calculate the area of the banana land and the trapezoid formula to calculate the area of land for oranges. The process of students who have a kinesthetic learning style in carrying out the completion plan is by calculating the area of a right-angled trapezoid to find the area of land for oranges, by subtracting the area of a right triangle from the area of a parallelogram. The mathematical connection process of students who have an auditory learning style in carrying out the settlement plan is by calculating the areas of banana and orange land using the formulas for the area of a triangle and the area of a parallelogram, substituting the base length and height. The connections made by students who have visual, auditory, and kinesthetic learning styles lead to connections between ideas in mathematics and mathematical connections in everyday life. The connection between ideas in mathematics is made by relating one concept to another in solving problems. Mathematical connections in everyday life are made by translating problems related to everyday life into the language of mathematics and applying the relations of concepts to mathematical procedures and operations to solve such problems. There are thus differences in the mathematical connection process in implementing the completion plan between students who have visual, auditory, and kinesthetic learning styles. The mathematical connection process of students who have visual and kinesthetic learning styles in checking again is by checking the suitability between the answer results from the picture, the formula used, and the calculation results obtained. The connections made by students who have visual and kinesthetic learning styles lead to connections between ideas in mathematics and mathematical connections in everyday life. The connection between ideas in mathematics can be seen from the use of relationships between facts, concepts, and mathematical principles to create the models or formulas needed to solve problems. The connection of mathematics in everyday life can be seen from the application of the relationships between concepts, procedures, and operations of mathematical calculations to solve problems related to everyday life. The connection process of students who have an auditory learning style does not occur at this step, because these students do not re-check the results obtained, so the connections they make are also not visible. There is a similarity in the mathematical connection process in re-examining the completion between students who have visual and kinesthetic learning styles, and there is a difference between these students and students who have an auditory learning style, which again reflects differences in individuals' ways of processing information.
Conclusion
From the description that has been explained, it is concluded that students who have visual and kinesthetic learning styles show their mathematical connection process in solving mathematical problems in all four problem-solving steps, namely understanding the problem, compiling a completion plan, executing the solution plan, and re-checking the solution. The connections made by students who have visual and kinesthetic learning styles in solving problems lead to connections between ideas in mathematics and mathematical connections in everyday life, although the details of their connection processes differ. Students who have an auditory learning style show their mathematical connection process in only three problem-solving steps, namely understanding the problem, compiling a resolution plan, and implementing the settlement plan. The mathematical connection process of students who have an auditory learning style in checking back does not occur, because these students do not re-check the results obtained. The connections made by students who have an auditory learning style in these three problem-solving steps lead to connections between ideas in mathematics and mathematical connections in everyday life. Thus, the mathematical connection processes of students who have visual, auditory, and kinesthetic learning styles differ in solving math problems.
Suggestions
The results showed that the learning style influenced the mathematical connection process carried out by students in solving math problems. Thus, the researcher suggests that teachers need to pay attention to the learning styles possessed by students and familiarize students to connect mathematics internally, namely connecting ideas in mathematics and externally, namely connecting mathematics in everyday life. This study is limited to only examining the process of mathematical connections in solving mathematical problems with a review of learning styles. Therefore, in the next research, the researcher expects the addition of other variables related to the mathematical connection process. | 2021-08-03T00:04:07.431Z | 2021-04-05T00:00:00.000 | {
"year": 2021,
"sha1": "6bfae882aea28800f0977e957cfb67470b74310a",
"oa_license": "CCBY",
"oa_url": "https://turcomat.org/index.php/turkbilmat/article/download/2684/2293",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "901ddce15b6c0788655d401d52dcd5828f0027ab",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
225326575 | pes2o/s2orc | v3-fos-license | Triticum dicoccum Schubler wheat: A potential source for wheat bio-fortification program
Malnutrition is a major threat to the world, especially with respect to zinc (Zn) and iron (Fe). Breeding wheat with increased grain Zn and Fe levels is a cost-effective, sustainable solution to malnutrition. Modern wheat varieties have limited variation in grain Zn and Fe. Among the wheat species, T. dicoccum exhibits high micronutrient variability that can be conveniently explored to improve other cultivated wheat species. Hence, the magnitude of variability for grain nutrients was studied in dicoccum wheat germplasm accessions of a local collection in Peninsular India. Grain Zn concentration ranged from 35.2 ppm to 54.0 ppm, while Fe concentration ranged from 33.8 ppm to 48.5 ppm. Wide variability was also recorded for protein content (14.8% to 16.9%) and sedimentation value (22.8 ml to 41.3 ml). Moderate phenotypic and genotypic coefficients of variation were observed for the number of grains per spike, thousand-grain weight, and yellow pigment. Heritability and genetic advance over mean were moderate to low for grain nutrients. In the tested material, there is a possibility of improving both Zn and Fe simultaneously, as indicated by correlation values. Thus, the present study provides valuable genetic resources for the improvement of grain quality parameters, which are associated with the quality of end-products in Indian wheat.
Introduction
Micronutrient malnutrition affects over 40 percent of the world's population, especially in many developing nations (Welch and Graham, 2002) [27]. Around one-third of humans in all age groups and populations, particularly women and children in developing countries, are severely affected by deficiencies of key micronutrients such as iron (Fe), zinc (Zn), and vitamin A (Ghandilyan et al., 2006) [6]. Traditional efforts to address the micronutrient deficiency problem have concentrated on micronutrient supplementation and food fortification (White and Broadley, 2005; Ghandilyan et al., 2006) [29, 6]. These methods, however, have not proved sustainable, especially in developing countries where people cannot afford animal and fishery products with high micronutrient content. Instead, most people in these regions eat cereals as the staple food, which offer only a limited amount of micronutrients. In recent years, a solution to mineral malnutrition called "biofortification" has been suggested (Singh et al., 2005) [21]. Bio-fortification is the process of increasing bioavailable concentrations of essential elements in the edible portions of crops through agronomic intervention or genetic selection (White and Broadley, 2005) [29]. Wheat is an important food crop grown in developed as well as developing nations (Joshi, 2007) [10]. Three species of wheat, Triticum aestivum, Triticum durum, and Triticum dicoccum, are cultivated in the country. Bread wheat is the most important species as it covers more than 90 percent of the wheat-cultivated area. In general, bread wheat exhibits narrow genetic variability for grain nutrients. Tetraploids are considered among the most promising donors for improving Zn and Fe concentrations of wheat (Cakmak et al., 2000) [4]. Durum wheat also has higher protein content than bread wheat, with grain protein representing a sink for Zn and other micronutrients; on the other hand, durum wheat has limited variability. Interestingly, Triticum dicoccum has proved to be a very good source of mineral nutrients (Cakmak et al., 2004) [3]. This natural variation can be utilized to biofortify bread and durum wheat for Zn and Fe. In addition, consumer preference for richness, diversity, and high-quality food products has increased interest in dicoccum wheat and its products (Annapurna, 2000) [1]. Dicoccum products are tastier and softer, and have good baking, parboiling, and popping quality. Dicoccum also possesses higher contents of lysine, crude fiber, and minerals than bread and durum wheat (Zaharieva et al., 2010) [30]. As a consequence of these nutritional characteristics, the demand for dicoccum grain has increased over the last decade. Genetic enhancement of crop cultivars with elevated levels of these micronutrients would be a cost-effective, sustainable way to solve the global micronutrient malnutrition problem. Consequently, under an ICAR-funded project, work was initiated to identify, in the hitherto unexplored local collection of dicoccum germplasm, new genetic sources for grain nutrients. Such information may be of great value in setting the future path for the bio-fortification breeding program of cultivated wheat species.
Materials and methods
2.1 Field Experiment
The present study included 56 pre-tested dicoccum wheat germplasm accessions, of which 34 were local germplasm accessions, 13 were advanced breeding lines, and nine were checks (Table 1). They were evaluated in an alpha lattice design with four blocks and two replications. Each block consisted of 14 genotypes, with two rows per genotype of 3-meter length and a spacing of 20 cm between rows. The experiment was conducted in three distinct environments, namely the All India Coordinated Wheat Improvement Project, University of Agricultural Sciences, Dharwad; the Agricultural Research Station, Kalloli; and Ugar Sugars Ltd., Ugar Khurd, which are characterized by different agro-climatic zones, soil types, nutritional status, and locations, for studying the genetic variability for yield.
Observations recorded
Observations were recorded on yield and yield-attributing traits, viz., days to fifty percent flowering, days to maturity, number of productive tillers per meter row, plant height (cm), spike length (cm), number of grains per spike, and thousand-grain weight (g); quality parameters, viz., protein content (%), yellow pigment (ppm), and sedimentation value (ml); and micronutrients, viz., iron and zinc content (ppm).
Procedure for estimation of quality parameters and micronutrients (Fe and Zn)
2.3.1 Protein content (%):
The protein content of the grain was analyzed by a non-destructive method using a near-infrared transmittance-based protein analyzer.
Micronutrient analysis of grains:
Zn and Fe contents were determined using an Atomic Absorption Spectrophotometer, and the concentrations were expressed in ppm.
Sedimentation value (ml):
The sedimentation value of the grain was estimated by the sodium dodecyl sulfate (SDS) test, following the standard analytical procedure described by Mishra and Gupta (1995) [15].
Yellow pigment (ppm):
The yellow pigment in wheat grain was analyzed by the procedure as described by Mishra and Gupta (1995) [15] .
Estimation of micro and macronutrients in the soil before sowing and after harvesting
The soil from each experimental plot was collected at a depth of 0 to 30 cm at the three locations separately, in separate bags, and 10 g of soil from each treatment was used for estimation of micro- and macronutrients before sowing and after harvest of the crop. The data obtained from the three locations were pooled and subjected to biometrical analysis that included heritability and genetic advance in percent of mean. The genotypic coefficient of variation (GCV), phenotypic coefficient of variation (PCV), broad-sense heritability (h²bs), and genetic advance over mean (GAM) were estimated by the formulas suggested by Burton and De Vane (1953), Johnson et al. (1955), and Hanson et al. (1956) [2, 9, 7]; these are reproduced after this paragraph for reference. The estimates of GCV and PCV were classified as low, medium, and high (Sivasubramanian and Menon, 1973) [22]. Heritability was categorized as suggested by Robinson et al. (1949) [19]. Further, genetic advance in percent of mean was classified by adopting the method of Johnson et al. (1955) [9].
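For reference, the cited estimators in their standard forms are

$$\mathrm{GCV} = \frac{\sqrt{\sigma^{2}_{g}}}{\bar{x}}\times 100, \qquad \mathrm{PCV} = \frac{\sqrt{\sigma^{2}_{p}}}{\bar{x}}\times 100, \qquad h^{2}_{bs} = \frac{\sigma^{2}_{g}}{\sigma^{2}_{p}},$$

$$\mathrm{GA} = k\, h^{2}_{bs}\,\sqrt{\sigma^{2}_{p}}, \qquad \mathrm{GAM} = \frac{\mathrm{GA}}{\bar{x}}\times 100,$$

where σ²g and σ²p are the genotypic and phenotypic variances, x̄ is the trait mean, and k ≈ 2.06 is the selection differential at 5% selection intensity conventionally used with Johnson et al. (1955). These standard forms are given for orientation; the paper itself only cites the sources.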
Results and discussion
The hulled wheat Triticum dicoccum is one of the first cereals to have been domesticated. By the early 20th century, higher-yielding wheat varieties had replaced emmer almost everywhere. Triticum dicoccum represents a very promising genetic source for increasing the concentrations of Zn and Fe in modern wheat cultivars. It is also a rich source of genetic diversity for some agronomically and nutritionally valuable traits, especially amino acids and proteins (Cakmak et al., 2004) [3]. Variability studies are limited in dicoccum wheat (Triticum dicoccum) in general, and for micronutrients in particular. Variability largely determines the effectiveness of selection (Subhashchandra et al., 2009) [23]; higher variation paves the way for crop improvement. Genetic information such as heritability and genetic advance over mean for various quality and yield-contributing traits is of great value in allowing the breeder to use the best genetic stock in the breeding program (Kyosev and Desheva, 2015) [11].
It is interesting to note that the germplasm exhibited wide variation for all the grain nutrients and agronomic traits, indicating the existence of useful genetic variability among the entries studied (Table 2). Grain Zn varied from 35.21 to 54.04 ppm with a mean of 44.9 ppm, while grain Fe content ranged from 33.79 to 48.53 ppm with a mean of 41.12 ppm (Table 3). Around 12 percent of the genotypes recorded higher Zn content than the best check (Fig. 1a), while only 2 percent did so for Fe content (Fig. 1b). With respect to protein content, more than 50 percent of the genotypes were superior to the check (Fig. 1c). Further, the tested material was found to be a potential donor source for sedimentation value, an indirect measure of gluten strength (Fig. 1d), and for yellow pigment (Fig. 1e), the parameter most desired from the consumer-preference point of view. These results suggest that the studied germplasm may be a good source of grain nutrients for improving varieties with good agronomic performance. However, none of the entries had a good agronomic background. The highest-Zn genotypes, such as DDK 50388 and DDK 50344, along with DDK 50366, are being used as potential donors for tetraploid wheat improvement at UAS, Dharwad.
Coefficient of Variation
The range in mean values does not reflect the total variance in the material studied; hence, the actual variance has to be estimated to know the extent of existing variability. Accordingly, the coefficients of variation (PCV and GCV), calculated relative to the respective means, were used for comparison. High values of these parameters indicate wider variability and vice versa. Quality traits such as the micronutrients iron and zinc, sedimentation value, protein content, and yellow pigment, and yield attributes such as spike length, number of productive tillers per meter row length, and grain yield, exhibited low to moderate PCV and low GCV, indicating their limited amenability to selection in advanced generations. This narrow variability is of course due to the selection of pre-tested material for high nutrients in the previous season [16, 14, 8, 25, 26]. The influence of the environment was significant in the expression of micronutrients, protein content, and yield attributes such as spike length, number of grains per spike, and grain yield, as revealed by the wider differences between PCV and GCV. These findings are in line with those of Gashaw et al. (2010) [5], Tsegaye et al. (2012) [25], and Nukasani et al. (2013) [17]. Shimelis et al. (2016) [20] found low GCV values with a large difference in magnitude from the corresponding PCV for protein content in durum wheat, and suggested that the trait is difficult to improve. Overall, the coefficients of variation indicated a moderate amount of variability for most of the traits, with a few exceptions (Table 3).
Heritability and genetic advance over mean
Broad sense heritability gives an idea about the portion of observed variability attributable to genetic differences. According to Johnson et al. (1955) [9] , heritability estimates along with genetic gain would be more useful than the former alone in predicting the effectiveness of selecting the best individuals. Therefore, it is essential to consider the predicted genetic advance over mean along with heritability estimate as a tool in the selection program for better efficiency.
In the current study, high heritability coupled with high genetic advance over mean was recorded for a quality trait, yellow pigment, and a yield attribute, thousand-grain weight. This indicates that there was low environmental influence on the expression of these characters and that these attributes are highly heritable; hence, one can practice selection in early generations. High heritability and genetic advance over mean for these traits were earlier reported by Tsegaye et al. (2012) [25] in durum wheat genotypes and by Mecha et al. (2016) [14] in bread wheat genotypes. High heritability coupled with moderate genetic advance was observed for a quality parameter, sedimentation value, and for yield attributes, namely the number of tillers per meter row length and grains per spike. Similar results were reported for the number of tillers per meter row length by Wahidy et al. (2016) [26] and Mecha et al. (2016) [14], for sedimentation value by Nukasani et al. (2013) [17] and Hokrani et al. (2013) [8], and for grains per spike by Mecha et al. (2016) [14]. However, the results contrast with the observations made by Nazar et al. (2006) [16]; moderate heritability for the number of grains per spike was reported by Tanzeen et al. (2009), and moderate heritability for the number of tillers per meter row length was reported by Manal (2009) [13]. Low to moderate heritability and low to moderate genetic advance were noticed for important quality parameters such as protein content and the micronutrients (iron and zinc content of the grains), and for yield traits such as spike length and grain yield. These results agree with Shimelis et al. (2016) [20]. Therefore, selection based on phenotypic observation alone may not be very effective for these traits. It is worth mentioning here that both Zn and Fe can be improved simultaneously, as revealed by the correlation value (Fig. 2). A strong association of protein with Zn and Fe was also shown by the earlier work of Ortiz-Monasterio et al. (2007) [18], indicating that grain protein may be a sink for Zn and Fe (Table 3).
Conclusions
It is concluded from the present study that the dicoccum germplasm can serve as a highly promising donor source for all the quality parameters, including micronutrients. Further, a few promising accessions can be registered as national genetic stocks or identified as varieties. This kind of study indicates the possibility of exploring the unrealized potential of ancestral species such as Khapli wheat to address the global issue of malnutrition through bio-fortification.
Acknowledgment
Suma S. Biradar gratefully acknowledges the financial support provided by the Crop Project on Biofortification in selected crops for nutritional security (wheat), funded by the Indian Council of Agricultural Research (ICAR), New Delhi. | 2020-10-28T18:56:09.696Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "0b8b40b4fde0f4d670a5f7a8603dfd154892f380",
"oa_license": null,
"oa_url": "https://www.chemijournal.com/archives/2020/vol8issue5/PartT/8-4-509-430.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "6f7da4aca63531a4d8fe89dd8077c855609451b5",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
202719457 | pes2o/s2orc | v3-fos-license | Plasmon-exciton polaritonics shed light on quantum dot dark-state dynamics
The strong coupling of quantum emitters to plasmonic cavities has emerged as an exciting frontier in quantum plasmonics and optics. Here we report an extensive set of measurements of plasmonic cavities hosting one to a few semiconductor quantum dots (QDs). Scattering spectra demonstrate that these devices are at or close to the strong coupling regime. Using Hanbury Brown and Twiss (HBT) interferometry, we demonstrate non-classical emission from the QDs, allowing us to directly determine their number in each device. Surprisingly, PL spectra measured from QDs coupled to the plasmonic devices are narrower than scattering spectra and show smaller values of the apparent Rabi splitting. Using extended Jaynes-Cummings model simulations, we find that the involvement of a dark state of the QDs explains these experimental findings. Indeed, the coupling of the dark state to the plasmonic cavity makes its emission bright enough to appear as a strong separate peak in the PL spectrum. The calculations also show that a slow decay component in the HBT correlation curves can be attributed to the relaxation of the dark state. The coupling of quantum emitters to plasmonic cavities thus emerges as a means to probe and manipulate excited-state dynamics in an unconventional manner and expose complex relaxation pathways.
Introduction
Manipulating and controlling the interaction of photons with individual quantum emitters has been a major goal of quantum photonics in recent years (1-3). Such control can be realized by engineering the local photonic environment of the quantum emitter, e.g. by placing it inside an optical cavity (4). By coupling the excited state(s) of the emitter to the electromagnetic (EM) field of the cavity, one can achieve various exotic light-matter coupled states (1, 2), single-photon emission sources (5,6), photonic switches (7,8) and more.
Furthermore, in recent years it has been shown that strong coupling to electromagnetic modes can be used to modify photophysics and chemical reaction dynamics (9,10).
The ability of an optical cavity to couple to a quantum emitter can be quantified in terms of their coupling rate, g, which depends, among other factors, on the quality factor of the cavity (Q) and the effective volume of its EM mode (V). The coupling rate can be compared with the rates of loss, including the rate of photon escape from the cavity (κ) and the intrinsic emission rate of the quantum emitter (γ). This comparison leads to two interaction regimes. In the weak coupling regime, the spontaneous emission of an emitter gets enhanced by the cavity, but the states of the emitter and the cavity do not change (11,12). In contrast, in the strong coupling regime, these states combine, forming new hybrid states (1,2). These so-called polaritons are separated energetically by essentially twice the coupling rate, manifested in optical spectra as the Rabi splitting. Achieving strong coupling at the limit of a single quantum emitter is essential for the observation of many quantum effects and is of great importance for optical applications such as quantum information processing (13-17) and quantum communication (18,19).
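As a point of reference, one common coupled-mode formulation (an illustrative convention, not necessarily the authors' definition) gives the two polariton energies of a lossy emitter-cavity pair at zero detuning as:

```latex
% Zero-detuning polariton energies of a lossy emitter-cavity pair
E_{\pm} = E_{0} - \frac{i\hbar(\kappa+\gamma)}{4}
          \pm \hbar\sqrt{g^{2} - \left(\frac{\kappa-\gamma}{4}\right)^{2}},
\qquad
\Omega_{R} = 2\hbar\sqrt{g^{2} - \left(\frac{\kappa-\gamma}{4}\right)^{2}}
```

In this convention a real Rabi splitting Ω_R requires g > |κ − γ|/4; in practice the splitting must also exceed the polariton linewidths to be clearly resolved in a spectrum.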
Plasmonic cavities formed by metallic surfaces offer unique possibilities to achieve strong coupling with a single quantum emitter even at room temperature, as they can focus light to deep sub-wavelength regimes (20). Although the Q of plasmonic cavities is relatively low due to the ultrafast relaxation of surface plasmons (21), the mode volume is sufficiently small to reach the strong coupling limit with single emitters. This situation was realized in the last couple of years in our lab and others' (22-25). In particular, we demonstrated strong coupling between individual semiconductor nanocrystals (quantum dots, QDs) and plasmonic bowtie antennas, which we observed as vacuum Rabi splitting in scattering spectra of the coupled systems (22). Two additional recent studies have also demonstrated the strong coupling of individual QDs and plasmonic cavities (23,24).
To further understand the physics of strong coupling in plasmonic devices, we performed an extensive set of measurements of QDs within plasmonic cavities at or close to the strong coupling regime. Using Hanbury Brown and Twiss (HBT) interferometry, we directly demonstrated non-classical emission from one to three QDs within our devices. By comparison of spectra and HBT correlation functions to simulations based on an extended Jaynes-Cummings model, we inferred the unanticipated involvement of a dark state of the QDs in the relaxation dynamics of the coupled nanosystem.
Results
Scattering and Photoluminescence (PL) spectra of individual coupled plasmonic devices. We constructed plasmonic bowties with semiconductor QDs in their gaps. We used electron-beam lithography to fabricate silver bowties on 18 nm SiO2 membranes. CdSe/ZnS quantum dots (QDs) were positioned into the gap region of bowties using interfacial capillary forces (22) (Figure 1a). Scattering spectra of individual QD-bowtie hybrids were measured using dark-field (DF) microspectrometry (22,26), while PL spectra were measured from the same devices following excitation with a CW laser at 532 nm. The splittings observed in the scattering spectra of the two devices shown in Figure 1 were 200 and 230 meV, respectively. Fits of the scattering spectra to a coupled-oscillator model (27,28), presented in Figure S1, provided values for the coupling rate, g, which are 52.6±0.3 and 56.5±0.8 meV, respectively. The splitting was also observed in PL spectra (Figure 1c & e, red) recorded from the same cavities. However, scattering spectra looked significantly and consistently broader than PL spectra, even though it could have been expected that mixing of the QD and bowtie levels would yield similar spectral widths. This discrepancy was also manifested in the values of the splitting between peaks obtained from PL measurements, which were only 100 and 130 meV, respectively.
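Where only the apparent peak splitting is of interest, a simple stand-in for the full coupled-oscillator fit is a two-Lorentzian fit. The sketch below uses synthetic data and illustrative peak parameters; it is not the model of refs. 27 and 28:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_lorentzians(E, A1, E1, w1, A2, E2, w2):
    # Sum of two Lorentzian peaks; E0 is the center and w the FWHM of each peak.
    lor = lambda A, E0, w: A * (w / 2) ** 2 / ((E - E0) ** 2 + (w / 2) ** 2)
    return lor(A1, E1, w1) + lor(A2, E2, w2)

# Synthetic stand-in for a measured scattering or PL spectrum (energies in eV).
rng = np.random.default_rng(0)
energies = np.linspace(1.8, 2.4, 300)
intensity = two_lorentzians(energies, 1.0, 2.00, 0.12, 0.8, 2.20, 0.12) \
            + 0.02 * rng.normal(size=energies.size)

popt, _ = curve_fit(two_lorentzians, energies, intensity,
                    p0=[1, 2.0, 0.1, 1, 2.2, 0.1])
print(f"apparent splitting: {abs(popt[4] - popt[1]) * 1000:.0f} meV")
```

The same routine applied separately to scattering and PL spectra would yield the two splitting values compared in the text.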
Overall, we measured scattering and PL spectra from 23 bowtie cavities loaded with QDs.
(Additional spectra are shown in Figure S2.) Analysis of the whole ensemble of spectra is shown in Figure 2. Histograms of the splitting values obtained from scattering and PL spectra are shown in Figure 2a and Figure 2b, respectively. In DF scattering spectra, we observed splitting values (Ω_DF) as high as 350 meV, while in PL spectra the maximal splitting (Ω_PL) observed was 160 meV. A correlation plot of Ω_PL versus Ω_DF is shown in Figure 2c. It is evident that the correlation is very weak, suggesting that Ω_PL does not depend on various parameters like bowtie gap size, number of QDs etc., to the same extent as Ω_DF.
HBT interferometry. To study the quantum properties of the light emitted by the coupled devices, we turned to HBT interferometry. We first measured the second-order photon correlation curves (g²(t)) of light emitted from individual QDs on a glass substrate. An example of such a correlation curve is shown in Figure 3a. (Additional examples are presented in the SI, Figure S3.) The antibunching observed in the correlation curve at zero delay, with a value lower than 0.5, verifies that the measurement is indeed from a single QD.
However, in some cases the number of QDs within the laser spot was larger than one. We therefore fitted the measured correlation curves with Eq. 1, in order to obtain both the lifetime of the emitting exciton and the number of QDs. Correlation curves measured from QDs coupled to the plasmonic devices are presented in Figure S4, and Figure S4d shows the distribution of lifetimes obtained from fits to the correlation curves, ranging from 3 ns to 12 ns. Surprisingly, there seems to be only a minor shortening of the lifetimes (by a factor of ~5) compared to QDs on glass. To verify this result, we also performed direct time-resolved PL measurements of several devices, the results of which are shown in Figure S5.
The lifetimes extracted from these measurements were also shortened by just a factor of ~5 from the lifetimes of bare QDs. This finding is highly unexpected, as the mixing of the QD exciton with the plasmon in the cavity should have opened a fast relaxation channel with a lifetime closer to that of the plasmon (29). A recent study of an ensemble of QDs deposited on a plasmonic hole array also reported only a modest shortening of the excited-state lifetime (30). Interestingly, Ebbesen and coworkers found a similar deviation from the expected shortening of the PL lifetime in a different system consisting of molecules coupled to a microcavity (31).
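Eq. 1 itself is not reproduced in this excerpt. A commonly used functional form for N identical, independent emitters is g²(t) = 1 − (1/N)·exp(−|t|/τ), and the hedged sketch below (with synthetic data) shows how τ and N would be extracted under that assumption:

```python
import numpy as np
from scipy.optimize import curve_fit

def g2_model(t, N, tau):
    # Antibunching dip for N identical independent emitters; t and tau in ns.
    return 1.0 - (1.0 / N) * np.exp(-np.abs(t) / tau)

# Synthetic HBT histogram standing in for a measured correlation curve.
t = np.linspace(-50, 50, 201)
g2_data = g2_model(t, 1.3, 8.0) + 0.03 * np.random.default_rng(1).normal(size=t.size)

(N_fit, tau_fit), _ = curve_fit(g2_model, t, g2_data, p0=[1.0, 5.0])
print(f"N ≈ {N_fit:.1f} emitters, exciton lifetime ≈ {tau_fit:.1f} ns")
```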
Jaynes-Cummings simulations illuminate the experimental observations. Three surprising observations emerge from the experiments reported above. First, PL spectra of QDs coupled to the plasmonic cavities are narrower than scattering spectra. Second, the splitting between peaks observed in PL spectra is only weakly (if at all) correlated with the splitting in scattering spectra. Finally, the PL lifetime seems to be only mildly shortened compared to that of QDs on glass. All three observations deviate from expectations for strongly or nearly-strongly coupled QD-plasmonic devices. Indeed, the formation of polaritonic states due to coupling should lead to scattering and PL spectra of similar width, to similar values of splitting seen in both spectra, and to PL lifetimes on the femtosecond time scale, close to the ultrafast decay times of the plasmonic cavities.
As a way to reconcile the experimental observations with our understanding of the physics of strong coupling, we hypothesized that a dark state of the QDs might be involved in the observed excited state dynamics of the coupled systems. Long-lived dark excitonic states of different origins have been demonstrated in QDs (32,33). We assumed that the weak coupling of such a dark state to the plasmonic cavity might significantly alter the dynamics of the system and the PL spectra. To simulate such dynamics and examine their potential effect on the experimental observations, we turned to a quantum mechanical framework based on an extended Jaynes-Cummings Hamiltonian, with Lindblad terms to introduce incoherent pumping (for the PL spectra) and relaxation channels. The quantum emitter was modeled as an electronic system composed of three levels: a ground state, a level with a large decay rate, mimicking the lowest bright excitonic state, and another level, positioned slightly lower in energy and possessing a much smaller decay rate, mimicking the dark state. A scheme of the plasmon and quantum emitter energy levels used in this model is shown in Figure 4a, and the relevant parameters are given in Table 1. From dynamic simulations based on this model, we calculated scattering and PL spectra as well as second-order photon correlation functions. A representative set of spectra and the associated correlation function are shown in Figure 4.
Importantly, simulated second-order photon correlation curves (Figures 4c and S6) were found to decay with two distinct lifetimes, a very fast one, on the femtosecond time scale, and a much slower one, on the nanosecond time scale. The fast lifetime is connected with the strong coupling regime, due to the involvement of the plasmonic decay channels, but is too short to be observed in our experiments. Hence, only the long-time component of the correlation curve is registered experimentally, and it can be attributed to the decay of the dark state into two possible channels. The first decay channel is due to population transfer to the bright excitonic mode of the QD, from which fast emission brings the system back to the ground state. The second decay channel involves enhanced emission due to weak coupling of the dark state to the plasmonic mode.
The calculated PL spectra are significantly narrower than the scattering spectra and show a reduced spectral splitting between emission peaks, in qualitative agreement with the experimental observations. The PL spectrum is indeed composed of two peaks: the high-energy peak is a consequence of the emission from the bright exciton (strongly coupled to the plasmon), while the narrower low-energy peak is due to emission from the dark exciton (weakly coupled to the plasmon). This peak assignment was deduced from the emission spectra by exploring different coupling regimes for the excited states involved in our model.
The spectral feature associated with the dark-exciton peak, for instance, completely disappeared when we set the coupling between the cavity and the dark exciton equal to zero (as shown in Figure S7).
The theoretical model thus clarifies the origin of the surprising observations in our measurements, attributing them to the involvement of a dark state in the excited-state dynamics. Indeed, while a minority of the experimental data sets showed spectral features that somewhat departed from the main trend, likely due to the intrinsic variability of the fabricated systems, most data sets were well explained by the model.
Discussion and conclusion
We reported here a vast set of measurements of QDs embedded within plasmonic cavities, which allowed us to expose unique excited-state dynamics involving both polaritonic states and dark states. Coupling values of ~75 meV were deduced from light-scattering spectra by fitting to a coupled-oscillator model (see histogram of g values in Figure S8). Based on two common conditions for strong coupling (34) (see Supplementary Text), our devices were found to be either at the strong coupling limit or close to it. In addition to scattering, we also obtained the PL spectrum of each device, and for most devices we also recorded the second-order photon correlation function.

Theoretical and experimental studies of QDs have revealed different types of dark states.
Exchange interactions lead to the splitting of the band-edge exciton with the appearance of a dark state as the lowest energy level and a bright state above it (32). Experimental work provided direct evidence for this splitting and showed that the dark and bright states are separated by less than 10 meV (35). This energy difference is too small to account for our observations. On the other hand, the occurrence of trapped surface states whose transitions are significantly red shifted compared to the bright exciton (33) can account for the hierarchy of energies used in our model. Therefore, it is reasonable to suggest that the low-energy narrow emission line in our PL spectra is due to a dark state that gets enhanced significantly through interaction with the plasmonic cavity. The theoretical simulations support this assertion; the interaction of the dark state with the plasmonic cavity is shown to enhance the dark state emission by at least a factor of ~1000.
Our findings, based on joint experimental and theoretical observations, demonstrate that it is possible to obtain strong coupling between one or just a few QDs and a plasmonic cavity at room temperature. Further, we find that coupling to the plasmonic cavity leads to unexpectedly rich excited-state dynamics and can allow a dark state to become bright enough to be readily observed in both PL spectra and second-order photon correlation curves. Our results pave the way for the manipulation of excitations within room-temperature strongly coupled devices, a necessary step for future applications such as the construction of quantum devices operating under ambient conditions and the modulation of chemical reactivity at the single-molecule level.
Methods
Fabrication of silver bowties. SiN grids (TEM windows) were cleaned with plasma (O2 ~3.5 sccm and Ar ~1.5 sccm) at 150 W. The cleaned grids were spin-coated with polymethylmethacrylate (PMMA) at 4000 r.p.m for 45 seconds to get a 60 nm thick layer of the polymer, followed by baking at 180 °C for 90 seconds. The PMMA-coated grids were then transferred to a Raith E_line Plus electron beam lithography chamber for electron beam exposure of PMMA in a series of pre-defined bowtie shapes, using an accelerating voltage of 30 kV and a current of 30 pA. The overall design of each fabricated grid involved matrices of bowties that were separated by 10 µm from each other to avoid any potential interaction between them. Each bowtie was composed of two 80 nm equilateral triangles, so that its plasmon resonance overlapped with the QD emission frequency (see Figure S9 for the scattering and PL spectra of an empty bowtie and a QD). The exposed PMMA was developed in a solution containing methyl isobutyl ketone and isopropyl alcohol (IPA) in a 1:3 ratio for 30 seconds, followed by dipping in isopropyl alcohol (stopper) for 30 seconds and drying in a N2 gas flow. Subsequently, 3 nm chromium was deposited as an adhesion layer, followed by evaporation of a 20 nm silver layer within an electron-beam evaporator (Odem Scientific Applications). Following metal deposition, a liftoff process was carried out using a REMOVER PG solvent stripper to obtain a set of silver bowties on the SiN grid.
Incorporation of QDs into the gap regions of bowties.
The resist ZEP (a 1:1 copolymer of α-chloromethacrylate and α-methylstyrene) was spin-coated on the bowtie sample at 3000 r.p.m. for 45 seconds, and the sample was then baked for 180 seconds at 180 °C. By using alignment marks, the electron beam was positioned at the bowtie gaps with an overlay accuracy of a few nm to generate holes in the resist. The exposed regions were developed in amyl acetate and isopropanol. In order to drive QDs into the holes, we followed a method developed by Alivisatos and colleagues (36). The sample was placed vertically in an aqueous solution of QDs, and the solvent was allowed to evaporate slowly, exerting a capillary force along the receding line of contact, which drove the QDs into the holes. The number of QDs in the gap region could be partly controlled to be one, two or many by tuning the concentration of the QD solution and the diameter of the holes. A schematic of the bowtie fabrication and QD trapping process is shown in Figure 1a.

Dark-field and PL microspectrometry. Scattering spectra were measured using a home-built setup based on an inverted microscope and equipped with a 75 W Xenon lamp (Olympus), a dark-field condenser, a 100x oil immersion objective of a tunable numerical aperture (from 0.9 to 1.3), a 150 mm spectrograph (SpectraPro-150, Acton) and an air-cooled CCD camera (Newton, Andor Technologies). An NA of 0.9 was typically used in these experiments. Photoluminescence measurements were performed on the same setup using an NA of 1.3. The excitation source was a 532 nm laser, whose polarization was selected to be parallel to the long axes of the bowties. All the spectra were smoothed with a Savitzky-Golay filter.
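For completeness, the smoothing step can be reproduced with SciPy's Savitzky-Golay filter; the window length and polynomial order below are illustrative guesses, since the paper does not report its parameters:

```python
import numpy as np
from scipy.signal import savgol_filter

raw_spectrum = np.loadtxt("spectrum.txt")  # placeholder: one measured spectrum
smoothed = savgol_filter(raw_spectrum, window_length=11, polyorder=3)
```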
Time-resolved PL measurements.
Theoretical model. The QD is coupled to a single mode of a plasmonic cavity of energy ℏω_pl, described via the Hamiltonian H_pl:

H_pl = ℏω_pl a†a,

where a (a†) is a bosonic annihilation (creation) operator. The plasmon-exciton coupling is included via the Jaynes-Cummings coupling term:

H_pl−QD = ℏg_B (a†|g〉〈e_B| + a|e_B〉〈g|) + ℏg_D (a†|g〉〈e_D| + a|e_D〉〈g|),

where g_B (g_D) is the Jaynes-Cummings constant coupling the bright (dark) level to the plasmon.
The total Hamiltonian, H, of the system thus becomes

H = H_QD + H_pl + H_pl−QD.

To obtain the observables of the system, we solve the Liouville-von Neumann equation for the system's density matrix, ρ:

dρ/dt = −(i/ℏ)[H, ρ] + Σ_O γ_O ℒ_O[ρ],

which includes incoherent Lindblad operators, added to account for losses and pure dephasing. These operators take the following form:

ℒ_O[ρ] = O ρ O† − ½ (O†O ρ + ρ O†O),

where O is a generic system operator to be specified, and † stands for Hermitian conjugate.
In particular, we add Lindblad terms describing the plasmon loss, κ ℒ_a[ρ], the decay of the bright and dark excitons, γ_B ℒ_{|g〉〈e_B|}[ρ] and γ_D ℒ_{|g〉〈e_D|}[ρ], and pure dephasing of the excited states. Furthermore, we assume that the process of population transfer from the dark state to the bright state (and vice versa) is thermally activated and hence we get

γ_B→D = γ⁰_DB [n_th(ε; T) + 1],  γ_D→B = γ⁰_DB n_th(ε; T),

where n_th(ε; T) is the Bose-Einstein distribution at temperature T (we assume T = 300 K) and energy ε, and γ⁰_DB is the spontaneous decay rate of the bright state, |e_B〉, into the dark state, |e_D〉.
We assume that the bright state, |e_B〉, as well as the dark state, |e_D〉, are incoherently pumped via terms γ_Bg ℒ_{|e_B〉〈g|}[ρ] and γ_Dg ℒ_{|e_D〉〈g|}[ρ], respectively. This accounts for pumping of the QD's states via another higher-energy bright state that is directly excited by an incident monochromatic laser.
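To make the structure of this model concrete, the following is a minimal, self-contained sketch in Python/QuTiP. All parameter values are illustrative placeholders rather than the values of Table 1, pure dephasing is omitted for brevity, and the operator names are ours:

```python
import numpy as np
from qutip import basis, destroy, qeye, tensor, steadystate, expect

Nf = 4                                    # plasmon Fock-space cutoff
a = tensor(destroy(Nf), qeye(3))          # plasmon annihilation operator

g_ket, eB, eD = basis(3, 0), basis(3, 1), basis(3, 2)   # |g>, |e_B>, |e_D>
sB = tensor(qeye(Nf), g_ket * eB.dag())   # |g><e_B|
sD = tensor(qeye(Nf), g_ket * eD.dag())   # |g><e_D|
sDB = tensor(qeye(Nf), eD * eB.dag())     # |e_D><e_B| (bright-to-dark transfer)

w_pl, w_B, w_D = 2.00, 2.00, 1.85         # mode/level energies in eV (hbar = 1)
gB, gD = 0.060, 0.010                     # bright/dark Jaynes-Cummings couplings

H = (w_pl * a.dag() * a + w_B * sB.dag() * sB + w_D * sD.dag() * sD
     + gB * (a.dag() * sB + a * sB.dag())
     + gD * (a.dag() * sD + a * sD.dag()))

kappa, gam_B, gam_D, pump, g0 = 0.15, 1e-6, 1e-9, 1e-7, 1e-6
n_th = 1.0 / np.expm1((w_B - w_D) / 0.0259)      # Bose-Einstein factor, T = 300 K

c_ops = [np.sqrt(kappa) * a,                     # plasmon loss
         np.sqrt(gam_B) * sB,                    # bright-exciton decay
         np.sqrt(gam_D) * sD,                    # dark-exciton decay
         np.sqrt(g0 * (n_th + 1)) * sDB,         # thermal bright -> dark transfer
         np.sqrt(g0 * n_th) * sDB.dag(),         # thermal dark -> bright transfer
         np.sqrt(pump) * sB.dag(),               # incoherent pumping of |e_B>
         np.sqrt(pump) * sD.dag()]               # incoherent pumping of |e_D>

rho_ss = steadystate(H, c_ops)                   # steady state under pumping
print("steady-state plasmon occupation:", expect(a.dag() * a, rho_ss))
```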
Spectra: We calculate the absorption, scattering, and emission spectra from two-time correlation functions of the plasmonic mode operator, evaluated via the quantum regression theorem and valid close to the plasmonic resonance; for instance, the emission spectrum is S_em(ω) ∝ Re ∫₀^∞ 〈a†(τ) a(0)〉 e^{iωτ} dτ. Here we assume that the system absorbs, scatters, and emits light predominantly via the plasmonic cavity and neglect any direct absorption, scattering, or emission of the quantum dot.
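Under that cavity-mediated-emission assumption, the emission spectrum follows directly from QuTiP's built-in correlation-spectrum routine, continuing from the model sketch above (H, c_ops, and a as defined there):

```python
# Continuation of the model sketch above: emission spectrum via the quantum
# regression theorem, i.e. the Fourier transform of <a†(tau) a(0)>.
from qutip import spectrum

wlist = np.linspace(1.6, 2.4, 800)            # probe energies (eV)
S_em = spectrum(H, wlist, c_ops, a.dag(), a)  # ~ PL lineshape of the device
```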
Second-order correlation function:
The second-order photon correlation function g²(τ) is evaluated in the framework of cavity quantum electrodynamics from the QRT as

g²(τ) = 〈a†(0) a†(τ) a(τ) a(0)〉 / 〈a†a〉²,

with the expectation values taken in the steady state.

Fig. 4c (caption): A simulated g²(t) features a double-exponential decay. Inset shows a zoom of the fast (fs) decay of the system excitations that is not resolved on the ns time scale.
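Numerically, the same correlator can be evaluated with QuTiP's quantum-regression-theorem tools, again continuing from the model sketch above (the tau axis carries the inverse-energy units set by the Hamiltonian):

```python
# Continuation of the model sketch above: normalized g2(tau) of the cavity field.
from qutip import correlation_3op_1t

taulist = np.linspace(0.0, 200.0, 400)
G2 = correlation_3op_1t(H, None, taulist, c_ops,      # None -> use steady state
                        a.dag(), a.dag() * a, a)      # <a†(0) a†(t)a(t) a(0)>
g2 = np.real(G2) / expect(a.dag() * a, rho_ss) ** 2
```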
Supplementary Information
Supplementary Figures

Fig. S1: Coupled-oscillator model fits of scattering spectra: STEM images (panels a&c) and dark-field scattering spectra (panels b&d) of two bowties containing QDs. The spectra were fitted with a coupled-oscillator model, where ωe and γe are the emitter resonance frequency and decay rate, respectively, ωp and γp are the plasmon frequency and plasmon decay rate, respectively, and g is the coupling rate.
The obtained values of g are 52.6±0.3 and 56.5±0.8 meV for panels b and d, respectively; the same model was used for the fits in Figure S2.

Fig. S7 (caption excerpt): Simulated PL spectra with and without coupling of the dark exciton to the plasmonic cavity. In the latter case the PL spectrum features one broadened asymmetric peak arising from the onset of plasmon-bright-exciton strong coupling. The remaining model parameters used to generate the spectra in (a,b) are summarized in Table 1 of the main text. | 2019-09-23T09:02:14.000Z | 2019-09-23T00:00:00.000 | {
"year": 2019,
"sha1": "e49a5e1100f0afb612043333d6edda982f59b22b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e49a5e1100f0afb612043333d6edda982f59b22b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
14764598 | pes2o/s2orc | v3-fos-license | Use of Chinese Herb Medicine in Cancer Patients: A Survey in Southwestern China
Chinese herb medicine (CHM) is the most commonly reported traditional Chinese medicine (TCM) modality. This study aimed to assess the prevalence and associated factors of CHM use in cancer patients in southwestern China. Cancer patients from eleven comprehensive cancer centers were asked to complete a structured questionnaire. Of 587 available replies, 53.0% used CHM. Multiple logistic regression analysis showed that educational level, stage of disease, duration of cancer since diagnosis, marital status, and previous use of CHM were strongly associated with CHM use after cancer diagnosis. The sources of information about CHM were mainly media and friends/family. CHM products were used without any consultation with a TCM practitioner by 67.5% of users. The majority used CHM to improve their physical and emotional well-being and to reduce cancer therapy-induced toxicities. About 4.5% of patients reported side effects of CHM. This survey revealed a high prevalence of CHM use among cancer patients. However, these patients did not get sufficient consultation about the indications and contraindications of these drugs. It is imperative for oncologists to communicate with their cancer patients about the usage of CHM so as to avoid the potential side effects.
Introduction
Complementary and alternative medicine (CAM) is becoming more and more popular all over the world. The use of CAM has increased steadily over the past two decades, and undoubtedly it has gained medical, economic, and sociological importance [1]. Traditional Chinese medicine (TCM) is an available option in many cancer centers in Asia [2], Western countries [3], and Africa [4]. TCM has been practiced in China for more than 2,000 years, and Chinese herb medicine (CHM) is the most commonly used category of TCM [5,6]. It is based on the Chinese philosophy of Yin-Yang and the Five Elements [7,8]. It emphasizes holistic principles and harmony with the universe. The basic theories of TCM include the five zang organs and six fu organs, qi (vital energy), blood, and meridians [9-11]. The introduction of Western medicine in the 17th century brought about significant changes in the development of TCM, and Western medicine started to dominate the market. Currently, Western medicine and TCM are the two mainstream medical practices in China. Generally speaking, medical doctors in urban areas are more likely to use Western medicine, while TCM is practiced mainly in rural areas.
Cancer is a major disease in China with a great social and economic burden [12]. It is the leading cause of death in urban China and the second leading cause in rural China [13]. TCM and Western medicine differ in their etiological concepts of and therapeutic approaches to cancer. According to Chinese medicine theory, cancer is the manifestation of a qi disturbance, which may be treated by mobilizing qi. In Western medicine, cancer is defined as uncontrolled growth of malignant cells, which may be treated with surgery, chemotherapy, and radiotherapy. Although there is still much debate about the efficacy of CHM, more and more data have demonstrated that CHM has the potential to improve tumor response to chemotherapy as well as patients' survival rates [14-16]. Chinese herbs have an anticancer effect by inducing apoptosis of cancer cells, enhancing the immune system, inducing cell differentiation, and inhibiting telomerase activities and growth of tumors [17,18]. In addition, a growing body of research has indicated that Chinese herbs might reduce the toxicities of adjuvant therapies [19,20]. In light of the aging population and the ever-increasing incidence of cancer, it is of great importance to investigate CHM use in cancer patients.
Information on contemporary CHM use, attitudes, and beliefs is valuable for clinicians, decision makers, and patient educators who must respond to the growing interest among patients, particularly in comprehensive cancer centers. However, there have been no data of this sort for southwestern China. Cancer patients in southwestern China are generally representative of cancer patients in the underdeveloped areas of China, and Chinese people living elsewhere in the world often regard CHM as a CAM therapy of choice for many medical conditions. Research on the use of CAM, particularly the use of CHM among Chinese cancer patients, will not only provide insight into current CHM use among cancer patients in China; the respective data could also serve as a source of information for future empirical and clinical studies. Therefore, we conducted this survey on the prevalence, influencing factors, reasons, sources of information, and side effects of CHM use in southwestern China.
Participants and Settings.
A descriptive survey design was used to collect data through a questionnaire about CHM therapies. All directors of comprehensive cancer centers in southwestern China were approached for possible collaboration. Eleven agreed to participate in the survey while 2 refused. Data were collected in the outpatient clinics of the 11 comprehensive cancer centers from June 2010 to August 2010. Both metastatic and nonmetastatic cancer patients who were at least 18 years of age were approached for possible inclusion in the study. Based on his/her interest and/or experience in CAM, a responsible physician was selected from each of these collaborating centers, who was called an investigator. Each investigator was responsible for introducing the study to all potential participants and then determining their eligibility for inclusion. As part of the consent process, patients were informed that they could withdraw from the study at any time and skip any survey question. To increase accuracy, patients recorded their responses directly onto the questionnaire. Questionnaires were returned to an investigator and coded with a unique identification number to ensure confidentiality. The investigator attempted to contact every patient in the clinic. They maintained a daily record of the accrual process, including the number of patients who could not be screened for eligibility because of the busy clinic environment and of those screened, and the reasons for ineligibility and nonparticipation.
Questionnaire.
In the present study, we used a modified version of the questionnaire developed by Swisher et al. [21]. The topics in this questionnaire covered the prevalence, influencing factors, reasons, sources of information, and side effects of CHM use. After a draft questionnaire was prepared, the questionnaire was reviewed by both Chinese medicine experts (N = 5) and Western medicine practitioners (N = 5), and then a pilot test was performed in a small group of patients in the outpatient clinic of the Department of Abdominal Cancer, West China Hospital, Sichuan University, after which the questionnaire was finalized. The original questionnaire we used was written in Chinese. The questionnaire attached in the supplementary material available online at doi: 10.1155/2012/769042 has been translated into English. The final questionnaire consisted of two parts: the first part on the demographic information of participants (age, gender, educational level, household income, marital status, and religion); and the second part on participants' clinical condition and use of CHM (activity of daily life, duration of cancer since diagnosis and stage of the cancer, methods of obtaining information about CHM, previous history of CHM use before cancer diagnosis, reasons for using CHM, and whether they used CHM after consulting a Chinese medical practitioner, as well as CHM's adverse effects). CHM users were defined as patients who had used CHM at least once since they were diagnosed with cancer, and those who never used CHM after cancer diagnosis were considered as nonusers. CHM included raw herbal medicine (zhong yao cai), sliced herbal medicine (zhong yao yin pian), and patent medicine (zhong cheng yao).
Data Analysis.
Differences in demographic and clinical characteristics between CHM users and nonusers were assessed using the χ² test. Factors associated with CHM use were identified via multiple logistic regression analysis (P < 0.05). The analysis provided an odds ratio and a 95% CI for each variable while simultaneously controlling for the effects of other variables. Data were analyzed with SPSS 13.0 software.
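For illustration, the odds-ratio computation described here can be reproduced along the following lines; the variable names and synthetic data are ours and do not reflect the survey's actual coding (the original analysis used SPSS 13.0):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for the survey data: one row per patient.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "chm_use":   rng.integers(0, 2, 200),   # 1 = CHM user, 0 = nonuser
    "education": rng.integers(0, 3, 200),   # ordinal education level
    "stage":     rng.integers(1, 5, 200),   # disease stage
    "married":   rng.integers(0, 2, 200),   # marital status
})

X = sm.add_constant(df[["education", "stage", "married"]])
fit = sm.Logit(df["chm_use"], X).fit(disp=0)

# Odds ratios with 95% confidence intervals, as reported in the paper.
or_table = pd.DataFrame({"OR": np.exp(fit.params),
                         "CI_low": np.exp(fit.conf_int()[0]),
                         "CI_high": np.exp(fit.conf_int()[1])})
print(or_table)
```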
Demographic and Clinical Characteristics of Study Participants.
A total of 1,835 patients attended the clinics of the 11 cancer centers during the study period; however, 591 patients left the clinic before being invited to participate or screened by the investigators due to the busy clinic environment. Therefore, a total of 1,244 patients were screened for eligibility by the investigators, and 902 (72.5%) were eligible for participation. Of the 342 patients who were ineligible, reasons for exclusion included the following: without a cancer diagnosis (n = 166), younger than 18 years of age (n = 58), or unable to participate because of medical problems (n = 118).
Of the 902 eligible patients, 638 patients consented to participate, of whom 51 were later excluded because they did not complete (n = 22) or return (n = 29) the questionnaire. Finally, a total of 587 patients consisting of 355 (60.5%) male and 232 (39.5%) female patients completed the survey, with a response rate of 92.0% (587/638). Their mean age was 55.68 years; most participants (n = 533; 90.8%) were married, and most earned <24,000 RMB annually (n = 378; 64.4%). Details of the demographic and clinical characteristics of all study participants are summarized in Table 1.
CHM Use.
Current CHM use was reported by 53.0% (n = 311) of the responders. Comparisons by χ² test showed significant differences between users and nonusers in demographic and clinical characteristics (Table 2). Patients with a higher educational level, a disease duration between 6 and 60 months, or who were married were less likely to use CHM.
Sources of Information about CHM.
Approximately 50.5% of patients used CHM based on the recommendations from media (e.g., TV programs, newspapers, internet, radio), followed by family members or friends (48.4%). Only 32.5% of patients obtained information about CHM from TCM practitioners. Other sources of information (8.6%) included personal knowledge and other patients who had used CHM.
Reasons for Using CHM.
The most commonly reported reason for using CHM among the CHM users was a desire to improve physical and emotional well-being and to reduce cancer therapy-induced toxicities (65.7%). They also used CHM because they wanted more control in the decisions about their medical care (46.2%) and believed that it was nontoxic (38.9%).
Side Effects.
Fourteen patients (4.5%) reported side effects of the CHM therapies they had used. These side effects included gastric upset, nausea, stomachache, and diarrhea. Three experienced serious side effects, including renal failure and cardiac arrhythmias, possibly because they used CHM for a long time without consulting their doctors. Of them, one patient used Long Dan Xie Gan Decoction (zhong cheng yao) for a long time, which might be associated with his renal failure; two used large doses of Chansu (prepared from the skin and venom glands of the toad), which might be related to their cardiac arrhythmias.
Reasons for Not Using CHM.
Patients who did not use CHM were asked to indicate the reason. The majority (n = 229; 83.0%) reported that they were already satisfied with the efficacy of the conventional treatment. Other reasons included lack of information about CHM, inconvenience of CHM use, no such recommendations from their physicians, and inability to pay for CHM.
Discussion
Through the joint efforts of several generations of practitioners in TCM and integrated medicine of oncology, we have made some achievements in TCM cancer treatment, in terms of treatment concepts, methods, and laboratory and clinical research [22,23]. Indeed, previous studies have shown that some CHMs like Scutellaria baicalensis and honokiol, as well as decoctions like Dang-Gui-Bu-Xai-Tang and anticancer number one, have potential anticancer effects [24-27]. However, very few studies have been conducted to investigate CHM use in cancer patients in China. Therefore, we conducted such a survey in southwestern China. Based on the limited data available, the prevalence of CHM use (53.0%) during cancer treatment in southwestern China seems to be comparable to that in other Chinese regions [28,29], but much higher than that in other countries. For example, a survey in Turkey involving 615 adult cancer patients showed that 291 patients (47.3%) had ever used CAM, which consisted mainly of herbal agents (95%) [30]. A survey in Norway including 120 cancer patients revealed that 37% to 38% of patients had used herbs during chemotherapy [31]. However, a survey in England including 1134 cancer patients indicated that only 19.7% of patients had ever used herbs [32]. Natural health products were used by 26.5% of prostate cancer patients in a survey conducted in Canada [33], a rate also much lower than that in China. The different prevalence of herb use between China and other countries may be mainly due to different sociocultural and medical systems among ethnic groups. A study analyzed the types and the prevalence of CAM used by American women with breast cancer in four ethnic groups and found that Chinese-American women with breast cancer favored herbal therapies much more and were less likely to use megavitamins than whites or blacks. The widespread use of CHM in the Chinese population might not be surprising since TCM has been practiced in China for more than 2,000 years, which has led to the so-called "TCM culture". Also, use of CHM therapies may be related to the availability of such therapies in a given geographical setting [34].
Multivariate analysis revealed a close association between CHM use and educational level, stage of disease, duration of cancer since diagnosis, marital status, and previous history of CHM use. In the present study, patients with a higher educational level were less likely to use CHM. However, previous studies reported that a high educational level was a potential predictor of herb medicine use [35-37], which is not consistent with our findings. Due to the success of economic reform in the past 30 years, the concept of westernization is popular among Chinese people with higher education levels. However, the Chinese philosophy of life is deeply ingrained and rooted within the Chinese population. Many Chinese patients' perceptions of health, illnesses, and their treatment options are influenced by traditional Chinese cultural health beliefs, especially among the elderly and those in rural areas, whose educational level is often lower.
Previous studies found that patients with an advanced stage of disease were more prone to use CHM [34,37]. Similar results were obtained in our study. This is possibly because they have little hope from conventional treatments and often experience serious adverse effects of conventional therapies, and so turn to CHM as an additional intervention to improve the quality of their lives. The majority of patients (65.7%) in our study believed that CHM could improve their well-being and ameliorate side effects caused by the cancer itself or its treatment.
Cancer patients obtained information about CHM through a wide variety of sources before they began CHM therapy. The media, including the internet, TV, newspapers, and radio, are the most important source of information. This may be problematic, as some media may exaggerate treatment effects or even provide misleading information about herb medicines [38]. Friends and family members are also important information sources: having experienced the efficacy of CHM themselves, they recommend CHM to cancer patients in their families. It is interesting to note that the role of physicians as a source of information is largely ignored, which is consistent with the findings of previous studies [36,37]. These results perhaps reflect the disapproval of CHM therapies by the medical community, or a lack of information within the medical community about available and effective CHM therapies. Patients should obtain information about CHM from more reputable organizations and be encouraged to receive CHM therapies whose effectiveness and safety have been well validated.
Based on thousands of years of clinical experience, CHM is generally safe when taken under the guidance of a skilled physician. Adverse effects and toxicities usually occur due to a lack of knowledge about CHM. In our study, some patients suffered side effects because they used CHM by themselves without sufficient information about these medicines. Since most sliced herbal medicines and patent medicines are classified as over-the-counter (OTC) drugs in China, many cancer patients take them on their own without consulting a qualified Chinese medical practitioner.
CAM use in cancer patients is increasing throughout the world, and herb medicine comprises an important part of CAM. CHM has been used for thousands of years in China, and abundant experience in the treatment of cancer has accumulated. Moreover, many cancer patients in other countries (e.g., the United States, England, Norway, Australia, Canada, and Thailand) have also begun to use herb medicine [30-32, 37, 39, 40]. Since most of the cancer patients who used herb medicine for their cancer treatment did not receive enough education about the indications and contraindications of these herbs, the therapeutic effects of these drugs may be reduced and many side effects may occur.
Conclusion
This survey revealed a high prevalence of CHM use among cancer patients in southwestern China. However, most patients did not receive sufficient consultation about the indications and side effects of these drugs. Thus, it is imperative that oncologists educate their cancer patients about the potential benefits and side effects of CHM therapies, remind them of the potential risks of self-medicating with CHM, and advise them to use CHM under the supervision of qualified Chinese medical practitioners. | 2018-04-03T00:56:55.315Z | 2012-09-11T00:00:00.000 | {
"year": 2012,
"sha1": "b7fce8d8e003e719c1029854a47bdb7cae1fc73d",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ecam/2012/769042.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bc5ff6c31d14ed5c2d1260b581d9a13bceb2a6fd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
7536658 | pes2o/s2orc | v3-fos-license | DNA Polymerase V Allows Bypass of Toxic Guanine Oxidation Products in Vivo*
Reactive oxygen and nitrogen radicals produced during metabolic processes, such as respiration and inflammation, react with DNA to form many lesions, primarily at guanine sites. Understanding the roles of the polymerases responsible for the processing of these products to mutations could illuminate molecular mechanisms that correlate oxidative stress with cancer. Using M13 viral genomes engineered to contain single DNA lesions and Escherichia coli strains with specific polymerase (pol) knockouts, we show that pol V is required for efficient bypass of structurally diverse, highly mutagenic guanine oxidation products in vivo. We also find that pol IV participates in the bypass of two spiroiminodihydantoin lesions. Furthermore, we report that one lesion, 5-guanidino-4-nitroimidazole, is a substrate for multiple SOS polymerases, whereby pol II is necessary for error-free replication and pol V for error-prone replication past this lesion. The results spotlight a major role for pol V and minor roles for pol II and pol IV in the mechanism of guanine oxidation mutagenesis.
The genome is continually damaged by spontaneously generated and environmental chemical agents. Many of the lesions formed in DNA strongly inhibit replicative polymerases. If repair of these lesions fails or is not fast enough, cells can biochemically adapt to "tolerate" the toxic DNA lesions, as evidenced by replication of the damaged DNA (1). Human cells possess at least 15 DNA polymerases, including 5 that are referred to as translesion synthesis (TLS) polymerases and are specifically involved in the replication of damaged DNA. Four of these enzymes, pol η, pol ι, pol κ, and REV1, belong to the Y family of DNA polymerases, whereas pol ζ belongs to the B family (2). Escherichia coli possesses three polymerases, pol II, pol IV, and pol V, whose expression levels are up-regulated in response to DNA damage. Extensive research on the structure and function of these polymerases and their homologues suggests that these polymerases allow replication to progress in the presence of DNA lesions that are strongly inhibitory to the replicative polymerase, pol III (1). The presence of the three SOS-inducible polymerases allows E. coli to be used as a model system for DNA replication past damage products.
pol II (pol B) (3) is a B family DNA polymerase and has 3′→5′-exonuclease activity. Consequently, this enzyme replicates normal DNA with high fidelity and has an error rate of less than 10⁻⁶ (4). pol II participates in error-free replication restart (5,6) and can also carry out TLS past abasic sites (3,7), 3,N⁴-ethenocytosine (8), and N-(deoxyguanosin-8-yl)-2-acetylaminofluorene (9). pol IV (DinB) (10) and pol V (UmuD′₂C) (11) are members of the Y family of DNA polymerases, which have less sterically restrictive active sites than normal replicative polymerases (12) and lack exonuclease activity (1). The error rates of these polymerases are much higher than those of the other E. coli polymerases and are on the order of 10⁻⁴ to 10⁻⁵ for pol IV (13) and 10⁻³ to 10⁻⁴ for pol V (13,14) in vitro. pol IV does not appear to profoundly affect the chromosomal mutation rate but functions in adaptive mutagenesis by inducing −1 frameshifts (15). pol IV can also participate in TLS past several lesions, including a tetrahydrofuran abasic site, N²-benzo(a)pyrene-dG, and N²-furfuryl-dG (16-19). pol V is the major TLS polymerase (20) and has been shown to bypass lesions such as abasic sites, UV photoproducts of DNA, and bulky adducts (13,19,21-23) and is also involved in untargeted mutagenesis (14).
Because guanine (G) possesses the lowest redox potential of the four DNA nucleobases (24), agents such as peroxynitrite (ONOO⁻) preferentially oxidize G as compared with the other natural nucleobases (25). ONOO⁻ is a powerful oxidizing and nitrating agent (26) that forms from the diffusion-limited reaction of the endogenous radicals nitric oxide (•NO) and superoxide (O₂•⁻) (27). Two types of inflammatory cells secrete these radicals during the immune response, namely activated macrophages (28) and neutrophils (29). Previous studies in our laboratory have shown that several DNA lesions derived from ONOO⁻ oxidation of G are substrates for the SOS response system in E. coli (30,31). These findings strongly suggest involvement of the E. coli polymerases II, IV, and/or V and prompted us to investigate the roles of these polymerases in the bypass of G oxidation products. We present here a systematic examination of the effects of the SOS-inducible DNA polymerases on the bypass and mutagenicity of 7,8-dihydro-8-oxoguanine (8-oxoG), guanidinohydantoin (Gh), spiroiminodihydantoin (Sp) (an in vivo product of DNA oxidation (32)), oxaluric acid (Oa), urea (Ua), and 5-guanidino-4-nitroimidazole (NI), which are formed either by oxidation of G or 8-oxoG as shown in Fig. 1 (reviewed in Ref. 33). The results of these experiments serve to examine the mechanisms at the molecular level for how ONOO⁻ may cause genotoxicity and mutagenesis.
EXPERIMENTAL PROCEDURES
DNA sequences (5′-GAA GAC CTX GGC GTC C-3′, where X is G or a base analogue) containing 8-oxoG, Gh, Sp1, Sp2, Oa, NI, or a tetrahydrofuran abasic site were prepared as described previously (34-37). Sp1 and Sp2 refer to the faster and slower eluting peaks, respectively, during purification by anion-exchange high pressure liquid chromatography (see Supplemental Material) (38). The oligodeoxynucleotide containing Ua was prepared by hydrolysis of the Oa-containing oligodeoxynucleotide (30). Each oligonucleotide was ligated into a single-stranded bacteriophage M13mp7L2 vector as described (39). Details are published as Supplemental Material.
All strains were characterized by PCR and DNA sequencing of the gene regions of interest. E. coli genomes were isolated from 1 ml of an overnight LB culture grown in the presence of 25 μg/ml chloramphenicol (pol IV⁻ and GW8017 strains), 70 μg/ml spectinomycin (pol II⁻ strain), or both chloramphenicol and spectinomycin (pol II⁻/pol IV⁻/pol V⁻ strain). PCR amplification of the E. coli genomic DNA was carried out using Taq DNA polymerase (Invitrogen). The 50-μl PCR mixture for wild-type polB, dinB, and umuDC gene regions and ΔdinB and ΔumuDC gene regions included 25 mM KCl, 10 mM Tris-HCl (pH 8.3), 3.5 mM MgCl₂, 2% Me₂SO, 0.5 μM each primer, 0.05 mM each dNTP, 0.05 unit/μl Taq, and 2 μg of genomic DNA. The amplification cycle consisted of denaturation at 94°C for 0.5 min, annealing at 56°C for 0.5 min, and extension at 72°C for 2 min. After 30 cycles, samples were incubated at 72°C for 10 min and stored at 4°C until further use. The 50-μl PCR mixture for the polBΔ1 gene region included 25 mM KCl, 10 mM Tris-HCl (pH 8.3), 1.5 mM MgCl₂, 2% Me₂SO, 0.5 μM each primer, 0.05 mM each dNTP, 0.05 unit/μl Taq, and 0.5 μg of genomic DNA. The amplification cycle consisted of denaturation at 94°C for 0.5 min, annealing at 56°C for 0.5 min, and extension at 72°C for 3.5 min. After 30 cycles, samples were incubated at 72°C for 10 min and stored at 4°C until further use. The primers 5′-CTG GAA CGA AGT GTA TTA CGG GTT TCG-3′ and 5′-TAT GGT ACT GGA TGG CAA AGC ATT CG-3′ were used for amplification of the polB and polBΔ1 regions. The primers 5′-CAT GGG GAT AAA GTG GTG CAG C-3′ and 5′-CTG GCA CTT AAG AGA TAT CCT GCG G-3′ were used for amplification of the dinB region. The primers 5′-CCA CGT GAG CC AAG ATA AGA GAA CG-3′ and 5′-TAC CTG ATT GTC GCA GTG CTG G-3′ were used for amplification of the umuDC region. The PCR products were analyzed by 1% agarose gel electrophoresis in 1× TBE buffer. To 5 μl of each PCR product solution, 5 μl of H₂O and 2 μl of 6× Ficoll loading buffer (25% Ficoll) were added, and 10 μl were loaded into each well. The gel was run at 130 V for 1:40 h and stained with 1× SYBR Gold (Molecular Probes) in 1× TBE for 1 h.
DNA sequencing was performed using the following primers. For strains containing wild-type polB, 5′-TAT CGA TGC AGC GCA AAT GCC-3′, 5′-CGA TCA TCC GCA CCT TTC TGA TTG-3′, 5′-CAA TCA GAA AGG TGC GGA TGA TCG-3′, and 5′-TAT GGT ACT GGA TGG CAA AGC ATT CG-3′ were used as primers for DNA sequencing. For strains containing polBΔ1, 5′-CTG GAA CGA AGT GTA TTA CGG GTT TCG-3′ was used in place of 5′-TAT CGA TGC AGC GCA AAT GCC-3′. For strains containing wild-type dinB and ΔdinB, the same primers used for PCR were used for DNA sequencing. For strains containing wild-type umuDC and ΔumuDC, 5′-GAA AGT TGG AAC CTC TTA CGT GCC-3′ and the same primers used for PCR were used for DNA sequencing. DNA sequencing was performed by the Massachusetts Institute of Technology Biopolymers Laboratory.
Genotoxicity and mutation analyses were performed using the competitive replication of adduct bypass (31,39) and restriction endonuclease and postlabeling analysis of mutation frequency (43) methods. Details are published as Supplemental Material. The bypass and point mutation data are reported as the average of three independent experiments, and the frameshift mutation data are reported as the average of two independent experiments. DecisionSite 8.1 (Spotfire, Inc.) was used to present the bypass and point mutation data in heat map form. In the bar graphs of figures, error bars represent a 95% confidence interval of the mean.
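As a modern stand-in for that presentation step (DecisionSite 8.1 is a legacy tool), the heat map and 95% confidence intervals could be produced as in the hedged sketch below; the array shapes and random values are placeholders for the measured triplicates:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

# Placeholder data: 7 lesions x 10 replicative environments x 3 replicates.
reps = np.random.default_rng(0).uniform(0, 100, size=(7, 10, 3))
mean = reps.mean(axis=2)
sem = reps.std(axis=2, ddof=1) / np.sqrt(reps.shape[2])
ci95 = sem * stats.t.ppf(0.975, df=reps.shape[2] - 1)   # 95% CI half-widths

fig, ax = plt.subplots()
im = ax.imshow(mean, cmap="viridis", vmin=0, vmax=100)
ax.set_xlabel("replicative environment")
ax.set_ylabel("lesion")
fig.colorbar(im, label="bypass efficiency (%)")
plt.show()

print("largest 95% CI half-width:", ci95.max())  # would set the error bars
```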
RESULTS
To evaluate the role of the SOS-inducible E. coli polymerases in TLS past G oxidation products, M13 viral genomes containing single 8-oxoG, Oa, Gh, Sp1, Sp2, Ua, or NI lesions were constructed and then replicated in wild-type, pol II-deficient, pol IV-deficient, pol V-deficient, and pol II/pol IV/pol V-deficient E. coli strains under uninduced and SOS-induced conditions for a total of 10 cellular experimental systems (Fig. 1). The translesion bypass efficiencies and mutational properties of the lesions were determined using the recently developed competitive replication of adduct bypass (31,39) and restriction endonuclease and postlabeling analysis of mutation frequency methods (43), which precisely and accurately report the average values for thousands of TLS events. The resultant data set consists of over 1500 data points and therefore represents a very comprehensive study of the bypass and mutational properties of G oxidation products. Fig. 2 summarizes the lesion bypass and mutagenesis data in a single heat map, which serves as a convenient framework for viewing the main findings of the work that are described in detail below.
Translesion Bypass Efficiency-In the uninduced wild-type strain, Gh, Sp1, Sp2, Ua, and NI exhibited poor bypass efficiencies but were excellent substrates for the SOS system (Fig. 3). As a control demonstrating a blocking lesion and showing induction of the SOS system, the bypass efficiency of a tetrahydrofuran synthetic abasic site was determined and found to increase from 1.0 to 19%, or about 20-fold. The bypass efficiency of Gh increased about 3-fold from 19 to 65%; Sp1 increased about 4-fold from 16 to 69%; Sp2 increased about 6-fold from 8.2 to 50%; Ua increased about 6-fold from 5.2 to 34%; and NI increased about 8-fold from 4.5 to 38%. These results and our earlier studies (30, 31) rank the relative SOS responsiveness of eight structurally diverse DNA lesions. The bypass efficiencies of 8-oxoG and Oa were identical within experimental error to G (100%) and did not differ among the 10 replicative environments evaluated (Fig. 3 and supplemental Figs. S1-S3).
With the exception of 8-oxoG and Oa, which were not blocks to replication, deletion of pol V produced dramatic changes in the bypass efficiencies of the lesions (with and without SOS induction) (Fig. 3). The bypass efficiencies were markedly lower in pol V-deficient cells. Importantly, the bypass efficiencies for the blocking lesions failed to increase upon induction of the SOS system in the absence of pol V. These results indicate a pivotal role for pol V in the bypass of these oxidative lesions under both uninduced and SOS-induced conditions. In the pol V-deficient strain, Gh was bypassed with an efficiency of about 10% regardless of SOS status. The bypass efficiencies of the Sp diastereomers decreased to less than 1% and of Ua and NI to about 2% upon removal of pol V. Results from the pol II/pol IV/pol V-deficient strain (supplemental Fig. S1) mirrored those from the pol V-deficient strain. Taken together, these results demonstrate that pol V is essential for the SOS-dependent bypass of these G oxidation products. In the case of Gh, removal of all three SOS polymerases caused only about a 50% decrease in bypass efficiency when the SOS system was not induced as compared with wild-type cells, suggesting involvement of an SOS-independent polymerase (pol I or pol III).

FIGURE 1. Formation of G oxidation products and overview of the experimental design for determining TLS properties. A, reaction of ONOO⁻ with G forms the structurally diverse set of oxidation and nitration products studied in this work (R = 2′-deoxyribose within DNA). B, oligonucleotides site-specifically containing G or a DNA lesion were ligated into single-stranded M13 viral genomes. Uninduced or SOS-induced E. coli with the backgrounds given in the figure were transformed with the G- or lesion-containing genomes. Analysis of the progeny phage allowed determination of the lesion bypass efficiency and mutation frequency.
Deletion of pol II or pol IV had no statistically significant effect on the bypass efficiencies of Gh, Sp2, Ua, or NI but halved the bypass efficiency of Sp1 from 16 to 8.2% in uninduced pol II⁻ cells (Table 1 and supplemental Fig. S2) and from 16 to 10% and 69 to 60% in uninduced and SOS-induced pol IV-deficient cells, respectively (Table 1 and supplemental Fig. S3). These results suggest a moderate involvement of pol II and pol IV in the bypass of Sp1.
Mutation Type and Frequency-The mutational signature of each lesion was determined using the restriction endonuclease and post-labeling analysis of mutation frequency method (43). This assay allows both frameshift and point mutations to be quantified.
Frameshift Mutations-In both uninduced and SOS-induced wild-type cells, frameshift mutations were negligible for all lesions (less than 1%; data not shown). Frameshifts (−1 and −2) on the order of several percent were observed upon replication of Sp1, Sp2, and NI in the pol V-deficient and the pol II/pol IV/pol V-deficient strains. A previous study suggested pol V suppresses frameshift mutagenesis in favor of base substitutions (44). NI also induced about 2% −1 frameshifts in the pol II- and pol IV-deficient strains. Further details are published as Supplemental Material. In assessing the significance of the observed frameshift mutations, it is important to consider that these mutations were generated under conditions of very low lesion bypass.
Point Mutations-In wild-type uninduced cells, the 8-oxoG hyperoxidation products Gh, Sp1, Sp2, Oa, and Ua were almost completely miscoding, as shown in Fig. 4; however, the mutational signatures differed within this group as observed previously (30,38). Throughout this section, the percentages refer to the percentage of total bypass events that are mutagenic. Gh bypass induced 37% G → T and 59% G → C mutations (Fig. 4A), whereas Sp1 caused 83% G → T and 15% G → C mutations (Fig. 4B), and Sp2 caused 43% G → T and 54% G → C mutations (Fig. 4C). Oa generated 99% G → T mutations (Fig. 4D), and Ua produced 55% G → T, 35% G → C, and 9% G → A mutations (Fig. 4E). In comparison, replication of 8-oxoG was only 2% mutagenic, and the mutations were essentially all G → T (supplemental Fig. S4). NI was 64% mutagenic and induced 30% G → T, 18% G → C, and 16% G → A mutations (Fig. 4F). Induction of the SOS system had a slight to negligible effect on the mutational signatures of Gh, Sp1, Oa, and Ua. This observation could indicate pol V-dependent bypass either in uninduced conditions or in an SOS-induced subpopulation that exists in the absence of external induction. Interestingly, the mutational signature of Sp2 shifted from 43 to 58% G → T mutations and from 54 to 39% G → C mutations (Fig. 4C). The mutation frequency of NI decreased to 48%, and the mutational signature shifted to fewer G → A mutations (9%) and G → C mutations (5%) (Fig. 4F).
The most striking effect in mutational signatures was observed upon removal of pol V. Again, for the well bypassed lesions 8-oxoG (supplemental Fig. S4) and Oa (Fig. 4D), small effects were observed. The mutational signature of Gh, although unchanged among uninduced strains, shifted from 45% G → T and 50% G → C in SOS-induced wild-type cells to 35% G → T and 64% G → C mutations in SOS-induced pol V-deficient cells (Fig. 4A). This result suggests a preference of pol V to insert A opposite the lesion, thereby inducing G → T mutations. The data further suggest pol V inserts T opposite the lesion to a small extent, because removal of pol V eliminates the G → A mutations. Also, because the bypass efficiency of Gh is unchanged at about 10% in the pol V-deficient strain versus the pol II/pol IV/pol V-deficient strain (supplemental Fig. S3), the data indicate a relatively significant contribution to mutational signature by a fourth polymerase, either pol I or pol III. Deletion of pol V had a small effect on the mutagenicity of Sp1 bypass, decreasing it from 99% in uninduced and SOS-induced pol V-proficient cells to 93 and 94% in uninduced and SOS-induced pol V-deficient cells, respectively, while still inducing principally G → T mutations (Fig. 4B). Large differences in the mutational signature of Sp2 were observed upon elimination of pol V (Fig. 4C). In both uninduced and SOS-induced cells, the overall mutation frequency decreased from 99% in wild-type to 86% in pol V-deficient cells, and the mutational signature shifted from similar amounts of G → T and G → C mutations to mostly G → T mutations. These data suggest pol V preferentially inserts G opposite Sp2 and that insertion of A opposite the lesion results from a different polymerase activity. With the exception of pol V, the participation of any polymerase, including those not induced by the SOS system, in the genesis of the large amount of G → T mutations cannot be excluded. A large shift in mutational signature was observed for Ua upon deletion of pol V (Fig. 4E). Although substantial amounts of G → A, G → T, and G → C mutations were observed in wild-type cells, G → T transversions comprised nearly all (greater than 90%) of the mutations in the pol V-deficient strains. These data show that pol V preferentially inserts both T and G opposite Ua and that pol V is not responsible for the majority of G → T mutations. Removal of pol V caused a large decrease in the amount of G → T mutations induced by NI, with a corresponding decrease in mutation frequency. In uninduced cells, the overall mutation frequency of NI decreased from 64% in the wild-type strain to 40% in the pol V-deficient strain and to 51% in the pol II/pol IV/pol V-deficient strain (Fig. 4F). In SOS-induced cells, the mutation frequency decreased from 48 to 21% when pol V was removed and decreased slightly to 45% when all three SOS-inducible polymerases were removed. Concurrently, the amount of G → T mutations induced in pol V-deficient cells relative to wild-type cells decreased from 30 to 7% in uninduced
cells and from 35 to 4% in SOS-induced cells, indicating involvement of pol V in generating G → T mutations from NI. Results from the pol II/pol IV/pol V-deficient strain resembled those from the pol V single knock-out strain. Together, the mutation data for these lesions indicate that it is the nature of the oxidized base that largely influences which nucleotide pol V incorporates opposite the lesion. In general, elimination of pol II or pol IV had minor effects on mutational signature as compared with elimination of pol V. The effects were especially small on the two well bypassed lesions included in this study, 8-oxoG (supplemental Fig. S4) and Oa (Fig. 4D). Replication of Sp2 in SOS-induced pol IV-deficient cells reduced the amount of G → T mutations from 57 to 49% and increased the amount of G → C mutations from 38 to 48% as compared with SOS-induced pol II-deficient cells, indicating a possible role for pol IV in the insertion of nucleotides opposite Sp2 (Fig. 4C). These differences for Sp2 may also apply in comparison with SOS-induced wild-type cells, but the variance inherent in the bypass assay precluded this determination. In uninduced cells, deletion of pol II increased the amount of G → T mutations induced by Ua from 55 to 69% and decreased the G → C mutations from 35 to 22%. Similarly, deletion of pol IV increased the G → T mutations from 55 to 64% and decreased the G → C mutations from 35 to 28% (Fig. 4E). pol II and pol IV thus affect the mutational signature of Ua in uninduced cells; however, induction of the SOS system eliminated these effects, which were presumably overwhelmed by the increased levels of the more influential pol V. Although deletion of pol II or pol IV caused an increase in the amount of the dominant mutation induced by Sp2 and Ua, an intriguing exception was noted with NI, which had a very different mutational signature in the absence of pol II (Fig. 4F). Relative to wild-type cells, NI was overall more mutagenic, and the amount of G → T mutations induced increased from 30 to 45% in uninduced cells and from 35 to 62% in SOS-induced cells. Because the elimination of pol II had the effect of increasing the overall mutation frequency from 64 to 81% in uninduced cells and from 48 to 82% in SOS-induced cells, these results suggest pol II participates in an error-free replication mechanism of NI.

8-OxoG and NI are two initial products formed from oxidation of G. 8-OxoG, a commonly used biomarker of oxidative stress, is even more readily oxidized than G (46), and oxidation of this lesion produces a variety of DNA hyperoxidation products (33), including Gh, Sp1, Sp2 (47), cyanuric acid (48), Oa (37), Ua (30), 2-aminoimidazolone, and oxazolone (49). These hyperoxidation products were shown previously to be potently mutagenic in vivo as determined by M13 virally based mutagenesis studies (30,31,34,38). To date, Sp and 8-oxoG have been detected in vivo (32,50). Given the abundance of 8-oxoG found within the cell (on the order of one lesion per 10^6 bases (51)) and its propensity for oxidation, it seems plausible that other 8-oxoG products also may exist in vivo. Additionally, 8-nitroguanine forms directly from G and is used as a biomarker of inflammation (52). Given that 8-nitroguanine and NI form in double-stranded DNA in approximately equal proportions in vitro and appear to arise mechanistically from the same neutral G radical precursor (53), NI may form in vivo under the conditions that also generate 8-nitroguanine.
DISCUSSION
To explore potential mechanisms of carcinogenesis as a result of inflammation, we have examined the role of the SOS-inducible E. coli polymerases in the bypass of DNA lesions generated by ONOO− oxidation of DNA. Fig. 2 summarizes our results, which indicate that for poorly bypassed lesions, pol V was responsible for the vast majority of TLS and was essential for SOS-dependent bypass of the oxidation products in this study. Furthermore, the results show that pol V clearly bypassed Gh, Sp1, Sp2, Ua, and NI in an error-prone manner and that the mutagenicity of NI, in comparison with the wild-type results, increased upon removal of pol II and decreased upon the removal of pol V. The bypass results from wild-type, pol V-deficient, and pol II/pol IV/pol V-deficient cells demonstrated that pol I and/or pol III can bypass Gh, Ua, and NI to a small extent and Sp1 and Sp2 to a negligible extent (but still measurable in terms of determining mutation frequency). The data also indicated a role for pol II and pol IV in the bypass of Sp1 (Table 1). In the case of the pol IV knock-out, a decrease in bypass efficiency is observed both in the absence and presence of SOS induction, indicating a relatively significant contribution of this polymerase to TLS past Sp1. In contrast, elimination of pol II decreased the bypass efficiency under uninduced conditions, but no effect from this polymerase was observed upon induction of the SOS system, indicating that the increased levels of pol IV and pol V are sufficient to overcome the deficiency created by the lack of pol II.
The Sp diastereomers also exhibited vastly different mutational signatures and responded differently to the knock-out of pol V. The stereochemical difference between Sp1 and Sp2 is the logical explanation for the observed differences in bypass efficiency and mutational signature. Although two groups recently reported assignment of the absolute stereochemistry of the Sp diastereomers, they arrived at opposite conclusions leaving the true stereochemical assignment as an issue that awaits resolution (54,55). Jia et al. (56) performed a computational study of duplexes containing the Sp diastereomers. The authors noted that the orientation in the duplex of one isomer is opposite that of the other isomer and that the duplex containing the R isomer is always lower in energy than the duplex containing the S isomer. Furthermore, both isomers possess a hydrogen bonding face similar to thymine, whereas the R isomer forms a more stable base pair with G than the S isomer. These computational results set a precedent for differences in bypass efficiency and coding specificity between Sp1 and Sp2 based on stereochemistry. Indeed, in this study Sp2 showed more of a propensity to pair with G than did Sp1, suggesting an assignment of S and R for Sp1 and Sp2, respectively.
Fundamentally, the bypass of a DNA lesion consists of two parts as follows: insertion of a nucleotide opposite the lesion and extension of the nascent strand (57). Experiments with Klenow fragment suggest that the extension step of TLS is rate-limiting for Gh (35,58,59), Ua (60), and NI (61). Thus, it is possible that the requirement of pol V for bypass of these lesions reflects significant participation in extension rather than insertion. However, pol V undoubtedly had a significant role in the insertion step for Sp2, Ua, and NI based on the large differences in mutational signature that occurred with these lesions upon inactivation of pol V. In the case of NI, inactivation of pol II caused a significant increase in the amount of A inserted opposite the lesion and a significant decrease in the amount of C incorporated opposite the lesion, while causing no decrease in the bypass efficiency. This result suggests pol II is involved principally during the insertion step of TLS and preferably inserts C opposite the lesion, which leads to no mutation. Inactivation of pol V causes a large increase in the amount of C inserted opposite NI and a significant decrease in the amount of A inserted opposite the lesion. Accordingly, this result demonstrates that pol V preferentially inserts A opposite the lesion, leading to G → T mutations. Given that similar amounts of C and A are incorporated opposite NI in wild-type cells, and the relative amounts change dramatically and with similar magnitude in the absence of pol II or pol V, these results suggest a similar contribution of both pol II and pol V in the nucleotide insertion step of NI bypass. Coupled with the strong dependence of NI bypass efficiency only on pol V, the results suggest pol V is required for extension of the nascent strand after incorporation of a nucleotide opposite NI by either pol II or pol V (Fig. 5).
Separation of polymerase roles has been proposed to occur for TLS past DNA lesions in eukaryotic systems. For pyrimidine-pyrimidone (6-4) photoproducts, abasic sites, thymine glycol, and N-2-acetylaminofluorene-adducted G, a nucleotide is inserted opposite the lesion by a specific polymerase, such as pol η, pol ι, REV1, or pol δ, whereas a separate polymerase, such as pol ζ, performs the extension step (2). Although pol V has no human homologue, recent work suggests this protein shares considerable structural similarity and substrate specificity with pol η (62), which therefore may be able to bypass the lesions discussed here. pol η has been shown both to insert and extend opposite at least one G oxidation product, 8-oxoG, with similar efficiency to G and does so in an error-free fashion (63). Future studies using purified bypass polymerases should allow more precise roles to be defined for these proteins in the bypass of DNA oxidation damage. Ultimately, in vivo experiments using mammalian cells and the G oxidation products described in this work will be necessary to understand the mechanisms by which mutations may be induced by these lesions in humans.
Oxidative stress is ubiquitous in aerobic organisms and is implicated as a cause of many human diseases, particularly cancer (64-66). One origin of oxidative stress is inflammation, and a clear correlation exists between inflammation and cancer risk (67). Because immune system cells use reactive oxygen and nitrogen species to damage foreign biological matter, there is a need to understand how collateral damage of host tissues may result in both toxicity and gene damage. Equally important is the fact that invading organisms mount a robust defense against immune system cells (68), and the SOS polymerases are likely critical parts of that network. Whereas polymerases are induced in the invaders as a countermeasure to DNA damage, the human homologues and orthologues of the SOS polymerases may be an important link between DNA oxidation and malignant transformation because they can directly mutate the genome by synthesizing past normally replication-inhibitory lesions. Should high rates of replication error among these lesions also be found to occur in mammalian cells, it may prove therapeutic to inhibit the activity of the offending DNA polymerase. In fact, recent work shows that elimination of pol dramatically increases the sensitivity of cells to killing by molecules that generate •NO (69). Inhibiting the bypass of oxidative DNA lesions could allow more time for repair processes such as base excision repair or recombination to occur. In the absence of effective repair mechanisms, cell death may occur given the high toxicity exhibited by these lesions. Either outcome could reduce the mutagenic and potentially carcinogenic effects of these DNA damage products. | 2018-04-03T03:41:17.944Z | 2007-04-27T00:00:00.000 | {
"year": 2007,
"sha1": "2917570be8489c7354731c490660f51178e6dc21",
"oa_license": "CCBY",
"oa_url": "http://www.jbc.org/content/282/17/12741.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "ef4175752baa784c7bef2d78bf6b740ac7957579",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
14953646 | pes2o/s2orc | v3-fos-license | Asymmetric nuclear matter : a variational approach
We discuss here a self-consistent method to calculate the properties of cold asymmetric nuclear matter. In this model, the nuclear matter is dressed with s-wave pion pairs and the nucleon-nucleon (N-N) interaction is mediated by these pion pairs, $\omega$ and $\rho$ mesons. The parameters of these interactions are calculated self-consistently to obtain saturation properties like the equilibrium binding energy, pressure, compressibility and symmetry energy. The computed equation of state is then used in the Tolman-Oppenheimer-Volkoff (TOV) equation to study the mass and radius of a neutron star in the pure neutron matter limit.
I. INTRODUCTION
The search for an appropriate nuclear equation of state has been an area of considerable research interest because of its wide and far-reaching relevance in heavy ion collision experiments and nuclear astrophysics. In particular, studies in two obvious limits, namely, symmetric nuclear matter (SNM) and pure neutron matter (PNM), have helped constrain several properties of nuclear matter, such as the binding energy per nucleon, compressibility modulus, symmetry energy and its density dependence at nuclear saturation density $\rho_0$ [1,2,3], to varying degrees of success. Of late, the availability of flow data from heavy ion collision experiments and phenomenological data from observation of compact stars have renewed the efforts to further constrain these properties and to explore their density and isospin content (asymmetry) variation behaviours [4,5,6,7].
One of the fundamental concerns in the construction of a nuclear equation of state is the parametrization of the nucleon-nucleon (N-N) interaction. Different approaches have been developed to address this problem. These methods can be broadly classified into three general types [8], namely, the ab initio methods, the effective field theory approaches and calculations based on phenomenological density functionals. The ab initio methods include the Brueckner-Hartree-Fock (BHF) [9,10,11] approach, the (relativistic) Dirac-Brueckner-Hartree-Fock (DBHF) [12,13,14,15,16] calculations, and the Green's Function Monte Carlo (GFMC) [17,18,19] method, using the basic N-N interactions given by boson exchange potentials. The other approach of this type, also known as the variational approach, was pioneered by the Argonne Group [20,21]. This method is also based on basic two-body (N-N) interactions in a non-relativistic formalism, with relativistic effects introduced successively at later stages. The effective field theory (EFT) approaches are based on density functional theories [22,23] like chiral perturbation theory [24,25]. These calculations involve a few density dependent model parameters evaluated iteratively. The third type of approach, namely, calculations based on phenomenological density functionals, includes models with effective density dependent interactions such as Gogny or Skyrme forces [26] and the relativistic mean field (RMF) models [27,28,29,30]. The parameters of these models are evaluated by carefully fitting the bulk properties of nuclear matter and the properties of closed shell nuclei to experimental values. Our work presented here belongs to this class of approaches in the non-relativistic approximation.
The RMF models represent the N-N interactions through the coupling of nucleons with isoscalar scalar σ mesons, isoscalar vector ω mesons, isovector vector ρ mesons and the photon quanta, besides the self- and cross-interactions among these mesons [29]. Nuclear equations of state have also been constructed using the quark meson coupling (QMC) model [31], where baryons are described as systems of non-overlapping MIT bags which interact through effective scalar and vector mean fields, very much in the same way as in the RMF model. The QMC model has also been applied to study asymmetric nuclear matter at finite temperature [32].
It has been shown earlier [33,34] that the medium and long range attraction simulated by the σ mesons in RMF theory can also be produced by s-wave pion pairs. This "dressing" of nucleons by pion pairs has also been applied to study the properties of the deuteron [35] and $^4$He [36]. On this basis, we start with a non-relativistic Hamiltonian density with a πN interaction. The ω repulsion and the isospin asymmetry part of the N-N interaction are parametrized by two additional terms representing the coupling of nucleons with the ω and the ρ mesons, respectively. The parameters of these interactions are then evaluated self-consistently by using saturation properties like the binding energy per nucleon, pressure, compressibility and the symmetry energy. The equation of state (EOS) of asymmetric nuclear matter is subsequently calculated and compared with the results of other independent approaches available in the current literature. The EOS of pure neutron matter is also used to calculate the mass and radius of a neutron star. We organize the paper as follows: in Section II, we present the theoretical formalism of asymmetric nuclear matter as outlined above. The results are presented and discussed in Section III. Finally, in the last section the concluding remarks are drawn, indicating the future outlook of the model.
II. FORMALISM
We start with the effective pion-nucleon Hamiltonian density
$$H(x) = H_N(x) + H_M(x) + H_{\rm int}(x), \qquad (1)$$
where the free nucleon part is
$$H_N(x) = \psi^\dagger(x)\,\epsilon_x\,\psi(x), \qquad (2)$$
the free meson part is
$$H_M(x) = \frac{1}{2}\left[\dot{\varphi}\cdot\dot{\varphi} + \nabla\varphi\cdot\nabla\varphi + m^2\,\varphi\cdot\varphi\right], \qquad (3)$$
and the πN interaction $H_{\rm int}(x)$ is the one provided in Ref. [33] (eq. 4). In equations (2) and (4), ψ represents the non-relativistic two-component spin-isospin quartet nucleon field. The single-particle nucleon energy operator is $\epsilon_x = (M^2 - \nabla_x^2)^{1/2}$, with nucleon mass $M$, and $G$ denotes the pion-nucleon coupling constant. The isospin-triplet pion fields of mass $m$ are represented by $\varphi$.
We expand the pion field operator $\varphi_i(x)$ in terms of the creation and annihilation operators of off-mass-shell pions satisfying equal-time algebra (eq. 5), with energy $\omega_x = (m^2 - \nabla_x^2)^{1/2}$ in the perturbative basis. We continue to use the perturbative basis, but note that, since we take an arbitrary number of pions in the unitary transformation U in equation (7) as given later, the results are nonperturbative. The expectation value of the first term of $H_{\rm int}(x)$ in eq. (4) vanishes, and the pion pair of the second term provides the isoscalar scalar interaction of nucleons, thereby simulating the effects of σ mesons. A pion-pair creation operator (eq. 6) is then constructed in momentum space with an ansatz function $f(k)$ to be determined later. We then define the unitary transformation U (eq. 7) and note that U, operating on the vacuum, creates an arbitrarily large number of scalar isospin-singlet pairs of pions. The "pion dressing" of nuclear matter is then introduced through the state of eq. (8), where U constitutes a Bogoliubov transformation (eq. 9). We then proceed to calculate the energy expectation values. We consider N nucleons occupying a spherical volume of radius R such that the density $\rho = N/(\frac{4}{3}\pi R^3)$ remains constant as $(N, R) \to \infty$, and we ignore surface effects. We describe the system with a density operator $\hat{\rho}_N$ whose matrix elements are given as in Ref. [33]. We obtain the free nucleon energy density
$$h_f = \gamma \sum_{\tau=n,p} \int \frac{d^3k}{(2\pi)^3}\,\Theta(k_f^\tau - |\vec{k}|)\,\sqrt{M^2 + k^2},$$
where the spin degeneracy factor is $\gamma = 2$, the index τ runs over the isospin degrees of freedom n and p, and $k_f^\tau$ are the Fermi momenta of the nucleons. For asymmetric nuclear matter, we define the neutron and proton densities $\rho_n$ and $\rho_p$, respectively, over the same spherical volume, such that the nucleon density is $\rho = \rho_n + \rho_p$. The Fermi momenta are related to the neutron and proton densities by $k_f^\tau = (6\pi^2 \rho_\tau/\gamma)^{1/3}$. We also define the asymmetry parameter $y = (\rho_n - \rho_p)/\rho$; it is easily seen that $\rho_\tau = \frac{\rho}{2}(1 \pm y)$ for τ = n, p, respectively. Using the operator expansion of equation (5), the free pion part of the Hamiltonian as given in equation (3) can be rewritten, and the free pion kinetic energy density $h_k$ follows, with $\omega(k) = \sqrt{k^2 + m^2}$. Using $\epsilon_x \simeq M$ in the non-relativistic limit, the interaction energy density $h_{\rm int}$ can be written from equation (4); using equations (7), (8) and (9) in equation (15), its expectation value is obtained. The pion-field-dependent energy density terms add up to give $h_m (= h_k + h_{\rm int})$, which is to be optimized with respect to the ansatz function $f(k)$ for its evaluation. However, this ansatz function yields a divergent value for $h_m$. This happens because we have taken the pions to be pointlike and have assumed that they can approach each other as closely as they like, which is physically inaccurate. Therefore, we introduce a phenomenological repulsion energy between the pions of a pair, whose two parameters $a$ and $R_\pi$ correspond to the strength and length scale, respectively, of the repulsion and are to be determined self-consistently later. The expectation value of the pion-field-dependent parts of the total Hamiltonian density of eq. (1), along with the modification introduced by the phenomenological term $h^R_m$, then involves the integrals $I_\tau$ (τ = n, p) over the pion modes, with $\omega = \omega(k)$ (eqs. 19 and 20).
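As a quick numerical illustration of the Fermi momentum relation just quoted (our own check, not part of the paper), the following snippet evaluates $k_f^\tau$ at the saturation density used later in the text:

```python
# Check of k_f^tau = (6 pi^2 rho_tau / gamma)^(1/3) for symmetric matter
# at rho0 = 0.15 fm^-3 (the saturation density quoted in Section III).
import numpy as np

gamma = 2            # spin degeneracy
rho0 = 0.15          # total nucleon density, fm^-3
y = 0.0              # asymmetry parameter: symmetric nuclear matter
HBARC = 197.327      # MeV fm, to quote k_f in MeV as well

for tau, sign in (("n", +1), ("p", -1)):
    rho_tau = 0.5 * rho0 * (1 + sign * y)
    kf = (6 * np.pi**2 * rho_tau / gamma) ** (1.0 / 3.0)  # fm^-1
    print(f"k_f^{tau} = {kf:.3f} fm^-1 = {kf * HBARC:.0f} MeV")
# Gives k_f ~ 1.31 fm^-1 (~258 MeV) per species at this density.
```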
We now introduce the energy density of ω repulsion in the simple form
$$h_\omega = \lambda_\omega\,\rho^2, \qquad (21)$$
where the parameter $\lambda_\omega$ corresponds to the strength of the interaction at constant density and is to be evaluated later. We note that equation (21) can arise from a Hamiltonian density given in terms of a local potential $v_R(x)$ (eq. 22), where, when the density is constant, we in fact have $\lambda_\omega = \frac{1}{2}\int v_R(x)\, d^3x$. The isospin dependent interaction is mediated by the isovector vector ρ mesons. We represent the contribution due to this interaction, in a manner similar to the ω-meson energy, by the term
$$h_\rho = \lambda_\rho\,\rho_3^2, \qquad (23)$$
where $\rho_3 = (\rho_n - \rho_p)$ and the strength parameter $\lambda_\rho$ is to be determined as described later. Thus we finally write down the binding energy per nucleon of cold asymmetric nuclear matter, $E_B = \varepsilon/\rho - M$ (eq. 24), where $\varepsilon = (h_f + h_m + h_\omega + h_\rho)$ is the energy density. The expression for ε contains the four model parameters $a$, $R_\pi$, $\lambda_\omega$ and $\lambda_\rho$ introduced above. These parameters are then determined self-consistently through the saturation properties of nuclear matter. The pressure P, compressibility modulus K and symmetry energy $E_{\rm sym}$ are given by the standard relations
$$P = \rho^2\,\frac{\partial(\varepsilon/\rho)}{\partial\rho}, \qquad K = 9\,\frac{\partial P}{\partial\rho}, \qquad E_{\rm sym} = \frac{1}{2}\,\frac{\partial^2(\varepsilon/\rho)}{\partial y^2}\bigg|_{y=0}. \qquad (25\text{-}27)$$
The effective mass $M^*$ of the nucleons is then evaluated from the resulting energy density.
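For orientation, the role of $E_{\rm sym}$ in eqs. (25)-(27) can be made explicit through the quadratic expansion of the energy per nucleon in the asymmetry parameter; this is a textbook relation stated here for completeness, not an equation taken from the paper itself:

```latex
% Standard expansion of the energy per nucleon in the asymmetry y:
\begin{equation*}
\frac{\varepsilon(\rho, y)}{\rho}
  \simeq \frac{\varepsilon(\rho, 0)}{\rho}
  + E_{\mathrm{sym}}(\rho)\, y^{2} + \mathcal{O}(y^{4}),
\qquad
E_{\mathrm{sym}}(\rho)
  = \frac{1}{2}\,
    \frac{\partial^{2}(\varepsilon/\rho)}{\partial y^{2}}\bigg|_{y=0},
\end{equation*}
% so, to quadratic order, the pure neutron matter (y = 1) EOS is the
% symmetric matter EOS shifted up by E_sym(rho).
```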
III. RESULTS AND DISCUSSION
We now discuss the results obtained in our calculations and compare them with those available in the literature. The four parameters of the model are fixed by self-consistently solving eqs. (24) through (27) for the respective properties of nuclear matter at saturation density $\rho_0 = 0.15$ fm$^{-3}$. While the pressure P vanishes at saturation density for symmetric nuclear matter (SNM), the values of the binding energy per nucleon and the symmetry energy are chosen to be −16 MeV and 31 MeV, respectively. In the numerical calculations, we have used the nucleon mass M = 940 MeV, the meson masses m = 140 MeV, $m_\omega$ = 783 MeV and $m_\rho$ = 770 MeV, and the π-N coupling constant $G^2/4\pi = 14.6$. In order to ascertain the dependence of the compressibility modulus on the parameter values, we vary the K value over the range 210 MeV to 280 MeV for symmetric nuclear matter (y = 0) and evaluate the parameters. It may be noted that this is the range of compressibility values under discussion in the current literature. For K values in the range 210 MeV to 250 MeV, the program does not converge. The solutions begin to converge for a compressibility modulus K around 258 MeV. We choose the value K = 260 MeV for our calculations. In Table I we present the four free parameters of the model for ready reference. For this set of parameter values the effective mass of nucleons at saturation density is found to be $M^*/M = 0.81$. In Fig. 1, we present the binding energy per nucleon $E_B$ calculated for different values of the asymmetry parameter y as a function of the relative nuclear density $\rho/\rho_0$. The values y = 0.0 and 1.0 correspond to SNM and PNM, respectively. As expected, the binding energy per nucleon $E_B$ of SNM initially decreases with increase in density, reaches a minimum at $\rho = \rho_0$ and then increases. In the case of PNM, the binding energy increases monotonically with increasing density, consistent with its well known behaviour. In Fig. 2(a), we compare the $E_B$ of SNM as a function of the nucleon density with a few representative results in the literature, namely, the Walecka model [27] (long-short dashed curve), the DBHF calculations of Li et al. with the Bonn A potential (short-dashed curve) (data for both models are taken from [13]) and the variational A18 + δv + UIX* (corrected) model of Akmal et al. (APR) [21] (long-dashed curve). While the Walecka and Bonn A models are relativistic, the variational model is non-relativistic, with relativistic effects and three-body correlations introduced successively. Our model produces an EOS softer than those of the Walecka and Bonn A models, but stiffer than the variational calculation results of the Argonne group. It is well known that the Walecka model yields a very high compressibility K. However, its improvised versions developed later, with self- and cross-couplings of the meson fields, have been able to bring the compressibility modulus down to the ballpark of 230±10 MeV [7]. Our model yields the nuclear matter saturation properties correctly, along with a compressibility of K = 260 MeV, which is reasonably close to the empirical data. In Fig. 2(b), we plot $E_B$ as a function of the relative nucleon density for PNM. Similar to the SNM case, our EOS is softer than those of the Walecka and Bonn A models, but stiffer than the variational model. We use this EOS to calculate the mass and radius of a neutron star of PNM as discussed later.
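The self-consistent parameter determination described at the start of this section can be sketched as a four-dimensional root-finding problem. In the outline below, a schematic Skyrme-like functional stands in for the paper's pion-dressed energy density of eqs. (19), (21) and (23), which we do not reproduce; only the four saturation constraints quoted in the text are encoded.

```python
# Sketch of the self-consistent fit (our illustration, not the authors'
# program): solve four saturation constraints for four parameters.
# Constraints: E_B(rho0) = -16 MeV, P(rho0) = 0, K(rho0) = 260 MeV,
# E_sym(rho0) = 31 MeV, at rho0 = 0.15 fm^-3.
import numpy as np
from scipy.optimize import fsolve

def e_b(u, y, p):
    """Toy binding energy per nucleon (MeV); u = rho/rho0."""
    a, b, c, d = p
    return a * u + b * u**c + d * u * y**2

def constraints(p):
    h = 1e-5
    de = (e_b(1 + h, 0, p) - e_b(1 - h, 0, p)) / (2 * h)   # dE_B/du at u=1
    d2e = (e_b(1 + h, 0, p) - 2 * e_b(1, 0, p) + e_b(1 - h, 0, p)) / h**2
    esym = 0.5 * (e_b(1, h, p) - 2 * e_b(1, 0, p) + e_b(1, -h, p)) / h**2
    return [
        e_b(1, 0, p) + 16.0,  # saturation binding energy = -16 MeV
        de,                   # P = rho^2 dE_B/drho vanishes at saturation
        9 * d2e - 260.0,      # K = 9 rho0^2 d^2E_B/drho^2 (valid where P = 0)
        esym - 31.0,          # symmetry energy, as in eq. (27)
    ]

a, b, c, d = fsolve(constraints, x0=[-30.0, 20.0, 2.0, 30.0])
print(f"a = {a:.2f} MeV, b = {b:.2f} MeV, c = {c:.3f}, d = {d:.2f} MeV")
```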
The density dependence of the pressure of SNM and PNM is calculated using eq. (25). These results are plotted (solid blue curves) in Figs. 3(a) and (b). Recently, Danielewicz et al. [4] have deduced empirical bounds on the EOS in the density range $2 < \rho/\rho_0 < 4.6$ by analysing the flow data of matter from the fireball of Au+Au heavy ion collision experiments, both for SNM and PNM. These bounds are represented by the color-filled and shaded regions of the two figures. These bounds rule out both the "very stiff" and the "very soft" classes of EOSs produced, for example, by some variants of RMF calculations and the Fermi motion of a pure neutron gas [4]. As shown in these figures, the EOS of SNM and PNM generated by our model are consistent with both bounds.

FIG. 2: (a) The binding energy per nucleon $E_B$ as a function of relative nucleon density $\rho/\rho_0$ for SNM. The results of the present work (P.W.) are compared with the results of DBHF calculations with the Bonn A potential [13], the variational calculations of the Argonne group [21] and the Walecka model [27]. The data for the Bonn A and Walecka model curves are taken from [13]. (b) Same as (a), but for PNM.
The potentials per nucleon in our model can be defined from the meson dependent energy terms of eqs. (19), (21) and (23). The contribution to the potential from the scalar part of the meson interaction is due to the pion condensates and is given by $V_s = (h_{\rm int} + h^R_m)/\rho$ as defined earlier. The contribution from the vector mesons has two components, due to the ω and the ρ mesons, and is given by $V_v = (h_\omega + h_\rho)/\rho$. In Figs. 4(a) and (b), we plot $V_s$ and $V_v$ as functions of the relative density $\rho/\rho_0$ calculated for PNM (Fig. 4(a)) and for SNM (Fig. 4(b)), respectively. The magnitudes of the potentials calculated by our model are weaker compared with those produced by the DBHF calculations with the Bonn A interaction [13], as shown in both panels of Fig. 4. In Fig. 4(a), we show the contributions to the repulsive vector potential due to the ω mesons (short-dashed curve), the ρ mesons (long-dashed curve) and their combined contribution (long-short-dashed curve). The contribution due to the ρ mesons rises linearly at a slow rate and is small at saturation density. This indicates that the major contribution to the short-range repulsive part of the nuclear force comes from the ω meson interaction.
Knowledge of the density dependence of the symmetry energy is expected to play a key role in understanding the structure and properties of neutron-rich nuclei and neutron stars at densities above and below the saturation density. Therefore this problem has been receiving considerable attention of late. Several theoretical and experimental investigations addressing this problem have been reported ([3,8,39] and references therein). While the results of independent studies show reasonable consistency at sub-saturation densities $\rho \leq \rho_0$, they are at wide variance with each other at supra-saturation densities $\rho > \rho_0$. This wide variation has given rise to the so-called classification of "soft" and "stiff" dependence of the symmetry energy on density [38,39]. Fig. 5 shows a representation of the spectrum of such results along with the results of the present work (solid blue curve). While the Gogny and Skyrme forces (dark rib-dotted and dotted curves, respectively, with data taken from [8,39]) produce a "soft" dependence at one end, the NL3 force (dot-dashed curve, with data taken from [8]) produces a very "stiff" dependence at the other end. The analysis of experimental and simulation studies of intermediate energy heavy-ion reactions as reported by Shetty et al. [39] (red triangles and long-short-dashed red curve, respectively), the results of the DBHF calculations of Li et al. and Huber et al. [13,29,40] (rib-dashed and magenta ribbed curves), the variational model [3,21] (short-dashed curve), and the RMF calculations with the nonlinear Walecka model including ρ mesons by Liu et al. [30] (long-dashed green curve), as shown in Fig. 5, suggest a "stiff" dependence with various degrees of stiffness. The experimental results (represented by the red triangles, with data taken from Shetty et al. [39]) are derived from the isoscaling parameter α which, in turn, is obtained from relative isotopic yields due to multifragmentation of excited nuclei produced by bombarding beams of $^{58}$Fe and $^{58}$Ni on $^{58}$Fe and $^{58}$Ni targets. Shetty et al. have shown that the results of multifragmentation simulation studies carried out with the Antisymmetrized Molecular Dynamics (AMD) model using the Gogny-AS interaction and the Statistical Multifragmentation Model (SMM) are consistent with the above-mentioned experimental results and suggest (as shown by the red long-short-dashed curve) a moderately stiff dependence of the symmetry energy on density. Our results (represented by the solid blue curve), calculated using eq. (27), are consistent with these results at sub-saturation densities but are stiffer at supra-saturation densities. More observational or experimental information needs to be built into our model to further constrain the symmetry energy at higher densities. In Fig. 5, the curve due to Huber et al. [40] (with data taken from [29]) corresponds to their DBHF 'HD' model calculations, which involve only the σ, ω and ρ mesons. Similarly, the long-dashed green curve due to Liu et al. [30] is from the basic non-linear Walecka model with σ, ω and ρ mesons. Our formalism is closest to these two models, with the exception that in our model the effect of σ mesons is simulated by the π meson condensates. It is also noteworthy that our results are consistent with these results for densities up to $2\rho_0$.
The wide variation of the density dependence of the symmetry energy at supra-saturation densities has given rise to the need to constrain it. As discussed by Shetty et al. [39], a general functional form $E_{\rm sym} = E^0_{\rm sym}(\rho/\rho_0)^\gamma$ has emerged. Studies by various groups have produced fits with $E^0_{\rm sym} \sim 31$-$33$ MeV and $\gamma \sim 0.55$-$1.05$. A similar parametrization of the $E_{\rm sym}$ produced by our EOS with $E^0_{\rm sym} = 31$ MeV yields the exponent parameter $\gamma = 0.85$. We next use the equation of state for PNM derived by our model in the Tolman-Oppenheimer-Volkoff (TOV) equation to calculate the mass and radius of a PNM neutron star. The mass and radius of the star are found to be $2.25\,M_\odot$ and 11.7 km, respectively.
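The TOV step can be sketched as follows. This is our own minimal illustration of the standard TOV system in geometrized units (G = c = 1), with a simple polytrope standing in for the model's tabulated PNM equation of state; it is not the integrator used by the authors.

```python
# Minimal TOV sketch (illustration only). A P = K * eps^Gamma polytrope
# stands in for the tabulated PNM EOS of this paper.
import numpy as np

K_POLY, GAMMA = 100.0, 2.0                       # toy polytrope constants

def eps_of_P(P):
    return (P / K_POLY) ** (1.0 / GAMMA)         # invert P = K * eps^Gamma

def integrate_star(Pc, dr=1e-3):
    """Euler-integrate the TOV equations outward until P drops to ~0."""
    r, P, m = dr, Pc, 0.0
    while P > 1e-10 * Pc:
        eps = eps_of_P(P)
        dPdr = -(eps + P) * (m + 4 * np.pi * r**3 * P) / (r * (r - 2 * m))
        dmdr = 4 * np.pi * r**2 * eps
        P, m, r = P + dr * dPdr, m + dr * dmdr, r + dr
    return r, m                                   # radius, gravitational mass

R, M = integrate_star(Pc=1e-3)
print(f"R = {R:.2f}, M = {M:.4f} (geometrized units)")
```

A production calculation would replace the Euler step with a higher-order integrator and loop over central pressures to trace out the full mass-radius curve.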
IV. CONCLUSION
In this work we have presented a quantum mechanical nonperturbative formalism to study cold asymmetric nuclear matter using a variational method. The system is assumed to be a collection of nucleons interacting via the exchange of π pairs, ω and ρ mesons. The equation of state (EOS) for different values of the asymmetry parameter is derived from the dynamics of the interacting system in a self-consistent manner. This formalism yields results similar to those of the ab initio DBHF models, variational models and the RMF models without invoking the σ mesons. The compressibility modulus and effective mass are found to be K = 260 MeV and $M^*/M = 0.81$, respectively. The symmetry energy calculated from the EOS suggests a moderately "stiff" dependence at supra-saturation densities and corroborates the recent arguments of Shetty et al. [39]. A parametrization of the density dependence of the symmetry energy of the form $E_{\rm sym} = E^0_{\rm sym}(\rho/\rho_0)^\gamma$, with the symmetry energy at saturation density $E^0_{\rm sym} = 31$ MeV, produces $\gamma = 0.85$. The EOS of pure neutron matter (PNM) derived within the formalism yields a mass and radius of a PNM neutron star of $2.25\,M_\odot$ and 11.7 km, respectively.

FIG. 5: Symmetry energy $E_{\rm sym}$ calculated from the EOS (as in eq. 27) (P.W., solid blue line), plotted as a function of density along with the results of other groups: the multifragmentation experiment and the antisymmetrized molecular dynamics (AMD) simulations with Gogny-AS and Gogny interactions from Shetty et al. [39], DBHF (Bonn A) results from [29], the DBHF (σωρ) model of Huber et al. [40] with data from [29], RMF (NLρ) data from [30], the variational model of Akmal et al. (APR) [21] with data from [3], and the Skyrme and NL3 results from [8]. Our result is consistent with those of other groups and corroborates the moderately "stiff" dependence of $E_{\rm sym}$ advocated by Shetty et al. [39]. | 2008-05-16T06:20:18.000Z | 2008-05-16T00:00:00.000 | {
"year": 2008,
"sha1": "9f1a12047986ca0ee1e5dc3ede92033eca0e0ecc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0805.2449v1.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9f1a12047986ca0ee1e5dc3ede92033eca0e0ecc",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
232090177 | pes2o/s2orc | v3-fos-license | COVID‐19 and gynecological cancers: Asia and Oceania Federation of Obstetrics and Gynecology oncology committee opinion
Abstract Since the outbreak of COVID-19, over 26 million people have already been infected, and the pandemic is not expected to end in the near future. Not only have the daily activities and lifestyles of individuals been affected, medical practice has also been modified to cope with this emergency catastrophe. In particular, cancer services have faced an unprecedented challenge. While services may have been cut by national authorities or hospitals due to shortage of manpower and resources, the medical needs of cancer patients have increased. Cancer patients who are receiving active treatment may develop various kinds of complications, especially immunosuppression from chemotherapy, and they and their carers will need additional protection against COVID-19. Besides, there is also evidence that cancer patients are more prone to deteriorate from COVID-19 if they contract the viral infection. Therefore, it is crucial to establish guidelines so that healthcare providers can triage their resources to take care of the most needy patients, reduce less important hospitalization and visits, and avoid potential complications from treatment. The Asia and Oceania Federation of Obstetrics and Gynecology (AOFOG) hereby issues this opinion statement on the management of gynecological cancer patients during COVID-19.
Introduction
The COVID-19 pandemic has become a global problem with more than 26 million confirmed cases and more than 0.8 million deaths. 1 Although the situation is settling in certain countries, there is still transmission from overseas. Cancer patients are found to be more susceptible to deterioration from COVID-19 and overall death than those without cancer. 2,3 Cancer survivors also remain an important at-risk population for COVID-19. Yet with limited resources, cancer services can be significantly interrupted due to limited sanitary materials and manpower. Many academic societies have already published their guidelines on cancer care during the COVID-19 pandemic. Here, the Asia and Oceania Federation of Obstetrics and Gynecology (AOFOG) would like to make general recommendations for gynecological cancer care in Asia.
General Principles
All patients, care providers and staff who have symptoms such as fever, cough or other respiratory symptoms, or a travel history, must be screened for COVID-19. Those who have a travel history should self-isolate according to national guidelines.
Patients and their carers should be educated about the signs and symptoms of COVID-19, as well as general hygiene measures. 4 These include, but are not limited to, frequent and proper hand washing, wearing masks, proper handling of used masks, avoiding touching the eyes and nose, physical distancing and avoiding travel. Cancer patients and survivors should use stronger personal protection.
Patients who have symptoms should not attend oncology clinics or wards, but should consult their family doctors or the Emergency Department to rule out COVID-19. Measures should be taken to restrict the duration of visiting hours and limit the number of people accompanying the patients. Communication with patients' family or friends should be maintained by phone or other video systems upon patients' agreement.
Medical care providers have to be equipped with qualified protective goggles, masks, surgical gowns and gloves. As they may need to help with critical care patients, resuscitation training may need to be refreshed and a back-up duty roster should be in place. A national centralized reporting system should be available, and updated and accurate news should be disseminated to the public on time. However, as the situation is evolving from time to time and different centers have different capacities, the management has to be individualized and well-documented.
New Case Referral
Patients should be triaged according to the severity of symptoms, the nature of the disease, the availability of shared care with a family physician, the chance of cure and the physical fitness of the patients. As there is evidence that cancer patients undergoing surgery and/or chemotherapy are at risk of developing severe complications of COVID-19, a decision has to be made on whether elective surgery or adjuvant chemotherapy for certain cancer patients, especially those with stable disease, can be postponed (see Sections 4 and 5). The AOFOG recommendation is modified from that of the Asian Society of Gynecologic Oncology (Table 1). 5

Table 1 (extract): elective-delayed (≤10-12 weeks): early-stage/low-grade uterine cancer; microinvasive cervical cancer completely excised at loop excision.
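A worklist implementation of such a triage scheme could be as simple as a lookup table. The sketch below is purely illustrative: the category names and examples are ours (loosely echoing the Table 1 extract above), not a reproduction of the AOFOG or ASGO tables.

```python
# Hypothetical encoding of referral triage categories for a clinic
# worklist; all names and examples here are illustrative placeholders.
TRIAGE = {
    "emergency":        "see within 24 h (e.g., heavy tumor bleeding)",
    "urgent":           "see within 1-2 weeks (most suspected invasive cancers)",
    "elective-delayed": "may defer ~10-12 weeks (e.g., early-stage/low-grade disease)",
}

def schedule(category):
    # Fall back to urgent review when the category is unrecognized
    return TRIAGE.get(category, TRIAGE["urgent"])

print(schedule("elective-delayed"))
```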
Surgery
Triage of operations should be performed where resources are restricted, and should be based on factors such as patients' symptoms, the biology of the disease, expected life expectancy, the intent of the operation, the complexity of the operation and the likelihood of intensive care unit/high dependency unit requirement. 6 The decision should be fully discussed by a multidisciplinary team and communicated to the patients and their family. One example of recommendations on the triage of operations, by the British Gynaecological Cancer Society, is listed in Table 2. 7 The number of operating room staff should be kept to the minimum that can maintain normal services, and an alarm or other system should be available to call for help immediately in an emergency situation. In addition, centers which have experience in sentinel lymph node biopsy should utilize this to replace full lymphadenectomy, to shorten the operation duration and reduce intra-operative bleeding and post-operative complications. An enhanced recovery pathway should be adopted to reduce hospital stay.
Minimally invasive surgery, including robotic surgery, can shorten patients' hospital stay, and can minimize spillage of body fluid and the number of directly exposed medical staff. 8 There is no evidence of aerosolization of SARS-CoV-2 during minimally invasive surgery. However, it is recommended to take the measures below to prevent gas dispersal 8-11 :

1. Close the taps of the ports to avoid escape of gas during insertion.
2. Take care not to make too big an incision, to avoid dislodgement of the port and hence air leakage during the operation.
3. Reduce the intra-abdominal pressure to the minimum required (down to 8 mmHg).
4. Connect one of the ports to an Ultra-Low Particulate Air suction device that can filter 99.999% of particles with a penetration size of 0.05 μm; the size of the SARS-CoV-2 virus is 0.06-0.14 μm. 12
5. Avoid using ultrasonic sealing devices; use electro-thermal bipolar cautery with the lowest required power.
6. Do not open the taps of any ports that are not used for insufflation or deflation.
7. Minimize the change of instruments if possible.
8. If the insufflation port needs to be changed to another port, close the insufflator, close the tap of the port and then reconnect the tubing from the original port to the new port. Turn on the insufflator first before opening the tap of the new port, to avoid back-flow of gas into the insufflator.
9. Deflate the abdomen into the suction device first before retrieving specimens from the abdominal wound or removing the uterus out of the vagina, to avoid sudden gas dispersal.
10. Release the pneumoperitoneum in a controlled manner at the end of the operation, before removing the ports.
Chemotherapy, Radiotherapy, Targeted Therapy and Immunotherapy
If cancer facilities have to be interrupted, prioritization should be considered. The UK has provided guidance on the prioritization of systemic anti-cancer treatment and radiotherapy (Tables 3 and 4). 13 Patients receiving certain anti-cancer treatments are at risk of neutropenia and immunosuppression. Lee et al. reported that, among 281 patients who received chemotherapy within 4 weeks before their positive COVID-19 results, the use of chemotherapy in the past 4 weeks had no significant impact on mortality from COVID-19 compared with those who did not receive recent chemotherapy (HR 1.18, 95% confidence interval (CI) 0.81-1.72; P = 0.380). 14 However, Zhang et al. showed that severe complications from COVID-19 were significantly associated with the use of anti-cancer therapy in the past 14 days among their 28 patients (hazard ratio [HR] 4.079, 95% CI 1.086-15.322, P = 0.037). 15 With limited data, it is legitimate to withhold anti-cancer treatment during active COVID-19 infection, as further anti-cancer treatment may potentially lead to immunosuppression and aggravate COVID-19.
For those who have recovered from COVID-19, it is uncertain when is the best time to resume anti-cancer therapy. The ASCO considers it reasonable to resume anti-cancer treatment once transmission-based precautions can be discontinued based on the Centers for Disease Control and Prevention guideline. 16,17 For example, those with laboratory-confirmed COVID-19 should be at least 10 days from the date of their first diagnosis of COVID-19 or the first appearance of symptoms, or have two consecutive negative SARS-CoV-2 RNA results from respiratory specimens collected ≥24 h apart.
The medical carers should educate the patients and their carers to watch out for symptoms of COVID-19, of their cancer and its complications, as well as flare-ups of their underlying co-morbidities. They should also provide enough medications, reduce non-urgent hospital visits, consider replacing parenteral medications with oral drugs and use shorter treatment regimens. For example, for platinum-sensitive recurrent ovarian cancer patients who are either breast cancer susceptibility gene (BRCA) mutated or whose tumors are homologous recombination deficient, a PARP inhibitor can be considered instead of non-platinum chemotherapy, based on the SOLO-3 and QUADRA trials. 18,19 For platinum-resistant/refractory recurrent ovarian cancer patients, one may choose 4-weekly liposomal doxorubicin, or oral chemotherapy like cyclophosphamide or etoposide, instead of weekly gemcitabine, 5-day topotecan or weekly paclitaxel as second-line chemotherapy. The dosing interval of immunotherapy can be lengthened, such as pembrolizumab 400 mg every 6 weeks, nivolumab 480 mg every 4 weeks and atezolizumab 1680 mg every 4 weeks. 20 G-CSF should be administered promptly for those who are at risk of developing neutropenia.

Table 3 (extract): priority levels for systemic anti-cancer treatment. 13
2 • ...treatment which adds 20-50% chance of cure to surgery or radiotherapy alone, or treatment given at relapse
3 • Curative treatment with a low (10-20%) chance of success • Adjuvant or neoadjuvant treatment which adds 10-20% chance of cure to surgery or radiotherapy alone, or treatment given at relapse • Non-curative treatment with a high (more than 50%) chance of more than 1-year extension to life
4 • Curative treatment with a very low (0-10%) chance of success • Adjuvant or neoadjuvant treatment which adds less than 10% chance of cure to surgery or radiotherapy alone, or treatment given at relapse • Non-curative treatment with an intermediate (15-50%) chance of more than 1-year extension to life
5 • Non-curative treatment with a high (more than 50%) chance of palliation or temporary tumor control and less than 1 year expected extension to life
6 • Non-curative treatment with an intermediate (15-50%) chance of palliation or temporary tumor control and less than 1 year expected extension to life
Summary of Care for Gynecological Cancers
A summary based on recommendations from other groups is listed in Table 5. [20][21][22][23][24]

Special High-Risk Groups

Elderly patients are a major group in gynecological cancer. An Italian study showed that the average age of death from COVID-19 was 80 years, and most of those who died had other co-morbidities such as diabetes and cardiovascular diseases. 25 Other high-risk patients, besides those with cancer, include those with organ transplantation, bone marrow/stem cell transplantation, hematological malignancies, severe lung diseases, immunocompromised conditions, pregnancy, obesity, diabetes, and chronic cardiovascular, kidney and liver diseases. 26 Physical distancing of at least 2 m from others, staying at home, avoiding too many home visitors, ordering food and groceries through delivery services, and frequent hand washing with soap and water for at least 20 s should be discussed with the high-risk group. 27 It is important to keep the elderly engaged in social relationships, and this can be maintained by teaching them how to use the phone, video calls and the internet.
Multidisciplinary Meeting
Multidisciplinary meetings should be continued on a regular basis, especially as treatment may need to deviate from the usual practice and prioritization of treatment may need to be adapted. Instead of face-to-face meetings, online meetings should be considered using Zoom, Webex or equivalent. If a face-to-face meeting is deemed necessary, it is advised to limit attendance to one representative from each team. Importantly, patients and family members should be adequately informed about the benefit and risk of each intervention in order to reach a consensus on the treatment plan.
Follow-Up
Patients who are in disease remission should have routine follow-up deferred, and those with stable active disease should have less frequent hospital visits. Follow-up by phone or video should be considered. Patients should be given a contact number so that they can advance their appointment if they develop any symptoms.
Clinical Trials
The number of active trials should be limited, and priority should be given to trials with curative intent and those that offer drugs where there are limited effective therapies. 20,21 The local ethics committee and sponsors should be informed about potential deviations of the study drugs and monitoring from the study protocol. Toxicity review by video or phone, and mail delivery of oral medicines, should be considered. Those who test positive for COVID-19 should stop the trial intervention and receive the standard care for COVID-19.
Disclosure
None declared. | 2021-03-03T06:23:24.453Z | 2021-03-01T00:00:00.000 | {
"year": 2021,
"sha1": "5be137bf5744f26b5eda3bf0f290d9e49b600652",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8013896",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "d2b4cb51b1cd7dc3e2c3e045d442808fa9ce98cf",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
153313125 | pes2o/s2orc | v3-fos-license | Modified recipe to inhibit fruiting body formation for living fungal biomaterial manufacture
Living fungal mycelium with abolished ability to form fruiting bodies is a self-healing substance, which is particularly valuable for further engineering and development as materials sensing environmental changes and secreting signals. Suppression of fruiting body formation is also a useful tool for maintaining the stability of a mycelium-based material with ease and lower cost. The objective of this study was to provide a biochemical solution to regulate fruiting body formation, which may replace heat killing of mycelium in practice. The concentrations of glycogen synthase kinase-3 (GSK-3) inhibitors, such as lithium chloride or CHIR99021 trihydrochloride, were found to directly correlate with the development of fruiting bodies in the mushroom-forming fungi Coprinopsis cinerea and Pleurotus djamor. Sensitive windows to these inhibitors throughout the fungal life cycle were also identified. We suggest the inclusion of GSK-3 inhibitors in cultivation recipes for inhibiting fruiting body formation and regulating mycelium growth. This is the first report of using a GSK-3 inhibitor to suppress fruiting body formation in living fungal mycelium-based materials. It provides an innovative strategy for easy, reliable, and low-cost maintenance of materials containing living fungal mycelium.
Introduction
The development of fungal mycelium-based materials has been rapid over the past decade. Mycelium is the vegetative structure of fungi and is mainly composed of natural polymers. Living mycelium-based materials have a wide range of applications due to their self-assembly, self-healing and environmentally responsive nature, along with their moldable and tunable properties during growth. Dried mycelium is nontoxic, fire-resistant, mold-resistant, water-resistant, and biodegradable. It is also a great thermal insulator, on top of its strength, durability and other beneficial features [1][2][3][4][5][6][7][8][9][10][11][12][13]. Under proper circumstances, the mycelium of typical mushroom-forming fungi aggregates to form mushrooms, which are the fruiting bodies spreading spores [14]. While the fruiting bodies cause conformational changes of the mycelium-based materials, the spores can cause allergy and infection in the susceptible population. In current production of mycelium-based materials, moulded products are heated or treated with fungicide to kill the living cells to stop fruiting body formation [15]. Such rendered mycelium-based materials retain few of the benefits of living materials. Therefore, new approaches are needed to inhibit fruiting body formation while keeping the mycelium alive, to produce living mycelium-based materials with desirable qualities.
Kinases mediate cellular and developmental responses to environmental and internal signals, and kinase cascades play crucial roles in many signal transduction pathways [16,17]. Phosphorylation affects the activity, location, stability, conformation, and protein-protein interactions of kinases. One interesting and putatively central regulatory kinase is glycogen synthase kinase-3 (GSK-3). GSK-3 is a serine/threonine kinase of the CMGC family that is highly conserved in eukaryotes. GSK-3 is activated by constitutive phosphorylation at a C-terminal tyrosine residue, but inactivated by regulatory phosphorylation at an N-terminal serine residue, which causes a conformational change blocking the catalytic domain [18]. Protein kinase A (PKA), protein kinase B (PKB) and protein kinase C (PKC) can transiently inhibit GSK-3 in response to various external signals. In fungi, these kinases are essential growth regulators in response to environmental stimuli [19][20][21].
GSK-3 has attracted widespread attention as a critical therapeutic target, and lithium chloride (LiCl) is an archetypal GSK-3 inhibitor. Lithium (Li) exerts pharmacological effects on mood stabilization, neurogenesis, neurotrophicity, neuroprotection and anti-inflammation, among others [18]. Lithium compounds have also been suggested as additives during the cultivation of some edible mushrooms to produce food biofortified with lithium [22,23]. Recent evidence suggests that low, non-toxic concentrations of LiCl have anti-inflammatory effects [24]. CHIR99021 trihydrochloride is a highly selective GSK-3 inhibitor [25]. Cisplatin has been shown to activate GSK-3 by inducing the phosphorylation of the C-terminal tyrosine while reducing the phosphorylation of the N-terminal serine [26].
Two mushroom species from the order Agaricales were used in this study. Coprinopsis cinerea, a model mushroom-forming fungus, belongs to the family Psathyrellaceae [27]. The typical life cycle of C. cinerea, which includes stages of basidiospores, mycelium, hyphal knots, initials, stage-1 and -2 primordia, and young and mature fruiting bodies, can be finished within two weeks under laboratory conditions [28]. Pleurotus djamor, also known as the pink mushroom or tropical oyster mushroom, belongs to the family Pleurotaceae and is appreciated as an edible and medicinal mushroom in many countries.
This study aimed to provide a biochemical approach to inhibit fruiting body formation in mycelium-based materials. We demonstrated that LiCl and CHIR99021 trihydrochloride inhibited fruiting body formation, whereas cisplatin accelerated fruiting body development.
Mycelia of the above strains were stored at 4°C. A small agar piece from stock plates was then inoculated onto freshly made agar plates for pre-culture. The pre-culture condition for C. cinerea was 37°C in the dark on yeast extract-malt extract-glucose (YMG) agar (4 g yeast extract, 10 g malt extract, 4 g glucose and 10 g agar per litre) [30] in 9 cm petri dishes, while that for P. djamor was 28°C in the dark on Potato Dextrose Agar (PDA, BD Difco) in 6 cm petri dishes. In each assay, a small agar piece with mycelium (0.8 cm diameter) from a 5-day-old pre-culture was inoculated in the middle of a freshly made agar plate. C. cinerea was first cultured at 37°C in the dark until mycelia grew over the whole agar surface, then transferred to 25°C under a 12 h light/12 h dark cycle to induce fruiting body formation. P. djamor was cultivated at 28°C in the dark until mycelia occupied the whole agar surface, then transferred to 25°C under a 12 h light/12 h dark cycle. Triplicates were employed in each setup. Each 9 cm petri dish contained 34 g (±1 g) of medium, and each 6 cm petri dish contained 10 g (±1 g) of medium to standardize the nutrients and inhibitor/activator concentrations.
Effect of GSK-3 inhibitors and activator
Three methods, varying in time and position, were tested to deliver LiCl. One method was to mix 1.5 g/L, 2 g/L (for P. djamor), 3 g/L or 6 g/L LiCl into the medium before autoclave sterilization; the other methods were either to spread 1 mL of sterilized LiCl solution (52.5 g/L, 105 g/L, 210 g/L) on the surface of the agar before inoculation, or to add 1 mL of sterilized LiCl solution (52.5 g/L, 105 g/L, 210 g/L) under the agar after the mycelia reached the edge of the petri dish. Adding 1 mL of LiCl solution to 34 g of agar gave final concentrations approximately equal to those of the mixing method (see the sketch below). CHIR99021 trihydrochloride and cisplatin should not be autoclaved, so 0.2 μm filters were used to remove bacteria from the solutions. Then 1 mL of CHIR99021 trihydrochloride solution (1 μM, 100 μM, 500 μM) or 1 mL of saturated cisplatin solution (25°C) was spread evenly on the surface of the agar before inoculation.
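As a quick consistency check, the equivalence between spreading a concentrated stock and mixing LiCl directly into the medium can be verified with a few lines of Python. This sketch is illustrative only; it assumes the medium density is close to 1 g/mL, and the function name is ours, not from the paper.

```python
# Sanity check (not from the paper's methods): equivalent LiCl concentration
# when 1 mL of stock solution is spread on a plate, vs. mixing into the medium.

def equivalent_g_per_L(stock_g_per_L, volume_mL, medium_g):
    """Approximate final concentration, treating medium density as ~1 g/mL."""
    licl_g = stock_g_per_L * volume_mL / 1000.0   # grams of LiCl delivered
    total_mL = medium_g + volume_mL               # medium mass ~ volume in mL
    return licl_g / (total_mL / 1000.0)           # g/L over the whole plate

for stock in (52.5, 105.0, 210.0):
    print(f"{stock} g/L stock -> ~{equivalent_g_per_L(stock, 1.0, 34.0):.1f} g/L in plate")
# ~1.5, ~3.0 and ~6.0 g/L, matching the mixed-in concentrations used above.
```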
Mycelial growth area determination
To examine the effect of LiCl on mycelial growth, LiCl powder was mixed in the YMG medium (0, 1.5 g/L and 3 g/L) before autoclave sterilization. Mycelial growth was recorded daily by marking the edge of the colonies on the plate bottom for six days after inoculating a C. cinerea pre-culture plug of 0.8 cm diameter. Three replicates were measured for each setup. Digital photos of the plate bottom with marks were taken, with a ruler in the same plane as the plates. The area occupied by mycelium was calculated using the Polygon Tool in the Analyzing Digital Images (ADI) software (https://www.umassk12.net/adi/).
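The area computation performed with the ADI Polygon Tool can be reproduced with the standard shoelace formula once the marked colony outline is digitized. The sketch below is a hypothetical re-implementation (not the ADI code); the pixel scale is assumed to come from the ruler photographed in the same plane as the plates.

```python
# Minimal sketch of the mycelial-area measurement: the colony outline is given
# as (x, y) pixel vertices and converted to cm^2 via a pixels-per-cm scale.

def polygon_area_cm2(points_px, pixels_per_cm):
    """Shoelace formula for a simple polygon given as [(x, y), ...] vertices."""
    n = len(points_px)
    twice_area = 0.0
    for i in range(n):
        x1, y1 = points_px[i]
        x2, y2 = points_px[(i + 1) % n]
        twice_area += x1 * y2 - x2 * y1
    area_px = abs(twice_area) / 2.0
    return area_px / pixels_per_cm ** 2

# Hypothetical example: a square outline, 100 px per cm.
outline = [(0, 0), (200, 0), (200, 200), (0, 200)]
print(polygon_area_cm2(outline, 100.0))  # 4.0 cm^2
```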
Sensitive windows to LiCl
The effects of LiCl at different developmental stages of C. cinerea were tested to find the sensitive windows. An agar piece with mycelium was inoculated at the center of a cellophane sheet placed on a YMG agar plate [29]. One mL of 105 g/L LiCl solution or 1 mL of water was added between the cellophane sheet and the agar surface at the stages of initial, stage-1 primordium, stage-2 primordium, and young fruiting body. The growth status was recorded until three days after the control group formed mature fruiting bodies.
Expression levels of GSK-3 target genes
The GSK-3 substrates were predicted by OrthoMCL V2.0.6 [31] with the default parameters (MCL inflation = 1.5; blastp E-value = 1e-5). A total of 83 GSK-3 target proteins reported in human and mouse were compared to the C. cinerea proteins, and 52 orthologues were identified (S1 Table). Among them, glycogen synthase (GS, CC1G_01973), eukaryotic translation initiation factor 1 (eIF1, CC1G_03881), and eukaryotic translation initiation factor eIF2 gamma subunit (eIF2-gamma, CC1G_09429) were picked for real-time PCR analysis, which also included GSK-3 (CC1G_03802) itself. Sequences of the primers used are listed in S2 Table. To examine the effect of LiCl on the expression levels of GSK-3 target genes, 1 mL of water or LiCl solution (52.5 g/L and 105 g/L, equivalent to 1.5 g/L and 3 g/L in previous sections) was spread on the surface of the agar and then covered by a cellophane sheet for easier harvest of the mycelium. Mycelium from C. cinerea strain #326 was inoculated on top of the cellophane sheet. Three biological replicates were employed for each setup. After a 4-day incubation at 37°C in the dark, total RNAs were extracted using the RNeasy Plant Mini Kit (Qiagen). The RNA concentration was measured on a NanoDrop spectrophotometer (Thermo Scientific). RNA (500 ng) was used to synthesize cDNA using the iScript gDNA Clear cDNA Synthesis Kit (Bio-Rad). Quantitative real-time PCR (qPCR) was performed with three technical replicates on an Applied Biosystems 7500 Real-Time PCR system using SsoAdvanced Universal SYBR Green Supermix (Bio-Rad) according to the standard protocol: 1 cycle at 95°C for 30 seconds, then 40 cycles of 95°C for 15 seconds and annealing at 60°C for 60 seconds. Beta-tubulin was used as an endogenous control for normalization. A negative control was included for each primer pair to exclude false positive results.
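Relative expression from the normalized Ct values can be computed as below. The paper states only that beta-tubulin was the endogenous control; the Livak 2^(-ΔΔCt) model used in this sketch is an assumption, and all Ct values are hypothetical.

```python
# Relative expression from qPCR Ct values, normalized to beta-tubulin.
# The Livak 2^(-ddCt) method is an assumption here, since the exact
# quantification model is not stated; Ct values below are hypothetical.

def fold_change(ct_target_treated, ct_tub_treated, ct_target_control, ct_tub_control):
    d_ct_treated = ct_target_treated - ct_tub_treated
    d_ct_control = ct_target_control - ct_tub_control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

# Example: GSK-3 in LiCl-treated vs. water-treated mycelium.
print(fold_change(26.8, 20.1, 25.0, 20.0))  # ~0.31, i.e. ~3-fold reduction
```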
GSK-3 inhibitors and activator affect fruiting body development
As shown in Fig 1A, the effect of LiCl on C. cinerea fruiting body development was tested. While the control group developed mature fruiting bodies, the 1.5 g/L LiCl-treated group only developed stage-1 primordia. The plates treated with 3 g/L LiCl were arrested at the mycelium stage, and no initials or hyphal knots could be observed in the following 30 days. The mycelium treated with 6 g/L LiCl stopped growing before reaching the edge of the petri dish. These results showed that higher concentrations of LiCl had a stronger inhibitory effect on C. cinerea fruiting body development.
The three delivery methods of LiCl, whether mixed into the agar, spread on the surface of the agar, or added under the agar, showed no differences; all efficiently inhibited fruiting body development. LiCl is not sensitive to heat treatment and can be autoclaved to sterilize the solution. Any of these delivery methods can therefore be chosen for large-scale manufacture of the materials.
Another, more selective GSK-3 inhibitor, CHIR99021 trihydrochloride, was tested on C. cinerea. As shown in Fig 1B, young fruiting bodies developed on the control plates treated with water and on the plates with 1 μM CHIR99021 trihydrochloride. Stage-1 primordia developed on the plates treated with 100 μM CHIR99021 trihydrochloride. Mycelia on the plates treated with 500 μM CHIR99021 trihydrochloride remained arrested at the mycelium stage over the following 30 days. These results showed a stronger inhibitory effect of CHIR99021 trihydrochloride on C. cinerea fruiting body development at higher concentrations.
To demonstrate that GSK-3 inhibition is also important in fruiting body formation in other mushrooms, the effect of LiCl on P. djamor was tested (Fig 1C). LiCl was added to the PDA medium before autoclaving. While mature fruiting bodies developed on the control plates, the plates treated with 2 g/L LiCl failed to develop fruiting bodies in the following 30 days. These results showed an inhibitory effect of LiCl on P. djamor fruiting body development.
Given the above results showing that GSK-3 inhibitors can inhibit fruiting body development, we hypothesized that GSK-3 activity is associated with fruiting body development. A GSK-3 activator, cisplatin, was then tested for its effect on C. cinerea fruiting body development (Fig 1D). The cisplatin-treated group showed accelerated development from the formation of hyphal knots onward, and mature fruiting bodies appeared two days earlier than in the control group, which had only developed young fruiting bodies by that time. These results showed a promoting effect of cisplatin on C. cinerea fruiting body development.
These data support the conclusion that, among fungal species of the order Agaricales, GSK-3 inhibitors inhibit fruiting body formation, whereas a GSK-3 activator promotes it.
LiCl promotes mycelium growth
Mycelial growth area of the biological triplicates was recorded daily. Fig 2A shows the average mycelial growth area of each group, with error bars showing the maximum and minimum values. As the inoculum usually needs time to adapt to a new environment and absorb nutrients, the mycelium grew only slowly in the first two days. Differences appeared on day 3 and day 4 after inoculation, with both LiCl-treated groups growing faster than the control group (p < 0.05, one-tailed Student's t-test). On day 5 and day 6, the mycelium from the 1.5 g/L LiCl-treated group still grew faster than the control group (p < 0.05), while the 3 g/L LiCl-treated group differed little from the control group. The results showed that appropriate concentrations of LiCl accelerate mycelium growth (see the sketch below). This is an ideal property of a GSK-3 inhibitor for large-scale production of mycelium-based material. A modified recipe with a proper concentration of LiCl can not only inhibit fruiting body formation but also speed up mycelium growth, and hence shorten the manufacturing cycle and lower the cost.
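A minimal sketch of the daily comparison follows, using SciPy's independent-sample t-test on hypothetical growth areas for one day; it reproduces the kind of p-value check reported above, not the actual data.

```python
# Sketch of the daily growth-area comparison (hypothetical areas in cm^2 for
# the three replicates of one day).
from scipy import stats

control = [12.1, 11.4, 12.8]
licl_1_5 = [15.2, 14.6, 15.9]

t, p = stats.ttest_ind(licl_1_5, control)
print(f"t = {t:.2f}, two-sided p = {p:.4f}")  # halve p for a one-tailed test
```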
LiCl affects gene expression levels
To investigate the effect of LiCl on gene expression at the mycelium stage, the expression levels of GSK-3 and its target genes were tested. As shown in Fig 2B, real-time PCR results showed that 1.5 g/L LiCl significantly reduced the expression of GSK-3 itself (CC1G_03802), as well as of GS (CC1G_01973), eIF1 (CC1G_03881), and eIF2-gamma (CC1G_09429) (p < 0.001, two-tailed Student's t-test). 3 g/L LiCl reduced the expression of GS and GSK-3 (p < 0.05). These changes in gene expression support the conclusion that LiCl affects GSK-3 and its downstream genes.
The stages before stage-1 primordium are sensitive to LiCl
The bottleneck in producing living mycelium-based material is avoiding the formation of fruiting bodies. Upon environmental stimuli including nutrient depletion, a light/dark cycle, and cold shock, mycelia aggregate into hyphal knots, followed by fruiting body initials. Initials then develop into stage-1 and -2 primordia, young, and eventually mature fruiting bodies. We explored all feasible points in existing production lines at which to introduce a GSK-3 inhibitor, specifically LiCl, which is cheaper than the other GSK-3 inhibitors. The previous sections demonstrated that LiCl could be added at any time from pre-inoculation to mycelium extension. The hyphal knot is a brief stage that is difficult to identify with the naked eye, so only stages after the hyphal knot were tested. As shown in Fig 3, the addition of LiCl at the initial and stage-1 primordium stages led to the arrest of development. However, stage-2 primordia and young fruiting bodies continued to develop into mature fruiting bodies after LiCl treatment. Thus, the mycelium, initial, and stage-1 primordium stages are the sensitive windows to LiCl.
Discussion
This study has demonstrated that LiCl and CHIR99021 trihydrochloride treatments inhibited fruiting body initiation in a concentration-dependent manner (Fig 1A and 1B). In many instances, one would prefer that a living fungal mycelium refrain from developing into fruiting bodies so that the mycelium can be easily maintained without loss of its shape, form, or consistency. Compared with the current method of heat-killing fungal mycelium to prevent fruiting body formation, a living version of mycelium that simply does not form fruiting bodies is far more desirable, given its living nature and thus its self-healing potential. Therefore, we suggest the inclusion of LiCl or CHIR99021 trihydrochloride in recipes for manufacturing living fungal mycelium-based materials that exhibit controlled fruiting body development. In other cases, promoting fruiting body development may be of interest. For instance, when the intended goal is to produce as many fruiting bodies (e.g., mushrooms and truffles) as possible in a defined time period, enhanced fruiting body development would be beneficial. Although cisplatin could accelerate fruiting body formation, further studies of its safety and effects on human health are needed.
LiCl affected the expression levels of GSK-3 and three of its substrates. In a previous study of C. cinerea strain #326 [29], the expression level of eIF1 (CC1G_03881) remained stable from mycelium to primordium but increased significantly after stage-2 primordium; the expression levels of GS (CC1G_01973) and GSK-3 (CC1G_03802) increased during development from mycelium to primordium, while eIF2-gamma (CC1G_09429) decreased (S1 Fig). GSK-3, GS and eIF1 may be essential to fruiting body development, and LiCl treatment reduced their expression.
Given that both LiCl and CHIR99021 trihydrochloride are GSK-3 inhibitors, their inhibition of fruiting might be mediated through the inactivation of GSK-3. GSK-3 has an important role in cell-fate specification, leading to cell differentiation, apoptosis, or development through a number of signaling pathways [32][33][34][35]. We propose that GSK-3 could be the link between environmental stimuli and developmental responses, acting as a master switch of fruiting body formation (Fig 4). While GSK-3 is constitutively active under favorable conditions, unfavorable stimuli could inactivate it and turn off fruiting body development. The activity of GSK-3 may directly or indirectly determine fruiting body development.
When GSK-3 activity is limited by inhibitors, undifferentiated cells remain proliferative. GSK-3 inhibitors have been shown to maintain mouse and human embryonic stem cells in an undifferentiated state, while removing the inhibitors promotes differentiation into multiple cell lineages [36]. Some transcription factors phosphorylated by GSK-3 are targeted by ubiquitination for proteasome-mediated degradation. Substrates with less GSK-3 phosphorylation and ubiquitination may have prolonged half-lives that enhance cell proliferation [37]. Therefore, GSK-3 inhibitors, which act like unfavorable environmental signals, may inhibit fruiting body formation by interfering with cell differentiation. Further studies are needed to uncover the underlying mechanisms.
GSK-3 might be targeted to produce living fungal mycelia with an enhanced or inhibited fruiting body development profile, by either a permanent means (e.g., a GSK-3 knockdown or knockout fungal strain) or a transient means (e.g., application of a GSK-3 activator or inhibitor in the growth medium). While the former may be easier to maintain in the long term, the effort involved in initially establishing the genetically modified fungal strains is far greater in both cost and time. In contrast, the latter offers flexibility and low cost, since the GSK-3 activator or inhibitor can be readily removed at an appropriate time so that the fungus may resume its normal life cycle.
The concentration range over which lithium salts enhance mycelium growth is narrow. A high concentration of LiCl may inhibit mycelium growth, especially in Trichoderma species, a common contaminant in edible mushroom cultivation [38]. Thus, while LiCl can be applied to prevent fruiting in some mushroom-forming fungi, it can also suppress contamination during manufacture in some scenarios. In addition to LiCl, other agents that specifically target GSK-3 may also prevent the development of fruiting bodies. Other known GSK-3 inhibitors include maleimide derivatives, staurosporine and organometallic inhibitors, indole derivatives, paullone derivatives, pyrazolamide derivatives, pyrimidine and furopyrimidine derivatives, oxadiazole derivatives, thiazole derivatives, and miscellaneous heterocyclic derivatives [18,39].
Based on the sensitive windows to LiCl, a basic production pipeline is proposed for adding GSK-3 inhibitors to produce living mycelium-based materials. A production pipeline may include all or part of the following procedures: 1) mixing and autoclaving of cultivation substrates; 2) inoculation; 3) first mycelium growth; 4) molding and pressing; 5) second mycelium growth; 6) pressing (optional); 7) third mycelium growth (optional); and 8) air-drying to the final product. LiCl or other GSK-3 inhibitors can be added at any time from procedures 1) to 7), by mixing into the cultivation substrate before autoclaving, adding to the surface before inoculation, or spraying onto the mycelium after a period of growth.
In conclusion, LiCl, CHIR99021 trihydrochloride or other GSK-3 inhibitors can be applied in the manufacture of mycelium-based materials, which can shorten the production cycle, reduce the cost for maintenance of mycelium materials, and therefore achieve a higher level of cost-effectiveness. | 2019-05-15T14:31:20.273Z | 2019-05-13T00:00:00.000 | {
"year": 2019,
"sha1": "379875be0e9221e322097265b5803889c6fce48b",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0209812&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "379875be0e9221e322097265b5803889c6fce48b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
250693724 | pes2o/s2orc | v3-fos-license | Advances in multiphase flow measurements using magnetic resonance relaxometry
When it comes to the measurement of bitumen and water content as they are produced from thermally exploited reservoirs (cyclic steam stimulation or steam assisted gravity drainage), most of the tools currently available in the market fail. This was demonstrated previously when our group introduced the first concept of a magnetic resonance based water-cut meter. The use of magnetic resonance as a potential tool for fluid cut metering from thermally produced heavy oil and bitumen reservoirs is revisited. First, a review of the work to date is presented, followed by our recent approach to this problem. A patented process is coupled with a patented pipe design that can be used inside a magnetic field and can handle fluids up to 260°C and 4.2 MPa. The paper describes the technical advances toward this goal and offers a first glimpse of field data from an actual thermal facility for bitumen production. The paper also addresses an approach for converting the current discrete measurement device into a continuous measurement system. Preliminary results for this new concept are also presented.
INTRODUCTION
Low field NMR relaxometry is a technology that offers significant benefits in reservoir characterization through the so-called magnetic-resonance logging tools that are offered by the oil and gas service companies (Coates et al., 1999). These tools can offer measurements of porosity, permeability, mobile and bound fluids and potentially saturations, if they are properly calibrated. Recently, the service tools also provide information about fluid viscosity and other fluid properties, again after careful calibration of the tools. These calibrations can be done through measurements in bench-top systems that perform the same measurements at much higher signal-to-noise ratio.
Over the past ten years the Tomographic Imaging and Porous Media (TIPM) Laboratory has focused on research, development and commercialization of low field NMR technology. Although the interest was primarily in heavy oil and oil sands applications, some technology advancements apply to conventional oils as well. Initially, much of the work centred on core analysis. However, once it was realized that NMR could be used for the determination of fluid content in cores, the jump to the measurement of fluid content in fluid streams was natural. Fluid characterization in conventional oil and gas reservoirs can be done with a multitude of logging techniques, including magnetic resonance. The term magnetic resonance induced fluid typing is nowadays used by the service companies to tackle this problem.
The fluid stream characterization work in our laboratory aimed at several potential applications, with original targets in heavy oil and oil sands. First, the work was directed at the determination of fluid content in oil sand process plants that handle oil sand mining ore and froth (Kantzas, 2004). The work was then extended to the measurement of water cuts in oil/water streams (Mirotchnik et al., 2003, 2004). This was followed by work on the determination of fluid viscosities in bulk streams, emulsions and in situ (Bryan et al., 2005). Finally, the research dealt with the measurement of fluid content, viscosity and mass transfer properties of heavy oil and bitumen mixtures with solvents (Goodarzi et al., 2007; Wen and Kantzas, 2004).
In co-operation with Canadian Natural Resources Limited (CNRL), a water-cut meter utilizing low field NMR technology was operated for approximately eight years on two different cyclic steam injection heavy oil well pads (Allsopp et al., 2001; Wright et al., 2004).
The only other recently published technology that attempts to measure fluid production at thermal conditions is the nuclear fraction metering technology offered by Schlumberger. This technology consists of the combination of a venturi meter and a gamma-ray source. It was field tested for a few hours in a SAGD operation in Western Canada (Hompoth et al., 2008a, 2008b). The technology shows promise but it cannot reach the temperature conditions specified to us by the client operating companies.
PHYSICAL PRINCIPLES
The principle of proton NMR deals with the response of hydrogen bearing molecules to a sequence of magnetic field pulses. As the protons in hydrogen bearing molecules are exposed to a magnetic field they acquire energy. When the magnetic field stops the protons return to their (random) original state and emit this energy back to the environment. The released energy is recorded and amplified through specific hardware. This energy is detected as a decay curve that is then fitted to a multi-exponential spectrum of characteristic transverse relaxation times (T2). The detailed physics of magnetic resonance can be found in numerous literature sources. Figure 1 shows a spectrum of a heavy oil / water emulsion. This spectrum is typical of the types of spectra encountered in the analysis of NMR extracted fluid data. It is, therefore, used as an example to explain how the NMR analysis is done. The x-axis denotes the relaxation times T2 and the amplitude denotes the signal strength at each relaxation time. The dashed vertical line indicates the characteristic cut-off point that is used to separate heavy oil signal from water signal and is based on theoretical and empirical considerations. In this case the slower relaxation is attributed to water and the faster relaxation is attributed to heavy oil. This interpretation was possible after numerous spectra of pure substances were collected. The smaller peaks between the dominant oil and water peaks are attributed to water droplets present in an emulsified state.
The information used for analysis of the NMR spectra often includes a weight or volume measurement, and calibration standards for comparison to the measured spectra. The properties of the spectra that are most useful are the amplitude (A) of the full spectrum, or of the part of the spectrum that corresponds to a given phase, the location of the spectral peaks as indicated by the relaxation time T2, or, in the case of complex spectra, a characteristic average relaxation time. The most commonly used relaxation time for averaging purposes is the geometric mean relaxation time, T2gm, which is defined as:

T2gm = exp( Σi Ai ln T2,i / Σi Ai )    (1)

where Ai is the amplitude associated with relaxation time T2,i. A spectrum contains significant information that is qualitative until the amplitude is translated into a mass or volume number, in which case the spectrum information becomes quantitative. In order to reach this point, we require calibrations that provide the amplitude corresponding to a unit volume or a unit mass. For most of our applications the amplitude per unit mass is defined through the amplitude index (AI) as follows:

AI = A / m    (2)

where m is the sample mass. As a result, if one knows the amplitude and amplitude index of a given compound, the NMR spectrum will provide the actual amount of this compound as soon as it is identified in the spectrum. In many studies, instead of AI, it is customary to use another variable called the relative hydrogen index (RHI), defined as the ratio of the AI value of a specific compound to that of water:

RHI = AIcompound / AIwater    (3)

Both AI and RHI have a sound scientific basis as they are related directly to the density of hydrogen atoms in the fluid. In practical applications they may be thought of as calibrations. The main difference is that AI values are instrument specific, but RHI values should be instrument independent and depend only on the composition of the stream at hand.
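A small numerical sketch of equations (1)-(3) is given below for a hypothetical discretized spectrum; the amplitudes, relaxation times and masses are illustrative, not measured values.

```python
# Equations (1)-(3) applied to a discretized T2 spectrum (hypothetical data).
import math

def t2_geometric_mean(amplitudes, t2_values):          # equation (1)
    total = sum(amplitudes)
    log_mean = sum(a * math.log(t2) for a, t2 in zip(amplitudes, t2_values)) / total
    return math.exp(log_mean)

def amplitude_index(total_amplitude, sample_mass_g):   # equation (2)
    return total_amplitude / sample_mass_g

def rhi(ai_fluid, ai_water):                           # equation (3)
    return ai_fluid / ai_water

spectrum_t2 = [5.0, 50.0, 500.0, 2000.0]   # relaxation times, ms
spectrum_amp = [30.0, 5.0, 10.0, 55.0]     # fitted amplitudes
print(t2_geometric_mean(spectrum_amp, spectrum_t2))    # ~240 ms
```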
With respect to fluid analysis, the only parameter that will distort the NMR spectrum unexpectedly is the presence of paramagnetic ions (such as iron or rust). Conversely, parameters that play havoc with conventional sensors, such as brine salinity and emulsion inversions, have no effect on the NMR measurement.
Temperature is an important factor that will shift the NMR peak locations of different fluids. Thus when measurements are made it is required that the temperature remains constant. Calibrations are then made to compare spectra at different temperatures to corresponding standards, and corrections are made to the predictive algorithms for fluid content and fluid property determination.
Before providing predictive equations for different circumstances, let us consider some relaxation times for different fluids. At room temperature bulk water relaxes at approximately 2300 ms whereas bitumen will relax with a T2 of less than 10 ms.
In general, the more restricted the hydrogen bearing molecule, the faster its relaxation time. Restrictions to a molecule can occur either through an increase in viscosity, or through the confinement in small spaces (such as pores).
When dealing with emulsions of heavy oil or bitumen and water, it is often assumed that the T2 of bitumen does not change much when the bitumen is emulsified, because it is already quite fast, but that the T2 of water changes significantly upon emulsification. This is taken into account when interpreting the spectra: because the location of the water signal strongly affects the geometric mean of the spectrum, it can be used to conclude whether the flow is separated, an oil-in-water emulsion, or a water-in-oil emulsion.
LABORATORY DATA
The first and most heavily studied NMR metering application was that of a two-phase (water-oil) compositional meter. Based on equation (2), if the total weight (or volume) of a sample is known, then given a constraint (or cut-off) for the separation of the two fluids, one can predict the water and oil cuts. In a simplified approach this can be done through:

water cut = (Aw/AIw) / (Aw/AIw + Ao/AIo)    (4)

where Aw and Ao are the amplitudes of the water and oil portions of the spectrum, respectively. The cut-off approach separates the spectrum into two components, one corresponding to oil and one to water. Fig. 2 shows some early laboratory results. The correlation between NMR results and Dean-Stark results is approximately 99%. Although it was not expected to achieve such precise results in the uncontrolled field environment, this was considered encouraging enough for field applications. The combination of equations (2) and (4) can be manipulated to provide the composition of a water/oil/gas stream. In this case the results can be plotted either for individual phases or in a ternary plot, as can be seen from Fig. 3. The next important development in fluid characterization was the determination of viscosity on-line. The results of this work have been extensively published (e.g. Bryan et al., 2005). It was found that NMR could predict viscosities of single fluids and emulsions with reasonable accuracy without tuning, and with excellent accuracy if the algorithm is tuned to specific oils rather than using the original generic approach.
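Equation (4) can be applied directly to the two integrated peak amplitudes, as in the sketch below; the amplitudes are hypothetical and the AI values stand in for instrument calibrations.

```python
# Equation (4): water and oil cuts from peak amplitudes and amplitude indices.
# Amplitudes below are hypothetical; AI values come from calibration.

def mass_cuts(a_water, ai_water, a_oil, ai_oil):
    m_water = a_water / ai_water                  # equation (2) rearranged
    m_oil = a_oil / ai_oil
    water_cut = m_water / (m_water + m_oil)
    return water_cut, 1.0 - water_cut

wc, oc = mass_cuts(a_water=55.0, ai_water=1.00, a_oil=40.0, ai_oil=0.95)
print(f"water cut = {wc:.1%}, oil cut = {oc:.1%}")
```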
The predictive models for viscosity measurements are of the form:

μ = α / (RHI × T2gm^β)    (5)

The coefficients α and β vary with different systems and depend on whether or not the model is tuned to a specific reservoir (Kantzas, 2004). In a parallel project, the value of NMR technology was investigated for bitumen-solvent mixtures. For processes like vapour extraction (VAPEX), it is anticipated that the operator would like to know the bitumen content and produced fluid viscosity for the evaluation of the performance of VAPEX. To this end, laboratory work focused on using liquid solvents (both paraffinic and cyclic). Hundreds of mixtures of oils and solvents were created at different solvent-to-oil ratios. The NMR spectra were obtained over time and their trends were observed. Several predictive algorithms were designed that are tuned to different solvent/oil pairs. Fig. 7 shows a typical example of these plots that can be used for bitumen content determination from NMR derived parameters. It must be noted that the applications for NMR technology are numerous, but only the water-cut metering technology has been field-tested to date. The specifics of field-testing and some preliminary results are discussed next.
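Tuning the coefficients α and β of the power-law model in equation (5) to a specific oil reduces to a linear regression in log-log space; the sketch below uses hypothetical calibration points (T2gm in ms, viscosity in cP).

```python
# Fitting the viscosity correlation of equation (5) to calibration data.
import numpy as np

t2gm = np.array([2.0, 5.0, 20.0, 80.0])          # geometric mean T2, ms
rhi = np.array([0.92, 0.94, 0.96, 0.98])         # relative hydrogen index
mu = np.array([50000.0, 8000.0, 600.0, 40.0])    # measured viscosity, cP

# mu = alpha / (RHI * T2gm**beta)  =>  log(mu*RHI) = log(alpha) - beta*log(T2gm)
slope, intercept = np.polyfit(np.log(t2gm), np.log(mu * rhi), 1)
alpha, beta = np.exp(intercept), -slope
print(f"alpha = {alpha:.3g}, beta = {beta:.2f}")
print("predicted viscosity at T2gm = 10 ms, RHI = 0.95:",
      alpha / (0.95 * 10.0 ** beta), "cP")
```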
SENSOR DESIGN
In order to apply the theoretical models for the creation of a field sensor, the standard laboratory designs to date were used as a starting point.
The sensor part consists of an assembly of permanent magnets and transmitter/receiver coils. The permanent magnets cannot be exposed to temperatures greater than 80 °C without being damaged. Thus, we have developed a heat-shielded magnet that overcomes this drawback and at the same time increases the maximum process temperature to 260 °C. The sampling pipe/tube itself has to be non-conductive and non-magnetic. Additionally, the pipe/tube also has to be able to handle the design conditions of 260 °C at 4.2 MPa (600 psi) for the process fluid.
One of the most critical steps in the program was to define a set of criteria to use as both a guideline for the program and as a means to determine success or failure.
Based in large part on CNRL's extensive experience in the field, the following criteria were set:
1. The sensor should give clear water-cut readouts in % on a local digital or computer display.
2. There should be no "tweaking" of the instrument by the end of the test.
3. The sensor should operate to give water cuts consistently without interferences.
4. The apparatus results should be comparable to spun cuts within an accuracy better than ±5% of water cut readings.
5. The sensor should give repeatable results on the same sample (within ±5% of the water-cut reading).
6. The sensor results should be comparable to Dean Stark extraction within an accuracy of ±5% of water cut reading.
Furthermore the instrument should be fitted with the hardware and software necessary for direct communication with the control system (PLC). The process of sampling and measuring should be automated. The measurement signal should be provided to the PLC by a 4-20 mA DC signal. Signals for measurement timing are passed between the instrument and PLC using 120 VAC on/off digital switches. Any other arrangement desired may also be used, as the proposed apparatus should be completely generic with respect to signals to or from the data logger or control systems. A special cabinet system to contain the instrument, its computer, and associated electronics for communication with the PLC was constructed. This cabinet had to protect the instrument, provide adequate ventilation and protect the environment from the instrument (Class 1, Div 2).
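The 4-20 mA loop mapping described above is a standard linear scaling; the sketch below assumes the loop spans a 0-100% water cut range, which is our assumption rather than a stated specification.

```python
# Sketch of the 4-20 mA loop scaling: a linear map between the measured
# water cut and the analog output (assumed range 0-100 %).

def water_cut_to_mA(water_cut_pct, lo_pct=0.0, hi_pct=100.0):
    frac = (water_cut_pct - lo_pct) / (hi_pct - lo_pct)
    frac = min(max(frac, 0.0), 1.0)   # clamp to the valid loop range
    return 4.0 + 16.0 * frac

def mA_to_water_cut(current_mA, lo_pct=0.0, hi_pct=100.0):
    return lo_pct + (current_mA - 4.0) / 16.0 * (hi_pct - lo_pct)

print(water_cut_to_mA(75.0))   # 16.0 mA
print(mA_to_water_cut(16.0))   # 75.0 %
```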
FIELD TESTING TO DATE
Example field data collected over a 24-hour period were presented elsewhere (Wright et al., 2004). It was demonstrated that the instrument sees changes in water cut as wells change. Steady and reproducible results were obtained. It was further demonstrated that the test separator acts as a homogenizing buffer on the production data. Examination of the data shows that the tool is remarkably stable, with measurements having a standard deviation of 2.2%.
Fig. 7 Comparison of water cut measured by NMR, centrifuge, Dean Stark and the current online tool (Wright et al., 2004)
Comparisons of the fit of the data with centrifuge cuts in the field and with subsequent Dean Stark extraction were considered satisfactory, considering that the sample actually measured by the instrument is not the same as the sample sent to the centrifuge, although it is as close as practically possible.
Comparison of NMR, centrifuge, Dean Stark and the current online tool
An attempt was made to compare the performance of the original online tool against NMR water cut measurements and against Dean Stark and centrifuge analyses of grabbed samples. A sample of such a comparison is shown in Fig. 7. The striking thing in this graph is that, despite the sampling issues, the NMR consistently tracks the discrete samples, whereas the competing instrument is only in range part of the time and often reports a number that is completely wrong. The pattern shown here, where the older tool consistently reports numbers that differ from the actual water cut, is typical of the operation of this well pad and is the major reason that the well pad was originally selected to test the NMR.
The first part of our work demonstrated that NMR based fluid content and fluid property measurement technology is now feasible. Properties such as fluid viscosity, solvent content and water-cut can be measured on-line with remarkable accuracy and precision. The components that required intellectual property protection were patented (Mirotchnik et al., 2003, 2004). Additional field-testing will be required prior to full commercialization.
DESIGN IMPROVEMENTS AND NEW ALGORITHMS
A new company, Perm Instruments Inc., was created to commercialize patented and patentable technology developed in the Tomographic Imaging and Porous Media (TIPM) Laboratory. There has been very strong interest from the oil companies that support our program in a sensor that could measure oil, water, solvent, gas and solids in a flowing stream, with immediate application in the heavy oil and oil sand developments of Western Canada, at a cost comparable to or less than that of three-phase test separators. Fig. 8 is a simplified version of how the system works today. The fluid to be analyzed is conveyed to the sampling location inside the magnet (Fig. 8), where it is exposed to a relatively uniform magnetic field.
Fig. 8 Schematic of water-cut meter concept and approach
Any arbitrary emulsion in the measurement zone will yield a relaxation spectrum composed of a water peak and an oil peak. Since the measurement volume is known, we can calculate the amount of water present from the spectrum, normalize it to the measurement volume in the pipe and obtain the water cut. As long as there is no appreciable amount of gas, the instrument works very well, as is also shown in Fig. 7 from the published data with Canadian Natural (Wright et al., 2004).
This idea has been implemented in an industrial-type machine (MRWCM) that is shown in Fig. 9. The magnet, electronics and other support hardware are enclosed in a pressurized system that allows exposure to explosive environments under the Canadian Standards Association (CSA) Class 1 Div 2 classification. The cabinet and supporting structure have a minimal footprint of 1.5 × 1.5 m². As mentioned earlier in Section 4, the MRWCM design concept meant that the pipe that conveys the process fluids to the point of measurement needed to be specially designed. Conventionally, pressure piping is designed as per ASME B31.3, which specifies the design requirements. Since the pipe also needs to be transparent to a magnetic field and highly resistive, the only materials that could practically be used were different kinds of plastics. Hence, an unconventional design approach was dictated by the meter, which further required extensive testing to demonstrate the performance of the new pipe. This process has since been successfully completed, with the necessary approvals from ABSA (the Alberta Boilers Safety Association, the regulatory authority for pressure vessels and pressure piping designed and operated in Alberta). The effort has resulted in a one-of-a-kind pipe design using plastic materials that can operate continuously at 260 °C and 600 psig under the harshest fluid environments typical of the oilfields of Alberta. The detailed design of the pipe is currently being patented. However, there are two areas of further improvement for this technology: (1) the sample has to be captured, so there is no continuous monitoring; (2) the instrument is an add-on to the three-phase test separator and not a replacement. The first area is currently being addressed as follows. The need to capture the sample within the MRWCM for a period of time arises because the hydrogen spins need to reach the equilibrium state in which more of them are aligned along the direction of the magnetic field than in the opposite direction. The sample must therefore remain within the same magnetic field for several longitudinal relaxation times (T1); the complex population of spins that exists in hydrogen-bearing molecules and is polarized by the magnetic field of the MRWCM approaches the equilibrium state after approximately 5T1. With bulk water T1 of the order of 3 s, the total time needed for the measurement procedure reaches roughly 15 s in order for the relaxation to be captured. With the current size of the uniform magnetic field region of about 10 cm along the axis of the pipe, the flow speed is then limited to not more than 10 cm / 15 s ≈ 0.67 cm/s. Therefore, a flow diameter of around 10 cm translates to a production rate of about 5 m³/day, which in effect is an order of magnitude less than typical field production rates. Substantially larger magnets can be considered to resolve the issue, but this would noticeably increase the cost of the system. This approach is considered as a part of our broader development strategy. We also look at the possibility of sub-sampling.
Alternatively, we suggest concentrating on the oil peak measurements instead and obtaining the water content by difference. As the T1 for heavy oil is typically in the range of 10-100 ms, enough time is available to accommodate realistic flow rates in the wells. For example, the same flow diameter can accommodate flow rates in excess of 113 m³/day for an oil relaxing with T1 = 100 ms (see the sketch below). The proposed approach could be accommodated within the current design for fluid flow measurements.
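The throughput arithmetic above can be packaged as below; the 5×T1 polarization criterion and the T1 values (≈3 s for water, ≈0.1 s for a fast-relaxing heavy oil) are assumptions chosen to be consistent with the figures quoted in the text.

```python
# Throughput estimate: the fluid must stay inside the ~10 cm uniform-field
# region for roughly five longitudinal relaxation times (T1) to polarize.
import math

def max_rate_m3_per_day(magnet_length_cm, t1_s, pipe_diameter_cm, n_t1=5.0):
    residence_s = n_t1 * t1_s
    velocity_cm_s = magnet_length_cm / residence_s
    area_cm2 = math.pi * (pipe_diameter_cm / 2.0) ** 2
    return velocity_cm_s * area_cm2 * 86400.0 / 1.0e6   # cm^3/s -> m^3/day

print(max_rate_m3_per_day(10.0, 3.0, 10.0))  # water (T1 ~ 3 s): ~4.5 m^3/day
print(max_rate_m3_per_day(10.0, 0.1, 10.0))  # oil (T1 ~ 0.1 s): ~136 m^3/day
```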
The planned steps for testing this approach are as follows:
• Modify our available high temperature loop for the purpose of this testing (accurate flow rate measurement, temperature control).
• Analyse flow regimes in the tube and assess the effects of viscosity, flow composition, turbulence and residence time distribution on accuracy of measurements.
• Test the capabilities of our digital fluoroscopy system (possibly combining two of them to obtain a stereo effect) to deliver flow rate measurements based on appropriate image correlation analysis.
• With the known flow rate and flow velocity distribution, analyse the possibility of water content extraction from the NMR signal.
• Look into the problem of proper continuous sampling (whether a bypass loop will be needed in order to reduce the flow rate of fluid passing through the meter).
A successful design of such a meter will require a wider diameter pipe and a wider magnet but of similar length. The potential incremental cost will have to be investigated but it is not considered prohibitive.
CONCLUSIONS
NMR based fluid content and fluid measurement technology is now feasible. Properties such as fluid viscosity, solvent content and water-cut can be measured on-line with remarkable accuracy and precision.
Additional field-testing will be required prior to full commercialization. | 2022-06-28T00:08:35.468Z | 2009-01-01T00:00:00.000 | {
"year": 2009,
"sha1": "3e3e261940d6a045864930866c0a170e1c276151",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/147/1/012029",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "3e3e261940d6a045864930866c0a170e1c276151",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
3337508 | pes2o/s2orc | v3-fos-license | Periodontal Flap Surgery along with Vestibular Deepening with Diode Laser to Increase Attached Gingiva in Lower Anterior Teeth: A Prospective Clinical Study
Background: Chronic periodontitis in lower anterior teeth results in rapidly progressive gingival recession (GR), loss of alveolar bone, and decreased vestibular depth (VD), with consequent tooth mobility and tooth loss. Treatment options for such cases in this esthetically important area of the oral cavity include extraction followed by implants, for which sufficient bone height and an adequate mucogingival complex are prerequisites. Hence, an attempt was made to prolong the life of lower anterior teeth and postpone the need for implants by treating chronic periodontitis with periodontal flap surgery followed by vestibular deepening in a single surgical procedure. Materials and Methods: In this prospective clinical study, conventional periodontal flap surgery was done on 74 sites in the lower anterior teeth of 16 patients with attachment loss >5 mm due to chronic periodontitis. Vestibular deepening with a diode laser (wavelength: 810 nm; output power: 0.5-7 W; continuous wave; contact mode) was done after suturing the flap. All the clinical parameters, namely GR, pocket depth (PD), clinical attachment loss (CAL), width of keratinized gingiva, width of attached gingiva, and VD, were assessed preoperatively after Phase I therapy and 6 months postoperatively. Results: At all 74 sites, there was a highly significant gain in attached gingiva, keratinized gingiva, and VD (P ≤ 0.001). A highly significant reduction in PD (P ≤ 0.001) and a significant reduction in attachment loss (P ≤ 0.01) were observed, but no significant reduction in GR (P = 0.897). Conclusions: The combination of periodontal flap surgery with vestibular deepening using a diode laser may be a suitable, cost-effective treatment option to prolong the life of periodontally involved lower anterior teeth. The surgical technique can postpone the need for extraction of teeth, along with all the intangible benefits of periodontal therapy.
procedure is no longer a part of the armamentarium of periodontists.
The literature on periodontology is replete with studies on reconstruction of the periodontium with bone grafts, [6] barrier membranes, and tissue engineering. [7,8] Soft-tissue grafts [9] are used for increasing the attached gingiva and for root coverage. The modified apically repositioned flap technique [10,11] and the subepithelial connective tissue graft [12] aim at increasing the attached gingiva on multiple teeth. To date, no periodontal surgical procedure provides for the treatment of teeth with attachment loss of >5 mm in conjunction with gingival augmentation on multiple teeth in one sitting. Hence, the present prospective clinical study was done to evaluate the effect of periodontal flap surgery in combination with vestibular deepening with a diode laser on attached gingiva, VD, pocket depth (PD), attachment loss, and GR.
Materials and Methods
This prospective clinical study was conducted on 16 patients (4 males and 12 females; mean age ± standard deviation [SD]: 35.06 ± 7.52 years) diagnosed with generalized moderate-to-severe chronic periodontitis. The individuals were recruited from patients reporting to the Department of Periodontology, Faculty of Dentistry, Jamia Millia Islamia University, New Delhi, India.
The criteria for patient selection were: (1) no history of systemic disease that could affect the outcome of periodontal therapy; (2) good compliance with plaque control instructions; (3) generalized moderate-to-severe chronic periodontitis; (4) at least two mandibular anterior teeth with radiographic bone loss, a shallow vestibule, CAL ≥5 mm, and limited attached gingiva; (5) no history of smoking; and (6) absence of traumatic occlusion. Patients excluded from the study were: (1) pregnant patients; (2) those with inadequate compliance with oral hygiene maintenance instructions; and (3) those using any medication known to influence periodontal tissues.
This study was approved by the Institutional Ethical Committee of Jamia Millia Islamia University, New Delhi, India, and all the ethical formalities prescribed by the Institutional Ethical Committee were followed. Routine periodontal therapy was given to all the patients for generalized chronic periodontitis.
A general assessment of the selected patients was made through history, clinical examination, and routine laboratory investigations. Phase I therapy, including oral hygiene instructions and full-mouth scaling and root planing for generalized chronic periodontitis, was performed on all the patients. Only lower anterior teeth with attachment loss >5 mm due to chronic periodontitis were taken up for the study. A total of 74 sites in 16 patients with radiographic bone loss, tooth mobility, a shallow vestibule, clinical attachment loss (CAL) ≥5 mm, and limited attached gingiva were considered. GR, PD, CAL, width of keratinized gingiva, width of attached gingiva, and VD were assessed preoperatively after Phase I therapy and 6 months postoperatively. Occlusal adjustment and, wherever required, temporary esthetic fiber splinting on the lingual aspect of mobile teeth were done to facilitate the periodontal flap surgery.
After Phase I therapy [ Figure 1a], the conventional periodontal flap for reconstructive surgical procedure was performed under local anesthesia. Periodontal surgical debridement [ Figure 1b] was done followed by placement of sterile synthetic hydroxyapatite and β-tricalcium phosphate bone graft material (Sybograf ™ Plus, Eucare pharmaceuticals, Chennai, India), in the bony defects, wherever required. The periodontal flap was sutured with 3-0 silk sutures to its original position [ Figure 1c]. A horizontal incision was given with diode laser (DenLase, Diode Laser Therapy System, Daheng Group Inc., China; Laser parameters: Wavelength -810 nm, output power: 0.5-7 W, continuous wave [CW], contact mode), to detach the fibers from underlying periosteum leaving 1-2 mm of marginal gingiva and sutures intact [ Figure 1d]. Care was taken to direct the laser away from the periosteum and bone. VD of 6-8 mm was achieved by separating the muscle attachments. The surgical area was covered by noneugenol periodontal dressing (Coe-pak) (COE-PAK™, Periodontal Dressing, GC America Inc., USA) [ Figure 1e]. Ibuprofen 400 mg was prescribed for 3 days to relieve any postoperative discomfort. Postoperative instructions included avoid food for 3 h, cold compresses and soft diet on the 1 st day, avoid biting from front teeth, and passive rinsing with 0.12% chlorhexidine gluconate for 2 weeks.
Patients were advised to report in case of dislodgement of the periodontal dressing and were recalled after 2 weeks for pack and suture removal. A gentle gingival massage in the surgical area was advised to prevent reattachment of the fibers, and oral hygiene instructions were reinforced. Follow-up was done after 1 month, and all the clinical parameters were recorded after 6 months [Figure 1f].
Statistical method
Continuous variables were reported as mean ± SD and analyzed using the independent-sample t-test. SPSS version 17 (SPSS, Inc., Chicago, IL, USA) was used for data analysis. A two-sided P < 0.05 was considered statistically significant.
Results
Demographic criteria of 16 patients and baseline clinical characteristics of lower anterior teeth are provided in Table 1.
Out of 74 sites, 32 sites had Grade 1 mobility, 3 sites had Grade 2 mobility, and Grade 3 mobility in 1 tooth.
The amount of bone loss seen after periodontal flap reflection around the roots of teeth is also depicted in Table 1. It was observed that only 1 site with Grade 3 mobility coincided with bone loss of more than 2/3rd of the root surface. Teeth with Grade 2 mobility included 3 incisors and had >2/3rd bone loss. Of the tooth sites with Grade 1 mobility, 1 incisor and 1 canine had 1/3rd bone loss, 17 incisors had up to 2/3rd bone loss, and 11 incisors had >2/3rd bone loss. In the sites with no clinical tooth mobility, 1/3rd bone loss was seen in 7 incisors and 13 canines, up to 2/3rd bone loss in 15 incisors and 2 canines, >2/3rd bone loss in 1 lateral incisor, and no bone loss in 2 canines.
There was no significant reduction in GR (P = 0.897), but there was a significant decrease between the pre- and postoperative means of attachment loss and PD [Table 2]. Comparison of the soft-tissue parameters after 6 months showed a statistically highly significant gain in keratinized gingiva, attached gingiva, and VD, as shown in Table 2.
Discussion
Chronic periodontitis is a multifactorial infectious disease characterized by slow, irreversible loss of the periodontal supporting tissues over time. [13] The chronic nature of this disease and the lack of severe pain mean that patients often report only when the teeth are mobile or there is loss of clinical attachment manifested as GR. In our clinical practice, patients reported with Miller's Class 3 and Class 4 GR and mobility of the lower anterior teeth. These patients were understandably perturbed by the prospect of losing these teeth. Lower anterior teeth are of particular concern as they are esthetically important, single-rooted, and the first teeth to be extracted for periodontal reasons, followed by the upper anterior teeth and upper second molars. In long-term maintenance studies, however, molars were lost most frequently. [1] Results of a 40-year follow-up study on the fate of 455 teeth with questionable prognosis showed that teeth with significant loss of periodontal tissues could be functionally maintained. [14] The average prognosis of the teeth after active treatment changed very little from the initial assessment to 5-8 years, with prognosis being more accurate for single-rooted than multirooted teeth. [15] Long-term preservation of hopeless teeth following periodontal surgery is an attainable goal with no detrimental effect on the adjacent surfaces of neighboring teeth. [16,17] Moreover, there was a significant reduction in the mean probing depth for the adjacent interproximal surfaces from pretherapy to posttherapy. [18] Nowadays, with the help of various new technologies, biological approaches, and biomaterials, the challenge is to apply this experience and knowledge to improve patient outcomes in terms of function, ease of care, esthetics, and long-term maintenance. [19] With this background, the authors describe a surgical technique combining conventional periodontal flap surgery [20] with a vestibular deepening procedure using a diode laser, to retain and prolong the life of periodontally involved teeth.
In our study, 36 of the 74 treated sites were mobile teeth [Table 1]. Treatment of the periodontitis and occlusal adjustment is usually enough to strengthen the supporting tissue and re-establish function, especially in Miller Grade 1 tooth mobility. [21] However, splinting is needed in cases of Miller Grade 2 tooth mobility, in addition to the treatment of the periodontitis and occlusal adjustment. Splinting is sometimes indicated in cases of Miller Grade 3 tooth mobility where tooth extraction is not acceptable or is contraindicated. Although splinting provides some beneficial distribution of the occlusal forces that cause tooth mobility, occlusal adjustment alleviates these forces by removing destructive contacts and creating proper occlusal clearance. [22,23] Therefore, the mobile teeth were temporarily splinted to facilitate periodontal therapy.
A diode laser was used to make the horizontal incision, achieving simultaneous hemostasis during the second step of surgery. Lasers offer many advantages, including excellent hemostasis, precision, tissue surface sterilization, decreased swelling and edema, decreased pain, faster healing, and increased patient acceptance. [24,25] A diode laser is a solid-state semiconductor laser that typically uses a combination of gallium, arsenide, and other elements, such as aluminum and indium, to change electrical energy into light energy. [26] It does not interact with dental hard tissues, making it an excellent soft-tissue surgical laser. It is used for cutting and coagulating gingiva and oral mucosa and for soft-tissue curettage or sulcular debridement. [27] The soft-tissue diode laser is beneficial not only to the patient but also to the operator, as the results are more predictable and the procedure is less stressful for patients and clinicians. [28]
conclusIons
The surgical technique described in this article is a cost-effective method to prolong the life of lower anterior teeth with questionable prognosis. The increase in VD and attached gingiva can improve the success of implants if required in the future. Limitations of the study include long-term follow-up of patients with regard to bone regeneration and effect on tooth mobility.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest. | 2018-04-03T00:31:03.519Z | 2018-01-01T00:00:00.000 | {
"year": 2018,
"sha1": "3901ed750894f179ff624d5dbcfdf3dba07eb85b",
"oa_license": "CCBYNCSA",
"oa_url": "https://europepmc.org/articles/pmc5812079",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "2c7edc8d460ac774f3a4dc50f963ac0eb04e2c85",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
3857289 | pes2o/s2orc | v3-fos-license | Stable luminescent iridium(iii) complexes with bis(N-heterocyclic carbene) ligands: photo-stability, excited state properties, visible-light-driven radical cyclization and CO2 reduction, and cellular imaging† †Electronic supplementary information (ESI) available: Additional experimental details, fig
Excited state properties, photo-catalysis and cellular imaging of photo-stable bis-NHC Ir(iii) complexes are described.
Selected bond lengths and angles of 4 Hex b, 6 H b, 7b and cis-6 H b.
Contents (ESI): Visible-light-driven radical cyclization: synthesis of substrates and general procedure for visible-light-induced radical cyclization; Table S4, characterization of substrates and products in photocatalysis. Visible-light-driven CO2 reduction: procedure and measurements of CO2 reduction; Table S5, control experiments of CO2 reduction.
Table footnotes: Values refer to the onset of the anodic peak potential (Eox,onset) and the oxidation peak potential (Epa, in parentheses) at 25 °C for irreversible couples at a scan rate of 100 mV s−1; c values refer to the onset of the cathodic peak potential (Ered,onset) and the reduction peak potential (Epc, in parentheses) for the irreversible reduction waves; d approximate zero-zero excitation energy E0,0 = 1240/λem (onset of emission band at 25 °C); e calculations of approximate redox potentials of excited iridium(III) complexes: E(IrIV/III*) = E(IrIV/III) − E0,0, E(IrIII*/II) = E(IrIII/II) + E0,0 (eV).
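Footnotes d and e above define how the excited-state redox potentials are obtained; the helper below implements exactly those two expressions, with hypothetical input potentials and emission onset.

```python
# Excited-state redox potentials from ground-state potentials and the
# zero-zero excitation energy, as defined in footnotes d and e above.
# Numerical inputs are hypothetical illustrations.

def e00_eV(emission_onset_nm):
    return 1240.0 / emission_onset_nm            # E(0,0) = 1240 / lambda(onset)

def excited_state_potentials(e_ox_V, e_red_V, emission_onset_nm):
    e00 = e00_eV(emission_onset_nm)
    e_IrIV_IIIstar = e_ox_V - e00                # E(IrIV/III*) = E(IrIV/III) - E(0,0)
    e_IrIIIstar_II = e_red_V + e00               # E(IrIII*/II) = E(IrIII/II) + E(0,0)
    return e_IrIV_IIIstar, e_IrIIIstar_II

ox_star, red_star = excited_state_potentials(e_ox_V=1.30, e_red_V=-2.10,
                                             emission_onset_nm=450.0)
print(f"E(IrIV/III*) = {ox_star:.2f} V, E(IrIII*/II) = {red_star:.2f} V")
```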
Experimental Section
Photophysical measurements
UV-vis absorption spectra were recorded on a Hewlett-Packard 8453 diode array spectrophotometer. Photo-excitation and steady-state emission spectra were obtained on a SPEX Fluorolog-3 Model FL3-21 spectrofluorometer. Solution samples for measurements were degassed on a high-vacuum line in a two-compartment cell that consisted of a pyrex bulb (10 mL) and a quartz cuvette (path length: 1 cm) and was sealed from the atmosphere by a Bibby Rotaflo HP6 Teflon stopper. The solutions were rigorously degassed by at least five successive freeze/pump/thaw cycles. Excited state lifetime measurements of solution samples were performed with a Quanta Ray DCR-3 pulsed Nd:YAG laser system (pulse output 355 nm, 8 ns). The emission signals were detected by a Hamamatsu R928 photomultiplier tube, recorded on a Tektronix TDS 350 oscilloscope, and analysed using a program for exponential fits. Luminescence quantum yields (Φ) were measured relative to that of a degassed acetonitrile solution of Ru(bpy)3(PF6)2 (bpy = 2,2'-bipyridine) (Φref = 0.062) as a standard reference and calculated by: Φs = Φr(Br/Bs)(ns/nr)²(Ds/Dr), where the subscripts s and r refer to sample and reference standard solutions, respectively, n is the refractive index of the solvent, D is the integrated emission intensity, and Φ is the luminescence quantum yield. The excitation intensity B was calculated by: B = 1 − 10^(−AL), where A is the absorbance at the excitation wavelength and L is the optical path length (L = 1 cm in all cases). Errors for λ values (±1 nm), τ (±10%), and Φ (±10%) were estimated. Nanosecond time-resolved absorption and emission measurements were performed using an LP920-KS Laser Flash Photolysis Spectrometer (Edinburgh Instruments Ltd, Livingston, UK). The excitation source was the 355 nm output of a Nd:YAG laser.
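The relative quantum yield expression above translates directly into code; the sketch below uses hypothetical absorbances and integrated intensities, with the Ru(bpy)3(PF6)2 reference value Φref = 0.062 taken from the text.

```python
# Relative luminescence quantum yield, implementing the expressions above:
# phi_s = phi_r * (B_r/B_s) * (n_s/n_r)**2 * (D_s/D_r), with B = 1 - 10**(-A*L).

def excitation_fraction(absorbance, path_cm=1.0):
    return 1.0 - 10.0 ** (-absorbance * path_cm)

def quantum_yield(phi_ref, A_s, A_r, n_s, n_r, D_s, D_r, path_cm=1.0):
    B_s = excitation_fraction(A_s, path_cm)
    B_r = excitation_fraction(A_r, path_cm)
    return phi_ref * (B_r / B_s) * (n_s / n_r) ** 2 * (D_s / D_r)

phi = quantum_yield(phi_ref=0.062, A_s=0.10, A_r=0.10,
                    n_s=1.3441, n_r=1.3441,   # both samples in acetonitrile
                    D_s=2.5e6, D_r=1.0e6)
print(f"phi_s = {phi:.3f}")                    # ~0.155 for these inputs
```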
Cyclic voltammetry
Cyclic voltammetry measurements were performed on a Princeton Applied Research Model 273A potentiostat. The glassy-carbon electrode was polished with 0.05 μm alumina on a microcloth, sonicated for 5 min in deionized water, and rinsed with acetonitrile before use. A Ag/AgNO3 (0.1 M in MeCN) electrode was used as the reference electrode and a platinum wire as the counter electrode. All sample solutions were prepared in MeCN containing 0.1 mol dm−3 tetra(n-butyl)ammonium hexafluorophosphate (nBu4NPF6) as the supporting electrolyte. The solutions were purged and maintained under an argon atmosphere. Scan rates were 100 mV s−1 and potentials are reported with respect to the Ag/AgNO3 potential. Ferrocene was used as an internal reference and recorded with a potential in the range of 0.05-0.07 V for ferrocenium/ferrocene (Cp2Fe+/0) vs Ag/AgNO3.
Computational details
Density functional theory (DFT) and time-dependent density functional theory (TDDFT) calculations were performed to understand the geometries and electronic structures of the iridium complexes (1a, 1d) using the Gaussian 09 package.4 PBE05/6-31G*(LANL2DZ)6 was used for geometry optimization, and PBE0/6-31+G*(LANL2DZ) was used for the TDDFT calculations. Solvent effects were studied using the self-consistent reaction field (SCRF) method based on PCM models.7 The choice of solvent (dichloromethane, dielectric constant ε = 8.93) was based on the solvent medium used in the experiments.
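For readers unfamiliar with this kind of setup, the sketch below writes a plausible Gaussian 09 input for such an optimization. The route section and GenECP basis block (6-31G* on light atoms, LANL2DZ with its ECP on Ir) follow standard Gaussian conventions, in which PBE0 is requested via the "PBE1PBE" keyword; the geometry fragment, charge/multiplicity, and filename are hypothetical placeholders, not taken from this paper.

```python
# Writes a hypothetical Gaussian 09 input for a PBE0/6-31G*(LANL2DZ) geometry
# optimization with a PCM dichloromethane solvent model. The coordinates below
# are a meaningless placeholder fragment, not a real complex geometry.

route = "#p opt PBE1PBE/GenECP scrf=(pcm,solvent=dichloromethane)"

geometry = """\
Ir   0.000000   0.000000   0.000000
N    0.000000   0.000000   2.050000
C    1.250000   0.000000  -1.400000
"""  # placeholder coordinates only

basis_block = """\
C N H 0
6-31G*
****
Ir 0
LANL2DZ
****

Ir 0
LANL2DZ
"""

with open("complex_1a_opt.gjf", "w") as f:       # hypothetical filename
    f.write(f"{route}\n\nPBE0 optimization, bis-NHC Ir(III) complex (sketch)\n\n")
    f.write("1 1\n")                              # hypothetical charge/multiplicity
    f.write(geometry)
    f.write("\n")
    f.write(basis_block)
    f.write("\n")
```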
X-ray crystal-structure determination
Crystals of 4Meb (with PF6− counter anion), 6Hb, cis-6Hb and 7b suitable for X-ray crystallography were obtained by slow diffusion of diethyl ether into DCM (6Hb), MeCN (4Meb and 7b) and chloroform (cis-6Hb) solutions of these complexes, respectively. The X-ray diffraction data were collected on a Bruker X8 Proteum diffractometer, except for 7b (Bruker D8 Venture diffractometer). The crystals were kept at 100 K during data collection. The diffraction images were interpreted and the diffraction intensities were integrated using the program SAINT. Multi-scan SADABS was applied for absorption correction. Using Olex2,5 the structures were solved with the ShelXS6 structure solution program using direct methods and refined with the XL6 refinement package using least-squares minimization. The positions of the H atoms were calculated on the basis of the riding model with thermal parameters equal to 1.2 times those of the associated C atoms, and these positions participated in the calculation of the final R indices. In the final stage of least-squares refinement, all non-hydrogen atoms were refined anisotropically. Crystallographic parameters are summarized in Table S1. CCDC 1428476-1428479 contain the supplementary crystallographic data for this paper. The data can be obtained free of charge from The Cambridge Crystallographic Data Centre via www.ccdc.cam.ac.uk/data_request/cif. 7 were synthesized according to literature procedures. 1,1′-Methylenebis(3-methyl-1H-imidazol-3-ium) diiodide and 1,1′-methylenebis(3-butyl-1H-imidazol-3-ium) diiodide were prepared according to a modification of the literature method.8 Iridium trichloride (IrCl3) hydrate, lithium triflate (LiOTf) and silver(I) oxide (Ag2O) were purchased from commercial sources and used as received without further purification. Deionized (DI) water was used in the experimental procedures.
General procedure for preparation of iridium(III) complexes 1-9: Characterization for synthesis: 1H NMR spectra were recorded in deuterated solvents on a Bruker Avance DPX-300, AV-400, or DRX-500 Fourier-transform NMR spectrometer; chemical shifts (δ, ppm; J, Hz) are reported relative to tetramethylsilane (solvents: CDCl3, CD3CN and CD3OD). Positive-ion electrospray ionization (ESI) mass spectra were obtained on a Finnigan LCQ quadrupole ion trap mass spectrometer. Elemental analysis of the new complexes was performed on a Flash EA 1112 elemental analyzer at the Institute of Chemistry, Chinese Academy of Sciences. The dichloro-bridged iridium complexes were synthesized by refluxing the HC^N ligand with IrCl3 in aqueous 2-methoxyethanol (75% by volume) overnight (12-18 hours); the resulting precipitate was filtered, washed with water, ethanol and diethyl ether, and air-dried. The dichloro-bridged iridium complexes [(C^N)2IrCl]2 were used without further purification. A typical procedure is described as follows (example of 1): In a 50 mL two-neck round-bottomed flask, [(ppy)2IrCl]2 (150 mg, 0.14 mmol, 1 eq), 1,1′-methylenebis(3-butyl-1H-imidazol-3-ium) diiodide (159 mg, 0.308 mmol, 2.2 eq), and silver(I) oxide (136 mg, 0.588 mmol, 4.2 eq) were combined in 2-methoxyethanol (10 mL) to give a black suspension. The reaction was refluxed at 120 °C overnight, after which the 2-methoxyethanol (10 mL) was removed under vacuum. DCM (40 mL) was added and the mixture was filtered through Celite; the resulting filtrate was washed twice with aqueous lithium triflate solution (0.1 g/mL). The organic layer was dried over MgSO4. After the organic solvent was removed, the residue was purified by column chromatography, eluting with DCM/MeCN (v/v = 20/3). After removal of the organic solvent, complex 1b was obtained as a yellowish-green solid in good yield (82%; 210 mg, 0.231 mmol). δH
Complex 1a:
The synthesis procedure for bis-NHC carbene Ir(III) complexes with PF6− counter anions is similar to that of the triflate Ir(III) complexes. 1a (not purified by chromatographic column) was dissolved in MeOH and excess NH4PF6 was added to the above solution; the resulting precipitate was filtered, washed with water, MeOH and Et2O, and air-dried. The solid was purified by column chromatography, eluting with DCM/MeCN (v/v = 10/1 to 5/1).
Complex 3a:
Yield: 19%; this complex is difficult to purify. After two chromatographic purification procedures, the desired product was obtained from the crude material by growing crystals via diffusion of diethyl ether into a DCM solution. δH (400 MHz, CD3CN) = 8.
Synthesis of 2-(5-bromothiophen-2-yl)pyridine:
To a 100 mL round-bottomed flask was added 2-(thiophen-2-yl)pyridine (700 mg, 4.34 mmol) in DCM to give a yellow solution. Bromine (694 mg, 4.34 mmol) in 5 mL of DCM was added in an ice bath. Within 5 min of the addition, a red precipitate formed, and the mixture was left stirring overnight. 1H NMR analysis indicated 75% conversion, so another 150 mg of bromine was added. The crude material was loaded onto a 2.0 mm plate.
Synthesis of 1-chloroisoquinoline:
To a flask were added isoquinolin-1-ol (3 g, 20.67 mmol) and phosphoryl trichloride (10 mL), and the mixture was heated at 80 °C for 4 h. Ice water (100 mL) was added to the cooled mixture to quench the reaction. The aqueous layer was back-extracted with DCM (30 mL × 3). The combined organic layers were washed with water (20 mL × 3), dried over MgSO4, filtered and concentrated. The product was used without further purification (3.12 g, 92%). 1
Synthesis of 1-(thiophene-2-yl)isoquinoline (Htiq):
In a 100 mL two-neck round-bottomed flask, 2-bromothiophene (1.95 g, 12 mmol) and magnesium (0.291 g, 12.00 mmol) in dried THF (20 mL) gave a colorless suspension. Iodine (10 mg) was added to initiate the reaction, and the mixture was refluxed for 30 min. In a 250 mL two-neck round-bottomed flask, 1-chloroisoquinoline (1.63 g, 10.00 mmol) and Ni(dppe)Cl2 (53 mg, 0.100 mmol) in dried THF (30 mL) gave an orange solution. The Grignard reagent from the first flask was transferred by cannula at reflux temperature; the resulting dark brown solution in the 250 mL flask was left stirring for 20 min and then refluxed overnight. Monitoring of the reaction showed product with no starting materials remaining. H2O was added to end the reaction. The aqueous layer was back-extracted with Et2O (50 mL × 3). The combined organic layers were washed with water (50 mL × 2), dried over MgSO4, filtered and concentrated. The crude product was loaded onto a silica gel column and eluted with DCM/Hex (v/v from 1/1 to 3/1). Fractions with Rf = 0.25 in DCM/Hex (v/v = 1/1) were collected. Purified Htiq: 1.94 g, yield: 94%. 1
Synthesis of di(1H-imidazol-1-yl)methane:
In a 250 mL two-neck round-bottomed flask, 1H-imidazole (6.8 g, 100 mmol) in THF (200 mL) gave a colorless solution. Sodium hydride (4.39 g, 110 mmol) was added in three portions, and dibromomethane (9.03 g, 51.9 mmol) was then added to the resulting grey suspension in three portions in an ice bath. The grey suspension was heated at 50 °C overnight, during which the grey solid turned yellow. The reaction was monitored by 1H NMR. Once the reaction was complete, the solvent was removed, and the residue was dissolved in 100 mL of methanol, passed through Celite and washed with 50 mL of methanol. The filtrate was concentrated, loaded onto a silica gel column and eluted with DCM/MeOH (v/v = 4/1 to 1/1). Yellow fractions with Rf = 0.25 (DCM/MeOH, v/v = 4/1), which could be stained in an I2 chamber, were collected. 1H NMR (400 MHz, CDCl3) δ = 7.91 (s, 2H), 7.37 (s, 2H), 6.89 (s, 2H), 6.20 ppm (s, 2H).
Synthesis of substrates:
Substrate A:
Substrate B:
General procedure for visible-light-induced photocatalytic reductive cyclization of organohalides (Tables 3-7): Using substrate A1, photo-catalyst 4Meb and N,N-diisopropylethylamine (DIPEA) as an example: To a test tube (Pyrex, 15 × 125 mm) charged with the organohalide substrate (50 μmol) and 2 mol% of the 4Meb complex (1 μmol) were added MeCN (4 mL), 5 eq of DIPEA (87 μL) and 2.5 eq of HCOOH (10 μL). The mixture in the test tube was degassed with nitrogen (bubbling for 10 min through a septum via cannula). The test tube was placed in the irradiation apparatus equipped with blue LEDs (centred at 460 nm, 12 W).12 The resulting mixture was stirred at ambient temperature for the specified time. The organic solvent was evaporated under reduced pressure. Then 30 mL of Et2O was added, followed by 10 mL of saturated NaHCO3 (aq) solution. The mixture was extracted with Et2O (15 mL × 2), and the combined organic phases were washed with water (15 mL × 2) and dried over MgSO4. After the organic solvent was removed, the yield of product was calculated by adding the internal standard 5,5′-dimethyl-2,2′-bipyridine (of known weight) to the resulting crude residue. Table S4. Characterization of substrates and products in photocatalysis. 1H NMR spectra matched those reported in the literature.17
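The internal-standard yield determination mentioned above is a simple mole-ratio calculation. A hedged sketch follows; all integrals, masses, and proton counts below are invented placeholders, not data from this paper.

```python
# NMR yield via internal standard: moles of product are inferred from the
# ratio of per-proton integrals of a product signal and a standard signal.
# All numbers here are invented placeholders for illustration.

MW_STANDARD = 184.24  # g/mol, 5,5'-dimethyl-2,2'-bipyridine (C12H12N2)

def nmr_yield(std_mass_mg, integral_std, protons_std,
              integral_prod, protons_prod, substrate_umol):
    """Percent yield of product against the charged substrate."""
    std_umol = std_mass_mg * 1000.0 / MW_STANDARD  # mg -> umol
    prod_umol = std_umol * (integral_prod / protons_prod) / (integral_std / protons_std)
    return 100.0 * prod_umol / substrate_umol

y = nmr_yield(
    std_mass_mg=2.0,                     # hypothetical weighed-in standard
    integral_std=1.00, protons_std=6,    # e.g. the two CH3 groups of the standard
    integral_prod=0.45, protons_prod=3,  # hypothetical product signal
    substrate_umol=50.0,                 # scale stated in the procedure above
)
print(f"NMR yield: {y:.0f}%")
```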
Visible-light-driven CO2 Reduction
Photo-catalytic procedure of CO2 reduction: In 4 mL of CH3CN/TEA (4:1, v/v; TEA = triethylamine), [Co(TPA)Cl]Cl and [Ir] (4Meb, OTf− anion) (at the specified concentrations) were added into a Pyrex tube (volume: 22 mL; 16 mm OD × 150 mm; wall thickness 1.2 mm) and purged with CO2 through a septum (purity ≥ 99.8%) for 10 min; then 250 μL of CH4 was injected into the tube prior to irradiation using blue LEDs (centred at 460 nm, 12 W).12 All reactions and LEDs were cooled with aluminium blocks and cooling fans. A gas sample (200 μL) was drawn from the headspace of the tube and injected into the GC-TCD for measurement.
Measurement of gaseous products
Gas chromatographic analysis was conducted using an Agilent 7890A gas chromatograph equipped with a thermal conductivity detector (TCD) and an HP-Plot 5Å column with Ar as the carrier gas. The oven temperature was held at 40 °C. Inlet and detector temperatures were set at 80 °C and 150 °C, respectively. Calibration curves were established separately based on the averaged results of three injections per point for CO (R² = 0.9997), H2 (R² = 1.0000) and CH4 (R² = 0.9996).

Biological studies

Anticancer properties: The cell lines were maintained in cell culture media (minimum essential medium for HeLa) supplemented with 10% fetal bovine serum, 100 U/mL penicillin, and 100 μg/mL streptomycin at 37 °C in a humidified atmosphere with 5% CO2. The cell growth inhibitory effects of the iridium(III) complexes and cisplatin were determined by the MTT cytotoxicity assay. Drug-treated cells were incubated with MTT for 12 h at 37 °C in a humidified atmosphere of 5% CO2 and were subsequently lysed in solubilizing solution. Cells were then maintained in a dark, humidified chamber overnight. The formation of formazan was measured using a microtitre plate reader at 580 nm. Growth inhibition by a drug was evaluated by IC50 (the concentration of a drug causing 50% inhibition of cell growth). Each growth inhibition experiment was repeated at least three times and the results are expressed as means ± standard deviation (SD).
Fluorescence microscopic examination
HeLa cells (2 × 10^5 cells) were seeded in a one-chamber slide (Nalgene; Nunc) with culture medium (2 mL per well) and incubated at 37 °C in a humidified atmosphere of 5% CO2/95% air for 24 h. A stock solution of the iridium complex was prepared in DMSO and then diluted to 5 μM onto the cells on glass coverslips (MatTek 35 mm glass-bottom dish) for imaging experiments. After treating with Ir complexes alone or a mixture of Ir complexes with ER-Tracker™ (1 μM), LysoTracker® (100 nM) or MitoTracker® (50 nM) for
"year": 2016,
"sha1": "21246613f9cb2d24c80bdc11805fc10806fbe564",
"oa_license": "CCBY",
"oa_url": "https://pubs.rsc.org/en/content/articlepdf/2016/sc/c5sc04458h",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "21246613f9cb2d24c80bdc11805fc10806fbe564",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
CIP4 targeted to recruit GTP-Cdc42 involving in invadopodia formation via NF-κB signaling pathway promotes invasion and metastasis of CRC
Cdc42-interacting protein 4 (CIP4), a member of the F-BAR family, which plays an important role in regulating cell membrane and actin, has been reported to interact with Cdc42 and be closely associated with tumor invadopodia formation. In this study, we found that CIP4 expression was significantly higher in human CRC tissues and correlated with the CRC infiltrating depth and metastasis, as well as the lower survival rate in patients. In cultured CRC cells, knockdown of CIP4 inhibited cell migration and invasion ability in vitro and tumor metastasis in vivo, while the overexpression of CIP4 promoted invadopodia formation and matrix degradation ability. We then identified GTP-Cdc42 as a directly interactive protein of CIP4, which was upregulated and recruited by CIP4. Furthermore, activated NF-κB signaling pathway was found in CIP4 overexpression of CRC cells contributing to invadopodia formation, while the inhibition of either CIP4 or Cdc42 led to the suppression of the NF-κB pathway and resulted in a decreased quantity of invadopodia. Our findings suggested that CIP4 targets to recruit GTP-Cdc42 and directly combines with it to accelerate invadopodia formation and function by activating NF-κB signaling pathway, thus promoting CRC infiltration and metastasis.
INTRODUCTION
Colorectal cancer (CRC) was the third leading cancer type for new cases and deaths in 2021. 1 Although surgical techniques and adjuvant therapy have advanced, the overall survival of CRC patients has not improved significantly in recent years. 2 Liver and lung metastasis after radical resection and chemoradiotherapy is the most important cause of death in CRC patients. 3 Metastasis is a complex biological process that requires cancer cells to form a leading edge and move forward. 4 The invasion of cancer cells into surrounding tissue and the vasculature requires the chemotactic migration of cancer cells, steered by protrusive activity of the cell membrane and its attachment to the extracellular matrix (ECM). 5 Recent work has uncovered a prominent actin-based cellular structure, called invadopodia, as a unique structural and functional module through which major invasive mechanisms are regulated. 6-8 The co-localization of F-actin with the actin-bundling protein cortactin combines with microtubules, driving the invadopodia to degrade the ECM and to facilitate distant metastasis. 9 CIP4 (Cdc42-interacting protein 4) is a protein encoded by the TRIP10 gene located on human chromosome 19. 10 CIP4 was first identified by using activated Cdc42 as a bait in a yeast two-hybrid screen; it contains an F-BAR domain at the N-terminus, an SH3 domain at the C-terminus, and an HR1 domain in the center. 11,12 It has been reported that CIP4 plays an important role in various cellular events by regulating cell membranes and actin, such as vesicle formation, endocytosis, cytoplasmic membrane-microtubule transformation, adhesion, and invadopodia formation in a variety of cells. 13-17 Other studies have associated CIP4 with cell invasiveness and migration in different types of cancer, such as breast cancer, non-small cell lung cancer, nasopharyngeal carcinoma, and osteosarcoma, which indicates that CIP4 has a crucial part to play in tumor metastasis. 18-22 CIP4 was identified through its interaction with activated Cdc42. Cdc42 is a key member of the Rho family, which is well established to be central to dynamic actin cytoskeletal assembly and rearrangement, the underpinnings of normal cell-cell adhesion, cell migration, and even transformation. 23 Cdc42 is also involved in the regulation of invadopodia formation. 24,25 Studies have demonstrated that podocalyxin-like 1 promotes invadopodia formation and metastasis through the activation of Rac1/Cdc42/cortactin signaling in breast cancer cells. 26 Nuclear factor κB (NF-κB) is a critical cell signaling pathway that is involved in many cellular activities and can result in cancer if not appropriately regulated. 27 Inappropriate activation of NF-κB leads to tumor proliferation, invasion, and metastasis. 28,29 Early studies showed that Cdc42 regulates NF-κB-dependent transcription specifically. 30,31 Therefore, in view of the special role of CIP4 and Cdc42 in invadopodia formation, along with the unknown downstream signaling pathway, whether NF-κB responds to activated Cdc42 interacting with CIP4 to promote CRC metastasis attracted our interest.
Our previous research found that CIP4 expression is significantly upregulated in human CRC tissues. 32 In the present study, we demonstrated the expression and clinical significance of CIP4 in CRC samples, and provided evidence that CIP4 binds to GTP-Cdc42 to promote invadopodia formation and ECM degradation. Importantly, we further explored the downstream regulatory mechanism of the complex of CIP4 and Cdc42 in CRC progression and metastasis, and here the NF-κB signaling pathway was confirmed to be essential.
RESULTS
CIP4 expression in CRC tissues correlates with tumor development, invasiveness, and patient survival rate

Western blot was used to test the expression of CIP4 in 14 CRC tissues (T) and paired adjacent normal colorectal tissues (N). Our results revealed that CIP4 was upregulated in all 14 CRC tissues at the protein level (p < 0.0001; Figure 1A). Immunohistochemistry (IHC) staining was performed on 107 paraffin-embedded CRC tissue sections. Representative photographs showed that the expression of CIP4 was significantly elevated in human CRC tumors compared to the corresponding normal tissues. The CIP4 expression score was also higher in tumor tissues (p < 0.0001; Figure 1B). Interestingly, we found that CIP4 exhibits higher expression in the invasion front (a) than in the tumor central area (b) in some CRC tissues (Figure 1C). The correlation between CIP4 expression level and CRC clinicopathological characteristics was analyzed further (Table 1). We found that the CIP4 level was closely related to tumor differentiation (p < 0.001) and invasive depth (p < 0.001).
To investigate whether the different levels of CIP4 expression in CRC are related to a patient's prognosis, we performed a bioinformatic analysis of the NCBI GEO database (GEO: GSE17538). Kaplan-Meier survival analysis revealed that patients with a higher level of CIP4 expression had a worse clinical outcome (p = 0.0353, Figure 1D). These observations demonstrate that CIP4 may play an important role in CRC invasion and metastasis as well as a patient's survival.
CIP4 promotes CRC cell migration and invasion in vitro and tumor metastasis in vivo
A high level of CIP4 expression was observed in Lovo and HT29 cells, while HCT116 and DLD1 showed a low level of CIP4 expression according to our previous research. 32 To gain insight into the potential role of CIP4 in CRC invasion and metastasis, we generated Lovo-small hairpin CIP4 (shCIP4) and HT29-shCIP4 cell lines that stably downregulated CIP4, and HCT116-CIP4 and DLD1-CIP4 cell lines that stably overexpressed CIP4 (Figure 2A). Transwell migration and wound-healing assays were performed to evaluate the ability of the cells to migrate. The migration ability of Lovo-shCIP4 and HT29-shCIP4 cells was reduced compared with the control cells, while HCT116-CIP4 and DLD1-CIP4 cells showed increased migration ability compared with the control cells (p < 0.05; Figures 2B and 2C). The Matrigel-coated Boyden chamber invasion assay revealed that the knockdown of CIP4 significantly reduced the number of invaded cells. Inversely, the overexpression of CIP4 accelerated the invasion ability of HCT116-CIP4 and DLD1-CIP4 cells (p < 0.01; Figure 2D).
The effect of CIP4 on tumor metastasis was assessed in an animal model of colon cancer metastasis. Spontaneously metastasizing colonic tumors were formed after suturing colon cancer tumors into the cecal wall of BALB/c-nu/nu athymic mice. Eight weeks after the operation, no liver metastatic nodules were found in any of the 5 mice in the HT29-shCIP4 group (0 of 5), while 3 of the 5 mice in the HT29-shCtrl group presented with liver metastasis (3 of 5). The numbers of nodules observed were 2, 3, and 5, respectively, as shown in Figure 2E. IHC staining confirmed that the tumors derived from the HT29-shCIP4 group exhibited lower CIP4 expression levels than the tumors derived from the control cells.
CIP4 is sufficient for invadopodia formation and function in CRC cells
We evaluated the effect of CIP4 on invadopodia formation by examining the co-localization of F-actin (red) with the actin-bundling protein Cortactin (green) in CRC cell lines. The suppression of CIP4 reduced the occurrence of invadopodia from 45.83% to 19.67% in Lovo cells (p < 0.001; Figure 3A) and from 26.00% to 16.67% in HT29 cells (p < 0.001; Figure S1A), and also shrank the morphology of invadopodia (Figure S1B). Meanwhile, more than 35% (HCT116) or 38% (DLD1) of cells contained invadopodia after overexpressing CIP4, which existed as bright, large puncta surrounding the nuclei, compared with 16.33% or 16.83% of control cells with smaller invadopodia, respectively (p < 0.01 and p < 0.001; Figures 3A, S1A, and S1B). In addition, we observed the effect of CIP4 on the morphology of invadopodia by scanning electron microscopy (SEM). A great quantity of elongated branching invadopodia was observed on the ventral side of Lovo cells. The number of invadopodia was reduced and the protrusions became shorter with decreased expression of CIP4. The same phenomenon was found in HCT116 cells; the invadopodia were denser and more extended than in control cells when CIP4 was overexpressed (Figure 3B). Matrix degradation is an indispensable step in tumor metastasis, 33 so we assessed the ability of invadopodia to degrade matrix gels by the matrix degradation assay. The overexpression of CIP4 enlarged the cavities formed by the degradation of matrix gels by approximately 2.7-fold (HCT116, p < 0.01), while CIP4 knockdown reduced the cavities by 2.3-fold (Lovo, p < 0.05) (Figure 3C). We further observed the morphology of matrix degradation by CRC cells through SEM. As shown in Figure 3D, the cells with high levels of CIP4 expression (Lovo-shCtrl and HCT116-CIP4) inserted their invadopodia deeply into the matrix gels and disintegrated them. On the contrary, the cells with low levels of CIP4 expression (Lovo-shCIP4 and HCT116-Ctrl) appeared smoother and merely adhered to the surface of the matrix gels. Based on the above data, we determined that CIP4 is necessary and sufficient to promote invadopodia formation and focal matrix degradation in CRC cells.
CIP4 promotes the expression and activation of Cdc42
Cdc42 cycles between activation with GTP and inactivation with GDP. 34,35 It has been reported that the overexpression of an active form of Cdc42 is sufficient to form invadosome actin cores. 36 Since CIP4 was first identified as an interacting protein of Cdc42, the regulatory mechanism between them in CRC remains to be revealed. In Figure 1A, we examined the expression levels of CIP4 in 14 pairs of CRC tissues, and we again examined the expression levels of Cdc42 in these tissues. The results showed that the protein level of Cdc42 in tumor tissue (T) was higher than that of matched normal colorectal tissue (N), the same as CIP4 (p < 0.001; Figure 4A). As shown in Figure 4B, CIP4 and Cdc42 protein expression levels in consecutive paraffin-embedded slices of human CRC tissue were detected by IHC. We confirmed that the Cdc42 expression score was higher in tumor tissues (tumor 1, tumor 2) than in normal tissues (Normal) (p < 0.01). In the meantime, we observed consistency in the expression and localization of CIP4 and Cdc42. The bioinformatic analyses indicated that there was a positive correlation between CIP4 and Cdc42 (R = 0.297, p = 0.023).
To determine whether CIP4 regulates Cdc42 expression and activation, we detected the expression of Cdc42 and GTP-Cdc42 in Lovo and HT29 cells when CIP4 expression was blocked, and we found decreased protein levels of Cdc42 and GTP-Cdc42 compared with the control cells. Conversely, elevated protein levels of Cdc42 and GTP-Cdc42 were found in HCT116 and DLD1 cells when CIP4 expression was increased (Figure 4C).
The immunofluorescence results also confirmed that the expression of Cdc42 (red) was positively correlated with CIP4 (green) ( Figure 4D).
CIP4 directly interacts with activated Cdc42 to accelerate invadopodia function
Since we have demonstrated that CIP4 can promote the activation of Cdc42, whether CIP4 interacts with activated Cdc42 and participates in the process of invasion and metastasis of CRC is essential to our research. The immunofluorescence assays validated the localization of CIP4 and its partial co-localization with Cdc42 ( Figure 5A). The co-localization coefficients between CIP4 and Cdc42 in Lovo cells and HT29 cells are 0.64 (n = 5) and 0.71 (n = 5); in HCT116-CIP4 and DLD1-CIP4 cells the coefficients are 0.42 (n = 5) and 0.38 (n = 5) ( Figure S2A).
Cdc42 can bind guanosine diphosphate/guanosine triphosphate (GDP/GTP). In response to extracellular stimuli, Cdc42 transitions from an inactive GDP-bound status to an active GTP-bound status and interacts with downstream effectors to propagate changes in cell behavior. We constructed a prokaryotic vector for the mutant of activated Cdc42 (L61Cdc42), containing a Gln61-to-Leu substitution. Next, we performed the glutathione S-transferase (GST) pull-down assay to see whether CIP4 binds GTP-Cdc42 directly. Coomassie brilliant blue staining showed clear protein bands with consistent molecular weights (GST, 26 kDa; CIP4-GST, 101 kDa; and L61Cdc42-6HIS, 27 kDa) (Figure S2C). We then detected the expression of the CIP4-GST and L61Cdc42-6HIS fusion proteins by western blot, which showed binding between L61Cdc42-6HIS and CIP4-GST, while the negative control GST did not bind L61Cdc42-6HIS (Figure 5B). According to these results, we verified that CIP4 directly interacts with GTP-Cdc42 and may form a complex to participate in the formation of invadopodia.
In the previous data, we determined that CIP4 is necessary and sufficient to promote invadopodia formation. We then validated the localization of CIP4 (green) and its partial co-localization with Cortactin (blue) and F-actin (red) in Lovo and HT29 cells by confocal laser scanning ( Figure 5C). Representative photographs indicate that CIP4 not only promotes the formation of invadopodia but also assembles in the invadopodia. We further constructed a lentivirus vector FLAG-L61Cdc42 that continuously activated Cdc42 mutant L61Cdc42 and transfected it into the CIP4-overexpression or CIP4knockdown CRC cells as well as their control cells. After performing the immunofluorescence assays to locate the invadopodia (blue for Cortactin and red for F-actin) and activated Cdc42 (green for FLAG, representing the activated Cdc42), we found that in the cells with a high expression level of CIP4 (Lovo-shCtrl, HT29-shCtrl, HCT116-CIP4, and DLD1-CIP4), the activated Cdc42 was more likely to gather in the invadopodia, while in the cells with a low expression level of CIP4 (Lovo-shCIP4, HT29-shCIP4, HCT116-Ctrl, and DLD1-Ctrl), the activated Cdc42 tended to scatter throughout the cells ( Figure 5D). Therefore, we inferred that CIP4 recruits GTP-Cdc42 into the invadopodia and interacts with it, promoting the formation and function of invadopodia.
The NF-κB signaling pathway is involved in accelerating invadopodia formation regulated by CIP4 through GTP-Cdc42

GTP-Cdc42 has been reported to be able to activate the NF-κB signaling pathway. 31 Since NF-κB is widely known to be a critical part of tumor invasion and metastasis, 37,38 we wondered whether the invadopodia formation was accelerated by NF-κB, regulated by CIP4 through GTP-Cdc42. We investigated the effects of CIP4 on the key molecule of the NF-κB signaling pathway, RelA (p65), the phosphorylation status of p65 (p-p65), and p65 in the nucleus. 39,40 As shown in Figures 6A and S4A, the downregulation of CIP4 in Lovo-shCIP4 and HT29-shCIP4 cells led to decreased expression of Cdc42 and GTP-Cdc42 as well as of p-p65 and nuclear p65 compared with the control cells. We then overexpressed the activated Cdc42 in Lovo-shCIP4 and HT29-shCIP4 cells and found that the expression of p-p65 and nuclear p65 reverted. After treatment with the NF-κB activator lipopolysaccharide (LPS), the expression of p-p65 and nuclear p65 increased significantly, while the expression of CIP4, Cdc42, and GTP-Cdc42 remained low.
Correspondingly, the overexpression of CIP4 upregulated the expression of Cdc42 and GTP-Cdc42 as well as of p-p65 and nuclear p65 in HCT116-CIP4 and DLD1-CIP4 cells compared with the control cells. When treated with the Cdc42 inhibitor ML141, the expression of Cdc42 and GTP-Cdc42 decreased, as did p-p65 and nuclear p65. After treatment with the NF-κB inhibitor QNZ, the expression of p-p65 and nuclear p65 was suppressed, while the expression of CIP4, Cdc42, and GTP-Cdc42 barely changed (Figures 6B and S4B). The optimal conditions for treating cells with activators and inhibitors are examined in Figure S3. These results indicated that NF-κB is the downstream pathway regulated by CIP4 through GTP-Cdc42.
Next, we examined the influence of the expression changes of these proteins on the formation of invadopodia by immunofluorescence assays (Figures 6C, 6D, S4C, and S4D). The quantity of invadopodia was consistent with the expression level of p-p65 (p < 0.01), which revealed that the activation of the NF-κB signaling pathway is an indispensable part of invadopodia formation. Therefore, our data demonstrated that CIP4 promotes the expression and activation of Cdc42, which activates the NF-κB signaling pathway, thereby accelerating invadopodia formation.
DISCUSSION
CRC is one of the most common cancers worldwide. 41 Although much information on the molecular basis of CRC has been provided by numerous researchers to develop all kinds of targeted therapeutic options, the clinical cure rate remains unsatisfactory. 42 The main obstacle to improving the clinical treatment of CRC currently is metastasis, which is a complex biological process involving various molecular regulations that still need to be clearly revealed. Therefore, elucidating the molecular mechanism of CRC infiltration and metastasis, and intervening in and treating its progression, are scientifically significant for improving the cure rate of malignant tumors and the survival rate of patients.
In a previous study, we found significantly higher expression of CIP4 in CRC cells compared to corresponding normal tissues, and demonstrated that PRKA kinase anchor protein 9 (AKAP-9) regulates the expression of CIP4 to promote metastasis and, consequently, the epithelial-mesenchymal transition (EMT) of CRC. 32 A similar phenomenon is observed in breast cancer cells, in which CIP4 interacts with N-WASP in response to epidermal growth factor receptor (EGFR) signaling, increasing the formation of invadopodia and ECM degradation. 19 In contrast, CIP4 has been regarded as a suppressor of Src-induced invadopodia formation and invasiveness in MDA-MB-231 breast tumor cells. 18 CIP4 also increases the expression and activity of matrix metalloproteinase-2 (MMP-2) by regulating EGFR signaling to promote metastasis in non-small cell lung cancer and nasopharyngeal carcinoma cells. 20,21 The ability of CIP4 to facilitate metastasis and invasiveness has been confirmed in hepatocellular carcinoma (HCC), osteosarcoma, and renal cell carcinoma as well. 22,43,44 Although multiple molecular signals are involved in CIP4 function in cancer cells as reported, the specific mechanism of the interaction between CIP4 and Cdc42, the most well-known protein directly related to CIP4, remains unclear. Therefore, we focused on exploring the association between CIP4 and Cdc42 in invadopodia formation and ECM degradation, as well as the downstream signaling pathway in response, in CRC.
Our results confirmed the high expression of CIP4 in CRC tissues, especially in the invasion front. The functional experiments in vivo and in vitro support our assumption that CIP4 plays an important role in promoting metastasis in CRC. Since invadopodia formation is closely related to CIP4, we observed changes in the quantity and morphology of invadopodia, as well as in their ECM degradation ability, in CRC cells under different expression levels of CIP4, which indicates that CIP4 is capable of accelerating the formation and function of invadopodia. To figure out whether the interaction between CIP4 and Cdc42 contributes to this cellular structure, we verified that CIP4 can upregulate the expression and activation of Cdc42 and directly combine with the activated Cdc42. We have not been able to identify the specific binding site yet, but we believe that it may be the HR1 domain, as mentioned. 45 Furthermore, we spotted an interesting phenomenon: the complex of CIP4 and GTP-Cdc42 is located on invadopodia, and GTP-Cdc42 has been proved to be recruited by CIP4 to assemble in invadopodia to maximize the impact. While further exploration of the downstream molecular mechanism is required, we noticed the widely activated signaling pathway NF-κB in CRC, which supports tumorigenesis by enhancing cell invasion and metastasis. 46,47 As expected, high expression levels of CIP4 and activated Cdc42 result in increased phosphorylation of RelA (p65), which controls a great amount of NF-κB activity and promotes the formation of invadopodia, while the inhibition of CIP4 and Cdc42 leads to suppression of the NF-κB pathway, ending up with a decreased quantity of invadopodia. These results highlight the essential role of the NF-κB pathway in the regulation of invadopodia formation and function by the interaction between CIP4 and GTP-Cdc42.
Taken together, our studies reveal the molecular mechanism of CIP4 promoting CRC infiltration and metastasis. The combination with GTP-Cdc42 and the activation of the NF-κB signaling pathway associated with invadopodia formation and function are particularly important in CRC. Considering that not all details of the interaction are yet clear, along with the recent study that identified CIP4 phosphorylation by protein kinase A (PKA) in the modulation of cancer invasion, 43 our future research will focus on the possible modification of CIP4 that leads to the interaction with Cdc42, as well as the specific mechanism by which RelA (p65) phosphorylation regulates invadopodia formation and function.
Clinical samples
Formalin-fixed paraffin-embedded human colorectal carcinoma tissues (n = 107) for this study were obtained from the Department of Pathology, Nanfang Hospital, Southern Medical University. Each case had a confirmed diagnosis of primary CRC. The fresh surgically resected CRC tissues and matched adjacent normal tissues (n = 14) were immediately frozen in liquid nitrogen until later use. The study was approved by Nanfang Hospital, Southern Medical University, Guangzhou, China.
Western blot analysis
Total proteins from cells or tissues were extracted with lysis buffer (FDbio, Hangzhou, China), and the concentration was determined with bicinchoninic acid (BCA) protein assay kits (FDbio). Proteins were separated on 10% or 12.5% SDS-PAGE gels and transferred to polyvinylidene fluoride (PVDF) membranes. After blocking with 5% skim milk for 1 h at room temperature, the membranes were incubated with primary antibodies at 4 °C overnight. The membranes were then incubated with goat anti-mouse or anti-rabbit secondary antibody (FDbio) and detected by enhanced chemiluminescence.
Wound-healing assay

Cells (5-10 × 10^5) were seeded into 6-well plates and cultured to 90% confluence. Cell monolayers were wounded with a sterile 10-μL pipette tip. Detached cells were removed with PBS. After culturing for another 48 h, the wound gaps were observed and photographed with an inverted microscope (Olympus, Tokyo, Japan).
Transwell migration and invasion assays
Cells (0.1-1 × 10^6) were resuspended in 200 μL of serum-free medium and seeded in 24-well transwell upper chambers (Corning, Corning, NY, USA). A total of 500 μL of RPMI 1640 medium containing 10% FBS was added to the lower chambers. After culturing for 12-48 h, cells on the membrane were fixed with formalin, stained with H&E, and then counted and photographed using an ordinary light microscope (Olympus). Invasion assays were performed following the same procedures, with diluted Matrigel (Corning) (RPMI 1640:Matrigel = 5:1) spread on the inside bottom of the transwell upper chambers.
SEM
Cells were seeded on 8-mm-diameter pre-cleaned coverslips and cultured in 24-well plates for 36 h, washed with PBS, and fixed with 2.5% glutaraldehyde at 4 °C overnight. After washing with PBS, the cells on coverslips were dehydrated with graded ethanol at 4 °C, soaked in 100% acetone for 20 min, in 100% isoamyl acetate for 15 min, and in propylene epoxide for 20 min at 45 °C. The coverslips were dried under vacuum and sputter-coated with metal. Further observations were made under SEM.
Matrix degradation assay
The confocal disks (Nest) were incubated with 200 μL of 50 μg/mL poly-lysine, 200 μL of 0.5% glutaraldehyde, 200 μL of 0.2% gelatin from pig skin, Oregon Green 488 conjugate (Invitrogen), and 200 μL of 5 mg/mL sodium borohydride, each for 15 min at room temperature, with three PBS washes in between. Cells (5 × 10^3) were seeded in each disk and cultured for 48 h. The immunofluorescence procedures were then performed.
GST pull-down
The Ctrl-GST, CIP4-GST, and L61Cdc42-6HIS prokaryotic expression vectors were constructed and purchased from Genechem. The fusion proteins were induced with isopropyl-β-D-1-thiogalactopyranoside (IPTG) in BL21 cells (TransGene Biotech, Beijing, China) grown in Luria-Bertani (LB) medium with ampicillin. The optimal induction condition for CIP4-GST was 0.5 mM IPTG at 16 °C for 20 h, and for L61Cdc42-6HIS was 0.8 mM IPTG at 16 °C for 12 h (Figure S2B). The proteins were purified by binding to GST-agarose or His-agarose at 4 °C overnight. The purified Ctrl-GST and CIP4-GST proteins were eluted from the GST-agarose and combined with the L61Cdc42-6HIS fusion protein and His-agarose at 4 °C overnight. After being washed 5 times, the remaining protein mixture bound to His-agarose was analyzed by Coomassie brilliant blue staining and western blot assays.
Statistical analysis
All statistical analyses were performed using SPSS version 24.0 (SPSS Statistics, Armonk, NY, USA), and data are presented as the mean ± standard deviation (SD). The co-localization coefficients between CIP4 and Cdc42 were analyzed using ImageJ version 2.1.0 (NIH, Bethesda, MD, USA). Graphs were plotted using Prism 7.0a (GraphPad, San Diego, CA, USA). The significance of the correlation between CIP4 expression and histopathological factors was determined using the Pearson χ² test. Enumeration data were analyzed using Student's t test or one-way ANOVA. Survival curves for CIP4 expression in CRC patients were generated using the Kaplan-Meier method. p values of 0.05 or lower were considered statistically significant.
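A minimal sketch of the two core tests named above, using SciPy; the contingency table and group values are invented placeholders, not data from this study.

```python
# Pearson chi-squared test on a CIP4-expression-by-clinicopathology table and a
# two-sample Student's t test, mirroring the analyses described above.
# All numbers are invented placeholders.
import numpy as np
from scipy import stats

# Hypothetical 2x2 table: rows = CIP4 low/high, cols = shallow/deep invasion.
table = np.array([[30, 15],
                  [12, 50]])
chi2, p_chi2, dof, _ = stats.chi2_contingency(table)
print(f"Pearson chi2 = {chi2:.2f}, dof = {dof}, p = {p_chi2:.4f}")

# Hypothetical invaded-cell counts per field for shCtrl vs shCIP4 replicates.
sh_ctrl = [112, 98, 120]
sh_cip4 = [45, 52, 40]
t, p_t = stats.ttest_ind(sh_ctrl, sh_cip4)
print(f"t = {t:.2f}, p = {p_t:.4f} (significant if p <= 0.05)")
```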
Data and code availability
All of the data generated or analyzed during this study are included in this article.
"year": 2022,
"sha1": "46c98b9a2b2bc070199700dbd9cabf535f860d4c",
"oa_license": "CCBYNCND",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "dbfcd38146d00b53e8bd0be7738261e4587e0b7d",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Surgical repair of severe dysphagia lusoria
This case report describes a case of severe dysphagia lusoria secondary to an aberrant right subclavian artery causing compression of the esophagus. Our 62-year-old female patient presented with severe dysphagia and underwent right carotid–subclavian bypass with uncovered thoracic endovascular aortic repair and coil embolization of the aberrant right subclavian artery. This case is unique in that an uncovered dissection stent graft was used to avoid occluding the anatomic left subclavian artery and, therefore, avoid a left carotid–subclavian bypass. This case highlights a unique anatomic variant, its surgical repair, and the long-term improvement in the patient's quality of life.
The term "dysphagia lusoria" was coined in 1761 when Bayford described a case of death from longstanding dysphagia due to emaciation. 1 He described an aberrant right subclavian artery running anterior to, and causing compression of, the esophagus (Fig 1).However, dysphagia lusoria is defined as dysphagia due to compression of the esophagus from any of several congenital vascular abnormalities. 3 An aberrant right subclavian artery is the most common vascular abnormality and consists of a right subclavian artery coming off the left side of the aortic arch, a double aortic arch, or a right aortic arch with a left ligamentum arteriosum (Fig 1). 3 This anatomic anomaly has been reported to only cause dysphagia in 40% to 60% of patients. 1Often, symptoms will not develop until later in life when atherosclerotic changes have occurred in the aberrant vessel. 3his was the case for the patient in our case study who was 62 years old when she became symptomatic.In her case, her anatomy was a right subclavian artery coming off the descending aorta, which passed in a retropharyngeal manner, causing symptomatic compression (Fig 2 ).
CASE REPORT
A 62-year-old woman was referred to the vascular surgery office for continued management of her aberrant right subclavian artery causing dysphagia. Initially, she experienced dysphagia with solids only, and later with liquids as well. Despite dietary modification, she continued to experience symptoms. She had prior imaging showing an aberrant right subclavian artery, but she had been asymptomatic for >60 years.
METHODS
Under the Health Insurance Portability and Accountability Act of 1996, a single case report is an activity to develop information to be shared for medical and/or educational purposes.Therefore, the use of protected health information to report a single case does not require institutional review board review for Health Insurance Portability and Accountability Act of 1996 purposes.The patient provided written informed consent for the report of her case details and imaging studies, with the understanding that no photographs or images would contain any personal information or defining features.
DISCUSSION
Dysphagia lusoria is a congenital abnormality caused by compression of the esophagus by an aberrant origin of the right subclavian artery. It is an uncommon abnormality that usually presents as difficulty swallowing. Most patients present with this condition later in life. This is thought to result from the greater compliance of vessels in children and younger adults; vessels become more rigid and aneurysmal later in life, causing compression of the esophagus. 4 In most cases, the aberrant right subclavian artery courses posterior to the esophagus and can be associated with a Kommerell diverticulum (not present in our patient). 2,5 Again, an anomalous right subclavian artery is quite rare, with a reported incidence of approximately 0.4% to 1.8% in the general population. Therefore, the surgical indications have not been well defined.
Most patients can be managed with dietary modification and instructions to chew well and eat more slowly. This is an effective management strategy, provided that patients are able to maintain their weight and good nutritional status. Surgical intervention becomes necessary for patients who show no improvement with these dietary and eating modifications and those who are not amenable to these conservative management techniques. 1 Surgical management of this condition was first reported in the mid-1900s as a left posterolateral thoracotomy, followed by division of the ligamentum arteriosum and, subsequently, either ligation or reimplantation of the aberrant vessel. 1 However, ligation can result in extremity ischemia and, sometimes, steal syndrome. Historically, surgical repair evolved into transecting the aberrant subclavian artery at its origin with transposition to the ascending aorta through a median sternotomy or lateral thoracotomy. 4 With the evolution of endovascular surgery, surgeons are now able to manage most patients with a minimally invasive approach. This includes the use of covered or uncovered stents and ligation of the aberrant subclavian artery with reimplantation via a right supraclavicular incision. This surgical intervention is less morbid and allows for faster recovery. In our patient, we were able to place an uncovered stent, which allowed us to avoid covering the left subclavian artery, given its very close takeoff to the right subclavian artery. This was followed by coil embolization of the right subclavian artery proximally and then the use of polytetrafluoroethylene via a small supraclavicular incision to create a right carotid-subclavian bypass. The decision was made to deploy a Cook Medical aortic dissection stent to avoid migration of the Amplatzer plug, because without the aid of the aortic stent, it would have been very difficult to land the plug right at the origin of the aberrant right subclavian artery. In addition, we were able to link the plug to the aortic stent, ensuring no further migration. The Amplatzer plug itself was also oversized 2 mm past the origin of the aberrant right subclavian artery to avoid any migration.
This novel method has not been previously described. The advance is that it allowed us to avoid a two-staged procedure with the more traditional and well-described technique of using a covered stent graft. Also, the very close proximity of the right and left subclavian arteries would have mandated a proximal position of the stent graft at the level of the left common carotid artery and bilateral carotid-subclavian bypasses. We used an uncovered Cook Medical aortic dissection stent to secure the 16-mm Amplatzer plug exactly at the origin of the aberrant right subclavian artery (Figs 3 and 4). The uncovered stent allowed us to aggressively move the plug all the way to the vessel origin at the takeoff of the aorta without fear of misdeploying the plug too far into the aorta. Accurately deploying the plug exactly at the origin was important, because if the plug had been placed 1 or 2 cm more distally, it would have occluded the vessel but likely would have continued to propagate aortic pulsation against the esophagus and not alleviated her symptoms. Another interesting and somewhat unanticipated benefit of this technique was that the first disk of the Amplatzer plug linked into the uncovered aortic stent.
CONCLUSIONS
The present case is unique in that the patient underwent right carotid-subclavian bypass with uncovered thoracic endovascular aortic repair owing to the proximity of the anatomic left subclavian artery. This case highlights a unique anatomic variant, its successful surgical repair, and the long-term improvement in the patient's quality of life.
Computed tomography angiography was obtained because her preceding imaging study had been performed years earlier; it showed an aberrant origin of the right subclavian artery arising from the distal aortic arch, just distal to the origin of the left subclavian artery, coursing posterior to the esophagus. Because she remained symptomatic despite a thorough workup for her dysphagia and because her imaging study showed dysphagia lusoria, she was taken to the operating room for planned thoracic endovascular aortic repair and right carotid-subclavian bypass. The case was performed with the intention of using a bare metal aortic stent (dissection stent; Cook Medical) in the arch and subsequent plugging of the right subclavian artery at its origin and distal to the esophagus. Aortography of the aortic arch was performed, followed by supraclavicular cutdown on the right neck to control the right subclavian artery and visualize the right internal mammary artery. A 36-mm × 180-mm stent was deployed over the left and right subclavian arteries, without coverage of the carotid artery origins. According to the Ishimaru classification, the proximal landing zone was zone 2 and the distal landing zone was zone 5. No dilation of the stent was performed after deployment. The Cook dissection stent was oversized 10% from the size of the aorta to ensure a good position but not cause undue radial or outward force on the normal aorta. The right subclavian artery was then accessed, and a 7F Brite tip sheath (Cordis) was inserted. A 16-mm Amplatzer plug was then deployed at the origin of the right subclavian artery. The Amplatzer plug consists of three disks: two thin outer disks and a thicker middle disk. The stent graft was interlinked between the outermost disk and the middle disk (Figs 3 and 4). Care was taken to ensure the plug was extremely proximal on the takeoff of the right subclavian artery, ensuring that the esophagus did not continue to experience arterial pressure and pulse action. The sheath was removed, and a large clip was placed on the right subclavian artery proximal to the right internal mammary and vertebral arteries. A carotid-subclavian bypass was performed with a 6-mm polytetrafluoroethylene graft. The patient returned 1 month postoperatively, with computed tomography angiography showing no flow in the proximal portion of the right subclavian artery. The patient's symptoms had improved significantly over the first few months. A repeat upper gastrointestinal series showed slightly less compression of the esophagus (Fig 2). Before surgery, we had discussed with the patient that if she did not have a good clinical response, we would perform staged robotic ligation of the aberrant subclavian artery. However, she responded well, and a second procedure was not required. On further follow-up at 1 year postoperatively, she continued to remain symptom free and had no further episodes of dysphagia.
Fig 1. Anatomic illustration showing normal arch anatomy (A) and the aberrant right subclavian artery and its relationship to the esophagus (B). 2
Fig 2. Pre- (A) and postoperative (B) esophagrams showing compression of the esophagus by the aberrant right subclavian artery preoperatively and decreased compression (arrow) of the esophagus with the stent and Amplatzer plug in place postoperatively.
Fig 3. Three-dimensional rendering of the thoracic aorta with the stent and Amplatzer plug occluding the aberrant right subclavian artery.
"year": 2023,
"sha1": "d66b0f2348ca54c67c299ae9c38b8a1ddb9d154f",
"oa_license": "CCBYNCND",
"oa_url": "http://www.jvscit.org/article/S2468428723001740/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1b50b4a0f2727d3693f0e190d495ff3e30cb33e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Porting Elements of the Austrian Baroque Corpus onto the Linguistic Linked Open Data Format
We describe work on porting linguistic and semantic annotation applied to the Austrian Baroque Corpus (ABaC:us) to a format supporting its publication in the Linked Open Data Framework. This work includes several aspects, such as a derived lexicon of old forms used in the texts and their mapping to modern German lemmas, the description of morpho-syntactic features, and the building of domain-specific controlled vocabularies covering the semantic aspects of this historical corpus. Since death and dying are a central and recurrent topic in the texts, a first step in our work was geared towards the establishment of a death-related taxonomy. In order to provide linguistic information for their textual content, labels of the taxonomy point to linked data in the field of language resources.
Introduction
ABaC:us1 is a project conducted at ICLTT2 focusing on the creation of a thematic research collection of texts based on the prevalence of sacred literature during the Baroque era, in particular the years from 1650 to 1750. Books of religious instruction and works concerning death and dying were a focal point of Baroque culture. Therefore, the ABaC:us collection holds several texts specific to this genre, including sermons, devotional books and works related to the dance-of-death theme. The corpus comprises complete versions, not just samples, of first editions,3 yielding some 165,000 running words. An interdisciplinary approach has been adopted for the creation of this digital corpus, which is designed to meet the needs of both literary/historical and linguistic/lexicographic research.

In order to guarantee easy data interchange and reusability, the corpus was encoded in TEI (P5).4 In addition, applied PoS tags and lemma information,5 taken from the modern German language, allow for complex search queries and more sophisticated research questions.6 While starting work on the semantic annotation of the corpus, we saw the need to develop a specific taxonomy, which would also ease the task of semi-automated semantic annotation of the morpho-syntactically annotated corpus and other related texts (Declerck et al., 2011; Mörth et al., 2012). Following a bottom-up strategy, we identified all death-related lexical units such as nominal simplicia, compound nouns and multi-word expressions for the personification of death. In addition, all terms and phrases dealing with the "end of life", "dying" and "killing" were identified. In total, more than 1,700 occurrences could be discovered in Mercks Wienn, Grosse Todten Bruderschafft and Todten-Capelle, the three most important works of our corpus.

1 Partly supported by funds of the Österreichische Nationalbank, Anniversary Fund (project number 14783), the ABaC:us project started in spring 2012. See http://www.oeaw.ac.at/icltt/abacus and http://www.oeaw.ac.at/icltt/abacus-project for more details.
2 The Institute for Corpus Linguistics and Text Technology (http://www.oeaw.ac.at/icltt/) of the Austrian Academy of Sciences in Vienna pursues corpus-based linguistic and literary research, focusing on the creation and adaptation of corpora and dictionaries as well as technologies for building, accessing and exploiting such data.
3 The majority of the selected works can be ascribed to the Baroque Catholic writer Abraham a Sancta Clara (1644-1709): e.g. Mercks Wienn (1680), Lösch Wienn (1680), Grosse Todten Bruderschafft (1681), Augustini Feuriges Hertz (1693), and Todten-Capelle (1710). For detailed information about the author see Eybl (1992) and Knittel (2012). The ABaC:us collection combines high quality digital texts with image scans of facsimiles of the earliest known prints housed in different libraries such as the Austrian National Library, the Vienna City Library, the Melk Abbey, and the Library of the University of Illinois.
4 See http://www.tei-c.org/Guidelines/P5/ for details.
5 PoS tagging has been realized using TreeTagger, an open standard developed at the University of Stuttgart. See http://www.ims.uni-stuttgart.de/projekte/corplex/TreeTagger/ for more information.
6 All ABaC:us texts, which represent a non-canonical variety, were tagged using automated tools adapted to the needs of historic language and were afterwards verified by domain experts.
The next step consisted in organizing the identified vocabulary in a taxonomy, which is encoded in the SKOS format (Simple Knowledge Organization System) [7]. Based on the Resource Description Framework (RDF) [8], SKOS "provides a model for expressing the basic structure and content of concept schemes such as thesauri, classification schemes, subject heading lists, taxonomies, folksonomies, and other similar types of controlled vocabulary." [9] We chose it because (1) SKOS concepts can be "semantically related to each other in informal hierarchies and association networks", (2) "the SKOS vocabulary itself can be extended to suit the needs of particular communities of practice" and finally, because it (3) "can also be seen as a bridging technology, providing the missing link between the rigorous logical formalism of ontology languages such as OWL and the chaotic, informal and weakly-structured world of Web-based collaboration tools." [10] With the use of SKOS (and RDF), we are also in the position to make our resource compatible with the Linked Data Framework [11].
The following sections provide an overview of the ABaC:us taxonomy and describe the way the language data contained in its labels are linked to web resources in the Linguistic Linked Open Data (LLOD) cloud [12].
The ABaC:us Taxonomy
Currently the scheme of the ABaC:us taxonomy consists of 7 concepts comprising 362 terms or phrases, which are encoded in SKOS labels. In addition, 137 compounds and associated terms have been integrated in 4 more temporary concepts, which still await further processing. The terms included in the labels (both preferred and alternative ones) have been manually excerpted from the original texts and partly normalized. The majority of texts are written in German, some parts in Latin; therefore all lexical labels belong to one of these languages. Table 1 lists concepts and definitions. Rows 3 and 4 show selected examples for preferred and alternative terms in German and Latin; for better readability, a rudimentary English translation has been added. The reader can see how death as "end of life" (concept/1) and the personalized death (concept/2) are distinguished.

[7] http://www.w3.org/2004/02/skos/
[8] http://www.w3.org/RDF/
[9] http://www.w3.org/TR/2009/NOTE-skos-primer-20090818/
[10] Ibid.
[11] See http://linkeddata.org/
[12] http://linguistics.okfn.org/resources/llod/

Table 1: ABaC:us Taxonomy

concept/1: skos:definition "Das Ende des Lebens" (the end of life); skos:prefLabel "Tod"@de, "mors"@la (death); skos:altLabel "End", "Garauß", "Hintritt", "Todsfall", "Verlust deß Lebens"
concept/2: skos:definition "Der Tod als Subjekt" (death as a subject); skos:prefLabel "Tod"@de, "mors"@la (death); skos:altLabel "dürrer Rippen-Kramer", "General Haut und Bein", "ohngeschliffener Schnitter", "Reuter auf dem fahlen Pferd", "Verbeinter Gesell"
concept/3: skos:definition "aufhören zu leben" (the process of dying); skos:prefLabel "sterben"@de, "mori"@la (dying); skos:altLabel "ad Patres gehen", "das Valete von der Welt nehmen", "dem Tod vnter die Sensen gerathen", "den Todten-Tantz antretten", "in Gott entschlaffen"
concept/4 (comment: this concept is about "Todesarten", "manners of death"): skos:definition "einen bestimmten Tod erleiden" (specific ways of dying); skos:prefLabel "getötet werden"@de (to be killed); skos:altLabel "aufgehängt werden", "erbärmlich hingerichtet werden", "ermort werden", "mit solchen vergifften Pfeil getroffen werden", "zu todt gebissen werden"
concept/5: skos:definition "Verstorbene, Leichen" (dead bodies); skos:prefLabel "Toter"@de, "mortuus"@la (corpses); skos:altLabel "christliche Leiche", "Leichnam", "seelig-verstorbener", "todter Cörper", "Todter"
concept/6: skos:definition "tot sein" (to be dead); skos:prefLabel "tot"@de, "mortuus"@la (dead); skos:altLabel "abgestorben", "der Geist ist hinaus", "leblos", "verblichen", "verstorben"
concept/7: skos:definition "töten, ermorden" (to kill someone); skos:prefLabel "töten"@de (killing); skos:altLabel "erwürgen", "morden", "todt schlagen", "tödten", "Vergifften"
Labels are related to each other by means of the following properties: abacus:hasTranslation and its inverse abacus:isTranslationOf, used for German and corresponding Latin terms; abacus:hasVariant and its inverse abacus:isVariantOf, which indicate spelling variants.
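To make the encoding concrete, the following Python/rdflib sketch builds a concept with multilingual labels and the label-to-label relations described above; the namespace URI and the label nodes are hypothetical placeholders rather than the project's actual identifiers.

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import RDF, SKOS

    # hypothetical namespace; the project's real URIs are not given here
    ABACUS = Namespace("http://example.org/abacus#")

    g = Graph()
    g.bind("skos", SKOS)
    g.bind("abacus", ABACUS)

    # concept/1: death as the "end of life"
    c1 = ABACUS["concept/1"]
    g.add((c1, RDF.type, SKOS.Concept))
    g.add((c1, SKOS.definition, Literal("Das Ende des Lebens", lang="de")))
    g.add((c1, SKOS.prefLabel, Literal("Tod", lang="de")))
    g.add((c1, SKOS.prefLabel, Literal("mors", lang="la")))
    g.add((c1, SKOS.altLabel, Literal("Verlust deß Lebens", lang="de")))

    # label-to-label relations, modelled here on hypothetical label nodes
    tod, mors = ABACUS["label/Tod"], ABACUS["label/mors"]
    g.add((tod, ABACUS.hasTranslation, mors))
    g.add((mors, ABACUS.isTranslationOf, tod))

    print(g.serialize(format="turtle"))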
In order to systemize concept 4 (dealing with "manners of death") we use the annotation property skos:comment: "death by accident or circumstances", "death by disease", "death by foreign hand", and "death as a murderer" (i.e. personification of death) [13]. We refrained from creating narrower sub-concepts (linked via skos:broader) in this case, as these kinds of terms do not represent corpus text. Next, we will link these comments to corresponding concepts included in external knowledge sources, thus allowing us to distinguish between concepts and terms directly related to our corpus and other knowledge sources that can be used for additional interpretation and classification. This can be seen as the most important difference between the ABaC:us taxonomy and other vocabularies, which are often characterized by strict hierarchical formalisms making them of limited use for literary studies [14].
Lexicalization of the Taxonomy
In order to be able to use the taxonomy in the context of NLP applications, the content of its labels needs to be lexicalized, enriching them with linguistic information. This includes tokenization, lemmatization, PoS tagging, and possibly other levels of natural language (NL) processing. Labels enriched with this information can be better compared to text that has also been submitted to NL processing tools. If sufficient linguistic similarity is found between a text passage and a lexicalized label, this text segment can then be semantically annotated with the concepts the label is associated with.
The model we adopt for the representation of lexicalized labels is the one described by lemon [15], developed in the context of the Monnet project [16]. lemon is also available as an ontology [17], which has been imported in our taxonomy, so that we can make direct use of all classes and properties of this model.
Tokenization and Sense Disambiguation
All tokens in ABaC:us have been semi-automatically annotated with lemma and PoS information, following the STTS tag-set (Mörth et al., 2012) [18], so that all parts of the texts selected as relevant terms for inclusion in the labels already come with this information. Thus, our task consists mainly in applying lemon ontology elements for annotating the labels of the taxonomy with this linguistic information.
As can be seen in Example 1 below, for the term included in the alternative label "rasender Tod"@de (raging death), we make use of the lemon property decomposition for encoding the results of tokenization. And we use the lemon property altRef, which has as rdfs:range an entity encoded as an instance of the lemon class lexicalSense [19], for linking to the concept the alternative label is an expression of.

[19] See http://lemon-model.net/lemon.rdf for the whole list of properties and classes of lemon.
Linking to external Lexical and Linguistic Resources
We still need to associate the tokens, which are now each encoded as a value of the lemon property decomposition, with morpho-syntactic information. As mentioned earlier, we already have all the information about the corresponding modern German lemmas and PoS (in the STTS format) for all tokens of the corpus. But, instead of directly using the lemon class lexical entry and the lemon properties canonical form and lexical property for including the linguistic information we have for every token in the corpus, we are for now linking the values of the lemon property decomposition to already existing lexical entries that are encoded in the LOD format. We choose for this the actual DBpedia instantiation of Wiktionary [20]. There we also get the information that "rasender" is an adjective with lemma "rasend" and that "Tod" is a noun with lemma "Tod" (see Example 1 below) [21]. The two meanings we have distinguished in the ABaC:us taxonomy for "Tod" (death), as the "end of life" and as "a subject", are also present in this external resource [22]. Depending on the specific Wiktionary entries, we have a variable number of sense-specific translations at our disposal. The word "Tod", with the meaning "end of life", is provided with 44 translations. We can automatically add those labels to our taxonomy and link them to the German labels via the abacus:isTranslationOf property, and so support cross-lingual access to our semantically annotated corpus. It was more difficult to find an English equivalent for the second meaning of "death", "death as a subject" [23], since no direct translation for English is given in this instantiation of Wiktionary. The same can be said of the ambiguous German lemma "rasend" (raging). As a result, the term "rasender Tod"@de (raging death) is now encoded in our taxonomy (with lemon being integrated) as in Example 1.
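Example 1 is rendered here as a minimal Python/rdflib sketch; the label node and the Wiktionary entry URIs are hypothetical placeholders, and the sketch flattens both lemon's ordered component list and the lexicalSense node that altRef formally points to.

    from rdflib import Graph, Namespace

    LEMON = Namespace("http://lemon-model.net/lemon#")
    ABACUS = Namespace("http://example.org/abacus#")             # hypothetical project namespace
    WIKT = Namespace("http://wiktionary.dbpedia.org/resource/")  # assumed entry URIs

    g = Graph()
    g.bind("lemon", LEMON)
    label = ABACUS["label/rasender_Tod"]  # hypothetical node for the alternative label

    # tokens of "rasender Tod" linked to existing LOD lexical entries;
    # lemon actually orders components in an rdf:List, flattened here
    g.add((label, LEMON.decomposition, WIKT["rasend"]))  # adjective, lemma "rasend"
    g.add((label, LEMON.decomposition, WIKT["Tod"]))     # noun, lemma "Tod"

    # sense link: the label expresses concept/2, "death as a subject"
    # (simplified: altRef formally points to a lexicalSense instance)
    g.add((label, LEMON.altRef, ABACUS["concept/2"]))

    print(g.serialize(format="turtle"))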
Conclusion
The ABaC:us collection contains a wide range of death-related linguistic vocabulary deriving from the Baroque era. Its writers were extremely inventive in paraphrasing experiences with death and dying. Thus, one integral approach was to make those different concepts more easily discernible. The numerous SKOS labels in the ABaC:us taxonomy give evidence of how the culture of death and dying was transmitted in lexical and linguistic patterns. By making those patterns accessible and reusable on the (L)LOD, we complement existing contemporary concepts of the topic and provide a basis for sharing and comparing the concepts, which can be used in NLP applications in the context of eHumanities.
"year": 2013,
"sha1": "71c813258fc059850ccb7c7b012519ec3c240118",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "71c813258fc059850ccb7c7b012519ec3c240118",
"s2fieldsofstudy": [
"Computer Science",
"Linguistics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
244742985 | pes2o/s2orc | v3-fos-license | Solanum Fruits: Phytochemicals, Bioaccessibility and Bioavailability, and Their Relationship With Their Health-Promoting Effects
The Solanum genus is the largest in the Solanaceae family containing around 2,000 species. There is a great number of edibles obtained from this genus, and globally, the most common are tomato (S. lycopersicum), potato (S. tuberosum), and eggplant (S. melongena). Other fruits are common in specific regions and countries, for instance, S. nigrum, S. torvum, S. betaceum, and S. stramonifolium. Various reports have shown that flavonoids, phenolic acids, alkaloids, saponins, and other molecules can be found in these plants. These molecules are associated with various health-promoting properties against many non-communicable diseases, the main causes of death globally. Nonetheless, the transformations of the structure of antioxidants caused by cooking methods and gastrointestinal digestion impact their potential benefits and must be considered. This review provides information about antioxidant compounds, their bioaccessibility and bioavailability, and their health-promoting effects. Bioaccessibility and bioavailability studies must be considered when evaluating the bioactive properties of health-promoting molecules like those from the Solanum genus.
INTRODUCTION
The numerous species of the Solanum genus are distributed mainly in tropical and subtropical areas around the globe; these are used in folk medicine or as food crops. The positive effects of these plants on human health are linked to their content of phenols, alkaloids, saponins, terpenes, flavonoids, coumarins, and carotenoids (1,2). Some have been reported with anticancer, antioxidant, antidepressant, antihypertensive, anti-inflammatory, hypolipidemic, hypoglycemic, hepatoprotective, anti-obesogenic, and antidiabetic properties (1-4).
In addition, there are reports regarding the bioaccessibility and bioavailability of these molecules. Bioactive compounds are subjected to modifications during processing and gastrointestinal digestion. Moreover, those that permeate the intestinal barrier are metabolized, and then most are distributed for excretion. Thus, the bioavailability of bioactive molecules is often low (5). This low bioavailability can hinder their potential effects on human health; bioaccessibility and bioavailability are therefore important factors to consider during in vitro and in vivo evaluations. The present document focuses on consumable matrices and those with potential bioactive effects.
A summary of the compounds identified and isolated from Solanum species and their potential bioactive properties can be found in Table 1.
Phenolic Compounds
Phenolics have at least one aromatic ring with one hydroxyl group in their structure. Phenolics can be classified into flavonoids and non-flavonoids (25). One of the most common species worldwide is S. lycopersicum, which is reported with large concentrations of phenolics like chlorogenic acid, resveratrol, quercetin, and myricetin (31-33). Moreover, S. tuberosum has been reported with high concentrations of phenolics in the peels, mainly phenolic acids, especially chlorogenic acid (34). Furthermore, in S. melongena, hydroxycinnamic acid derivatives are reported as the major phenolics (35). Also, S. nigrum had as major compounds myricetin, 3,4-dicaffeoylquinic acid, 3-caffeoylquinic acid, 5-caffeoylquinic acid, and 4,5-dicaffeoylquinic acid (36). S. betaceum had chlorogenic acid and 3-O-caffeoylquinic acid as major compounds (37). The S. stramonifolium plant has high concentrations of phenolics, and the root extract presents anticancer effects (38). Less common species such as S. scabrum and S. burbankii contain petunidin, delphinidin, and malvidin (39). Some health-related effects of the main phenolics present in this genus are antioxidant, anti-inflammatory, antidiabetic, cardioprotective, and anti-obesity activities. Phenolic compounds from diverse sources are highly unstable during gastrointestinal digestion and have low bioavailability (3,40).
Saponins
Saponins are glycosidic compounds consisting of an aglycone (sapogenin) linked to one or more oligosaccharide moieties. Saponins are classified according to their structure into steroidal or triterpenoid (60,61). Most saponins are poorly absorbed in the intestine, have foaming properties in aqueous solutions, exert a hemolytic effect, and cause a bitter taste and astringency (60). Nonetheless, they have been proved to have potential health benefits with anti-insulin resistance and anti-obesogenic effects (62). Saponins obtained from Solanum species have shown antitumor, anti-inflammatory, antiviral, antimycotic, antioxidant, hypoglycemic, and hypolipidemic activities (63). In S. melongena, the presence of saponins has been reported especially in the seeds; cholestane-type steroidal melongosides-N, O, P, R, S (64), as well as furostanol-type steroidal saponins, melongoside T-V (63). Saponins isolated from S. melongena peels showed inhibition of the enzyme lipase, which was more effective than the drug used as control (orlistat) (8). Diosgenin is present in dietary Solanum species and has been isolated to study its health-promoting effects: modulating oxidative stress, improving lipid profile, and regulating the mitochondrial dysfunction pathway (65). In S. surattense at least 11 different saponins have been isolated and have shown in vitro cytotoxic activities against cancer lines (66). The fruits from S. torvum have various steroidal saponins, proven to have anticancer effects against breast, liver, gastric, and lung cancer lines (9,67).
BIOACCESSIBILITY OF ANTIOXIDANTS IN SOLANUM EDIBLES
It is important to consider the structural changes antioxidants suffer after ingestion. Processing is a starting point in this journey, from cutting or mashing to boiling, baking, or even freezing, which can help release antioxidant compounds, thus increasing their bioaccessibility. Cooking in water can reduce the content of hydrosoluble compounds, but cooking with oil can cause synergy and increase the bioaccessibility of lipophilic antioxidants. High temperatures break down cell walls and could increase the bioaccessibility of phytochemicals, but thermolabile compounds are highly degraded (86).

Bioaccessibility is studied using different models to simulate gastrointestinal digestion, allowing researchers to calculate the portion of phytochemicals available for absorption. Phytochemicals can be released from the food matrix due to the simulated conditions based on the physiological data of digestion: electrolytes, digestive enzymes, dilution, pH, and time of digestion (87). Therefore, gastrointestinal digestion can degrade and transform antioxidant compounds, especially those that are highly sensitive to pH changes, such as anthocyanins. Digestive enzymes can also break down phytochemicals, like phenolic compounds attached to sugars. Enzymes like lactase phloridzin hydrolase can hydrolyze sugar moieties from glycosylated bioactive molecules (5,88).

Eggplant total phenolic content (TPC) was reported before and after four different cooking methods: baking, boiling, frying, and grilling. Antioxidant activity was determined using the ABTS assay, and reducing capacity was quantified using the FRAP (ferric reducing/antioxidant power) assay. Results indicated that TPC was improved >300% by frying, 67% by baking, and 42% by boiling. Grilling eggplant decreased TPC by 34.5%. Raw, boiled, and baked eggplant samples subjected to in vitro digestion showed bioaccessibility of TPC of 112.5, 93.4, and 101.8%, respectively; this suggests that phenolic compounds were released after in vitro digestion. Fried undigested samples showed the highest amount of TPC, but once the simulated digestion was performed, the bioaccessibility was 67%, the lowest compared to the other three samples. Grilling and in vitro digesting eggplant showed that bioaccessibility of TPC went up to 217.4%. The ABTS and FRAP results were consistent with the TPC in all digested and undigested samples, with two exceptions: the digested boiled samples in the ABTS assay and the raw samples in the FRAP assay. The boiled digested samples increased by 336.4%; this could be because the phenolic compounds reactive to the ABTS radical were released only after the simulated digestion. Moreover, in the second case, a bioaccessibility of 24% was attributed to a failure in extracting the reducing compounds during digestion, assuming the solvent used with the raw samples was more efficient than in vitro digestion (35). Thus, it is suggested that cooking can break down cell walls, releasing antioxidants and increasing their bioaccessibility; however, they can also become more susceptible to degradation.
Drying methods have been assessed to preserve the phenolic content and the antioxidant capacity of S. melongena. Freezing, drying tunnel, and drying oven methods were evaluated, combined with slicing and mincing the eggplant. Results indicate that sliced eggplant dried at 45-50 °C in a drying oven was the best option to obtain a flour rich in bioactive compounds. In the same study, freezing the material had a negative effect on this objective (89). A study was conducted using tomato products to determine the bioaccessibility of lycopene. The tomato pulp was processed by high-pressure homogenization and microwave heating into different end-products in the presence of three different oils: coconut, olive, and fish. High-pressure homogenization, followed by heating at 90 °C, increased the bioaccessibility of lycopene. It was hypothesized that these processes damage the cellular barriers, allowing lycopene to be more bioaccessible (90).
The bioaccessibility of β-carotene in grape tomatoes ranged between 14 and 31%, considering they used two different digestion methods (91). Common home processes like paste processing and drying significantly increased total lycopene, phenolic, and flavonoid content, as well as the total antioxidant capacity. It is suggested that thermal processing disrupts cell membranes and cell walls, releasing lycopene from the insoluble portion of the matrix (92). Synergic interactions have been identified to benefit bioaccessibility. For instance, red cabbage was co-digested with different vegetables, enhancing bioaccessibility of total anthocyanins by 10-15% when samples were carotenoid-rich, like tomatoes. In contrast, the carotenoid bioaccessibility was decreased by 42-56%. This example of phytochemical interaction shows that some combinations exert synergy and others antagonism in bioaccessibility (93). Anthocyanins are highly sensitive to pH changes that naturally occur in the digestive tract, which has led to developing techniques to improve bioaccessibility and bioactivity, such as microencapsulation (94).
After boiling, the bioaccessibility of polyphenols in white and purple potatoes was evaluated. All polyphenols in the samples increased during the gastric phase of the in vitro digestion but decreased during the intestinal phase. Nonetheless, the boiled undigested samples had lower content of polyphenols. It is discussed that common chemical extractions underestimate the polyphenol content that could be released in the intestine. p-Coumaric acid in the purple potato was not detectable in the gastric phase, but it was detected in the intestinal phase, and it was 16x higher than in the boiled undigested samples. Also, caffeic acid in white potato was the only phenolic that increased its bioaccessibility in the intestinal phase. It is hypothesized that soluble polyphenols accumulate in cell vacuoles, which are released by pepsin action during gastric digestion. Chlorogenic acid interacts with starch, increasing its bioaccessibility and delaying absorption because it is only released once digested (95).
Two purple potatoes (Amachi and Leona) were subjected to simulated digestion, showing that the total anthocyanin concentration was over 30-fold higher in Amachi compared to Leona digests. In descending colon digesta, concentrations in Leona were 7-fold higher than in Amachi. These data were relevant because of the interest in testing anthocyanins against tumorigenic colon cells (Caco-2). Amachi digesta caused cytotoxicity in non-tumorigenic cells, while Leona's only caused cytotoxicity in tumorigenic cells. Also, it is suggested that microbial metabolism can decrease anthocyanin levels (96). The bioaccessibility of phenolic compounds in S. nigrum leaves has been studied to evaluate the effect of heating on their release. The phenolic compounds myricetin, quercetin-3-O-robinoside, 3,4-dicaffeoylquinic acid, 3-caffeoylquinic acid, and rutin were the most abundant. The boiling process improved the phenolic content, which decreased after the in vitro digestion (36). These results differ from other species of the genus where phenolics were also put through simulated digestion, but the process increased the content. The discrepancies are attributed to the interaction of certain phenolics like chlorogenic acid with starch, which was not abundant in the samples (95). Despite the decreased content of phenolic compounds in S. nigrum samples after digestion, they still had bioactivity against oxidative stress and prevented DNA oxidative damage (36).
The fruits of S. betaceum are known as a functional ingredient; their phytochemicals have been linked to metabolic syndrome prevention. On this subject, simulated digestion was carried out using the fruit's seeds, pulp, and skin, and bioactive effects such as enzyme inhibition and antioxidant activity were analyzed. Phenolic acids, anthocyanins, condensed tannins, and carotenoids (only in pulp) were quantified and showed enzyme inhibitory properties. The extracts were able to inhibit α-glucosidase, α-amylase, and lipase before and after digestion, and the inhibition power of the seed extracts was improved after digestion. The seed extracts showed to be the most bioactive, and this was linked to the presence of condensed tannins, which was higher than in skin and pulp samples (97).
IMPLICATIONS OF THE BIOAVAILABILITY OF SOLANUM ANTIOXIDANTS AND THEIR HEALTH-PROMOTING PROPERTIES
The bioavailability of most compounds is crucial to the bioactive effect, and it is affected by factors like individual characteristics of the consumer, interaction with other compounds, delivery matrix, preparation processes, bioactive type/category, chemical structure, and others (98,99). The pharmacological concept of bioavailability considers liberation, absorption, distribution, metabolism, and excretion, generally known as LADME (100). Because of all these factors and the low absorption in the gastrointestinal tract, phytochemicals have low bioavailability (5). Information about the bioavailability of specific phytochemicals is still limited, but we can predict the bioavailability of phytochemicals by using Lipinski's rule (101), which predicts the drug-likeness of the passive absorption of a molecule considering four chemical characteristics: a molecular weight ≤500, a partition coefficient (LogP) <5, fewer than 5 hydrogen bond donors, and a maximum of 10 hydrogen bond acceptors (102) (see Supplementary Table 1).
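As a sketch of how this screen translates into a check, the following Python function encodes the thresholds listed above; it is a drug-likeness filter for likely passive absorption, not a measured bioavailability, and the example values for chlorogenic acid are approximate literature figures.

    def passes_lipinski(mol_weight, logp, h_bond_donors, h_bond_acceptors):
        """Rule-of-five screen: True if the molecule meets all thresholds
        associated with likely passive absorption."""
        return (mol_weight <= 500
                and logp < 5
                and h_bond_donors < 5
                and h_bond_acceptors <= 10)

    # example: chlorogenic acid (MW ~354.3, LogP ~ -0.4, 6 donors, 9 acceptors)
    print(passes_lipinski(354.3, -0.4, 6, 9))  # False: too many H-bond donors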
The evaluation of the bioavailability of phenolic compounds has shown that their glycosylation directs the route of their absorption; glycosides are transported through active absorption and aglycones through passive diffusion. After their absorption in the intestine, most phenolics are extensively metabolized by xenobiotic enzymes abundant in enterocytes and the liver for their excretion through bile, feces, and urine (5). Even though most phenolics are easily degraded during their passage through the gastrointestinal tract, some are highly absorbable. Caffeic acid is 95% absorbed in the small intestine and stomach. Also, chlorogenic acid, the ester form of caffeic acid, is absorbed in the colon after the microbiota transforms it into several metabolites (103,104). The fact that chlorogenic acid usually reaches the colon has been considered beneficial for gut health because it promotes the growth of Bifidobacterium species (105). Therefore, chlorogenic acid is considered highly bioavailable for humans; it is estimated that 33% is absorbed intact in the stomach and 7% in the small intestine after hydrolysis (106).
Alkaloids found in Solanum have low bioavailability. Therefore, some studies have focused on improving their absorption, mainly through delivery via liposomes, nanoparticles, gels, and emulsions. Transdermal delivery has also been another option, seeking to obtain effective products while avoiding side effects (2,107). Furthermore, most saponins are hydrosoluble due to their glycosidic groups. They have an amphiphilic nature and, therefore, the ability for self-micellization in the gastrointestinal environment, and they have shown to be stable to pH variations during digestion. Some saponins can be chemically hydrolyzed by acid or alkali, forming sapogenins, prosapogenins, sugar residues, or monosaccharides. When gastric digestion is simulated, some saponins show deglycosylation, dehydration, hydration, and oxygenation, leading to the presence of different structures connected to saponins' anticarcinogenic activities. Nevertheless, the bioavailability of saponins has not been widely studied (61,104,108). Moreover, carotenoids found in cultivars of tomato and their products have been widely studied, and there are valuable results in this context (see Table 2). Pro-vitamin A (β-carotene) has also been evaluated in eggplant. It is hypothesized that bioavailability can be reduced when these compounds interact with some vitamins, aspirin, and sulphonamides (69,113), because these groups of phytochemicals may compete for absorption; for example, co-consumption of lutein has a negative effect on the absorption of β-carotene and vice versa (99,115) (Table 2).
CONCLUSIONS
Plants of the Solanum genus contain bioactive compounds that are antioxidant agents and have different mechanisms of action to prevent or lessen diseases and their complications. The bioactive potential of diverse materials has been proven, but there is constant interest in evaluating the transformations during their digestion and absorption. The amount of compound or mixture of compounds needed to achieve the desired effect is also a matter of research to formulate effective and safe phytopharmaceuticals. Fruits, roots, and aerial parts of plants among the Solanum genus can benefit human beings by improving their health when consumed as part of the daily diet, as a nutraceutical, or biopharmaceutical.
AUTHOR CONTRIBUTIONS
EG-G and JH contributed to the conception of the manuscript. CE-R, LC-A, and LM-I contributed to the writing. EG-G and JH edited the manuscript. All authors contributed to the article and approved the submitted version.
ACKNOWLEDGMENTS
CE-R thanks CONACYT for the doctoral scholarship. EG-G would like to thank Cátedras CONACYT for the project 397.
"year": 2021,
"sha1": "989e26a8d8b93f399860ba2ffc55239a5e4cd166",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnut.2021.790582/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "989e26a8d8b93f399860ba2ffc55239a5e4cd166",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118695202 | pes2o/s2orc | v3-fos-license | COLLIER -- A fortran-library for one-loop integrals
We introduce the fortran-library COLLIER for the numerical evaluation of one-loop scalar and tensor integrals in perturbative relativistic quantum field theories. Important features are the implementation of dedicated methods to achieve numerical stability for 3- and 4-point tensor integrals, the support of complex masses for internal particles, and the possibility to choose between dimensional and mass regularization for infrared singularities. COLLIER supports one-loop N-point functions up to currently N=6 and has been tested in various NLO QCD and EW calculations.
Introduction
Next-to-leading order (NLO) predictions for processes induced by strong (QCD) and electroweak (EW) interactions are a basic ingredient for the analysis of high-energy collider experiments. In the past years many automatic tools based on different methods have been developed for the calculation of QCD corrections [1,2], and an NLO generator for EW corrections has been constructed recently [3]. While in unitarity-based methods [4] a one-loop amplitude is directly expressed in terms of a set of basic scalar integrals, the traditional Feynman-diagrammatic approach as well as recently developed recursive methods [2,3,5] rely instead on tensor integrals. For the reduction of tensor integrals to scalar integrals various methods have been invented and refined over the past decades [6,7,8,9], resulting in several libraries that are available for the calculation of one-loop scalar and tensor integrals [10]. In this article we introduce COLLIER, a Complex One-Loop LIbrary in Extended Regularisations. Its particular strengths are the numerically stable calculation of 3- and 4-point tensor integrals owing to the implementation of sophisticated expansion methods for critical phase-space regions, the support of complex masses for internal particles, and the possibility to treat infrared singularities either via dimensional or via mass regularisation. Tensor integrals for 5-point and 6-point functions are reduced with methods that do not involve inverse Gram determinants. The library has already been applied successfully to many complex NLO QCD and EW calculations, among others to the processes [11,12] e+e− → WW → 4 fermions, H → 4 fermions, pp → ttbb, pp → WWbb, pp → tt + 2 jets, and pp → ℓℓ + 2 jets. It is integrated in the NLO generators OPENLOOPS [2] and RECOLA [3] and the publication of the code is in preparation [13].
Representation of tensor integrals
A one-loop N-point tensor integral of rank P has the general form

    T^{N,\mu_1\ldots\mu_P} = \frac{(2\pi\mu)^{4-D}}{i\pi^2} \int d^D q \, \frac{q^{\mu_1} \cdots q^{\mu_P}}{N_0 N_1 \cdots N_{N-1}} .

The denominator factors are given by

    N_k = (q + p_k)^2 - m_k^2 + i\delta , \qquad k = 0, \ldots, N-1 , \qquad p_0 = 0 ,

where p_k and m_k are the momentum and the mass of the particle in the corresponding loop propagator and iδ (δ > 0) is an infinitesimal imaginary part. While COLLIER accepts only real values for the four-momenta p_k, it permits complex values for the masses m_k. Thus, it can be applied to calculations in which propagators of unstable particles are regularised by a complex mass prescription [11,14]. Lorentz covariance allows one to decompose a tensor integral as

    T^{N,\mu_1\ldots\mu_P} = \sum_{n=0}^{\lfloor P/2 \rfloor} \sum_{i_{2n+1},\ldots,i_P=1}^{N-1} \{ g \cdots g \, p_{i_{2n+1}} \cdots p_{i_P} \}^{\mu_1\ldots\mu_P} \, T^N_{0\ldots0\, i_{2n+1}\ldots i_P} ,    (2.4)

where {g⋯g p⋯p}^{μ_1⋯μ_P} denotes the totally symmetrized tensor structure built from n metric tensors and P−2n momenta. Since the tensor T^{N,μ_1...μ_P} is totally symmetric, the Lorentz-invariant coefficients T^N_{0...0 i_{2n+1}...i_P} are symmetric in i_{2n+1}, ..., i_P.
Ultraviolet- (UV-) or infrared- (IR-) singular integrals are represented in dimensional regularisation, where D = 4 − 2ε, as

    T^N = T^N_{fin}(\mu^2_{UV}, \mu^2_{IR}) + a_{UV}\,\Delta_{UV} + a_{IR,2}\,\Delta^{(2)}_{IR} + a_{IR,1}\,\Delta^{(1)}_{IR} ,
    \Delta_{UV} = \frac{c(\varepsilon)}{\varepsilon_{UV}} , \qquad \Delta^{(2)}_{IR} = \frac{c(\varepsilon)}{\varepsilon_{IR}^2} , \qquad \Delta^{(1)}_{IR} = \frac{c(\varepsilon)}{\varepsilon_{IR}} .

Note that we distinguish between singularities resulting from the IR and from the UV domain and that we absorb a term c(ε) = Γ(1 + ε)(4π)^ε in the constants Δ_UV, Δ^(2)_IR and Δ^(1)_IR. COLLIER provides numerical results for the complete integrals T^N, i.e. for the sum of the finite part T^N_fin(μ²_UV, μ²_IR) and the a_UV-, a_IR,2- and a_IR,1-terms. The user can assign arbitrary values to the unphysical mass scales μ²_UV, μ²_IR as well as to the constants Δ_UV, Δ^(2)_IR, and Δ^(1)_IR, which have to drop out in UV- and IR-finite quantities. Varying these parameters allows one to check numerically the cancellation of singularities.
UV- and IR-singular integrals are by default calculated in dimensional regularisation. Collinear singularities can also be regularised with small masses. To this end, masses must be declared small in the initialisation together with corresponding (not necessarily small) numerical values. The small masses are treated as infinitesimally small in the scalar and tensor functions, and the finite values are kept only in mass-singular logarithms.
A general one-loop amplitude δM can be written in terms of tensor integrals as

    \delta M = \sum_j c_{j,\mu_1\cdots\mu_{P_j}} \, T_j^{N_j,\mu_1\cdots\mu_{P_j}} = \sum_j \sum_{i_1,\ldots,i_{P_j}} \tilde{c}_{j,i_1\ldots i_{P_j}} \, T^{N_j}_{j,\,i_1\ldots i_{P_j}} ,

where j runs over all appearing tensor integrals with rank P_j and N_j propagators. Traditional calculations rely on the representation of δM in terms of the T^{N_j}_{j,i_1...i_{P_j}} and perform algebraic manipulations of the corresponding coefficients c̃_j in D dimensions. New methods inspired by Ref. [5] and implemented in the automatic NLO generators OPENLOOPS [2] and RECOLA [3], on the other hand, make use of the representation in terms of the full tensors T_j^{N_j,μ_1···μ_{P_j}} and perform a recursive numerical calculation of the respective coefficients c_{j,μ_1···μ_{P_j}}. COLLIER can be used in either of these approaches as it provides the Lorentz-covariant coefficients T^{N_j}_{j,i_1...i_{P_j}} as well as the full tensors T_j^{N_j,μ_1···μ_{P_j}}.
Implemented methods
The method used to evaluate a tensor integral depends on the number N of its propagators. For N = 1, 2, explicit numerically stable expressions are employed [6,9].
For N = 3, 4, scalar integrals are calculated using analytical expressions as given in Ref. [15], while tensor integrals T^{N,P} of higher rank P are by default numerically reduced to integrals of lower rank T^{N,P-1}, T^{N,P-2} and to integrals with a lower number of propagators T^{N-1} via standard Passarino-Veltman reduction. Schematically this can be written as

    \Delta \cdot T^{N,P} = \left[ T^{N,P-1},\, T^{N,P-2},\, T^{N-1} \right] ,    (3.1)

where [...] denotes a linear combination of the corresponding terms and the determinant Δ = det(Z) of the Gram matrix Z_{ij} = 2 p_i p_j has been made explicit on the left-hand side. In certain regions of the phase space the Gram determinant Δ can become small, so that the numerical solution of (3.1) gets unstable. This problem reflects the ambiguity of the representation of T^{N,P} in terms of the integrals on the right-hand side, which tend to become linearly dependent in this case. Since even the scalar integrals become dependent, this problem is intrinsic to all reduction methods relying on the full set of basic scalar integrals, i.e. it affects unitarity-based approaches as well. In the tensor reduction method, on the other hand, spurious Gram singularities can be avoided for delicate phase-space points by adjusting the strategy of solving the system of linear equations obtained from the Passarino-Veltman algorithm. Consider to this end (3.1) for P → P + 1,

    \Delta \cdot T^{N,P+1} = \left[ T^{N,P},\, T^{N,P-1},\, T^{N-1} \right] ,    (3.2)

in which the integral of interest, T^{N,P}, now appears on the right-hand side. Neglecting in a first approximation terms of order O(Δ), the integrals T^{N,P} can be calculated recursively from integrals of lower rank T^{N,P-1} and from integrals with a lower number of propagators T^{N-1}. In this way tensor integrals of arbitrary rank can be determined at zeroth order in the small parameter Δ. Inserting afterwards the so-determined higher-rank tensor integral T^{N,P+1} into the left-hand side of (3.2) allows one to calculate also terms of order O(Δ) for T^{N,P}. Proceeding systematically in this way one obtains T^{N,P} as a series expansion in the parameter Δ, where higher precision in the form of O(Δ^k) terms is achieved at the price of calculating higher-rank tensor integrals T^{N,P+k}. Based on the described strategy, various expansion methods have been suggested in Ref. [9] with the respective expansion parameter(s) depending on the region in phase space. All these methods have been implemented in COLLIER to arbitrary order in the expansion parameter. In order to decide which method to use for a certain phase-space point, an a priori error estimate is performed for the different methods considering a simplified propagation of errors from scalar integrals and neglected higher-order terms into the tensor integrals of highest rank. During the actual calculation of an expansion the precision is further checked by analysing the correction of the last iteration. In single cases where the a priori error estimate turns out as having been too optimistic, other expansions are tried in addition. In this way stable results are obtained for almost all phase-space points, ensuring reliable Monte Carlo integrations.
While the methods described so far are formulated in the literature in terms of the Lorentz-invariant coefficients T^N_{i_1...i_P}, a new generation of NLO generators, such as OPENLOOPS and RECOLA, needs the elements of the full tensors T^{N,μ_1···μ_P}. To this end, an efficient algorithm has been implemented in COLLIER to construct the tensors T^{N,μ_1···μ_P} from the coefficients T^N_{i_1...i_P}. It performs a recursive calculation of those tensor structures in (2.4) that are built exclusively from momentum vectors. Non-vanishing elements of other tensor structures involving metric tensors are then obtained by adding pairwise equal Lorentz indices, and their value differs from the corresponding value of the pure momentum tensor only by a combinatorial factor and a potential minus sign induced by the metric tensors. The relevant combinatorial factors are calculated and tabulated during the initialisation of COLLIER.
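The pure-momentum part of this construction can be pictured with the following Python sketch; it omits the metric-tensor structures and the tabulated combinatorial factors, and the names are illustrative rather than COLLIER's actual (fortran) interfaces.

    import itertools
    from functools import reduce
    import numpy as np

    def pure_momentum_tensor(coeffs, momenta, P):
        """Assemble sum_{i_1..i_P} T_{i_1...i_P} p_{i_1}^{mu_1} ... p_{i_P}^{mu_P}
        for rank P >= 1 from symmetric coefficients.

        coeffs  : dict mapping sorted index tuples (i_1,...,i_P) to values,
                  with list positions 0..N-2 standing for momenta p_1..p_{N-1}
        momenta : list of 4-vectors (numpy arrays of length 4)
        """
        T = np.zeros((4,) * P, dtype=complex)
        for idx in itertools.product(range(len(momenta)), repeat=P):
            c = coeffs[tuple(sorted(idx))]  # symmetry of the coefficients
            T += c * reduce(np.multiply.outer, (momenta[i] for i in idx))
        return T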
The numbers of invariant coefficients T^N_{i_1...i_P} and tensor elements T^{N,μ_1···μ_P} are compared in Table 3. For N ≤ 4 the number of invariant coefficients is smaller than the number of tensor elements, and this fact constitutes a basic precondition of the Passarino-Veltman reduction method. For N ≥ 5, on the other hand, there are fewer tensor elements than coefficients, and the reduction method for N ≥ 6 presented in (7.7) of Ref. [9] has actually been derived in terms of full tensors. Its translation to tensor coefficients requires an additional symmetrisation, and the resulting coefficients are not unique because of the overdefined number of tensor structures. Therefore, for the calculation of the tensors T^{N,μ_1···μ_P} the reduction for N ≥ 6 has been implemented in COLLIER also directly at the tensor level, without resorting to a covariant decomposition.
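The two counts can be reproduced with a short Python sketch, assuming the standard combinatorics: a totally symmetric rank-P tensor in 4 dimensions has C(P+3,3) independent elements, while each coefficient combines n pairs of '0' indices with a multiset of P−2n momentum indices drawn from the N−1 momenta.

    from math import comb

    def n_tensor_elements(P):
        # independent elements of a totally symmetric rank-P tensor in 4 dimensions
        return comb(P + 3, 3)

    def n_invariant_coefficients(N, P):
        # sum over n metric-tensor pairs; multisets of size P-2n from N-1 momenta
        return sum(comb((N - 2) + (P - 2 * n), P - 2 * n) for n in range(P // 2 + 1))

    for N in range(2, 7):
        P = N  # e.g. rank equal to the number of propagators
        print(N, n_invariant_coefficients(N, P), n_tensor_elements(P))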
Structure of the library
The structure of the library COLLIER is illustrated schematically in Figure 4. The core of the library is formed by the building blocks Coli and DD. They constitute two independent implementations of the scalar integrals T^N_0 and the Lorentz-invariant coefficients T^N_{i_1...i_P} employing the methods described in the previous section. The module tensors provides routines for the construction of the tensors T^{N,μ_1...μ_P} from the coefficients T^N_{i_1...i_P} as well as for a direct reduction of 6-point integrals at the tensor level. The user interacts with the basic routines of Coli, DD and tensors via the global interface of COLLIER. It provides routines to set or extract numerical values of the parameters in Coli and DD as well as routines to call the calculation of tensor coefficients T^N_{i_1...i_P} or tensor elements T^{N,μ_1...μ_P}. The user can choose whether the Coli- or the DD-branch shall be used for the calculation of the integrals. It is also possible to calculate each integral with both branches for the purpose of comparison.
In the evaluation of a one-loop matrix element the same tensor integral is called various times: On the one hand, a single user call of an N-point integral leads to recursive internal calls of lower N'-point integrals, and for N' ≤ N − 2 the same integral is reached through more than one path in the reduction tree. On the other hand, different user calls and their reductions typically involve identical tensor integrals. In order to avoid multiple calculations of the same integral, the sublibraries of COLLIER are linked to a global cache system which works as follows: A parameter N_ext enumerates external integral calls, while for the book-keeping of internal calls a binary identifier id is propagated during the reduction. A pointer is assigned to each index pair (N_ext, id). During the evaluation of the first phase-space points the arguments of the corresponding function calls are compared, and pairs (N_ext, id) with identical arguments are pointed to the same address in the cache. For later phase-space points the result of the first call of an integral is written to the cache and read out in subsequent calls pointing to the same address.
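The bookkeeping can be pictured with the following Python sketch; it is an illustration of the idea only, since COLLIER itself implements this cache in fortran with names and details that differ.

    class IntegralCache:
        """Map external-call/identifier pairs (Next, id) to shared addresses,
        so that identical integrals are computed only once per event."""

        def __init__(self):
            self.pointer = {}   # (Next, id) -> address
            self.args = {}      # address -> argument tuple
            self.results = {}   # address -> cached result for the current event

        def get(self, next_id, bin_id, arguments, compute):
            key = (next_id, bin_id)
            if key not in self.pointer:
                # first events: point calls with identical arguments to one address
                for addr, a in self.args.items():
                    if a == arguments:
                        self.pointer[key] = addr
                        break
                else:
                    addr = len(self.args)
                    self.pointer[key] = addr
                    self.args[addr] = arguments
            addr = self.pointer[key]
            if addr not in self.results:   # compute once, read out afterwards
                self.results[addr] = compute(*arguments)
            return self.results[addr]

        def new_event(self):
            self.results.clear()   # results are refreshed per phase-space point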
Conclusions
We have introduced the fortran-based Complex One-Loop LIbrary in Extended Regularizations COLLIER. It provides the complete set of basic scalar integrals as well as tensor integrals of arbitrary rank for up to N = 6 external particles (an implementation for N ≥ 7 is in progress).
In order to ensure numerical stability, the expansion methods for 3- and 4-point integrals of Ref. [9] have been implemented to arbitrary order in the corresponding expansion parameter. UV singularities are regularised dimensionally; IR singularities can be regularised dimensionally or alternatively by introducing small masses. Complex values are supported for the masses of internal particles in loop propagators, permitting thus the application of COLLIER to processes involving unstable particles. As output the user obtains either the coefficients T^N_{i_1...i_P} of the covariant decomposition of the respective tensor integral or the elements of the tensor T^{N,μ_1...μ_P} themselves. A recalculation of identical integrals is avoided by an efficient built-in cache system. The fundamental building blocks of the library are provided in two implementations that allow for an independent calculation of each integral and for direct numerical cross-checks.
COLLIER has already been successfully applied to a large number of calculations of QCD and EW corrections and is integrated in the NLO generators OPENLOOPS and RECOLA. Publication of the code facilitating its use by other generators and other groups is in preparation.
"year": 2014,
"sha1": "15065133813cb0efa2aa0047cd2ca253e1119b6e",
"oa_license": "CCBYNCSA",
"oa_url": "https://pos.sissa.it/211/071/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "cc87ca4149828649d52b1fb8a2960700dbc9710c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
239089307 | pes2o/s2orc | v3-fos-license | CiLiQuant: Quantification of RNA Junction Reads Based on Their Circular or Linear Transcript Origin
Distinguishing circular RNA reads from reads derived from the linear host transcript is a challenging task because of sequence overlap. We developed a computational approach, CiLiQuant, that determines the relative circular and linear abundance of transcripts and gene loci using back-splice and unambiguous forward-splice junction reads generated by existing mapping and circular RNA discovery tools.
INTRODUCTION
Circular RNAs (circRNAs) are a novel class of non-coding RNAs found in eukaryotic transcriptomes that result from a process called back-splicing during RNA maturation. In recent years, circRNAs are attracting considerable research attention and evidence of their involvement in normal development and disease has been reported (Maass et al., 2017; Gaffo et al., 2019; Vo et al., 2019). Due to their stable, circular conformation, tissue-specific expression patterns and abundance in biofluids, circRNAs are emerging as potential biomarker candidates in minimally-invasive liquid biopsies (Salzman et al., 2013; Zhang et al., 2018; Su et al., 2019; Hulstaert et al., 2020). As circRNAs share most of their sequence with their linear counterparts, it is impossible to uniquely assign massively parallel RNA sequencing reads to linear or circular transcripts if the read does not include the back-splice sequence itself. This hampers calculations of the relative contribution of circRNAs to aggregated gene counts and can obscure differential expression analyses. To this end, we developed a computational pipeline that builds on the output of common mapping strategies to determine the linear or ambiguous character of forward junctions. Based on this classification, we also propose two strategies to determine the circular fraction for a region of interest.
CiLiQuant Pipeline
The pipeline requires three input files in tab-separated format. Any combination of existing mapping and circRNA discovery tools can be used to generate the first two files: one file for forward-splice junctions, e.g., from STAR (Dobin et al., 2013), and one for back-splice junctions, e.g., from CIRCexplorer2 (Zhang et al., 2016). The only requirements are that these junction files were generated from the same sequencing data using the same reference genome and that the files contain information about the coordinates of the junctions and their respective read counts in separate columns. The third input file should contain start and stop coordinates of the genes (or exons) of interest. The pipeline can be initiated using a single command and only requires Python and the Pandas package (The Pandas Development Team, 2010). The pipeline has a strand-specific as well as an unstranded mode. Users can also impose a minimal count threshold to ignore forward-splice and back-splice junctions with less supporting reads than this threshold. More details and example input files can be found on GitHub (https://github.com/OncoRNALab/CiLiQuant). For each gene, the pipeline classifies forward-splice junctions as having a linear or ambiguous origin based on their overlap with detected back-splice junctions (Figure 1A). A forward-splice junction that starts and stops in between any detected back-splice junction is considered ambiguous because both linear and circular RNA can contribute to the read counts of this junction. In case the forward-splice junctions do not (completely) fall within the start and stop of any detected back-splice junction, they are classified as linear.
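A simplified pandas sketch of this classification step is shown below; it assumes single-gene, same-strand junction tables with chrom/start/end columns and is not the pipeline's actual code.

    import pandas as pd

    def classify_forward_junctions(fs, bs):
        """Label forward-splice junctions as 'linear' or 'ambiguous'.

        fs, bs: DataFrames with columns 'chrom', 'start', 'end' holding the
        forward-splice and back-splice junctions of one gene. A forward-splice
        junction is ambiguous if it starts and stops inside at least one
        detected back-splice interval; otherwise it is linear only.
        """
        def label(row):
            inside = ((bs["chrom"] == row["chrom"])
                      & (bs["start"] <= row["start"])
                      & (bs["end"] >= row["end"]))
            return "ambiguous" if inside.any() else "linear"

        out = fs.copy()
        out["origin"] = out.apply(label, axis=1)
        return out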
Using this information, our method determines the relative circular to linear RNA abundance at two levels, back-splice and gene level (Figure 1B). At the gene level, this circular fraction is calculated by comparing the average number of back-splice junction reads to the average number of linear only junction reads in the gene:

    CircFraction = (bs / n_bs) / (bs / n_bs + lin / n_lin) ,    (Eq. 1)

where bs is the sum of back-splice junction reads over the n_bs distinct back-splice junctions and lin is the sum of linear only forward-splice junction reads over the n_lin distinct linear junctions, so that the sum of junction reads is corrected for the number of distinct junctions. At the individual back-splice level, reads of a back-splice junction are compared to the average of linear only forward-splice junction reads that are directly flanking the back-splice. The circular fraction is calculated as in Eq. 1, but "lin" is limited to those particular flanking junctions. Of note, sometimes this flanking information is not available, either because there are no flanking reads or because those reads are classified as ambiguous as they could be derived from another circRNA. Therefore, an alternative calculation using the average of all linear only junction reads in the gene is provided as well. This alternative calculation compares reads of a back-splice junction to the average of all linear only forward-splice junction reads in the corresponding gene. For each circular fraction, an Agresti-Coull 95% confidence interval is calculated (Brown et al., 2001).
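A minimal Python sketch of these two calculations follows; how CiLiQuant maps the junction-count correction into the confidence interval may differ in detail.

    from math import sqrt

    def circ_fraction(bs_reads, n_bs, lin_reads, n_lin):
        """Eq. 1: mean back-splice reads per junction divided by the sum of
        mean back-splice and mean linear-only reads per junction."""
        circ, lin = bs_reads / n_bs, lin_reads / n_lin
        return circ / (circ + lin)

    def agresti_coull(successes, total, z=1.96):
        """Agresti-Coull 95% confidence interval for a proportion."""
        n_adj = total + z ** 2
        p_adj = (successes + z ** 2 / 2) / n_adj
        half = z * sqrt(p_adj * (1 - p_adj) / n_adj)
        return max(0.0, p_adj - half), min(1.0, p_adj + half)

    # e.g. 20 back-splice reads on 1 junction vs 60 linear reads on 3 junctions
    print(circ_fraction(20, 1, 60, 3))   # 0.5
    print(agresti_coull(20, 40))         # CI around 0.5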
Next, Ribonuclease R (RNase R) treatment (RNR07250 (250 U), Lucigen) was performed according to our previously described protocol (Vromman et al., 2021). In summary, one sample of each cell line was treated with RNase R and one sample of each cell line was treated as a buffer control. This was followed by a clean-up step using Vivacon 500, 10,000 MWCO Hydrosart columns (VN01H02, Sartorius). Next, the NEBNext Ultra II Directional RNA Library Prep Kit for Illumina (E7760L, New England Biolabs) was used in combination with the NEBNext Multiplex Oligos for Illumina (E7600S, New England Biolabs) to index and prep the samples. The protocol was adjusted to obtain relatively long insert sizes (RNA fragmentation of 7.5 min; first-strand cDNA synthesis elongation step of 50 min instead of 15). The last bead clean-up step was performed twice to completely remove all indexes from the samples. Finally, the samples were pooled and sequenced on a NovaSeq 6000 instrument using a NovaSeq 6000 S1 Reagent Kit v1.5 (300 cycles) (20028317, Illumina), resulting in approximately 300 M paired-end reads per sample. Raw FASTQs are available at SRA (PRJNA789110). Only reads passing quality control (base calling accuracy of ≥99% in at least 80% of the nucleotides in both mates of a pair) were included in this analysis. RNA sequencing data was further subsampled to 10, 25, and 50 M paired-end reads using Seqtk v1.3 (Li, 2012). Finally, CiLiQuant without filter was applied to the forward-splice and back-splice junction file generated by STAR (Dobin et al., 2013) and CircExplorer2 (Zhang et al., 2016) using UCSC GRCh38/hg38 as reference genome. CiLiQuant output tables for 50 M paired-end reads per sample can be found in the Supplementary Material.
RESULTS
CiLiQuant determines whether detected forward-splice junction reads have an ambiguous (linear or circular) origin by looking at overlap with detected back-splice junctions (Figure 1A). The classification is important for genes that have multi-exon or overlapping circular transcripts because their linear forward-splice junction count may be overestimated. Figure 2 illustrates this problem for junctions of NFATC3 in HLF cells. In the total RNA-sequencing data from HLF and NCIH23 cell lines, 20% of genes that have at least one forward-splice junction read have at least one back-splice junction read as well (3,889 and 3,897 genes, respectively). For those genes with circular transcripts, we observed that the average of true linear counts per junction can be at least 2 times overestimated in 7-8% of the genes compared to the average of all forward-splice counts per junction obtained with STAR mapping (Figure 3A). The impact of the ambiguous reads is even more prominent at the individual back-splice level. In HLF, 9,384 unique back-splice junctions with flanking forward-splice reads are detected. For 13% of those back-splice junctions, all flanking junctions are ambiguous, which means that we cannot be sure whether the flanking reads are from linear or circular transcripts. CiLiQuant here provides the alternative of comparing back-splice reads to the average of linear only forward-splice reads per junction in the entire gene. For an additional 12% of back-splice junctions, the average linear only reads per flanking junction can be at least two times overestimated compared to the average of all forward-splice reads per flanking junction (Figure 3B). Similar results were obtained in NCIH23: 8,556 unique back-splice junctions with flanking forward-splice reads detected, 13% of those back-splice junctions only have ambiguous flanking junctions, and in 10% of back-splice junctions the linear only flanking reads per junction can be overestimated at least two times (Figure 3B). CiLiQuant also calculates circular-to-linear fractions at both the gene and individual back-splice level (Figure 1B). The back-splice level calculation gives an idea of the frequency of back-splicing compared to forward-splicing in that specific region. Circular fractions at gene level can be used to determine to which extent (part of) the gene is expressed in a circular form. A non-zero circular fraction indicates that read counts of this gene are not merely coming from linear transcripts. This can be important for samples such as biofluids where the circular extracellular RNAs seem to be better preserved than their linear counterparts (Hulstaert et al., 2020). RNase R digests linear RNAs while keeping circular RNA molecules intact. In Figure 4, we verified that the circRNA fractions, as computed by CiLiQuant, increased after RNase R treatment as expected. We observed a 2.5-fold and a 2.4-fold increase in circRNA fraction (back-splice level) in HLF and NCIH23 samples respectively when applying RNase R treatment. Similar enrichment was observed at gene level and this trend was also present at lower sequencing depths (data not shown). Of note, both at the gene and back-splice level, the circular fraction does not always reach 100% after RNase R treatment. Possible explanations are that not all linear RNA is degraded by the RNase R treatment, or that circRNAs that span other forward-splice junctions may be missed because of low coverage, leading to misclassification of a splice junction as linear only.

FIGURE 1 | CiLiQuant classifies splice junctions based on their linear or circular origin and determines circRNA fractions. (A) Input, processing and output of the pipeline. Three input files are required (the number of junction reads may be in a separate column or in the score column); junctions are divided into four types based on overlap with detected circRNAs; quantification is done both at back-splice and gene level. (B) Examples of circRNA fraction calculations. CircFractionFlank only considers linear junctions directly next to the back-splice of interest (linear_flanking). CircFractionAllLinear and CircFraction consider all linear (linear_flanking and linear_nonflanking) junctions in the gene. In each calculation the sum of junction reads is corrected for the number of distinct junctions. Note that the counts in this example are rather low, resulting in large confidence intervals. Test case with real sequencing data on GitHub. ambi: ambiguous (circular or linear origin); bs: back-splice; chr: chromosome; CI: Agresti-Coull 95% confidence interval; linear_flanking: forward-splice that partially overlaps with a back-splice junction (starts outside and stops inside the back-splice interval or vice versa); linear_nonflanking: forward-splice that shows no overlap with a back-splice junction.
DISCUSSION
Several RNA sequencing library preparation methods can pick up both linear and circular RNA transcripts. However, contrary to the clear circular RNA origin of back-splice reads, the linear or circular origin of forward-splice junctions is not always obvious. CiLiQuant classifies forward-splice junction reads as linear only or ambiguous depending on the overlap with detected circRNA transcripts. By correcting the sum of junction reads for the number of unique junctions, the linear only and back-splice junction reads can be directly compared. The entire pipeline is freely available on GitHub under MIT license (https://github.com/OncoRNALab/CiLiQuant), platform independent, and can be initiated using a single command. Other count-based strategies that determine circular-to-linear RNA ratios rely on the maximally expressed transcript or on splice reads flanking the circRNA (Rybak-Wolf et al., 2015; Jakobi et al., 2019; Ma et al., 2019). However, these calculations can still be based on reads coming from both circular and linear transcripts in case other circRNAs span this region as well, as shown in Figure 2. Moreover, the flexible input format of CiLiQuant also allows the usage of junction count files from various combinations of mappers and circRNA quantification tools. This enables users to look at circRNA transcript fractions using their preferred mapping and circRNA detection strategy. An alternative would be to use a model-based strategy that simultaneously quantifies linear and circular reads using a predefined pseudolinearized reference for circRNAs (Li et al., 2017). The dual approach of CiLiQuant allows users to look at the circular versus linear RNA abundance from different perspectives. The confidence interval further provides the user with a level of uncertainty on the calculated fraction; e.g., a ratio of 2/2 has higher uncertainty than a ratio of 20/20 reads. This allows one to study the effect of perturbations on alterations of this circular fraction, hinting at a causal relationship between the perturbation and splicing. As the output table includes the original counts, the user can still impose a minimum count threshold and look at absolute count differences. Possible applications include, but are not limited to, comparing different tissues, cell types or biomaterials with respect to the circRNA fraction of particular genes, or overall; identifying genes or regions with abundant circRNA enrichment; discovering interesting circRNAs for biomarker purposes. Finally, the ambiguous read category can help to determine the relative level of mixed (linear and circular RNA) signal in aggregated gene counts. Any gene with partially overlapping circRNAs or multi-exon circRNAs potentially suffers from ambiguous reads. These reads are often mistakenly considered as read evidence for linear transcripts (example in Figure 2).

FIGURE 2 | Sashimi plot with linear only and ambiguous flanking junctions of NFATC3 in HLF cells. A single-exon circRNA (chr16_68121986_68123121_+) has seven supporting back-splice reads (blue). The 66 forward-splice junction reads on the left do not fall within other detected back-splice junctions and are therefore classified as linear only (green). The 110 forward-splice junction reads on the right are ambiguous (orange) as they can be derived from both linear and circular transcripts [with 24, 2, and 2 back-splice reads supporting the back-splice junctions that span these forward-splice junctions, respectively (blue)].
In case of sufficient sequencing depth, the linear only classification of CiLiQuant can be used as a starting point for differential expression analysis of linear RNA junction counts only. This approach avoids interference of circRNA derived reads in linear transcript differential expression analyses and it is similar to the current circRNA differential expression analyses that are based on back-splice junction counts only. A limitation of the CiLiQuant algorithm is that it depends on the detection of all circRNAs present in the sample to classify junction reads. In case a circRNA is not detected because of insufficient sequencing depth, the junction reads that fall within this circRNA may be misclassified as linear only. However, the obtained fractions could still be useful for relative comparisons of samples sequenced at the same depth.
The pipeline took 2 hours to process junction files generated from 10 M paired-end reads (single-node run, maximum memory used: 370 MB); this increased to 3 hours for 50 M paired-end read data (single-node run, maximum memory used: 430 MB). Parallelization is possible by splitting up the reference file, for example by chromosome, as sketched below.
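A minimal helper for such chromosome-wise splitting might look as follows; it assumes a tab-separated reference file whose first column is the chromosome name, which may not match the exact format CiLiQuant expects.

```python
from collections import defaultdict

def split_by_chromosome(path: str) -> None:
    """Write one '<path>.<chrom>' file per chromosome found in column 1."""
    chunks = defaultdict(list)
    with open(path) as handle:
        for line in handle:
            chrom = line.split("\t", 1)[0]
            chunks[chrom].append(line)
    for chrom, lines in chunks.items():
        with open(f"{path}.{chrom}", "w") as out:
            out.writelines(lines)
```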
In conclusion, the CiLiQuant pipeline distinguishes linear only from potentially mixed junction reads and determines the circular contribution in RNA sequencing data in a systematic and uniform way.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are publicly available. These data can be found here: https://www.ncbi.nlm.nih.gov/bioproject/PRJNA789110. | 2021-10-20T16:29:46.336Z | 2021-09-10T00:00:00.000 | {
"year": 2022,
"sha1": "ffc7bd500727408756f95deb1fb21f77333fe77e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fbinf.2022.834034/pdf",
"oa_status": "GOLD",
"pdf_src": "Frontier",
"pdf_hash": "d8e115516c51f5b10f8f62d6a3ba448163694c96",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
255865087 | pes2o/s2orc | v3-fos-license | Prognostic significance of CXCL5 expression in cancer patients: a meta-analysis
CXCL5 is a member of the CXC-type chemokine family, which has been found to play important roles in tumorigenesis and cancer progression. Recent studies have demonstrated that CXCL5 could serve as a potential prognostic biomarker for cancer patients. However, the prognostic value of CXCL5 is still controversial. We systematically searched PubMed, Embase and Web of Science to obtain all relevant articles investigating the prognostic significance of CXCL5 expression in cancer patients. Hazard ratios (HRs) with corresponding 95% confidence intervals (CIs) were pooled to estimate the association between CXCL5 expression levels and survival of cancer patients. A total of 15 eligible studies including 19 cohorts and 5070 patients were enrolled in the current meta-analysis. Our results demonstrated that elevated expression of CXCL5 was significantly associated with poor overall survival (OS) (pooled HR 1.70; 95% CI 1.36–2.12), progression-free survival (pooled HR 1.65; 95% CI 1.09–2.49) and recurrence-free survival (pooled HR 1.49; 95% CI 1.15–1.93) in cancer patients. However, high or low expression of CXCL5 made no difference in predicting the disease-free survival (pooled HR 0.63; 95% CI 0.11–3.49) of cancer patients. Furthermore, we found that high CXCL5 expression was associated with reduced OS in intrahepatic cholangiocarcinoma (HR 1.91; 95% CI 1.31–2.78) and hepatocellular carcinoma (HR 1.87; 95% CI 1.55–2.27). However, there was no significant association between CXCL5 expression level and OS in lung cancer (HR 1.25; 95% CI 0.79–1.99) or colorectal cancer (HR 1.16; 95% CI 0.32–4.22, p = 0.826) in the current meta-analysis. In conclusion, our meta-analysis suggested that elevated CXCL5 expression might be an adverse prognostic marker for cancer patients, which could help the clinical decision-making process.
Background
Despite great improvements in early detection, surgical techniques, chemotherapy, radiotherapy, biological treatment and multidisciplinary treatment in recent years, cancer is still a major public health problem globally, associated with high morbidity, mortality and economic burden [1]. An estimated 1,735,350 new cancer cases and 609,640 cancer deaths were projected to occur in the United States in 2018 [2]. Given the poor prognosis of cancer patients, numerous investigators have focused on searching for biomarkers that could predict the prognosis of cancer. However, the sensitivity and specificity of most cancer biomarkers widely used now are not yet satisfactory [3]. Therefore, there is an urgent need to identify novel, applicable prognostic biomarkers, which could not only improve prognostication but also provide novel therapeutic targets.
Chemokines are chemotactic cytokines that regulate the migration of immune cells into damaged or diseased organs in response to pro-inflammatory stimuli [4]. According to the cysteine residues in the NH2-terminal part of the protein, chemokines can be classified into four highly conserved groups, namely C, CC, CXC, and CX3C [5]. Chemokines and their receptors can bring about the transcription of target genes involved in cell invasion, motility, survival and interactions with the extracellular matrix, which can induce migration, chemotaxis and rearrangement of the cytoskeleton in the target cell, and therefore promote multiple physiological functions of cells, including cell growth, development, differentiation and apoptosis [6][7][8][9]. Over the past few years, accumulating evidence has revealed that chemokines play pivotal roles in tumor progression [10]. Chemokines produced by tumor and stromal cells can induce the expression and distribution of tumor-associated leukocytes, trigger angiogenesis, contribute to the growth and metastasis of malignant cells and generate fiber keratinocytes [6,11,12]. In addition, chemokines and their receptors are critical mediators of the inflammatory microenvironment of cancer, which has been proposed to represent the seventh hallmark of cancer [13,14]. Given the important roles of chemokines in cancer, abnormal expression of chemokines has been detected in many tumors, and several chemokines have been proven to be associated with poor prognosis of cancer patients [15][16][17].
CXCL5, also known as epithelial-derived neutrophil-activating peptide 78 (ENA78), was originally discovered as a potent chemoattractant and activator of neutrophil function. Through binding to its receptor CXCR2, CXCL5 can induce the chemotaxis of neutrophils, promote angiogenesis, and remodel connective tissue [18]. Accumulating evidence suggests that CXCL5 may participate in cancer-related inflammation, which is involved in many aspects of malignancy in cancer biology [19]. Furthermore, abnormal expression of CXCL5 has been identified in many tumors. CXCL5 is overexpressed in gastric cancer, prostate cancer, endometrial cancer, squamous cell cancer, hepatocellular carcinoma and pancreatic cancer, and increased expression of CXCL5 is associated with advanced tumor stages, local invasion, neutrophil infiltration and metastatic potential [20][21][22][23][24][25]. Recent studies have revealed that CXCL5 could serve as a potential prognostic biomarker for patients with cancer [5,19,26,27]. However, its prognostic value is still controversial, because most studies reported so far are limited by small sample sizes and inconsistent outcomes. Therefore, we performed the current quantitative meta-analysis to elucidate the prognostic significance of CXCL5 expression in cancer patients.
Study strategy
The present review was performed in accordance with the standard guidelines for meta-analyses and systematic reviews of tumor marker prognostic studies [28,29]. The Web of Science, PubMed and Embase databases were independently searched by two researchers (Binwu Hu and Huiqian Fan) to obtain all relevant articles about the prognostic value of CXCL5 in patients with any tumor. The literature search ended on March 1, 2018. The search strategy used both MeSH terminology and free-text words to increase the sensitivity of the search. The search strategy was: "CXCL5 or CXC chemokine ligand 5 or ENA78 or epithelial cell derived neutrophil attractant 78" AND "cancer or tumor or carcinoma or neoplasm or malignancy" AND "prognostic or prognosis or survival or outcome". We also screened the references of retrieved relevant articles to identify potentially eligible literature. Conflicts were solved through group discussion.
Inclusion and exclusion criteria
Studies included in this analysis had to meet the following inclusion criteria: (1) patients were pathologically diagnosed with any type of human cancer; (2) CXCL5 expression levels were determined in human tissue or plasma samples; (3) patients were divided into two groups according to the expression levels of CXCL5, and the relationship between CXCL5 expression levels and survival outcome was investigated; (4) sufficient published data or survival curves were provided to calculate hazard ratios (HRs) for survival rates and their 95% confidence intervals (CIs). Exclusion criteria were as follows: studies using non-human samples, studies without usable or sufficient data, laboratory articles, reviews, letters, case reports, non-English or unpublished articles, and conference abstracts. All eligible studies were carefully screened by two researchers (Binwu Hu and Huiqian Fan), and discrepancies were resolved by discussing with a third researcher (Xiao Lv).
Data extraction
Two investigators (Binwu Hu and Huiqian Fan) extracted relevant data independently and reached a consensus on all items. For all eligible studies, the following information of each article was collected: author, year of publication, tumor type, samples detected, expression associated with poor prognosis, Newcastle-Ottawa Scale (NOS) score, method of obtaining HRs, characteristics of the study population (including country of the population enrolled, number of patients (high/low), follow up (month)), endpoints, assay method, cut-off value and survival analysis. For endpoints, overall survival (OS), disease-free survival (DFS), progression-free survival (PFS) and recurrence-free survival (RFS) were all regarded as endpoints. We employed HR which was extracted following a methodology suggested previously to evaluate the influence of CXCL5 expression on prognosis of patients [30]. If possible, we also asked for original data directly from the authors of the relevant studies.
Quality assessment
Quality of all included studies was assessed independently by two researchers (Binwu Hu and Huiqian Fan) using the validated Newcastle-Ottawa Scale, and disagreements were resolved through discussion with another researcher (Songfeng Chen). This scale uses a star system to evaluate a study in three domains: selection of participants, comparability of study groups, and ascertainment of the outcomes of interest. We considered studies with scores of more than 6 as high-quality studies, and those with scores of 6 or less as low-quality studies.
Statistical analysis
Statistical analysis was performed using Stata Software 14.0 (Stata, College Station, TX). Pooled HRs (high/low) and their associated 95% CIs were used to analyze the prognostic role of CXCL5 expression in various cancers. The heterogeneity among studies was evaluated using Cochran's Q and I² statistics. A p value less than 0.10 or an I² value larger than 50% was considered statistically significant. The fixed-effect model was used for analyses without significant heterogeneity between studies (p > 0.10, I² < 50%); otherwise, the random-effect model was chosen. To explore the sources of heterogeneity, subgroup analysis and meta-regression were performed by classifying the included studies into subgroups according to similar features. We also conducted sensitivity analysis to test the effect of each study on the overall pooled results. Publication bias was evaluated using both Begg's test and Egger's test. A p value less than 0.05 was considered statistically significant.
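For readers unfamiliar with the pooling machinery behind such estimates, the following sketch shows inverse-variance pooling of log hazard ratios with DerSimonian-Laird random effects, Cochran's Q and I²; the study-level inputs are invented and are not the actual data of this meta-analysis.

```python
import math

def pool_log_hr(log_hrs, ses):
    """DerSimonian-Laird random-effects pooling of log hazard ratios."""
    k = len(log_hrs)
    w = [1.0 / s**2 for s in ses]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, log_hrs)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, log_hrs))  # Cochran's Q
    df = k - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    tau2 = max(0.0, (q - df) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_star = [1.0 / (s**2 + tau2) for s in ses]        # random-effect weights
    pooled = sum(wi * y for wi, y in zip(w_star, log_hrs)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    hr = math.exp(pooled)
    ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
    return hr, ci, i2

# Three hypothetical studies, given as (log HR, standard error):
hr, ci, i2 = pool_log_hr([0.64, 0.22, 0.94], [0.18, 0.25, 0.30])
print(f"pooled HR {hr:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f}), I2 = {i2:.0f}%")
```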
Characteristics of studies
According to our search strategy, the initial search retrieved a total of 554 studies. The following studies were excluded: duplicates (n = 196), reviews (n = 14), patents (n = 9), meeting abstracts (n = 64), studies describing non-cancer topics (n = 27), studies describing non-CXCL5 topics (n = 126), studies belonging to basic research (n = 75), studies lacking relevant data (n = 26) and non-English articles (n = 2). Eventually, 15 studies meeting the inclusion criteria were included in this meta-analysis. The screening process and results are shown in Fig. 1.
Association between CXCL5 expression levels with OS of cancer patients
Fourteen studies including seventeen cohorts reported the relationship between abnormal expression levels of CXCL5 and OS in a total of 4952 cancer patients. We used a random-effect model to calculate the pooled HR. The pooled HR for OS was 1.70 (95% CI 1.36–2.12, p < 0.001), which suggested that elevated expression of CXCL5 was significantly associated with poor OS in cancer patients (Fig. 2). Given that significant heterogeneity existed among studies (I² = 65.1%; p < 0.001), we further conducted subgroup analyses by sample size (fewer than 100 or more than 100), type of cancer (digestive system or non-digestive system carcinoma), follow-up time (fewer than 100 or more than 100 months), samples detected (blood or tissue), paper quality (NOS scores ≥ 7 or < 7) and source of HR (directly or indirectly obtained) to explore the sources of heterogeneity (Fig. 3a–f). The results of the subgroup analyses illustrated that the association between increased expression of CXCL5 and poor OS of cancer patients remained significant for all factors above except for the subgroup of studies with fewer than 100 patients (HR 1.60, 95% CI 0.81–3.17, p = 0.175) (Table 2). To further explore the sources of heterogeneity, we performed meta-regression with the above factors as covariates. However, meta-regression did not reveal p values less than 0.05 for any of these covariates, which indicated that none of the above factors was the source of heterogeneity (Table 2). Furthermore, using Cox multivariate analysis in eight studies including ten cohorts, we found that elevated CXCL5 expression was an independent prognostic factor for OS in cancer patients (HR 1.65, 95% CI 1.24–2.20, p = 0.001).
Association between CXCL5 expression levels with OS of certain types of cancer
We further evaluated the prognostic value of CXCL5 in certain types of cancer. High CXCL5 expression was associated with reduced OS in intrahepatic cholangiocarcinoma (HR 1.91; 95% CI 1.31–2.78) and hepatocellular carcinoma (HR 1.87; 95% CI 1.55–2.27), whereas no significant association with OS was found in lung cancer (HR 1.25; 95% CI 0.79–1.99) or colorectal cancer (HR 1.16; 95% CI 0.32–4.22, p = 0.826) (Fig. 4d).
Association between CXCL5 expression levels with DFS, PFS and RFS of cancer patients
Three studies each evaluated the relationship between CXCL5 expression levels and DFS, PFS and RFS. Through systematic analysis, our results revealed that a higher expression level of CXCL5 was significantly associated with shorter PFS (HR 1.65; 95% CI 1.09–2.49, p = 0.018) (Fig. 5a) and RFS (HR 1.49; 95% CI 1.15–1.93, p = 0.003) (Fig. 5b). However, high or low expression of CXCL5 made no difference in predicting the DFS of cancer patients (HR 0.63; 95% CI 0.11–3.49) (Fig. 5c). In addition, due to the limited number of included studies, we did not perform subgroup analyses.
Sensitivity analysis and publication bias
We performed sensitivity analysis to examine the effects of individual studies on the overall results. For OS, the sensitivity analysis identified that the results from Wu et al. (2) and Speetjens et al. affected the pooled estimate greatly, indicating that these two studies were likely the main sources of heterogeneity. However, the pooled HRs and 95% CIs obtained after excluding each study one by one demonstrated the robustness of our results, as all pooled HRs and 95% CIs remained above the null value of 1 (Fig. 6a). For DFS (Fig. 6b) and PFS (Fig. 6c), the sensitivity analysis revealed that all included studies affected the results greatly. For RFS, only the results from Bièche et al. did not influence the results greatly (Fig. 6d). The sensitivity analyses demonstrated that our results for DFS, PFS and RFS were not that stable, which might be because of the limited number of studies included in each analysis. Therefore, more relevant studies are warranted to investigate the effects of CXCL5 on DFS, PFS and RFS in human cancer. Begg's test and Egger's linear regression test were conducted to evaluate publication bias. For OS, Begg's test (p = 0.773) (Fig. 6e) and Egger's test (p = 0.157) (Fig. 6f) showed no significant publication bias across studies. For DFS, PFS and RFS, because of the limited number of studies (below 10) included in each analysis, publication bias was not assessed.
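As an aside on how such an asymmetry test works, the sketch below implements Egger's regression (the standardized effect regressed on precision; a non-zero intercept suggests funnel-plot asymmetry). It assumes SciPy 1.7 or later for the intercept standard error, and the study data are fabricated for illustration.

```python
from scipy import stats

log_hrs = [0.53, 0.64, 0.22, 0.94, 0.41]
ses = [0.15, 0.18, 0.25, 0.30, 0.20]

z = [y / s for y, s in zip(log_hrs, ses)]    # standardized effects
precision = [1.0 / s for s in ses]           # 1 / standard error

res = stats.linregress(precision, z)
t = res.intercept / res.intercept_stderr     # test H0: intercept = 0
p = 2 * stats.t.sf(abs(t), df=len(z) - 2)
print(f"Egger intercept = {res.intercept:.2f}, p = {p:.3f}")
```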
Discussion
CXCL5 was originally discovered as a potent chemoattractant and activator of neutrophil function [33]. Through interacting with the CXCR2 receptor, it can function both as a chemoattractant and as an angiogenic factor [35,38,39]. Recently, CXCL5 has been shown to be able to promote the proliferation, migration and invasion of various tumor cells and to play pivotal roles in the pathogenesis and progression of cancer [27,37]. It was reported that CXCL5 protein levels were higher in various lung cancer tissues, which was positively associated with tumor stage, lymph node metastasis, and worse survival [26]. Zhou et al. also reported that CXCL5 was overexpressed in intrahepatic cholangiocarcinoma cell lines and tumor samples and could promote intrahepatic cholangiocarcinoma growth and metastasis by recruiting intratumoral neutrophils [27]. Furthermore, CXCL5 could directly induce endothelial cell proliferation and invasion in vitro and promote tumor angiogenesis in non-small cell lung carcinoma and pancreatic cancer [41][42][43]. Considering the important functions of CXCL5 in cancer, studies have demonstrated that CXCL5 could serve as a potential prognostic biomarker for cancer patients. However, the prognostic value of CXCL5 is still controversial, because even within the same type of tumor there are almost opposite conclusions about its prognostic value [34][35][36].
Here we performed the current comprehensive meta-analysis to systematically explore the prognostic value of abnormally expressed CXCL5 in cancer patients. We examined 15 independent studies including 19 cohorts and 5070 patients. Through systematic analysis, our results demonstrated that a high expression level of CXCL5 was significantly associated with poor OS in cancer patients. Due to the significant heterogeneity across these studies, we performed subgroup analysis and meta-regression analysis to explore the sources of heterogeneity. The results of the subgroup analysis suggested that sample size (fewer than 100 or more than 100) altered the significance of the prognostic role of CXCL5 in OS (HR 1.60, 95% CI 0.81–3.17 vs HR 1.69, 95% CI 1.37–2.08). This indicated that differences in sample size might be a source of heterogeneity. However, meta-regression analysis failed to identify the source of the significant heterogeneity among the above covariates. In addition, by combining HRs from Cox multivariate analyses, we found that CXCL5 was an independent prognostic factor of OS in cancer patients.
Furthermore, we evaluated the prognostic value of CXCL5 in certain types of cancer. We found that high CXCL5 expression was associated with reduced OS in intrahepatic cholangiocarcinoma and hepatocellular carcinoma, which was consistent with previous studies. However, there was no significant association between CXCL5 expression level and OS in lung cancer or colorectal cancer. For lung cancer, the results from Oksana et al. differed greatly from the others [32]. The reason might be that they only evaluated the prognostic value of CXCL5 in early-stage non-small cell lung cancer (stages I and II) [32]. Similarly, for colorectal cancer, the results from Speetjens et al. also conflicted with others because they did not include stage IV patients [35,36]. Therefore, we may speculate that CXCL5 might have different prognostic roles at different tumor stages, and larger-scale, multicenter studies including patients of all stages are needed to verify our hypothesis.
DFS, PFS and RFS are all important parameters reflecting tumor progression. Our results demonstrated that a higher expression level of CXCL5 was significantly associated with shorter PFS and RFS in cancer patients. However, high or low expression of CXCL5 made no difference in predicting the DFS of cancer patients. In addition, because only three studies each were included to evaluate the associations between CXCL5 expression levels and DFS, PFS and RFS, more studies are necessary to explore the relationship between CXCL5 and tumor progression.
Mechanisms underlying the regulatory role of CXCL5 in tumorigenesis and tumor progression have been extensively investigated. CXCL5 can activate multiple signaling pathways to promote the progression of cancer. Dai et al. found that overexpression of CXCL5 markedly upregulated the activity of the JNK, ERK and p38 MAPK signaling pathways, which may contribute to the promoting effects of CXCL5 on the proliferation and migration of glioma cells [18]. In bladder cancer, CXCL5 was found to be significantly upregulated, and the CXCL5/CXCR2 axis could promote the migration and invasion of bladder cancer cells by activating the PI3K/AKT-induced upregulation of MMP2/MMP9 [19,40]. The CXCR2/CXCL5 axis was also found to enhance epithelial-mesenchymal transition of hepatocellular carcinoma cells through activation of the PI3K/AKT/GSK-3β/Snail signaling pathway [44]. Furthermore, Hsu et al. demonstrated that progression of breast cancer induced by tumor-associated osteoblast (TAOB)-derived CXCL5 was associated with increased Raf/MEK/ERK activation, mitogen- and stress-activated protein kinase 1 (MSK1) and Elk-1 phosphorylation, as well as Snail upregulation [44]. In addition, CXCL5 was shown to have potent effects on neutrophil recruitment in cancer [45,46]. Meanwhile, neutrophils can potentiate cancer cell migration, invasion and dissemination by secreting immunoreactive molecules such as hepatocyte growth factor, oncostatin M, β2-integrins or neutrophil elastase, which might be another mechanism by which CXCL5 promotes cancer progression [27,47,48]. Moreover, it has been reported that stem cells can produce CXCL5, and Zhao et al. demonstrated that CXCL5 secreted by adipose tissue-derived stem cells could promote breast tumor cell proliferation [49]. Thus, we may speculate that CXCL5 might be an indicator of the presence of putative cancer stem cells, which have been shown to be associated with metastasis and poor prognosis of cancer patients [50,51].
However, the current meta-analysis had some limitations. First, the cut-off values for high and low CXCL5 expression differed among studies, which might bias the results. Second, some HRs could not be directly obtained from the publications; thus, calculating them from survival curves might not be precise enough. Third, differences in paper quality and sample size across the studies might cause bias in the meta-analysis, although meta-regression did not identify paper quality or sample size as the source of heterogeneity. Therefore, larger-scale, multicenter, high-quality studies are needed to confirm our findings.
Conclusions
In conclusion, our study revealed that an elevated expression level of CXCL5 might be an adverse prognostic marker for OS, PFS and RFS in cancer patients. However, no significant association was found between CXCL5 expression level and DFS in the current meta-analysis. In summary, this is the first meta-analysis to evaluate the relationship between expression levels of CXCL5 and the prognosis of cancer patients. In the future, more relevant studies are warranted to investigate the role of CXCL5 in human cancer. | 2023-01-17T14:17:33.107Z | 2018-05-02T00:00:00.000 | {
"year": 2018,
"sha1": "361d0291911a62e0e2e309930ef912110dd4f918",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1186/s12935-018-0562-7",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "361d0291911a62e0e2e309930ef912110dd4f918",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
235298190 | pes2o/s2orc | v3-fos-license | Immunotherapeutic Efficacy of IgY Antibodies Targeting the Full-Length Spike Protein in an Animal Model of Middle East Respiratory Syndrome Coronavirus Infection
Identified in 2012, the Middle East respiratory syndrome coronavirus (MERS-CoV) causes severe and often fatal acute respiratory illness in humans. No approved prophylactic or therapeutic interventions are currently available. In this study, we developed chicken egg yolk antibodies (IgY Abs) specific to the MERS-CoV spike (S) protein and evaluated their neutralizing efficiency against MERS-CoV infection. S-specific IgY Abs were produced by injecting chickens with the purified recombinant S protein of MERS-CoV and were obtained at a high titer (4.4 mg/mL per egg yolk) at week 7 post immunization. Western blotting and immune-dot blot assays demonstrated specific binding to the MERS-CoV S protein. In vitro neutralization of the generated IgY Abs against MERS-CoV was evaluated and showed a 50% neutralizing concentration of 51.42 μg/mL. In vivo testing using a human-transgenic mouse model showed a reduction of viral antigen-positive cells in treated mice, compared to the adjuvant-only controls. Moreover, the lung tissues of the treated mice showed significantly reduced inflammation, compared to the controls. Our results show efficient neutralization of MERS-CoV infection both in vitro and in vivo using S-specific IgY Abs. Clinical trials are needed to evaluate the efficiency of the IgY Abs in camels and humans.
Introduction
Respiratory infections affect millions of people worldwide and pose risks to many, especially children and the elderly [1]. Middle East respiratory syndrome coronavirus (MERS-CoV) is an emerging zoonotic virus causing severe and often fatal respiratory illness in humans [2]. MERS-CoV was first detected in 2012 [3,4]. Since then, documented infections in humans have steadily increased, with 2566 cases as of December 2020 and an estimated 35% fatality rate [5]. The virus can transmit from camel to camel, and dromedary camels demonstrate high seropositivity to MERS-CoV [6][7][8]. Transmission from camel to human also occurs, and several risk factors, such as direct contact with infected dromedary camels, have been identified. The MERS-CoV S protein engages with the viral cellular receptor dipeptidyl peptidase 4 (DPP4) to mediate viral attachment to host cells and subsequent fusion of the virus with the cell membrane [18,[51][52][53]. The S protein plays a key role in counteracting coronavirus infection, as shown in studies on human-neutralizing antibodies from rare memory B cells in individuals infected with SARS-CoV [54] or MERS-CoV [17]. In such studies, antibodies targeting the S protein of SARS-CoV effectively inhibited virus entry into host cells. More recently, it has been found that SARS-CoV S elicits polyclonal antibody responses that vigorously neutralize SARS-CoV-2 S-mediated entry into cells, thus encouraging the use of this molecular target for vaccination and immunotherapies [55]. In this study, we continue our previous investigation on the efficacy of IgY in neutralizing MERS-CoV [27] by reporting the first in vitro and in vivo investigations of anti-MERS-CoV S1 IgY antibodies in neutralizing the virus. Together, these two studies are the first to investigate the potential of MERS-CoV-specific IgY to treat MERS-CoV infection in camels and humans.
Isolation and Purification of IgY
SDS-PAGE revealed that the IgY preparation dissociated into a major and a minor protein band with molecular weights of ~68 kDa (heavy chain) and ~27 kDa (light chain), respectively, and a purity of 90% (Figure 1). The total IgY contained in a milliliter of egg yolk was estimated to be 4.4 mg, or about 60 mg of total IgY from a single egg yolk (~15 mL).
Dynamics of Anti-S IgY Antibodies in the Sera of Chickens and Egg Yolks
Steady increases in serum levels of MERS-CoV S-specific IgY titers were observed in chicken sera after the first immunization. Levels peaked in week 7 and remained high until week 12. Sera of chickens that received the adjuvant only showed no reactivity to the MERS-CoV S antigen. Anti-MERS-CoV S antibody titers were not detected in the eggs until week 3 after immunization, then they increased until reaching a peak at week 7, and then plateaued until week 12 (Figure 2).
Immunoreactivity of Anti-S IgY of MERS-CoV
The specificities of anti-MERS-CoV S IgY antibodies were tested using Western blotting analysis. IgY induced by the S protein recognized the recombinant S protein at approximately 142 kDa ( Figure 3).
Dot Blotting
The specificities of anti-S IgY antibodies were confirmed by dot blotting analysis. Purified IgY antibodies showed reactivity with the S protein, S1, and receptor-binding domain. They were not reactive to the nucleocapsid protein of MERS-CoV, as shown in Figure 4.
Anti-S IgY Neutralizes MERS-CoV
Anti-S IgY potently neutralized live MERS-CoV in permissive Vero cells, with 100% neutralization (IC100) at concentrations below 12.5 µg/mL. Nonspecific IgY Abs from adjuvant-only controls did not exhibit antiviral activity against MERS-CoV at up to 1000 µg/mL (Figure 5). These results suggest that anti-S MERS-CoV IgY antibodies exhibited a potent ability to neutralize MERS-CoV infection. The IC100 was determined as the reciprocal of the highest dilution at which no CPE was observed in the cells.
RT-qPCR-Based Neutralization Activity
The in vitro neutralization effect of the IgY Abs was examined by mixing different dilutions of the IgY Abs with MERS-CoV, incubating for 1 h at 37 °C, and then applying the mixture to the cells (as described in Section 4.8). This approach showed a high neutralization effect on the virus, with a 50% neutralizing concentration (NC50) of 51.42 µg/mL. The neutralization effect of the IgY Abs was assessed using real-time RT-PCR of the cell cultures treated with different dilutions of the anti-MERS-CoV S IgY Abs, relative to the virus control cells (cells infected with the virus and untreated), which showed concentration-dependent inhibition of the virus (Figure 6). The log IgY concentration was plotted against the percentage of inhibition at each concentration, and the NC50 was calculated by fitting a nonlinear variable-slope equation: Y = 100/(1 + 10^((LogIC50 − X) × HillSlope)).
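To make the quoted variable-slope equation concrete, the sketch below fits it to percent-inhibition data and reads off the NC50; the concentration-inhibition pairs are invented for illustration and are not the measured values of this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(x, log_ic50, hill_slope):
    """Variable-slope inhibition curve: Y = 100 / (1 + 10^((logIC50 - X) * slope))."""
    return 100.0 / (1.0 + 10.0 ** ((log_ic50 - x) * hill_slope))

conc = np.array([11, 22, 44, 55, 110, 220, 440], dtype=float)    # µg/mL
inhibition = np.array([8, 18, 42, 52, 78, 92, 98], dtype=float)  # % vs control

params, _ = curve_fit(hill, np.log10(conc), inhibition, p0=[np.log10(50), 1.0])
print(f"NC50 ~ {10 ** params[0]:.1f} µg/mL (Hill slope {params[1]:.2f})")
```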
IgY Confers In Vivo Protection in Virus-Challenged Mice
MERS-CoV viral titers and the quantitative pathological score of the lungs showed a marked reduction in the anti-S IgY group compared to the controls (Figure 7A), but with no statistically significant difference. The body weights of hDPP4-Tg mice were not significantly different between the MERS-CoV S IgY group and the adjuvant-only group after intranasal inoculation with 10^6 tissue culture infectious dose 50% (TCID50) of MERS-CoV (Figure 7B). Histopathological investigations revealed that Tg mice developed progressive pulmonary inflammation due to acute MERS-CoV infection on day 8 post infection. Inflammatory reactions, including partial and mild cellular infiltration with mononuclear cells and macrophages in response to viral infection, were observed in the alveolar areas of the lung tissues (Figure 7C). Among virus-infected Tg mice, intraperitoneal injection of anti-S IgY antibodies led to significantly weaker inflammatory reactions (p = 0.041), compared to the adjuvant-only control group (Figure 7C).
Discussion
MERS-CoV poses a continuing threat to human health, especially due to its high fatality rate of about 35%. Prevention and treatment strategies to control MERS-CoV infection are urgently needed. Although vaccines remain one of the most important approaches against viral infections, they generally take a long time to develop, and they do not provide immediate prophylactic protection or treat ongoing infections [56]. Passive immunotherapy is a highly successful treatment for some severe and even life-threatening human diseases [57]. Treatment using IgY from chicken eggs has received considerable attention in recent years. Previous studies showed that a single egg can yield up to 100 mg of total IgY, and one hen can produce 250 eggs per year, thus generating large quantities of protective IgY at a comparatively low cost [37].
In this study, hens injected with MERS-CoV S subunit protein were shown to be highly immunogenic, demonstrating a high titer and long-lasting humoral immune response for at least 2 months without the need for boosters. Among treated hens, a high titer of specific MERS-CoV S IgY Abs was observed in the sera at 2 weeks post injection and in the eggs at 4 weeks post injection and remained at this high titer for 12 weeks. Other studies have shown that hens maintain a high antibody titer against a variety of antigens used for immunization for at least 3 to 4 months [37]. A large quantity of high-specificity IgY thus could be produced in a few months using this IgY technology. The results in this study and our previous study [27] indicate the potential for a rapid response to MERS-CoV and other emerging infections [39].
In the Western blot assay, the anti-S MERS-CoV IgY antibody exhibited immunoreactivity to the viral S recombinant protein, which is reported to promote binding of MERS-CoV to host-cell surface molecules during the attachment phase [58]. Anti-S MERS-CoV IgY Abs also exhibited binding to the S1 and receptor-binding domain recombinant proteins. However, a dot-blot immunoassay revealed no reactivity to the recombinant nucleocapsid protein, confirming that anti-S IgY Abs are antigen-specific. This aligns with reports that the IgY response to highly conserved mammalian proteins is robust and demonstrates high affinity, meaning it could target a broad spectrum of epitopes on protein immunogens [59]. Moreover, chicken IgY Abs reportedly exhibit higher avidity (10^9 L/mol) after the first immunization than sheep antibodies, as sheep must receive four boosters to reach similar avidity values [60].
The neutralizing activities of the anti-S IgY Abs were assessed in vitro, where Vero cells showed dramatic inhibition of MERS-CoV-induced CPE. Quantitative PCR provides a robust, sensitive, and wide dynamic range when used to evaluate antiviral activity [61]. In the present study, qRT-PCR showed a decreased viral load in cells treated with anti-S IgY Abs, compared with virus control cells with no IgY antibodies (50% neutralizing concentration of 51.42 µg/mL). This NC50 is comparable to that in our previous study [27] evaluating the neutralizing effect of anti-S1 IgY Abs against MERS-CoV, which showed a 50% neutralizing concentration of 60 µg/mL in vitro. IgY antibodies are reportedly highly effective in neutralizing other bacterial and viral infections of the respiratory system, with no reported side effects [26,62].
Histopathologic examinations and immunostaining (e.g., immunohistochemistry) of lung tissues are essential to better understand disease pathogenesis and evaluate novel therapies. In the case of coronavirus infection, lung histopathology can be a useful tool to define affected cells, illuminate structural causes of clinical signs, and clarify potential therapies (Meyerholz and Beck 2020). In comparison with our previous study [27], we found that MERS-CoV-related inflammation in lung cells decreased significantly in mice treated with MERS-CoV S IgY Abs, compared with the controls. This decrease might be a reflection of the reduced number of viral antigen-positive cells in the lung. Histological reduction of lung tissue inflammation is associated with enhanced viral clearance and rapid recovery of the lung tissue following the transfer of cloned Tc (T cytotoxic) cells [63]. Decreased lung pathology also is associated with IgY antibodies in influenza-infected mice [36,37,39,64]. Our in vivo investigation also showed a marked reduction in viral-antigen-positive lung cells in mice treated with MERS-CoV S IgY Abs, compared with adjuvant-only controls, although this difference was statistically non-significant. In our previous study [27], we observed a significant reduction in viral-antigen-positive lung cells using MERS-CoV S1 IgY Abs, whereas this study showed a marked but non-significant reduction in the number of antigen-positive lung cells in mice treated with anti-S IgY, compared with adjuvant-only controls. As in our previous study, there was no significant change in the body weight of the treated animals compared with controls, nor any significant change in viral titers.
To date, several anti-MERS-CoV antibodies have been developed, each with advantages and disadvantages. Mouse-derived monoclonal antibodies must be humanized before human use [18]. Human-neutralizing antibodies derived from a convalescent MERS patient can be produced in large quantities from Chinese hamster ovary cells [17]. However, a single-clone antibody raises concerns about viral-escape mutants when applied to humans. In a mouse model of infected lungs, administration of transchromosomic bovine human immunoglobulins [65] or dromedary immune serum [66] leads to rapid viral clearance. These animals are not readily available, though, and several monoclonal antibodies might be needed to induce effective viral clearance. The use of IgY helps reduce the risk of escape mutants because IgY preparations target multiple epitopes, making it harder for the virus to escape all the targeted positions.
Chickens offer several advantages over conventional mammalian species in producing pathogen-specific antibodies. These include high rates of egg production, high IgY content per egg yolk [67][68][69], and humane and non-invasive methods of collecting IgY Abs from eggs [70][71][72]. Clinical and laboratory data demonstrate that IgYs may offer a safe and effective tool for controlling and treating viral diseases. They may be used as a substitute for or essential complement to antimicrobials and vaccines [73][74][75][76]. In our study, the production of MERS-CoV-specific IgY Abs took 2-3 months, plus 2 additional months for the in vitro and in vivo investigations. This quick timetable makes this approach suitable for responding quickly to emerging and re-emerging pathogens.
Immunization of Laying Hens
Eight Lohmann laying hens (25 weeks old) provided by a local broiler farm (Algharbia Breeding Company, Saudi Arabia) were used for egg production. Animals were placed in broiler chicken cages (two animals per cage) in a 12-h light-dark cycle at room temperature (24 ± 3 °C). Water and commercial laying hen food were offered ad libitum. The immunization group (n = 4) was injected with 200 µg of recombinant MERS-CoV S protein obtained from Sino Biological, Inc. (Beijing, China). Injections were administered in the left or right side of the pectoral muscle on days 0, 14, 28, and 49. The recombinant protein was emulsified in a 1:1 ratio with Freund's Complete Adjuvant (Sigma, St. Louis, MO, USA) for the first immunization, and Freund's Incomplete Adjuvant (Sigma, St. Louis, MO, USA) was similarly used for subsequent booster immunizations. The suspension was mixed by pipetting up and down in a 19-gauge needle attached to a 5-mL syringe until stable. The control group (n = 4) was injected with phosphate-buffered saline (PBS) plus the corresponding adjuvant. Blood samples were taken before each injection and on the day before slaughter. Eggs were collected daily, starting 1 week before the initial immunization and continuing for 12 weeks after immunization. Eggs were stored at 4 °C until isolation of IgY from the yolk. The Biomedical Ethics Research Committee of the Faculty of Medicine at King Abdulaziz University reviewed and approved the experimental protocol (permit no.: 120-18).
Isolation and Purification of Yolk IgY
Egg yolks from the harvested eggs of immunized and non-immunized hens were pooled and separated from egg whites using egg separators and then washed with deionized water. IgY purification was performed using a Pierce Chicken IgY Purification Kit (Thermo Fisher Scientific, Waltham, MA, USA). IgY concentration was determined via spectrophotometry measuring absorbance at 280 nm (A280) according to the manufacturer's instructions.
Sodium Dodecyl Sulfate-Polyacrylamide Gel Electrophoresis
Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) was performed to determine the purity and molecular weight of IgY using 12% PAGE with a Mini-PROTEAN® 3 cell (Bio-Rad Laboratories, Hercules, CA, USA). The analysis was conducted under reducing conditions: the sample was mixed with 2× sample buffer and boiled for 10 min at 100 °C, then 25 µL of purified IgY was loaded into each well. Prestained Blue Protein Marker (MOLEQULE-ON, Auckland, New Zealand) was used as a molecular weight marker. Electrophoresis was performed at room temperature in running buffer (Tris-glycine buffer) at 200 volts for 40 min. Protein bands were visualized using Coomassie Brilliant Blue stain (Abcam, Cambridge, UK) and analyzed using Gene Tools image analysis software (Syngene, Cambridge, UK).
Reactivity of Anti-S IgY Antibodies by ELISA
The antibody reactivity of anti-S IgY was determined by ELISA. Briefly, microtiter plates were coated with purified MERS-CoV S antigen (Sino Biological, Inc., Beijing, China) at 500 ng/mL in PBS (0.01 M, pH 7.4) at 100 µL/well and then stored at 4 °C overnight. After washing the plates once with PBS and twice with PBS containing Tween-20, they were blocked with 250 µL of blocking buffer (5% skim milk in PBS-Tween) at room temperature for 1 h. The wells were washed three times with wash buffer. IgY antibody titers were determined by serially diluting the serum and purified IgY from immunized and non-immunized hens, starting at a 1:50 ratio in blocking buffer. The plates then were incubated at 37 °C for 1 h and washed three times with PBS-Tween. A 1:10,000 dilution of horseradish peroxidase (HRP)-conjugated rabbit anti-chicken IgY (Abcam, Cambridge, UK) was added to each well (100 µL/well) and incubated for 1 h at 37 °C. After washing the plates, the color reaction was developed by adding TMB substrate solution (100 µL/well) (Promega, Madison, WI, USA) and incubating for 30 min. The reaction was stopped by adding 2 M H2SO4 (100 µL/well).
The optical density (OD) of each well was read at 450 nm using a microtiter plate reader (ELX800 Biokit). PBS was used as a blank control, and purified IgY derived from non-immunized hens was used as a negative control. The titer of anti-S IgY was defined as the maximum dilution of the sample that resulted in an OD value 2.1 times higher than that of the negative control.
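A small sketch of the endpoint-titer rule just described (the highest dilution whose OD exceeds 2.1 times the negative control); the dilution series and OD values are fabricated for illustration.

```python
def endpoint_titer(dilutions, ods, od_negative):
    """Return the highest reciprocal dilution with OD > 2.1x the negative control."""
    cutoff = 2.1 * od_negative
    positive = [d for d, od in zip(dilutions, ods) if od > cutoff]
    return max(positive) if positive else None

dilutions = [50, 100, 200, 400, 800, 1600, 3200]   # reciprocal dilutions
ods = [2.31, 1.95, 1.40, 0.88, 0.52, 0.31, 0.18]   # hypothetical OD450 values
print(endpoint_titer(dilutions, ods, od_negative=0.12))  # -> 1600
```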
Western Blotting Assay
Western blotting was performed to check the specificity of the anti-MERS-CoV S IgY antibody using a previously described method with some modifications [77]. Five µL containing 500 ng of recombinant S protein was mixed with 20 µL of electrophoresis sample buffer and then subjected to SDS-PAGE in a 14% slab of polyacrylamide gel separated by a 4% stacking gel at 200 V for 40 min at room temperature. The gel and blotting papers were equilibrated in transfer buffer for 10 min, after which the S protein was electrically transferred onto a polyvinylidene fluoride (PVDF) membrane activated by methanol (Thermo Fisher, Waltham, MA, USA) at 30 V overnight. The PVDF membrane was cut into 0.5-cm strips, which were blocked with Tris-buffered saline containing 0.1% Tween 20 (TBS-T) and 5% non-fat dry milk for 1 h at room temperature. The strips were washed three times for 10 min each. The membrane was then incubated in a 1:50 dilution of anti-MERS-CoV S IgY antibodies. After incubation, the strips were washed three times with TBS-T for 10 min each and incubated with HRP-conjugated rabbit anti-chicken IgY Heavy and Light (Abcam, Cambridge, UK) at a 1:10,000 dilution in blocking buffer for 1 h at room temperature. The strips again were washed three times for 10 min, after which they were incubated with HRP colorimetric substrate (Immun-Blot Opti-4CN colorimetric Kit, Bio-Rad) for 15 min at room temperature. This reaction was stopped by rinsing with distilled water. The strips were photographed after development. The same Western blotting procedure was performed to identify the presence of the S IgY antibodies. This was done by subjecting the anti-MERS-CoV S IgY antibodies to SDS-PAGE, transferring onto PVDF membrane, followed by addition of HRP-conjugated rabbit anti-chicken IgY Heavy and development by adding HRP colorimetric substrate.
Dot-Blotting
A dot-blot assay was performed to determine the specificity of the purified anti-S IgY antibodies. PVDF membranes were activated by soaking in methanol for 15 s and washing with distilled water. Then, three different amounts (500, 100 and 50 ng) of the recombinant antigens S, S1, nucleocapsid, and RBD were dot-blotted individually onto a PVDF membrane. The membrane was incubated in 20 mL of blocking buffer for 1 h at room temperature. After washing three times with TBS-T, the PVDF membrane was immersed in primary anti-MERS-CoV S IgY antibodies (1:200 dilution) in blocking buffer with gentle agitation for 1 h at room temperature. The membrane was then incubated with rabbit anti-chicken IgY HRP-conjugate as a secondary antibody (1:10,000 dilution) in blocking buffer with gentle agitation for 1 h at room temperature. After washing as previously described, the membrane was placed in an HRP colorimetric substrate (Immun-Blot Opti-4CN Colorimetric Kit, Cat. No. 1708235) (Bio-Rad Laboratories, Hercules, CA, USA) for up to 30 min at room temperature. The reaction was stopped using distilled water.
Microneutralization Assay
Live virus experiments were performed in a biosafety level 3 laboratory in the infectious agent unit of King Fahd Medical Research Center at King Abdulaziz University in Jeddah. A neutralization assay was performed as previously described [27,78]. Briefly, MERS-CoV isolate at an MOI of 0.01 (500 µL) was added to an equal volume of serial dilutions of the IgY antibodies and incubated for 1 h. The mixture was then inoculated in triplicate onto Vero E6 cells (10,000 cells/well) on 96-well plates in viral inoculation medium (Dulbecco's Modified Eagle Medium with 2% fetal bovine serum, 1% penicillin/streptomycin, and 10 mmol/L HEPES at pH 7.2). Cells were incubated in a humidified incubator with 5% CO2 at 37 °C for 2-3 days or until reaching an 80-90% cytopathic effect (CPE) in positive virus control wells (virus with no added IgY Abs). The IC100 of the antibody was determined as the reciprocal of the highest dilution at which no CPE was observed.
Neutralization Using Real-Time qRT-PCR
The MERS-CoV isolate at an MOI of 0.01 (500 µL) was added to an equal volume of varying dilutions of the IgY antibodies (440, 220, 110, 55, 44, 22, and 11 µg/mL). The mixture was then inoculated onto Vero cells (10,000 cells/well in triplicate) on 96-well plates in the previously described viral inoculation medium. Cells were incubated in a humidified incubator with 5% CO2 at 37 °C for 2-3 days or until reaching 80-90% CPE in positive virus control wells (virus with no added IgY Abs). Upon reaching 80-90% CPE in control wells, 200 µL of culture supernatants were collected, cleared by centrifugation (500× g, 5 min, 4 °C), and stored at −70 °C. In each experiment, a negative control with no added virus or IgY was included.
Real-time RT-qPCR was performed using primers and probes targeting the MERS-CoV N gene, as previously described [78], to assess the neutralization effect of the IgY antibodies. The 50% neutralizing concentration (NC50) was used to express IgY Ab neutralization activity, defined as the concentration of IgY Abs needed to reduce the viral RNA copies by 50% relative to the positive virus control.
Effect of Anti-S IgY Antibodies in Transgenic Mice Infected with MERS-CoV
A mouse model of MERS-CoV was used in this study, as previously described [27,79]. Briefly, transgenic (Tg) mice on a C57BL/6NCr (SLC, Inc., Hamamatsu, Japan) background were developed to express human CD26/dipeptidyl peptidase 4 (hDPP4), a functional receptor for MERS-CoV, under the control of an endogenous hDPP4 promoter. The hDPP4-Tg mice (n = 10) were intranasally infected with MERS-CoV using the HCoV-EMC 2012 strain (10^6 TCID50) provided by Dr. Bart Haagmans and Dr. Ron Fouchier (Erasmus Medical Center, Rotterdam, the Netherlands). Mice also received a peritoneal injection of either 500 µg of anti-S IgY antibodies or 500 µg of IgY isotype control at 6 h and 1 day post infection. The animal experiment was conducted simultaneously with that described in a previous report on the efficacy of a MERS-CoV anti-S1 IgY antibody [27]; thus, data from the control mice are used in both studies. Mouse body weight was monitored for 8 days post infection.
Animals were sacrificed at 1, 3, or 5 days post infection (n = 4), and lung tissues were collected for virological detection. After 8 days of observation, the remaining 6 mice were sacrificed for histopathological evaluations. All work with MERS-CoV and passive immunization of mice was conducted at the National Institute of Infectious Diseases in Tokyo, Japan. Stocks of MERS-CoV were propagated and titrated on Vero E6 cells and cryopreserved at −80 °C. Viral infectivity titers were expressed as the TCID50/mL on Vero E6 cells and calculated according to the Behrens-Kärber method. Work with infectious MERS-CoV was performed under biosafety level 3 conditions.
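For context, the Behrens-Kärber (Spearman-Kärber) 50% endpoint can be sketched as follows; the formula shown assumes 10-fold dilution steps with the least-dilute wells fully CPE-positive, and the well proportions are invented rather than this study's titration data.

```python
def log10_tcid50(first_log_dilution, log_step, proportions_positive):
    """Spearman-Karber: log10(50% endpoint dilution) = L - d * (S - 0.5),
    where L is the log10 of the first (least dilute) dilution, d the log10
    dilution step, and S the sum of the proportions of positive wells."""
    s = sum(proportions_positive)
    return first_log_dilution - log_step * (s - 0.5)

# Proportions of CPE-positive wells at dilutions 10^-1 ... 10^-8:
p = [1.0, 1.0, 1.0, 1.0, 0.5, 0.0, 0.0, 0.0]
endpoint = log10_tcid50(first_log_dilution=-1, log_step=1, proportions_positive=p)
print(f"titer = 10^{-endpoint:.1f} TCID50 per inoculated volume")  # -> 10^5.0
```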
Histopathology and Immunohistochemistry
After anesthetizing and perfusion with 2 mL of 10% phosphate-buffered formalin, the mouse lungs were harvested, embedded in paraffin, sectioned, and subjected to hematoxylin and eosin staining. The tissue sections were then autoclaved at 121 °C for 10 min in a retrieval solution at pH 6.0 (Nichirei Biosciences Inc., Tokyo, Japan) for antigen retrieval in preparation for immunohistochemistry. MERS-CoV antigens were detected using a polymer-based detection system (Nichirei-Histofine Simple Stain Mouse MAX PO(R), Nichirei, Tokyo, Japan) with a rabbit anti-MERS-CoV nucleocapsid antibody (40068-RP01, Sino Biological Inc., Beijing, China). Peroxidase activity was detected using 3,3′-diaminobenzidine (Sigma-Aldrich, St. Louis, MO, USA), and hematoxylin was used for counterstaining.
Quantitative Analysis of Inflammation and Viral Antigen Positivity of Cells
Inflammation was assessed using hematoxylin and eosin staining on paraffin-embedded sections (3 µm thickness) from the Tg mice at 8 days post infection. Light microscopic images were obtained using a DP71 digital camera under low-power magnification and cellSens software (Olympus Corporation, Tokyo, Japan). Inflammation was evaluated by measuring three lobes with an average section area of 3.645 ± 0.726 mm². The inflammation areas were traced using the contour measurement program Neurolucida (version 12, MBF Bioscience, Williston, VT, USA) and analyzed using Neurolucida Explorer (MBF Bioscience, Williston, VT, USA). Viral antigen was detected via immunohistochemistry on a continuous paraffin-embedded section. Cells positive for viral antigen were counted in images under high-power magnification (observation area: 0.147 mm²). Data for the control mice came from a previous study assessing the efficacy of anti-S1 MERS-CoV IgY antibodies [27], as the two experiments were performed simultaneously.
Statistical Analysis
Data are expressed as means with standard errors. Statistical analyses were performed using GraphPad Prism 9 software (GraphPad Software Inc., La Jolla, CA, USA). Intergroup comparisons (virus titers in the lungs and body weight curves) were performed using two-way analyses of variance, followed by Bonferroni's multiple comparisons test. Comparisons between two groups (the quantitative analysis of inflammation and viral antigen positivity in cells) were performed using the Mann-Whitney test. A p-value of <0.05 was considered statistically significant.
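The two-group comparison described above can be illustrated with a short sketch using SciPy's Mann-Whitney U test; the per-mouse values below are fabricated and do not correspond to the measured inflammation scores.

```python
from scipy.stats import mannwhitneyu

anti_s_igy = [0.8, 1.1, 0.9, 1.3, 0.7, 1.0]    # hypothetical inflammation areas, mm^2
adjuvant_only = [1.9, 2.4, 2.1, 1.7, 2.6, 2.2]

stat, p = mannwhitneyu(anti_s_igy, adjuvant_only, alternative="two-sided")
print(f"U = {stat}, p = {p:.3f}")
```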
Ethics Statement
The Biomedical Ethics Research Committee of the Faculty of Medicine at King Abdulaziz University reviewed and approved the experimental protocol for the immunization and handling of the chickens (permit no.: 120-18). The Committee for Experiments Using Recombinant DNA and Pathogens at the National Institute of Infectious Diseases in Tokyo, Japan, approved the experiments using recombinant DNA and pathogens. Animal studies strictly followed the Guidelines for Proper Conduct of Animal Experiments of the Science Council of Japan and complied with animal husbandry and welfare regulations. All animals were housed in a facility certified by the Japan Health Sciences Foundation. Animal experiments were also approved by the Committee on Experimental Animals at the National Institute of Infectious Diseases in Japan, and all experimental animals were handled in biosafety level 3 animal facilities in accordance with the committee guidelines.
Conclusions
The results presented in this study provide evidence for the specific and efficient neutralization of MERS-CoV using anti-S IgY antibodies in vitro and in an animal model of MERS-CoV infection. Together with our previous study, the two studies provide the first evidence for the potential use of MERS-CoV-specific IgY antibodies as a therapy against MERS-CoV. Further studies are needed to investigate the combined effect of both anti-S and anti-S1 IgY Abs in neutralizing MERS-CoV through intraperitoneal and intranasal routes of administration. Clinical trials are needed to evaluate the efficacy of this therapy in camels and humans. The IgY antibodies might prove useful for treating MERS-CoV in high-risk populations, such as those with immature or weakened immunity, or in high-exposure groups, such as healthcare workers, camel handlers, and slaughterhouse workers. Furthermore, the IgY antibodies can be used to treat MERS-CoV in camels, which can transmit the virus to humans. The data generated in this study provide a platform for future studies to generate specific and efficient IgY antibodies against other coronaviruses.
The 3D Bioprinted Scaffolds for Wound Healing
Skin tissue engineering and regeneration aim at repairing skin injuries and advancing wound healing. Until now, even though several developments have been made in this field, it is still challenging to address the complexity of the tissue with current fabrication methods. This review provides a short, state-of-the-art account of developments in skin tissue engineering using 3D bioprinting as a new tool. The current bioprinting methods and a summary of bioink formulations, parameters, and properties are discussed. Finally, a representative number of examples and advances made in the field, together with limitations and future needs, are provided.
Introduction
Tissue engineering has become an important research area in the past two decades since it allows restoration of the functionality of damaged tissues and organs [1]. The skin is the outer covering and the largest organ of the human body. Skin tissue engineering and regeneration have enabled important advances in wound healing by designing constructs with structures and biological functions similar to those of native tissues [2]. With the advent of 3D printing technology, much effort has been invested to transform conventional approaches and develop new 3D bioprinting techniques that can produce more complex, functional, and personalised three-dimensional architectures. These techniques can be divided into those that print acellular scaffolds, onto which cells are seeded afterwards, and those that deposit cells directly into the structure. Among the first ones, it is possible to find fused deposition modelling (FDM), stereolithography (SLA), selective laser sintering (SLS) and low-temperature deposition manufacturing (LDM). On the other hand, there are cellular bioprinting techniques that use bioinks with viable cells in order to form the construct. These bioprinting techniques can be classified into four categories: laser-based, droplet-based, extrusion-based, and stereolithography-based bioprinting [20,21] (Figure 1).
Fused Deposition Modelling (FDM)
FDM 3D printing is a widely used technique in industry due to its safety, operational efficiency, durability, and simpler, less expensive equipment [22,23]. In addition, highly reproducible and bioresorbable 3D scaffolds can be fabricated using this technique [24]. Moreover, it can be used to print 3D structural supports for cell-laden soft materials in the printed constructs [25]. The fabrication of porous, 3D-printed chitosan scaffolds for skin tissue regeneration has been achieved, showing superior healing compared to commercial patches and spontaneous healing [26]. In this vein, the use of keratinocytes, melanocytes, and fibroblasts from skin donors led to a three-dimensional pigmented human skin construct via a two-step printing process using collagen [27].
FDM Process
The FDM technique implies fusion between material layers by depositing layers of thermoplastic material one by one. Material extrusion is the basis of the FDM technique, in which a thermoplastic filament is heated until it melts. The melting is carried out by a heated nozzle positioned over the platform surface. This nozzle is part of the extruder head and is fed with the thermoplastic material through rotating rollers. The deposition follows the desired geometric pattern of the object, guided by a computer-aided design (CAD) model [28] (Figure 2). Although FDM is widely used to produce solid models, it can be adapted to fabricate porous structures. To do this, a positive value can be applied to the raster fill gap to impart a channel within a build layer. The channels can then be interconnected, even in three dimensions, when arranged in a regular manner [29]. The FDM process includes at least four stages: CAD modelling, pre-processing on FDM software, part building on an FDM machine, and post-processing of fabricated parts. As a first step, a 3D solid model is created in the CAD system and then converted into STL format, which can be processed by the FDM software. The parameters that may be set during the FDM process include the raster width, raster angle, air gap, build style, nozzle tip size, and temperature. Once the model file is sent to the printer, the printing process follows the procedure described above. After completion of the printing, the part can be removed from the printer. Finally, the support structures can be removed by breaking them off the main piece or by immersing the model in suitable solutions that detach them [30].
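As a concrete illustration of the CAD-to-STL step, an ASCII STL file is simply a list of triangular facets; the minimal sketch below writes a single-facet file. The geometry and file name are arbitrary examples, not part of any particular FDM vendor's workflow.

```python
# Write a minimal ASCII STL file (one triangular facet); the slicer later
# converts such facets into layer-by-layer tool paths for the FDM machine.
facet = """\
solid single_facet
  facet normal 0.0 0.0 1.0
    outer loop
      vertex 0.0 0.0 0.0
      vertex 1.0 0.0 0.0
      vertex 0.0 1.0 0.0
    endloop
  endfacet
endsolid single_facet
"""

with open("single_facet.stl", "w") as f:
    f.write(facet)
```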
FDM Materials
FDM techniques are most popular due to their range of material choices, which do not require the use of any toxic glues or solvents, and to the accessible size and economical cost of the equipment. In addition, thermoplastic polymers are the most common polymers used for printing, which allows an easy choice of scaffold components [31,32]. These materials include polyolefins such as polyethylene and polypropylene, polylactic acid (PLA), acrylonitrile-butadiene-styrene (ABS), polysulfone, polyetherimide, polycarbonate (PC), polyglycolic acid, polycaprolactone (PCL), and chitosan [33,34]. Among them, PCL, ABS, and PGA are the most widely used for skin, bone, and tendon repair [35-37], and their biochemical properties can be improved by the addition of other materials, such as β-tricalcium phosphate and hydroxyapatite [38]. ABS is preferred due to its high strength, resistance to corrosion, and low cost, which allows it to be used where other materials are not compatible; additionally, ABS plastic is available in different colours, a distinctive characteristic. On the other hand, PC is also a widely used material, commonly applied in medicine and in the automotive and aerospace industries, among others. An advantage of this material is that it has better mechanical properties than ABS and other thermoplastic materials [39].
FDM Parameters
During the FDM process, it is important to control all parameters governing the shape, size, and internal structure of the part. Users can set the most important parameters, such as build orientation, layer height, model build temperature, nozzle diameter, infill style, part interior density, raster width, raster angle, and air gap [40]; these are gathered in the sketch that follows the list below.
• Build orientation: the way in which the component is oriented on the build platform along the three axes, X, Y, and Z, of the FDM machine (Figure 3).
• Layer height: also referred to as the layer thickness; the amount of material deposited along the z-direction. This parameter directly impacts the build time and the surface quality. In addition, the layer height depends completely on the extruder tip diameter.
• Model build temperature: the temperature of the material in the heating nozzle. This temperature regulates the viscosity of the material extruded from the tip.
• Nozzle diameter: since it affects the drop pressure, it directly regulates the road width. The correct nozzle diameter is required to maintain a consistent flow of the extruded material; FDM systems often provide a range of tips. The smallest nozzle diameters need more time to complete the extrusion process.
• Infill style: determines the internal pattern of the structure, which can be raster, contour, or contour-raster. The most frequently used is the raster fill style, produced by the nozzle moving back and forth to fill the delimited area; in the contour style, the tip moves in closed loops. The combination of both approaches yields the contour-raster style (Figure 4).
• Part interior density: associated with the air gap inside the raster; it gives information about the density of the material. Three types are available: solid, sparse, and sparse-double dense. In the first, no air is left inside the material; the sparse type allows a specified air gap between tool paths; the sparse-double dense type is similar but produces a hexagonal pattern.
• Raster width: the thickness of the material deposited from the tip onto the platform. It depends on the tip size (Figure 5).
• Raster angle: the direction in which the raster tool path is deposited relative to the x-axis, which can vary from 0° to 90° (Figure 6).
• Air gap: the space between two tool paths. Among the air gap types, the most common are the raster-to-raster air gap, used between adjacent raster tool paths with the solid infill style; the part sparse air gap, commonly used with the sparse infill style; and the perimeter-to-raster air gap, which describes the gap between the inner contour and the edge of the raster fill inside the contour (Figure 5).
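As a minimal sketch, the parameters above can be gathered into a single settings object handed to the pre-processing software; the field names mirror the list above, but the class itself and the default values are hypothetical, not taken from any particular FDM system.

```python
from dataclasses import dataclass

@dataclass
class FDMBuildSettings:
    # Hypothetical FDM build settings; values are illustrative only.
    build_orientation_deg: tuple = (0.0, 0.0, 0.0)  # rotations about X, Y, Z
    layer_height_mm: float = 0.2        # tied to the extruder tip diameter
    build_temperature_c: float = 210.0  # regulates melt viscosity at the tip
    nozzle_diameter_mm: float = 0.4     # governs road width and drop pressure
    infill_style: str = "raster"        # "raster", "contour", "contour-raster"
    interior_density: str = "solid"     # "solid", "sparse", "sparse-double dense"
    raster_width_mm: float = 0.48       # thickness of each deposited road
    raster_angle_deg: float = 45.0      # tool-path direction, 0-90 degrees
    air_gap_mm: float = 0.0             # > 0 leaves interconnected pores

# A porous scaffold variant: sparse interior plus a positive raster air gap.
porous = FDMBuildSettings(interior_density="sparse", air_gap_mm=0.3)
print(porous)
```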
FDM Advantages and Disadvantages
Finally, it is important to highlight the advantages and disadvantages of the FDM technique. Among the former, an interesting advantage is that FDM is simple and safe, since it does not use toxic materials and is easy to operate. In addition, once the printing has finished, the only additional step is support removal; besides that, the part can be handled right afterwards. Another important advantage is that the exact amount of material is used, thanks to the extrusion process, avoiding material waste. As mentioned previously, users can produce solid or porous parts, since the software allows modification of various parameters, such as fill pattern, raster width and angle, and air gaps. Regarding tissue engineering, this method has the advantage of being very simple [25]. Moreover, parts can be printed using different materials, with the possibility of adding new types of materials as long as they meet the necessary requirements. In this sense, bioinks are commonly composed of hydrogels and some bioactive components with shear-thinning or fast-solidifying properties to produce 3D structures with high accuracy [41]. In addition, there are important advantages regarding the bioprinting process, among them the possibility of printing high-viscosity bioinks and high concentrations of cells when processed at physiological temperatures [42].
On the other hand, there are some disadvantages, such as accuracy: in some cases, the parts can present a grainy surface due to the layer-by-layer deposition through the nozzle. In addition, it can take a long time to finish a part because of the slow printing speed, a consequence of having only one nozzle tip to build the layers. Finally, every new material must meet the diameter requirements of the nozzle tip to be used in the FDM printer [39]. Regarding the printing of biomaterials, the major limitation is material selection, since the high temperature used during the process does not allow cell printing, so a second step is required to seed cells onto the constructs [38]. Another drawback is the resolution, which is lower than that of other methods [43]. Last but not least, the materials can clog the nozzle [44].
Stereolithography (SLA)
Stereolithography is a process based on photopolymerisation that uses a laser beam to print a particular model in a photosensitive resin [45]. The SLA technique is widely used in industry, being the oldest one. In addition, it can be used in different applications, from prototyping consumer products to printing tissues [46]. Indeed, some authors reported the fabrication of an optimal vascular network for tissue-engineered skin using SLA technology with biocompatible, elastic, and surface-coatable materials [47].
SLA Process
A UV laser (355 nm) is used in the SLA process to solidify a UV-curable resin by photopolymerisation (Figure 7). The pattern is drawn by a computer-controlled laser beam or by digital light projection on the resin surface. The printing process begins when the platform is immersed below the surface of a tank full of the liquid resin (prepolymer solution). After that, the resin is solidified by the laser beam in the desired pattern. Once the layer is photopolymerised (solidified), the platform is lowered in order to deposit the following layer. The laser beam movement controls the pattern formation and, since the beam can move across a large space, it is able to produce large-size models [48] (Figure 8).
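Although the text does not give the exposure-to-depth relation explicitly, SLA layer solidification is commonly sized with the Jacobs working-curve equation, Cd = Dp·ln(E/Ec), where Dp is the resin penetration depth and Ec its critical exposure. A minimal sketch with illustrative resin constants (not values from the cited works):

```python
import math

def cure_depth_um(exposure_mj_cm2: float,
                  penetration_depth_um: float = 150.0,
                  critical_exposure_mj_cm2: float = 10.0) -> float:
    """Jacobs working curve: Cd = Dp * ln(E / Ec); no cure below Ec."""
    if exposure_mj_cm2 <= critical_exposure_mj_cm2:
        return 0.0
    return penetration_depth_um * math.log(exposure_mj_cm2 / critical_exposure_mj_cm2)

# Exposure must exceed Ec by enough that the cure depth covers the layer
# height plus some overlap, so adjacent layers bond to each other.
for e in (10, 20, 40, 80):
    print(f"E = {e:3d} mJ/cm^2 -> Cd = {cure_depth_um(e):6.1f} um")
```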
SLA Materials
The material used in the SLA technique must be a photosensitive resin, which polymerises by cationic or free-radical photopolymerisation; at 355 nm, both mechanisms can occur. The most common materials, such as epoxies, thermoplastic elastomers, and acrylate resins, have attractive properties such as low viscosity and high photosensitivity. These materials also have controllable mechanical properties and can withstand changes in temperature and humidity. However, a major disadvantage is their high volume shrinkage, which limits their use; cationic photopolymerisation, in contrast, shows no volume shrinkage. Among the cationic photoinitiators, the most common structures are diazonium salts, diaryliodonium salts, triarylsulfonium salts, ferrocenium salts, and thiopyrylium salts. However, fewer resins are available for cationic photopolymerisation and the initiators are expensive; therefore, hybrid (radical and cationic) photosensitive resins are the most commonly used [49,50]. Cationic photopolymerisation begins when UV light absorption produces homolytic and heterolytic cleavage of the salt. The products, cationic and cation-radical species, react to form strong protonic acids, which initiate cationic polymerisation by direct protonation of the monomer. Acrylated polycaprolactone, gelatin methacryloyl, poly(propylene fumarate), and soybean-oil-epoxidised acrylate are the most common materials used in SLA printing [51].
SLA Parameters
There are some parameters to take into account during the SLA printing process: part, support, and recoat parameters. Part parameters are the ones that affect the accuracy of built parts [52]. Other parameters are related to the shrinkage during the post-curing process. The post-cure shrinkage amount depends on the degree of cure of the prototype in the green state, achieved during laser scanning: as the degree of curing increases, the contraction is reduced. The cure degree in turn depends on [53]:
• Laser power: when it increases, the curing degree is higher. This occurs because at higher laser power the resin is exposed to higher UV light intensity, which produces more crosslinking.
• Layer pitch: the curing degree is lower with a higher layer pitch, since a lower layer pitch increases the overlap between adjacent layers, decreasing the amount of uncured resin.
• Scan pitch: with a higher scan pitch, the curing degree is lower, since more resin remains uncured.
• Scan speed: if the laser scan is faster, the curing degree decreases, since the exposure energy per unit area is less.
• Laser stability: any fluctuation of the laser power leads to different laser exposures, which affect the curing degree.
• Absorption rate of the materials: the curing degree improves when the material absorption rate is higher.
SLA Advantages and Disadvantages
SLA printing presents a wide range of advantages, such as a stable printing process and the highest resolution among printing techniques: SLA printers reach 20 µm or less, whereas the resolutions of other printers lie between 50 and 200 µm [54,55]. Another advantage is the possibility of printing large-size models; however, since the printing rate depends on the laser beam movement, the larger the model, the slower the printing [56]. Indeed, the SLA printing process is usually slow due to the low photopolymerisation rates. In addition, the process is called "discontinuous", since the laser scanning, the movement of the platform, and the resin refill are separate steps, between which no printing takes place. Another important disadvantage is that some biocompatible resins cannot be used in this printing system and it is incapable of printing cells [57,58], because the UV irradiation can damage DNA and promote cell lysis [59].
Selective Laser Sintering (SLS)
The SLS technique implies the use of a high-powered laser to produce 3D parts from CAD geometry [60]. In SLS technology, the geometric complexity of the part is not a limitation, and parts are produced by adding material layer by layer [61]. In this vein, SLS is the most common technique for producing functional plastic components [62]. It can be used to print acellular scaffolds; in this sense, the use of biocompatible and biodegradable polymers allows printing bone scaffolds with the SLS method [63].
SLS Process
In this printing process, the scaffold is produced layer by layer. A CO₂ laser beam turns the powder into solid objects by fusing powdered, polymer-based materials such as nylon or polyamide [64]. The printing process begins when the laser scans the powder bed along the X and Y axes, building a two-dimensional profile. The interaction between the laser and the powder raises the temperature up to the melting point, leading to fusion into a coherent mass. This process is called sintering [1]. Printing continues as the printer platform lowers and a new layer of powder is distributed (Figure 9).
SLS Materials
In SLS, the most important materials used are polymers. In the printing process, three main types of polymers are used, namely thermoplastics, thermosetting plastics, and elastomers. Thermoplastics are the most commonly used in the SLS technique and can be classified as amorphous or crystalline. Both have particular properties that should be taken into account while setting the parameters: a crystalline material's chain molecules are arranged in an orderly structure, whereas an amorphous material's chain molecules are disposed in a random manner. These differences in chain arrangement lead to different thermal properties [65]. Among the materials widely used in the SLS printing process, the most common are PC, ABS, polyamide, poly(L,D)-lactic acid-bioactive glass, polylactide-calcium carbonate, poly(3-hydroxybutyrate-co-3-hydroxyvalerate), polycaprolactone-hydroxyapatite, poly(D,L-lactide)-β-tricalcium phosphate, polyamide-hydroxyapatite, and titanium [66-68].
SLS Parameters
It is important to set the software parameters before the printing process. These parameters may vary according to the properties of the material powder used. Among them, the most important are the following, which are combined in the worked example after this list [69] (Figure 10):
• Part bed temperature: the temperature that controls the powder in the part cylinder. The powder is heated in the part cylinder before the movement of the laser scanners, and the part bed temperature is important to reduce the required laser power and distortion.
• Fill laser power: the power of the laser beam at the part bed surface. This parameter should be set so that the powder heats up to the melting temperature at the part bed surface.
• Scan speed: sets the speed of the laser beam.
• Scan spacing: the space between two neighbouring parallel scan vectors. It is associated with the size of the laser beam and the energy density.
• Slice thickness: the powder thickness of each layer in the part cylinder. It depends on the depth by which the part piston lowers.
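These settings are often summarised by a single areal energy density, E = P/(v·h), with P the fill laser power, v the scan speed, and h the scan spacing; the numbers below are illustrative, not recommended machine settings.

```python
def energy_density_j_mm2(laser_power_w: float,
                         scan_speed_mm_s: float,
                         scan_spacing_mm: float) -> float:
    """Areal energy density E = P / (v * h) delivered to the powder bed."""
    return laser_power_w / (scan_speed_mm_s * scan_spacing_mm)

# e.g., 10 W fill laser power, 1250 mm/s scan speed, 0.15 mm scan spacing
e = energy_density_j_mm2(10.0, 1250.0, 0.15)
print(f"{e:.3f} J/mm^2")  # ~0.053; too low -> weak sintering, too high -> degradation
```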
SLS Advantages and Disadvantages
The major advantages of the SLS method are the high fracture toughness and mechanical strength of the parts, providing high quality for implants [70]. Another important advantage of this technique is the possibility of creating components without supporting structures; in this sense, more parts can be produced in each build, reducing the required amount of post-processing. However, part strength can be inconsistent, leading to different strengths for multiple copies of the same part [62]. The wide range of available biomaterials is an advantage of this technique for tissue engineering applications: bone replacements or structural-supporting materials can be fabricated using ceramics and metals [71]. Compared to conventional techniques, tissue regeneration can be improved because of the controlled pore size of the scaffold [72,73]. However, because of the high temperature reached during CO₂ laser irradiation, cells cannot be printed and thermally stable polymers are required [74].
Low-Temperature Deposition Manufacturing (LDM)
In order to fabricate different scaffolds, LDM printers use a more robust technology compared to the previously described printers, such as FDM and SLS [75]. The bioactivity of the different materials is preserved thanks to its non-heating character [76]; natural biopolymers can thus be printed while maintaining their bioactivities. As in FDM, the 3D structures are fabricated by consecutive addition of extruded layers following a computer design model [77]. In fact, the fabrication of a bilayer scaffold for skin tissue engineering applications has been described, with an upper layer of poly(e-caprolactone-co-lactide)/Poloxamer (PLCL/Poloxamer) nanofibre membrane and a lower layer of hydrogel composed of 10% dextran and 20% gelatin [78].
LDM Process
The LDM printer fabricates the scaffold in a chamber whose temperature does not exceed 0 °C; the build platform is located inside this chamber. The scaffold is produced layer by layer and then freeze-dried in order to remove the frozen solvent. In this sense, the LDM technique combines additive manufacturing with a phase-separation process [79] (Figure 11).
LDM Materials
The LDM printing process allows the use of natural biopolymers, such as collagen type I, sodium alginate, gelatin, and chitosan. In this vein, these materials can keep their bioactivities thanks to the non-heating property [80]. In addition, in order to improve the mechanical and biological properties of the scaffold, inorganic particles could be added. Among these particles, the most frequent are nano-hydroxyapatite, tricalcium phosphate, and magnesium particles [81]. In this vein, the LDM technique allows the production of different scaffolds.
LDM Parameters
In order to produce a correct printed part, it is important to adjust the LDM parameters. Among them, the most common are [76]:
• Software: it is necessary to design the digital model, determining the shape and architecture of the part.
• Material properties: related to the morphology and structure of the built part. When scaffolds are printed, it is important to consider that their structure depends on the proportion of materials.
• LDM device parameters, which include the following:
- Chamber temperature: should be around −30 °C in order to ensure that the extruded material freezes.
- Nozzle temperature: needs to be higher than the chamber temperature to ensure that the extruded lines can integrate with the previous layer.
- Nozzle diameter, nozzle scanning speed, and extrusion rate: define the morphology and diameter of the extruded slurry lines. A lower extrusion rate together with a higher nozzle scanning speed decreases the line diameter and can lead to broken lines.
- Material solution viscosity: determines the morphology and the final structure of the built part. It is also related to broken lines, since a material with high viscosity is difficult to squeeze out of the nozzle; this can be mitigated by increasing the nozzle temperature (Figure 12).
LDM Advantages and Disadvantages
An important advantage of the LDM technique is its high versatility, since viscous liquids can be prepared even at room temperature. However, it is difficult to reach appropriate flow parameters while selecting the concentration of the solvent and polymer without affecting the rapid evaporation of the solvent in the solution [82]. Optimisation of printing parameters is required to achieve the desired extrusion of the dissolved polymer. In addition, the selection of proper solvents applicable to the polymers is critical to achieve the correct liquid viscosity without altering the rheological requirements [83]. Another characteristic of this printing technique is the importance of maintaining the chamber temperature around −30 °C, while the temperature of the nozzle needs to be higher [84].
Laser-Based Bioprinting
The laser-based bioprinting components are the laser source (pulsed or continuous), a laser-transparent print ribbon (which may contain a laser-energy-absorbing layer) coated with a layer of cell-laden bioink, and a collector slide on a motorised stage. The cell-laden material is patterned in a three-dimensional spatial arrangement by the energy from the laser, following the computer design (CAD/CAM) [85]. During the process, the energy-absorbing layer is stimulated by a focused laser pulse from the laser source. The absorbed energy vaporises the donor layer, creating a high-pressure bubble that pushes the bioink as droplets onto the receiving substrate. This method has high resolution and reproducibility, making it suitable for printing stem cell grafts and skin tissue, among others [86]. In order to produce high-quality products, it is important to consider the laser's wavelength, intensity, and pulse time; the surface tension and viscosity of the bioinks are also key. Finally, the air gap between the "ribbon" structure and the substrate is also an important parameter to be considered [87,88].
Laser-based bioprinting has been used in the fabrication of multi-layered tissue constructs, such as skin tissue, using fibroblasts and keratinocytes in collagen to mimic the tissue functions [90,91]. One of the most important advantages of this printing technique is the non-contact process, which eliminates nozzle clogging. In addition, this technique presents a high resolution (50 µm), the capability of printing single cells per droplet, the possibility of using high cell densities (10⁸ cells/mL), and low-viscosity cell suspensions (1-300 mPa s) [92,93]. However, the most important disadvantage is the risk of photonic cell damage due to laser exposure. In addition, systems using metals as the laser-energy-absorbing layer bring up the problem of cytotoxicity induced by metallic nanoparticles. Moreover, the scalability is limited by the high cost of the laser system and the complexity of controlling the laser pulses [85].
Droplet-Based Bioprinting
In the droplet-based bioprinting technique, cell-laden bioinks are ejected out of the nozzle into a pre-defined location on the substrate, in the form of droplets [94]. They can be classified into inkjet bioprinting (continuous and drop-on-demand thermal, piezoelectric, and electrostatic), electro-hydrodynamic jetting (EHD jetting), acoustic bioprinting, and microvalve-based bioprinting [89].
The inkjet printing technology was adapted for inkjet bioprinting, in which the printing ink cartridges are replaced with cell-laden bioink cartridges. This technique can be classified into two groups: continuous inkjet (CI) and drop-on-demand (DOD) inkjet printing. Among them, DOD printing is preferred for bioprinting since, by the nature of the CI method, the droplets cannot be precisely controlled [95]. In the DOD method, a trigger ejects droplets on demand, allowing precise control and positioning of droplets. DOD bioprinting can be classified into thermal, piezoelectric, and electrostatic systems, all of which allow printing cell-laden bioinks with high post-printing cell viability [20,96-100]. In general, inkjet bioprinting has been used to print tissue constructs of skin, among other tissues such as bone, cartilage, cardiac, and nervous tissue [101-106]. Important advantages of the inkjet bioprinting technique are the high resolution (50 µm), high printing speed (10,000 droplets per second), and the possibility of introducing cell concentration gradients [20]. On the other hand, a disadvantage is that only low-viscosity bioinks (3-12 mPa s) can be printed, due to the nozzle clogging that limits the cell concentration in the bioink to up to 10⁶ cells/mL [93].
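A common rule of thumb for whether such an ink will form stable single droplets (not stated in the cited works, but widely used in the inkjet literature) is the dimensionless Z number, the inverse Ohnesorge number Z = sqrt(ρ·σ·d)/η, with printable inks typically falling roughly between 1 and 10. The fluid properties below are illustrative.

```python
import math

def z_number(density_kg_m3: float, surface_tension_n_m: float,
             nozzle_diameter_m: float, viscosity_pa_s: float) -> float:
    """Inverse Ohnesorge number Z = sqrt(rho * sigma * d) / eta."""
    return math.sqrt(density_kg_m3 * surface_tension_n_m * nozzle_diameter_m) / viscosity_pa_s

# Water-like bioink, 50-um nozzle, 10 mPa s viscosity (illustrative values)
z = z_number(1000.0, 0.07, 50e-6, 0.010)
print(f"Z = {z:.1f}")  # ~5.9, inside the commonly cited printable window
```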
Electro-hydrodynamic jetting-based bioprinting allows the printing of living cells, such as Jurkat cells, mouse neuronal cells, human embryonic kidney cells, and mouse fibroblasts, despite the high electric fields and forces associated with this process [107-109]. An important advantage of this technique is its high resolution (100 nm), since nanoscale resolution can be achieved and bioinks with high viscosity (1-1000 mPa s) can be printed [107,110]. However, an important disadvantage is that exposure to the high voltage and high electric fields can be detrimental to cell viability in the long term after printing [111].
In the acoustic bioprinting method, cell-laden bioink droplets can be ejected on demand. This technique allows the bioprinting of different types of cells, including mouse embryonic stem cells, fibroblasts, hepatocytes, human Raji cells, and HL-1 cardiomyocytes [112]. An important advantage of this method is that the bioink is in an open pool instead of being in a nozzle, avoiding some stressors, such as heat, high pressure, and voltage [95]. In addition, this technique has a high resolution (37 µm) and high printing speed (10,000 droplets per second). However, this method does not allow bioinks with high viscosity and high cell concentration [94].
In microvalve-based bioprinting, electromechanical or solenoid valves are used to control the droplet ejection of the cell-laden bioink [113]. This method has been used to print different types of cells, such as fibroblasts and keratinocytes, primary bladder smooth muscle cells, and human alveolar epithelial type II cells, with interesting post-printing cell viability [91,114]. In addition, multi-layered skin tissues and lung tissue analogue constructs have been printed using this technique [91]. An important advantage is the possibility of synchronised ejection from different print heads, allowing co-culture printing and multi-culture tissue constructs [113]. Another advantage is that the cells are less likely to be damaged, since the pneumatic pressure used is lower than that used in inkjet bioprinting [95].
Among the drawbacks, the printing speed is moderate (1000 droplets per second), the resolution is low compared to other methods, the viscosity is limited (1-200 mPa s) due to nozzle clogging, and the cell concentration is limited to fewer than 10⁶ cells/mL [91].
In general, among the advantages of the droplet-based bioprinting, the most important one is its compatibility with a variety of biological materials. In addition, this technique provides a high resolution (20-100 µm) and speed (1-10,000 droplets/s) while being an interesting low-cost possibility [115]. However, an important disadvantage is the requirement of a liquid or less viscous form of the biological material.
Extrusion-Based Bioprinting
In the extrusion-based bioprinting method, pneumatic pressure or mechanical force is used to extrude the bioink out of the nozzle in an uninterrupted line [89]. This technique originates from fused deposition modelling (FDM) printing.
A major advantage of this method is its scalability, thanks to the continuous bioink flow and large deposition rate. In addition, it allows high-viscosity bioinks (600 kPa s) and high cell concentrations (10⁸ cells/mL) [42]. Depending on the bioink viscosity, cell concentration, and nozzle size, post-printing cell viability ranges between about 40% and 95% [93]. The requirement of bioinks with shear-thinning properties is another constraint of this technique. On the other hand, this method presents a lower resolution (100 µm) than the others [43,124], and a further disadvantage is nozzle clogging [44].
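The shear-thinning requirement mentioned above is often captured with a power-law (Ostwald-de Waele) viscosity model, η = K·γ̇^(n−1) with n < 1: viscosity collapses at the high shear rates inside the nozzle and recovers once the strand is deposited. The constants below are illustrative, not measured bioink values.

```python
def apparent_viscosity_pa_s(shear_rate_1_s: float, k: float = 100.0, n: float = 0.3) -> float:
    """Power-law model eta = K * gamma_dot**(n - 1); n < 1 means shear-thinning."""
    return k * shear_rate_1_s ** (n - 1)

# Low shear (at rest, after deposition) vs. high shear (inside the nozzle):
for rate in (0.1, 1.0, 100.0):
    print(f"shear rate {rate:6.1f} 1/s -> viscosity {apparent_viscosity_pa_s(rate):7.1f} Pa s")
```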
Stereolithography-Based Bioprinting
Stereolithography-based bioprinting uses light irradiation, commonly UV, to polymerise a layer of photopolymer resin. Computer code controls the light movement to form the 3D structure as the build stage is translated, vertically building the construct layer by layer [25]. The stereolithography method can be divided into two modalities: in one, a computer-controlled light source traces the required structure in each layer of the 3D object; the other uses an array of several thousand micro-mirrors called a digital micromirror device (DMD). In this case, the micromirrors are controlled to reflect the light in a spatial pattern, allowing the polymerisation of a whole layer at once [125]. This is an important advantage, since it reduces the printing time.
This approach has important advantages due to its precise control over the deposition of biologicals and its high resolution (200 nm-6 µm) within a reduced printing time. In addition, it permits the use of high cell concentrations (>10⁶ cells/mL) with no nozzle-clogging problem [89]. However, the most important disadvantage is that only photocurable bioinks can be used. In addition, the UV light can reduce cell viability, since the irradiation damages DNA and promotes cell lysis, and only low-viscosity bioinks (5 Pa s) can be used [89].
Printer Software
In the last thirty years, in conjunction with the advance of 3D printing technology, computer-aided design software packages have been used to model structures before they are printed. UG, CATIA, or ProE, among other customised software, are used for this first step. Then, an STL-format file, which contains all the model information, is exported to the 3D printing system to control the moving track of the printing device and construct the structure layer by layer.
According to Pakhomova et al. [128], the software is classified into control tools, general computer-aided design (CAD) tools, tools used to convert medical data to CAD formats, and a few specialised research project tools. The process of bioprinting comprises three distinct phases. The first, a pre-processing phase, is where all the planning details are calculated; it includes imaging (CT, MRI, etc.) used to analyse the anatomical structure of the tissue. Then, processing by CAD is carried out to translate the imaging data into a blueprint for bioprinting. The imaging data are transformed into cross-sectional layers of appropriate scale, such that the bioprinting device can add them in a layer-by-layer fashion. This step is carried out by specialised software programs such as AutoCAD, SOLIDWORKS, and CATIA, among others.
Subsequently, the processing phase is carried out and involves all steps related to construction and manufacturing of the bioprinted tissue. Complexity at this stage is related to the specific printing method and the combination of materials (bioink, scaffold, and other additives). Finally, the post-processing phase includes all steps that occur before bioprinted tissue is completely mature and ready to use [128,129].
Bioinks for 3D Printing Technology
In the past few years, the development and characterisation of new bioinks has gained increasing attention, mostly because of the lack of materials suitable for bioprinting. This issue is considered one of the major drawbacks that has substantially limited progress in the field. Therefore, the number of additive manufacturing techniques usable for 3D bioprinting has increased over time to include droplet-deposition techniques such as inkjet, extrusion, and microvalve-based techniques, as well as lithography and laser-forward-transfer-based techniques for tissue engineering purposes. All of them possess distinct physical and rheological requisites for a suitable ink [9].
The bioprinting process permits the fabrication of 3D tissue constructs with previously programmed geometries and structures containing biomaterials and/or cells (together known as bioink), by synchronising the bioink crosslinking/deposition with the movement of a motorised stage. Regardless of the 3D bioprinting modality used, the bioinks are an essential component during construct fabrication, and they can be stabilised or crosslinked during or immediately after bioprinting to create the final shapes of the intended tissue constructs. The selection of the bioink depends on the specific application: for example, the target tissue, the cell type, and also the bioprinter that will be used [130].
Bioinks should fulfil several requirements to guarantee success in the fabrication of tissue constructs. They must be highly biocompatible to accommodate live cells and mechanically stable after printing. Moreover, bioink printability is mandatory; it depends on parameters such as the surface tension of the bioink, the viscosity of the solution, the capability to crosslink on its own, and the surface properties of the printer nozzle. Furthermore, the printing reliability and the encapsulation of live cells depend strongly on the viscosity and hydrophilicity of the bioink. Other important desirable aspects of a bioink include high resolution during printing, ready availability, low cost, the ability to biomimic the tissue's internal structures, and immunological compatibility [131]. In this sense, naturally derived biomaterials afford a good environment for cell growth by mimicking the native ECM of tissues, self-assembling, and showing biodegradation and biocompatibility properties. Nevertheless, they do not have the mechanical properties needed to conserve integrity in the in vivo microenvironment and can be unstable and unpredictable. Moreover, poor mechanical properties may cause difficulties in printing, poorly rigid tissue structures, and less support for the cells in the tissue [132]. Because of this, extensive research is being carried out to optimise and improve the properties of naturally derived biomaterials for their use in 3D printing. In this review, we focus on the most representative and common polymers used as bioinks for 3D bioprinting. Table 1 summarises detailed characteristics and advantages of various printing technologies used with collagen, chitosan, cellulose, hyaluronic acid, and alginic acid-based bioinks.
Table 1. Characteristics, printing technologies, and cell types of common bioinks.
• Collagen-based bioink: it can be printed at low temperatures and forms a solidified gel at body temperature [133]. At low concentrations (0.1 wt%), collagen is suitable for droplet ejection, inkjet, and laser-assisted 3D bioprinting; at higher concentrations (above 1.25 wt%), it reaches a viscosity suitable for extrusion [136]. Cells: human primary foreskin-derived dermal fibroblasts [137].
• Chitosan-based bioink: chitosan is derived from chitin, a polysaccharide from the exoskeleton of shrimp and other sea crustaceans; its linear structure can be quickly formed into a gel matrix using NaOH [133]. Chitosan-based hydrogels are usually used with an extrusion bioprinter, and there are few studies of chitosan printed by jet-based bioprinting methods [138]. Cells: keratinocytes and human dermal fibroblasts [139].
• Cellulose-based bioink: cellulose is a linear polysaccharide, the most abundant natural polymer in nature; it is biocompatible and nontoxic [140]. The cellulose hydroxyl groups are available for chemical modification by esterification, graft copolymerisation, etherification, selective oxidation, or intermolecular crosslinking reactions, leading to vast possibilities in bioink formulation [141]. It is used in bioinks as a reinforcing material with good bio-adhesion and mechanical properties [140]. Printing: EB [142]. Cells: fibroblasts [143].
• Hyaluronic acid-based bioink: hyaluronic acid is an anionic polysaccharide that promotes tissue regeneration; low-molecular-weight hyaluronic acid can promote cell differentiation and angiogenesis [133]. It offers excellent moisture retention and promotes cell proliferation [134]. It can be used alone, but it is more commonly used in combination with other biomaterials to improve the physical properties of the bioink mixture [136]. Printing: EB, PEI [11]. Cells: human dermal fibroblasts [144].
• Alginic acid-based bioink: low cell adhesion [135]. Alginate is a naturally derived, negatively charged linear polysaccharide from the cell wall of brown algae; this soluble biopolymer supports cell growth and exhibits high biocompatibility [145]. Many bioinks described in the literature are composed of alginate alone or in combination with other biopolymers; this popularity can be explained by the simplicity of the ionotropic gelation process and because the network precursor, sodium alginate, is commercially available and cheap [136].
Collagen
Collagen is a protein that is the main component of the extracellular matrix of animals, representing approximately 30% of the protein content in vertebrates. It plays both a structural and a functional role: given its strength, flexibility, and stability, it assures tissue integrity within the body [147]. Foreseeing the therapeutic benefits of collagen biomaterials and their association within composites or hybrids, a wide diversity of biomaterials has been prepared over nearly two decades [148-151].
Proteins are particularly interesting in the formulation of inks for 3D printing technology. They are essential structural components of living systems, providing support in and around cells, and they are important for tissue functions [152]. The skin, for instance, has a challenging, complex structure to bioprint, consisting of two major compartments, the epidermis and dermis, and a third region known as the subcutaneous tissue [153,154]. Because of this, tissue-engineered skin remains elusive despite extensive research: the skin's multi-stratified anisotropic structure is difficult to replicate using traditional tissue engineering techniques [4]. Taking the skin tissue complexity into account, Park et al. [155] obtained 3D cell-laden collagen microstructures by 2D cell patterning. This technique provides a simple and powerful manner of mimicking the functions and structures of complex tissues and organs, and it contributes to reducing the gap between the human body and in vitro tissue models. They adapted this technique to fabricate human skin models with papillary structures at the dermo-epidermal junction. Throughout their study, they fabricated self-organised, 3D-protruded collagen microstructures by seeding fibroblasts within a hydrogel in patterns using inkjet cell printing. By tuning the printing parameters, the collagen bed condition, and the cell number in a droplet, fibroblasts could be aligned in patterns with controlled cell numbers. Within the collagen matrices, fibroblasts rearranged and reorganised the surrounding extracellular matrix microenvironment. Moreover, vertically elevated collagen microstructures were formed according to the size and shape of the printed cell patterns.
Regardless of the bioprinting technology applied, the functionality of the bioprinted skin substitute is highly dependent on the bioink composition and cell type, in terms of rheology, mechanical integrity, biocompatibility, biodegradation, and antimicrobial activity [134]. With reference to collagen, many strategies have been pursued to improve its integrity for printing purposes: (i) changing collagen properties through additives, partial crosslinking, or chemical modification; (ii) printing collagen into a support such as a thermoplastic scaffold or slurry bath; or (iii) using collagen as the binder/crosslinker [156]. In this sense, Shi et al. [157] prepared a novel bioink constituted by gelatin methacrylamide (GelMA) and collagen doped with tyrosinase for the 3D extrusion-based bioprinting of living skin tissues. Tyrosinase has a dual function, since it is an essential bioactive compound in the skin regeneration process and also an enzyme that facilitates the crosslinking of collagen and GelMA. The crosslinking strategy was adopted to enhance the bioink's mechanical strength and printability. In vitro cell culture results showed that tyrosinase favours human melanocyte proliferation and inhibits the growth and migration of human dermal fibroblasts. In vivo tests showed that wound healing rates may be accelerated when treated with tyrosinase-doped bioinks.
Furthermore, Bell et al. [158] presented a method that allows multiphoton crosslinking of collagen type I with a flavin mononucleotide photosensitiser. This method permits the full 3D printing of crosslinked structures using unmodified collagen type I and only biocompatible materials. Complex 3D structures were successfully fabricated, with a resolution of 1 µm for both standing lines and high-aspect-ratio gaps between structures. Their work details a 3D printing technique with one of the most widely used tissue scaffold materials: collagen. It is worth noting that high resolution and 3D control in the fabrication of collagen scaffolds facilitate higher-fidelity recreation of the native extracellular environment for tissue engineering.
Additionally, Wei Long et al. [159] reported a single-step bioprinting process that may be useful for the fabrication of complex 3D tissue models for tissue engineering applications. It consists of a bioprinting-macromolecular crowding process (BMCP) with an additional printing cartridge containing 1 million fibroblasts/mL in a PVP-based bioink, used to print discrete cell droplets onto each printed collagen layer. Their results indicated that the number of living cells increased over a period of 10 days, indicating that the BMCP is biocompatible and does not exert detrimental effects on the printed cells. Moreover, ImageJ analysis of the stained living cells (cell perimeter and cell area) showed that the elongated fibroblasts gradually spread within the collagen matrix. These findings could be attractive for the structural design of collagen-based hydrogels for tissue engineering.
Chitosan
Chitosan is a biopolymer obtained from the deacetylation of chitin. It is a polysaccharide constituted by randomly distributed monomeric units of β-(1-4)-D-glucosamine and N-acetyl-D-glucosamine. This biomaterial is extensively used for tissue engineering purposes and, lately, 3D printing of chitosan-based materials has been widely explored because of their excellent biodegradability as well as biocompatibility [141,160,161].
The skin is the largest organ of the body and the first line of defence against external factors, including pathogens and mutagenic substances. Skin damage can be caused by chemical, thermal, or electrical stimuli, and sometimes cutaneous complications or adverse reactions may lead to chronic and hard-to-heal injuries. In this matter, tissue engineering can provide a promising solution, since it attempts to mimic the natural system morphology and, therefore, promotes an effective healing process.
According to a study by Smandri et al. [162], natural-based bioinks for three-dimensional bioprinting have an excellent ability to mimic the three-dimensional microenvironment of native skin tissue and to encourage cell adhesion, migration, proliferation, and mobility. Moreover, in vivo studies showed full wound closure four weeks post-surgery, with well-organised dermal and epidermal layers.
Regarding the chitosan biopolymer, it has been previously reported that chitosan-based functional constructs are appropriate for tissue engineering because chitosan is nontoxic, biocompatible, and biodegradable, and it can be modified to obtain multifunctional constructs similar to the natural matrix [163]. It is worth mentioning that, for 3D printing purposes, chitosan hydrogels are not ideally suited as inks for constructing complex patterns, because the formation of chitosan hydrogel involves the neutralisation of acidic chitosan solutions. Nevertheless, by controlling the rheological properties of chitosan solutions and solvent evaporation, 3D printing of complex structures from chitosan ink has been reported [164].
In 3D printing technology, two aspects of 3D ink development must be considered: firstly, the hydrogel precursor must afford proper injectability and shape fidelity to the digital design; secondly, the hydrogel must have suitable mechanical properties after crosslinking, to allow scaffold integrity and cell proliferation. In this sense, Heidenreich et al. [165] studied the rheological properties and printability of hydrogel precursors containing different proportions of chitosan (chi) and collagen (col), seeking proper inks for extrusion 3D bioprinting. Three inks with different polymer ratios (col:chi 0.18:1.50, col:chi 0.36:1.00, and col:chi 0.54:0.50) presented acceptable printability values under printing flows between 0.19 µL/s and 0.42 µL/s. The best formulation, col:chi 0.36:1.00, was chosen to print mono-layered scaffolds. These demonstrated stability after 44 h in PBS buffer with collagenase at a physiological level and had no cytotoxic effect towards NIH-3T3 fibroblasts.
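As a back-of-the-envelope link between the printing flows reported above and the printed strand size, mass conservation gives a strand diameter d = sqrt(4Q/(π·v)) for volumetric flow Q and print-head speed v, assuming a circular cross-section. The print-head speed below is an assumption for illustration, not a value from the cited study.

```python
import math

def strand_diameter_mm(flow_ul_s: float, speed_mm_s: float) -> float:
    """d = sqrt(4Q / (pi * v)); note 1 uL/s equals 1 mm^3/s."""
    return math.sqrt(4.0 * flow_ul_s / (math.pi * speed_mm_s))

for q in (0.19, 0.42):  # the printable flow window reported for the col:chi inks
    print(f"Q = {q} uL/s -> d = {strand_diameter_mm(q, 2.0):.2f} mm at 2 mm/s")
```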
An innovative extrusion-based 3D printing technique worth mentioning was used by Intini et al. [26] for the preparation of novel 3D chitosan scaffolds presenting controlled and reproducible macro- and microstructures to be applied in the regenerative skin tissue field. Their manufacturing approach combines the freeze-gelation method with an advantageous modification of the chitosan solution with raffinose. They evaluated the 3D chitosan scaffolds in terms of cytocompatibility, biocompatibility, and toxicity towards human fibroblasts and keratinocytes. In vitro results showed that 3D cell cultures achieved after 20 and 35 days of incubation had significant qualitative and quantitative cell growth. Additionally, tests of the 3D-printed scaffolds in wound healing, performed on streptozotocin-induced diabetic rats, demonstrated that the scaffolds improved the quality of the restored tissue in comparison to commercial patches and spontaneous healing.
Another approach to note is adding chitosan as particles in the bioink. For instance, Andriotis et al. prepared biodegradable 3D-printable inks based on the pectin biopolymer as a system for direct and indirect wound-dressing applications, suitable for 3D printing manufacturing. The 3D-printable inks formed free-standing transparent films upon drying, showing fast disintegration upon contact with aqueous media. To enhance the antimicrobial and wound-healing activities of the inks, particles comprised of chitosan and cyclodextrin inclusion complexes with a propolis extract were added. The in vitro studies showed that the 3D-bioprinted patches enhanced the in vitro wound-healing process, while the incorporation of chitosan and cyclodextrin/propolis extract inclusion complexes further enhanced wound healing, as well as the antimicrobial activity of the patches [166].
Cellulose
Cellulose is an abundant bio-based homopolymer in nature. It plays a crucial role in preserving the structure of plant cell walls, is present in tunicates, and supports flocculation processes in bacteria such as Acetobacter xylinum [167]. Cellulose is a water-insoluble polysaccharide composed of D-glucopyranose moieties joined through oxygen atoms by β-1,4 linkages [168,169]. Depending on how these chains of β-(1,4)-D-glucopyranose are assembled, cellulose can have different structural allomorphs, i.e., cellulose I, II, and III [170]. Cellulose I is the natural form of cellulose, composed of parallel glucose-based chains, giving two crystal structures: cellulose Iα, present in high quantities in bacteria and algae, and cellulose Iβ, predominant in higher plants. Cellulose II and III are synthetically derived celluloses, the first with an antiparallel arrangement and the second characterised by hydrogen bonds between separate sheets.
Due to its diverse and tunable mechanical, structural, chemical, and physical properties, cellulose is a perfect alternative for a wide range of applications, especially for biomaterial fabrication in tissue engineering [171-173]. Besides, its high biocompatibility, adjustable biomechanics, biodegradability, high availability in nature, and moisture conservation make cellulose-based bioink an effective and low-cost material for skin regeneration, drug delivery, and wound healing [140,142,162,174]. In this regard, several studies exploiting cellulose to develop bioinks with good printability and bioactive characteristics have been reported in recent years [175,176]. For example, cellulose nanofibrils (CNFs) have been crosslinked with different metallic cations (Fe³⁺, Al³⁺, Ca²⁺, and Mg²⁺) to develop hydrogel-based inks for 3D printing applications [177]. For this, cellulose pulp was mechanically disintegrated and oxidised with 2,2,6,6-tetramethylpiperidine-1-oxyl (TEMPO) to obtain the CNFs. Then, the deprotonated, TEMPO-oxidised CNFs were crosslinked with the divalent and trivalent metal cations, and the corresponding hydrogels were formed. The authors found that by varying the nature of the cations they could modify the properties of the hydrogel-based inks: hydrogels containing the divalent cations Ca²⁺ and Mg²⁺ had good 3D printing performance, while hydrogels incorporating the trivalent cations Fe³⁺ and Al³⁺ were unprintable. Gatenholm has also used cellulose nanofibrils to yield a bioink for 3D bioprinting of tissues and organs with a special design [178]. He introduced a novel bioink, CELLINK™, composed of crosslinked nanofibrillated cellulose with the desired morphological and rheological characteristics. In this invention, a purification step is crucial for adjusting the osmolarity of the material, together with sterilisation, to produce a cytocompatible biomaterial that can incorporate living cells, such as fibroblasts, chondrocytes, and stem cells. The biocompatibility and biomimicry of these new nanocellulose-fibril-based bioinks make them promising candidates for applications in cell culture, tissue engineering, and regenerative medicine.
The advantage of designing a bioink composed of different hydrogels lies in the possibility of printing uniform 3D structures with high resolution and shape integrity. In this sense, Rastin et al. developed a cell-laden bactericidal bioink based on a hybrid methylcellulose/alginate hydrogel (MC/Alg) for skin regeneration [143]. The particularity in the design of this bioink was the use of gallium (Ga 3+ ) after printing 3D structures by extrusion of the MC/Alg hydrogel. Immersion of the three-dimensional MC/Alg multilayered scaffolds in the Ga 3+ solution stabilised the cellulose-based bioink by crosslinking with the alginate chains. Furthermore, due to the broad antibacterial activity of Ga 3+ , the gallium-crosslinked bioink demonstrated potent bactericidal action against both Gram-positive and Gram-negative bacteria. In addition, the bioink exhibited high biocompatibility, supporting fibroblast cellular functions. Taken together, the excellent printability, good rheological properties, effective bactericidal activity, and high biocompatibility make this MC-Alg-Ga bioink a potential candidate for skin tissue engineering. Zidarič et al. also combined cellulose-based materials with alginate to design a novel hybrid bioink for 3D bioprinting of a dermis layer [179]. To prepare the bioink, they mixed the viscoelastic CNFs with the fast-crosslinking Alg and carboxymethyl cellulose (CMC), and incorporated human-derived skin fibroblasts (hSF) before the extrusion process. In this case, to support cell proliferation after 3D bioprinting, the designed bioink formulation had to yield a quasi-scaffold structure, so a post-printing Ca 2+ crosslinking treatment was crucial. As a result, they obtained an outstanding printability of the hSF-laden bioink, which made it possible to 3D bioprint complex structures with a precise cell density and well-defined porosity. Furthermore, these 3D-printed scaffolds exhibited shape and size stability and cell viability for around one month. The bioactive features coupled with excellent printability make this alternative hybrid bioink an attractive biomaterial for skin tissue engineering, wound healing, and drug-testing platforms.
Hyaluronic Acid
Hyaluronic acid (HA) is a natural heteropolysaccharide of the glycosaminoglycan (GAG) group [180] that was first isolated from bovine eyes by Meyer and Palmer in 1934 [181]. As with other GAGs, HA is composed of repeating disaccharide building blocks consisting of a uronic sugar (β-1,4-D-glucuronic acid) and an amino sugar (β-1,3-N-acetylglucosamine) [182]. However, HA differs from other GAGs in that it is not sulfated; it is synthesised by hyaluronan synthases and can have a wide range of molecular weights, depending on the source [183,184]. Under physiological conditions, HA exists in the form of the negatively charged hyaluronate macromolecule and its corresponding salts. This polyanionic hyaluronan is highly hydrophilic, interacting with water a thousand times more than the neutral polymer, which improves its combination with different intra- and extracellular tissue components [185].
HA is one of the most important constituents of the extracellular matrix (ECM) and due to its capability to retain water in the ECM, it plays a key role in filling organ spaces (vitreous humor and skin), absorbing shock impacts (cartilage), and lubricating moving tissues (joints) [186]. In addition to contributing to the structure and physiological properties of connective tissues and body fluids, HA participates in various biological processes, such as morphogenesis, inflammation, tissue restoration and regeneration, homeostasis, maintenance of ECM integrity, and mediation of cellular functions [187]. Furthermore, HA acts as a signaling molecule controlling cell adhesion, migration, and proliferation [188].
Due to its favourable features, such as biocompatibility, biodegradability, bioresorbability, high viscosity, and mechanical stability, HA is an ideal biomaterial for designing and developing non-adhesive, non-thrombogenic, and non-immunogenic scaffolds for tissue engineering and wound-dressing purposes [189][190][191]. For this reason, HA has been extensively used as a bioink in 3D printing for the fabrication of materials with biomedical applications [192][193][194]. To be employed as a 3D-printable bioink, HA must be chemically modified and mixed with other polymers to improve its rheological and mechanical properties. In a recent work, Hauptstein et al. studied different printable bioink compositions based on HA to achieve homogeneous ECM distribution in engineered constructs with biological properties [195]. For this, thiolated HA and allyl-modified poly(glycidol) were UV-crosslinked and supplemented with 1 wt% unmodified high-molecular-weight HA (HWHA) to adapt the bioink to polycaprolactone (PCL)-supported 3D bioprinting. As a result, using an extrusion-based printing process, they obtained gels with a low polymer concentration (3 wt%) supplemented with HWHA, showing enhanced stiffness and homogeneous ECM distribution in 3D-bioprinted, PCL-supported scaffolds. The multifunctionality of this HA-based bioink supplement, which both enables PCL-supported bioprinting and increases the quality of the developing 3D scaffolds, is promising for many applications in biofabrication.
Another group developed an alternative bioprinting gel by combining HA with hydroxyethyl acrylate (HEA) and gelatin-methacryloyl (GM), HA-g-pHEA-GM, to be used as a bioink in tissue engineering [196]. In this study, bioink synthesis consisted of a first graft polymerisation of HA and HEA, followed by a second grafting of GM via a radical polymerisation mechanism. After that, the printability of the bioink was evaluated using a home-built, multi-material 3D bioprinting system with pneumatic and piston extrusion. The HA-based hydrogel demonstrated excellent properties, such as good swelling, printability, morphology, biocompatibility, stable rheology, and drug-delivery capabilities. This study proved that the HA-g-pHEA-GM hydrogel can be successfully 3D printed and has strong potential as a bioink for tissue regeneration applications. Closely related to this, Lee et al. also used acrylated HA to develop a dual-function hybrid bioink with a short gelation time and biological functions [197]. To achieve mechanical integrity and a fast gelation time, HA was conjugated with tyramine (HA-tyr) and mixed with acrylated HA in a ratio of 1:9 to achieve a storage modulus G' of 1 kPa, which enables higher cell proliferative activity. Once the hybrid hydrogel was obtained, they tested the printability of the viscous bioink using a lab-made 3D microextrusion bioprinter and evaluated stem cell viability after printing. They observed that the printed hydrogels conserved their mechanical properties and preserved the viability of the incorporated stem cells. In addition, an optimised HA-tyr bioink was obtained by a mechanism of two consecutive crosslinking steps, comprising a first enzymatic crosslinking reaction mediated by horseradish peroxidase (HRP) and hydrogen peroxide (H 2 O 2 ), followed by green-light crosslinking triggered by the Eosin Y photosensitiser [198]. For cell-laden bioinks, fibroblasts, chondrocytes, or MSCs were added before the enzymatic crosslinking step, after which the printing process started. By combining different concentrations of HRP and H 2 O 2 , the viscoelastic properties of the new HA-tyr bioink were easily tunable, achieving a soft bioink that could be extruded through a thin needle. Finally, by exposing the bioink to 505 nm light during the printing procedure, 3D constructs carrying viable cells were obtained. Due to their simplicity and versatility, these novel HA-tyr-based bioinks can be exploited for the biofabrication of a wide variety of tissue-engineered constructs using an ECM component combined with different cell types.
Alginic Acid
Alginic acid salt, commonly known as alginate, is one of the most popular and abundant biopolymers available in nature [199]. It is derived from the cell wall of brown seaweed and from the capsule of some micro-organisms, such as Azotobacter sp. and Pseudomonas sp. Alginate is an anionic polysaccharide composed of linear copolymers of (1,4)-linked β-D-mannuronic (M) and (1,4)-α-L-guluronic (G) acid units, arranged in M-blocks, G-blocks, and heteropolymeric sequences of alternating M and G residues. The sequence and ratio of G and M, as well as the molecular weight of alginate (32,000 to 400,000 g/mol), depend on the natural source. Purified alginates can form hydrogels through the crosslinking of the carboxylate groups of G residues with divalent cations (Ca 2+ , Ba 2+ , Sr 2+ , and Mg 2+ ). Thus, alginic acids with a high G content tend to form stiffer hydrogels, while alginates with low G content yield softer, more elastic materials [200,201].
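To give the quoted molecular-weight range a more tangible meaning, one can estimate the corresponding chain lengths by dividing the molecular weight by an average residue mass. The sketch below assumes a residue mass of roughly 198 g/mol (a sodium uronate unit); this figure is an approximation, not a value from the cited sources.

```python
# Hedged sketch: estimate alginate chain length from the molecular-weight
# range quoted above (32,000-400,000 g/mol). The average residue mass of
# ~198 g/mol (a sodium uronate unit) is an assumed approximation.

RESIDUE_MASS_G_PER_MOL = 198.0  # approximate mass of one M or G residue (Na salt)

for mw in (32_000, 400_000):
    dp = mw / RESIDUE_MASS_G_PER_MOL  # degree of polymerisation (residues per chain)
    print(f"MW {mw:>7,} g/mol -> ~{dp:,.0f} residues per chain")
```

Longer chains, together with a higher G content, are what push the Ca 2+ -crosslinked gels toward the stiffer end of the range described above.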
The similar structure of alginate to the extracellular matrix, coupled with its biocompatibility, nontoxicity, biodegradability, low cost of extraction, and ease of gelation, makes alginate-based hydrogels ideal candidates for the design and fabrication of bioinks [145,202] for several biomedical applications, such as wound healing, regenerating human tissues, drug delivery, and cell culture [203][204][205]. For example, in 2019, Wang and coworkers printed alginate directly into viscous pre-polymers of hydrogels, including gelatin methacrylate, agarose, and gelatin, to form microchannels for the creation of a vascular network for drug screening, tissue engineering, and organ-on-a-chip applications [206]. In addition, Freeman et al. studied how the mechanical properties of 3D-printed constructs can be tuned by changing the molecular weight of alginate bioinks, the gelling conditions, and the choice of ionic crosslinker [207]. Besides, they discovered that by modulating the stiffness of 3D-bioprinted, alginate-based hydrogels, mesenchymal stem cell differentiation can be regulated and, hence, complex tissues can be engineered.
Despite alginic acid being a frequently used bioink in 3D bioprinting, due to its poor stability and soft mechanical properties, alginate is commonly combined with other materials, such as distinct natural or synthetic polymers, to form new composites with improved characteristics. As an example, to increase viscosity, methylcellulose or gelatin is usually added to alginate to enhance printability and degradation kinetics [208]. For instance, Luo et al. mixed CNF with gelatin-alginate thermoresponsive bioinks to improve the bioprinting properties of the hydrogels [209]. They prepared six different hydrogels with varying contents of gelatin and CNF, and examined their printability with a home-made microextrusion bioprinter. Mechanical properties were evaluated before and after crosslinking with CaCl 2 , and the viability and metabolic activity of cells entrapped in the bioprinted structures were also tested. As a result, they found that bioinks composed of 20% (w/v) gelatin, 1.25% (w/v) alginate, and 0.25% (w/v) CNF (see the mass-calculation sketch after this paragraph) presented a better distribution of cells and an increased viscosity compared to the hydrogels without CNF, indicating that the combination of the three components is crucial to obtain a scaffold with superior printability and higher biocompatibility. In another study, calcium alginate was mixed with agar to prepare a new bioink with improved printing resolution and enhanced mechanical properties of the 3D-bioprinted structures [210]. Agar had the function of increasing the viscosity of the ink and thus its rheological properties, while alginate connected the different layers by crosslinking with Ca 2+ to give a better interface in the 3D-printed hydrogels. Furthermore, by introducing a soft polyacrylamide network into the 3D-printed, alginate-based hydrogels, interfacial defects were minimised, yielding 3D constructs with outstanding mechanical properties, high biocompatibility, shape fidelity, and high permeability. Chitosan was also used as an enhancer of the rheological properties of alginate bioinks. As chitosan is insoluble in aqueous solutions, Liu et al. proposed incorporating chitosan powders into the alginate solution to make a 3D printing ink with superior viscosity [211]. After printing the bioink with a 3D-BIOPLOTTER™, a hydrochloric acid (HCl) solution was added to the deposited fibres to solubilise the chitosan and enable the formation of alginate-chitosan hydrogels. As a result, they observed that the physicochemical properties of the alginate-based bioink could be manipulated by modifying the concentration of chitosan, and the resulting 3D-printed hydrogels provided an appropriate environment for cell growth and differentiation. This strategy not only allows the use of 3D printing to develop neo-tissues or organs, but also to repair damaged ones. In a similar approach, the incorporation of carboxylated cellulose nanocrystals and/or xanthan gum in sodium alginate hydrogel inks provided improved post-printing fidelity and better rheological and mechanical properties. Furthermore, good viability of human skin fibroblasts was observed, highlighting the potential of the developed 3D-bioprintable hydrogel inks [212]. More recently, 3D-printed dressings composed of gelatin methacrylate and xanthan gum with incorporated antimicrobial N-halamine and TiO 2 nanoparticles were reported. The incorporation of the N-halamine provided a wide spectrum of antimicrobial activity, while the nanoparticles improved the ultraviolet stability of the N-halamines.
This three-dimensional antibacterial wound dressing presented good antibacterial activity and outstanding biocompatibility, and significantly accelerated wound healing in a mouse model [213].
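To make the % (w/v) notation in the Luo et al. formulation concrete, the sketch below converts the reported concentrations into component masses for a chosen batch volume. The 50 mL batch size is a hypothetical illustration value; only the percentages come from the text.

```python
# Hedged sketch: convert the % (w/v) concentrations of the gelatin/alginate/CNF
# bioink reported above into gram amounts for a chosen batch volume.
# The 50 mL batch volume is hypothetical; the percentages come from the text.

composition_w_v = {   # % (w/v) == grams of solute per 100 mL of solution
    "gelatin": 20.0,
    "alginate": 1.25,
    "CNF": 0.25,
}

batch_volume_ml = 50.0  # hypothetical batch size

for component, pct in composition_w_v.items():
    grams = pct * batch_volume_ml / 100.0
    print(f"{component:>8}: {grams:6.3f} g in {batch_volume_ml:.0f} mL")
```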
Other Biopolymers
Apart from polymers obtained from natural resources, synthetic biopolymers are also used as bioinks for 3D bioprinting. Synthetic polymers are man-made polymers, usually obtained by chemical reactions, with tunable chemical and physical features [214]. Compared to natural biopolymers, synthetic polymers have superior mechanical properties and play a crucial role in preserving cellular and biomolecular functions before, during, and after the 3D printing procedure. They can be easily modified to improve their physicochemical properties, and also functionalised with different molecules to meet particular requirements [215]. Among the synthetic polymers that are commonly printed are polylactic acid (PLA) [216], polyethylene glycol (PEG) [217], polycaprolactone (PCL) [218], polyglycolic acid (PGA) [219], polyurethane (PU) [220], and polylactic-co-glycolic acid (PLGA) [221].
The robust mechanical properties of synthetic bioinks are advantageous for withstanding the stresses experienced during the 3D printing stages and in vivo implantation. Furthermore, synthetic polymers have controllable degradation kinetics and are easy to process, lightweight, non-toxic, inexpensive, and abundant, which can be convenient when choosing a material to print scaffolds for biomedical applications. Although they can be successfully used as bioinks for 3D printing in their pure form, synthetic polymers are also combined with reinforcing materials to develop mechanically superior structures with optimised regenerative action and higher printability [222]. However, it is of paramount importance to study the fate and effects of nanomaterials [223][224][225]. Indeed, a recent report evaluated the use of reinforcement materials (carbon nanotubes, copper, and steel). Using a condensation particle counter, it was possible to measure emissions of 10⁵-10⁶ particles per cm³ from these materials. Furthermore, the authors provide important insights into cellular metabolic alterations, intracellular mitochondrial stress, and toxicity resulting from particle emissions [226].
It is worth mentioning that both natural and synthetic polymers have been simultaneously printed for the fabrication of advanced scaffolds for tissue engineering [227][228][229].
Role of Bioinks in Skin Bioprinting
The 3D bioprinting technique represents a promising alternative approach to produce scaffolds that can be employed as a personalised therapeutic method to accelerate wound healing and protect against infections. In this sense, different scaffolds combining biopolymers, nano-objects, cells, and therapeutic molecules have been successfully reported. For example, it was possible to facilitate the extrusion printing process of a gelatin-methacryloyl-based bioink with the addition of an ulvan-type polysaccharide. The 3D-bioprinted, cell-laden scaffolds supported cell viability and proliferation of human dermal fibroblasts [230]. Alternatively, tyrosinase was employed to crosslink gelatin methacrylamide and collagen bioinks and consequently improve their mechanical strength. The enzyme also plays an important role in the skin regeneration process [157]. Furthermore, a novel PLA scaffold combined with chitosan and loaded with Cu-carbon dots, rosmarinic acid, and hyaluronic acid was produced employing a 3D bioprinting method. This complex bioprinted structure includes antimicrobial agents (i.e., Cu-carbon dots, rosmarinic acid, and chitosan), biocompatible polymers (i.e., PLA and chitosan), and a natural polymer existing in skin (hyaluronic acid). The resulting bionanocomposite scaffolds possess antimicrobial activity and non-toxicity, significantly increase the expression of genes involved in wound healing (i.e., GAP, PDGF, TGF-β, and MMP-1), and improve wound-healing properties in vivo [231]. Similarly, an antibacterial bioink based on alginate and methylcellulose, loaded with Ga 3+ , was developed. In this case, the Ga 3+ ions contribute to stabilising the bioink through the formation of ionic crosslinks with alginate, and the resulting material possesses potent antimicrobial activity against both Gram-positive and Gram-negative bacteria [143].
A skin model mimicking the dermis and the epidermis with their cellular, molecular, and macromolecular features was produced using a bioink formulation composed of a mixture of gelatin, alginate, and fibrinogen [232]. An in-house-built, open-source machine was used for the 3D printing, by an extrusion process and in a matter of minutes, of a 5 mm-thick artificial dermis with extension in the centimetre range. Each bioink component had a role in the skin bioprinting. Gelatin offered appropriate rheology during the extrusion process, strength when the formulation was printed on a cooled substrate, and solubility for its elimination in subsequent steps. Alginate provided structural stiffness and stability once the gelatin was eliminated, owing to its calcium-based hydrogel formation. Fibrinogen, in turn, offered structural stability through crosslinking with alginate and promoted cellular maturation based on the presence of RGD domains. The bioprinted dermis was achieved by printing objects composed of primary human dermal fibroblasts immersed in the bioink formulation, followed by culture. Primary human epidermal keratinocytes were then seeded on top of the bioprinted dermis to generate the bioprinted skin (Figure 13A). Although, in contrast to normal dermis, the bioprinted dermis contained only fibroblasts, the cellular morphology, viability, and organisation, as well as the epidermal proliferation and differentiation in the bioprinted skin, resembled those of normal human skin (Figure 13B). The expression of several epidermal markers (Ki67, cytokeratin 10, filaggrin, and loricrin), extracellular matrix proteins (collagen I and V, vimentin, fibrillin, and elastin), and laminin 332 at the dermal-epidermal interface supported these observations (Figure 14). Ultrastructural analysis of the bioprinted skin revealed the presence of corneodesmosomes in the stratum corneum, keratohyalin granules in the stratum granulosum, several desmosomes in the stratum spinosum, and many hemidesmosomes linked to keratin filaments in the basement membrane, as well as mature collagen fibres. The 3D bioprinting capability of the reported process was also evaluated by producing an adult-sized ear by printing the fibroblast-containing bioink, which retained its organisation after culture [232]. It is still a challenge to develop biocompatible bioinks with rapid gelation kinetics and tunable mechanical properties. A bioink suitable for the rapid printing of bio-inspired 3D tissue constructs has been recently reported. The bioink was composed of gelatin methacrylate (GelMA), N-(2-aminoethyl)-4-(4-(hydroxymethyl)-2-methoxy-5-nitrosophenoxy)butanamide (NB)-linked hyaluronic acid (HA-NB), and the photoinitiator lithium phenyl-2,4,6-trimethylbenzoylphosphinate (LAP). Interestingly, after UV irradiation, the hydrogel formed rapidly, at t ≈ 1.384 s, whereas without the addition of LAP the gel formed at t ≈ 33 s. A significantly higher compressive modulus was achieved with this formulation when compared to single-crosslinking hydrogels. Moreover, it was possible to prepare, in ca. 3 min, a dense upper layer and a porous lower layer mimicking the epidermal layer and the corium layer, respectively. The hydrogel possesses remarkable biocompatibility, with cell viability rates above 95%, provoking limited inflammation after subcutaneous implantation and facilitating wound healing in vivo [233].
Role of Cell Seeding
Alternatively, seeding and cultivating cells in 3D-printed scaffolds is becoming an active field of research. In particular, 3D printing technology allows the introduction of multiple cell types at specific positions within the scaffolds, with high survival rates [3,5]. Indeed, adult human dermal fibroblasts and adult human epidermal keratinocytes can survive and grow after being 3D bioprinted with a hydrogel scaffold [234]. In addition, the efficacy in full-thickness burn wound healing in a rat model of a 3D-bioprinted collagen and alginate scaffold was reported. The material was arranged layer-by-layer with and without the addition of adipose-derived mesenchymal stem cells. The burn-wound-healing results of the cellularised materials were far more effective than those of the acellular treatments [235]. Meanwhile, it was reported that an autologous homologous adipose tissue, prepared by 3D bioprinting, successfully accelerated diabetic wound healing, with complete wound closure and re-epithelialisation within four weeks [236]. Interestingly, Zhang et al. developed a skin model with sweat glands and hair follicles. According to the authors, the difficulties in simultaneously inducing sweat gland and hair follicle regeneration have been overcome with this model [237]. In parallel, a gelatine-sodium alginate hydrogel loaded with adipose-derived mesenchymal stem cells was constructed by 3D bioprinting. The incorporation of a NO donor, such as S-nitroso-N-acetyl-D,L-penicillamine, successfully protects against ischaemia and reperfusion injury, and improves the proangiogenic potential of the cells. Indeed, the bioprinted scaffold effectively promotes wound healing in a severe burn model [238]. Another biomimetic skin model, which was qualitatively and quantitatively characterised, was constructed by a 3D-printing-assisted electrohydrodynamic jetting process [239]. The construct was composed of an acellular polycaprolactone/collagen scaffold that served as a resting layer for consecutive fibroblast/collagen and keratinocyte/collagen layers. The extrusion process of the cellular bioinks was previously simulated and modelled in order to find the experimental conditions that preserved cellular viability and function. The bioprinted model was qualitatively and quantitatively compared with manually seeded skin equivalents. Metabolic activity and cell viability assays in both skin equivalents revealed the positive effect of the fibroblast layer on the keratinocyte layer. Moreover, keratinocyte differentiation and the formation of an orthokeratinised epidermis were evidenced, and the morphology of the full-thickness skins was similar to normal human skin. Immunohistochemical analysis showed the specific localisation of differentiation markers: vimentin in the dermal layer, and keratins K14 and K10 in the lower and upper layers of the epidermis, respectively. Moreover, keratin K2 was colocalised with filaggrin in the upper layers, which is associated with skin barrier function. Laminin V and collagen IV showed a robust presence at the dermal-epidermal junction. Occludin, E-cadherin, and plakoglobin were also evidenced in both skin equivalents, demonstrating intact organisations and architectures. Skin barrier function assays also revealed similar results for the 3D-printed and manually seeded models. Stress (keratin 16), water channel activity (aquaporin 3), DNA damage (γ-H2AX), and oxidative stress (catalase) markers showed patterns similar to those observed in normal human skin.
However, neither skin construct showed normal full differentiation after two weeks in culture, and further improvement of the 3D bioprinting methodology would be needed.
Role of Incorporated Therapeutic Agents
The advances in this field have led to the development of biocompatible scaffolds with incorporated specific cell types, but a next step would be the generation of interconnected functional vessels to mimic the sophisticated architectural and biological structure of the skin [240]. In this sense, the incorporation into bioinks of therapeutic agents with the ability to stimulate blood vessel formation is highly desirable. Silica-based materials are known to stimulate collagen deposition and blood vessel formation during the wound-healing process [148,241,242]. Similarly, Sr ions can stimulate the expression of angiogenic factors in cells and, thus, promote angiogenesis [243,244].
With the conviction that the recapitulation of dermal vasculature is an essential step for the generation of optimal bioprinted skin substitutes, strontium silicate microcylinders were integrated into a bioink to achieve enhanced vascularisation [245]. Highly crystalline microparticles with a diameter of 15 µm were synthesised by a hydrothermal method and showed a continuous release pattern of strontium and silicon ions, which have been proven to stimulate collagen deposition and angiogenesis during wound healing. The strontium silicate microcylinders were incorporated into a bioink composed of a mixture of gellan gum (GAM), sodium alginate, and methyl cellulose, with good flexibility and printability. The preparation of the biomimetic skin scaffolds included the air-pressure-induced extrusion of the microcylinder-doped bioink and the overlaid spraying of human umbilical vascular endothelial cells or human dermal fibroblasts using a piezoelectric pipette. This was performed in a cyclical manner, building layer-by-layer structures of bioink and cell suspensions. In turn, the two cell types were included in two different major layers, with the human umbilical vascular endothelial cells at the bottom and the human dermal fibroblasts on top, in order to emulate the vascularised native skin structure (Figure 15). The gene expression of several angiogenic markers, such as vascular endothelial cadherin, endothelial nitric oxide synthase, vascular endothelial growth factor, and hypoxia-inducible factor-1α, was detected in the printed cells and showed higher levels than in the corresponding cell-seeded bioprinted scaffolds. In view of these results, the in vivo vascularisation and skin regeneration in acute and chronic wounds of the prepared biomimetic skin substitutes were tested. When the multicellular scaffolds were subcutaneously implanted in nude mice, a large number of blood vessels expressing the CD31 protein (an endothelial cell junction marker) and an enhanced collagen I deposition were found. In an acute wound mouse model, complete epithelialisation and dermal structure recovery, with enhanced angiogenesis and active proliferation of the regenerated skin, were observed in the transplanted animals after 15 days (Figure 16). Meanwhile, a diabetic mouse model was used to study the level of skin regeneration induced by the grafted, bioprinted skin substitutes. The results showed a high healing rate, with cells on the wound beds that actively recovered and rebuilt the dermis vasculature, demonstrating a prominent repair of complex chronic skin wounds.
Full-Thickness Functional Skin Models
Full-thickness skin wounds are physiologically complex and require biomaterials that mimic the inherently sophisticated structure and function of the dermis. For this purpose, researchers designed and printed a bilayer membrane scaffold consisting of (i) an outer poly(lactic-co-glycolic acid) (PLGA) membrane, which maintained the moisture content of the hydrogel and prevented bacterial invasion, and (ii) a lower alginate hydrogel layer, which promoted cell adhesion and proliferation in vitro. This structure was designed to mimic the skin epidermis and dermis. This scaffold successfully improved collagen I/III deposition, neovascularisation, and skin regeneration [246].
Jin et al. developed a full-thickness functional skin model. The bioprinted scaffold is formed by gelatin methacrylamide with HaCaT cells as an epidermal layer, acellular dermal matrix with fibroblasts as the dermis, and a gelatin methacrylamide mesh with HUVEC cells as the vascular network and framework. This bioprinted skin model stimulates dermal extracellular matrix secretion and angiogenesis, promotes wound healing and re-epithelialisation, and, overall, improves wound-healing quality [247]. Tuener et al. reported a promising strategy to produce prevascularised regenerative scaffolds for wound care. For this purpose, a bioink containing a cell-laden core of peptide-functionalised, succinylated chitosan and dextran aldehyde was covered by a shell of gelatin methacryloyl. Two cell types were delivered with the bioink: HUVECs in the core and hBMSCs in the shell. Wound closure with this system was increased two-fold [248]. In an effort to improve the structural complexity of bioprinted skin and produce a model more similar to native human skin, a perfusable and vascularised structure composed of epidermis, dermis, and hypodermis strata was achieved [249]. This model was 3D printed following several fabrication steps (Figure 17). First, a transwell based on polycaprolactone was extruded to reach a 15 × 15 × 6 mm structure, followed by the extrusion of a sacrificial gelatin hydrogel to fill the construct pores. A hypodermal compartment 2 mm high was constructed on top by the generation of a microporous polycaprolactone mesh and the extrusion of a preadipocyte-embedded, adipose-fibrinogen bioink. Afterwards, a bioink containing human umbilical vein endothelial cells and a thrombin-embedded gelatin hydrogel was printed as cylindrical vascular channels. The gelatin component allowed the cylinder shape to endure during fabrication, as well as its subsequent liquefaction at 37 °C, leaving perfusable hollow channels and promoting the attachment of the human umbilical vein endothelial cells to the surface of the generated channels. A dermal compartment 3 mm high was then printed using a bioink composed of human dermal fibroblast-encapsulated skin-fibrinogen. The crosslinking of the fibrinogen component was triggered when the thrombin-containing vascular bioink was liquefied. Two different culture media were used for the maturation of the skin structure and could be infused, owing to the model design: a fibroblast growth/preadipocyte differentiation medium and an endothelial growth medium. Primary human epidermal keratinocytes were deposited onto the dermal stratum by 3D cell printing. The porous transwell and the generated vascular channel were used to infuse the appropriate media, to promote the differentiation and maturation of the different cellular components, and to achieve the final conformation of the skin construct, including a lipid droplet-associated hypodermis, an extracellular matrix-secreting dermis, and a stratified epidermis. Functional markers in the structures, characteristic of each layer (i.e., the stratified structure of the epidermis, the dermal-epidermal junction, and the extracellular matrix of the dermis, as well as the lipid droplets of the hypodermis and the endothelium of the vascular channels), were evaluated (Figure 18). The maturation of the epidermal layer was demonstrated by the expression of keratin 10 and filaggrin at early and late stages of cellular differentiation.
The epidermal-dermal junction formation was revealed by the expression of laminin, collagen type I, and fibronectin at the interface between layers. The lipid droplets in mature adipocytes were revealed in the hypodermis by staining with boron-dipyrromethene. The vascular channels showed full coverage with human umbilical vein endothelial cells, as revealed by the expression of the CD31 marker. Further evaluation of the epidermal compartment was performed in comparison with skin models including only the dermis and the epidermis, and with native skin. The expression of the p63 stemness marker and the K19 follicular stem cell marker demonstrated that the full-thickness skin model had epidermal stratification and hypodermis/epidermis crosstalk similar to native skin. However, the expression of the Ki67 proliferation marker revealed a possibly insufficient provision of nutrients and oxygen through the straight bioprinted vascular channel. Although this perfusable platform would be useful for incorporating other cells for advanced biomimetic skin models, the hypodermis in this study was thinner than its counterpart in native skin, which could limit the substantial influence of the hypodermis. Another multicellular and multilayer biomimetic skin structure was achieved [250] by a 3D printing process using gelatin methacryloyl- and alginate-based bioinks. The 3D skin substitute was composed of three main compartments. The bottom one was prepared by extrusion of a bioink containing gelatin methacryloyl and alginate, including human umbilical vein endothelial cells, on a polyester porous membrane, in order to guarantee access of the media. The mixture of gelatin methacryloyl and alginate allowed for a bioink with enhanced gelation, printability, and rheological properties, and maintained the viability of the printed cells. The middle compartment was generated by pouring a gelatin methacryloyl matrix containing human dermal fibroblasts, followed by a UV-crosslinking process. The stiffness of the matrix was adjusted to guarantee the growth and function of the dermal cells. In fact, these were revealed by the positive staining of Ki67 and F-actin, high levels of Pro-Collagen I alpha 1, and low levels of Matrix Metalloproteinase I. The top compartment was achieved by seeding multiple layers of human epidermal keratinocytes with a gelatin coating to achieve a ca. 200 µm-thick biomimetic epidermis. The entire skin structure showed a good organisation of layers, with no mixing between cell layers. However, the angiogenic activity of the human umbilical vein endothelial cells, as well as the differentiation of the human epidermal keratinocytes and the formation of the stratum corneum, have not yet been assessed using this skin model.
The 3D Bioprinted Alternative Skin Models
The 3D bioprinting technology allows the production of intricate structures with desired patterns, biological activities, and physiological functions, providing a unique approach for the fabrication of artificial tissues. In particular, the combination of precise cell deposition, reproducibility, high yields, versatility, and high efficiency of 3D bioprinting offers the opportunity to reproduce the complex heterogeneity of human skin. Engineered skin not only provides advanced constructs that better replicate human skin, but also agrees with policies that tend to reduce the use of laboratory animals as in vivo models [251,252]. In agreement with the 3R principles (replacement, reduction, and refinement) defined by Russell and Burch in 1959 for animal use in research [253], the current regulations and ethical concerns on animal testing and the need for skin substitutes with more physiological functions explain the significant advances in the development of innovative 3D skin models in recent years [254]. For example, at present, the European Union has decreed the prohibition of the use of animals for the testing of cosmetic ingredients. In this context, the French cosmetics company L'Oreal has partnered with the US-based bioprinting firm Organovo to develop 3D-bioprinted human skin for the testing of their products without using people or animals.
Human-based 3D cell cultures in vitro have many advantages over the use of animal models. For example, variables are better controlled than in complex in vivo organisms, enhancing reproducibility and simplifying the study of cellular and molecular processes. Furthermore, as bioprinted human skin models contain human cells, they can better mimic the in vivo environment, replicate cell morphology and adhesion, and promote cell differentiation, proliferation, and migration, providing an accurate platform to obtain more predictive results for humans [255]. This also offers the possibility of developing personalised medicine through the use of autologous patient cells or tissues, avoiding the risk of immunological rejection that is very common when animal tissues are transplanted into humans [256].
Madiedo-Podvrsan et al. reported the first human-patterned epidermal model created by means of a high-precision 3D bioprinting approach [257]. In order to mimic pathological human skin and improve research into damaged skin without using animal models, they bioprinted separate populations of keratinocytes with normal or low filaggrin expression in a single model insert, to reproduce healthy skin and human epidermal disorders, respectively. This technique has the potential to create a heterogeneous and stable reconstructed model of two skin conditions in a single sample, better reflecting native human skin, reducing variability in results, and avoiding the use of a large number of in vivo animal models that often do not accurately predict human responses.
Another alternative bioprinting approach, used to replicate complex papillary dermis structures and reduce the gap between in vitro and in vivo models, was reported by Park et al. [258]. They designed a technique to fabricate self-organised 3D collagen microstructures through inkjet fibroblast bioprinting. By using drop-on-demand inkjet printing, they could seed a controlled number of fibroblasts in aligned patterns onto a collagen substrate prepared by microextrusion printing. The formation of a vertically elevated, collagen-based 3D microstructure was obtained after the cells interacted with and rearranged the surrounding extracellular matrix. Finally, they inkjet-printed human keratinocytes onto the fibroblast-mediated, 3D-protruded collagen microstructures to fabricate a bilayered skin model that could mimic the papillary interface at the dermo-epidermal junction. As a result, they obtained 3D microstructures containing fibroblasts that were covered by the printed human keratinocytes. This approach to creating 3D cell-laden collagen microstructures offers an innovative way to reproduce the structure and functions of human skin, making an important contribution to replacing animal testing and shortening the distance between in vitro and in vivo skin models.
Due to the limited control of three-dimensional structures and the contraction of engineered materials achieved by current protocols, Derr et al. developed another bioprinting method for the fabrication of skin equivalents (SEs) with morphology and functions comparable to native skin tissue, providing a new platform for improving wound-healing therapies, transplants for regenerative medicine, and the testing of skin products [259]. The SEs were fully bioprinted on an open-market printer in a layer-by-layer manner, on a multiwell-based platform. The material structure consisted of three levels: the dermis, containing neonatal human dermal fibroblasts; a laminin/entactin basal layer; and the epidermis, loaded with neonatal normal human epithelial keratinocytes. The constructs were validated by immunohistochemistry, impedance measurement, permeation assays, and cell viability assays, showing a viable material with optimal barrier function and reproducibility that allowed their use as tissue models for disease screening. The production of SEs in an automatised and standardised manner was also approached by Cubo et al. [255]. In this case, they used a free-form fabrication 3D bioprinting technique to engineer a human plasma-derived bilayer skin using human fibroblasts and keratinocytes. The most innovative result of this method was the capability to reproducibly print large areas of human skin, which is imperative to improve the current treatments of different skin pathologies, such as burns, ulcers, and surgical wounds.
Although 3D-bioprinted skin models appear to be an attractive substitute for animal use, most of them have certain drawbacks, such as the absence of immunological components. In this sense, several attempts have been made to improve 3D models by incorporating immunogenic components, such as immune cells, to obtain new immunocompetent, three-dimensional materials with human immune system features that will benefit the treatment of infections, the study of inflammatory pathologies, and the development of novel therapies for other skin diseases [260]. For example, BASF Care Creations® and CTIBiotech laboratories recently developed the first 3D-bioprinted skin models including immune macrophages, to reconstruct skin tissues for the development and testing of bio-actives for advanced skin care applications [261]. This technology provides a new system with more human physiological properties that will allow the study of the activity of macrophages in a complete reconstructed skin. Furthermore, as macrophages are essential for wound healing, tissue regeneration, and inflammation control, this novel immunocompetency will improve the research, development, and evaluation of skin care products. To reproduce the protective skin functions, Poblete Jara et al. also developed a 3D-bioprinted human skin equivalent with immune responses by including macrophages, keratinocytes, and fibroblasts in a collagen matrix [262]. For this, they first created a bilayer skin model by the extrusion of a bioink composed of collagen I and primary fibroblasts, followed by the extrusion of the keratinocyte solution on top of the fibroblast-collagen layer. After 11 days, they induced a wound in the centre of the skin model and printed a fibrin clot-macrophage bioink, and the healing process was evaluated from 0 to 10 days. As a result, they observed that the SE containing macrophages showed complete re-epithelialisation 10 days post-wounding, compared to the SE without the macrophage bioink treatment, which revealed incomplete wound closure. Furthermore, they contrasted these results with a murine dermal wound model and observed that in both cases a new layer of keratinocytes was formed in the wound centre, indicating that the 3D human skin platform can replace animal models and guarantee comparable results.
Besides the lack of immune components, 3D-bioprinted skin models often lack vascularisation, which is also essential for graft take. Baltazar et al. produced, by 3D printing, a vascularised SE through the incorporation of human foreskin dermal fibroblasts and endothelial and placental cells to form a dermis, and human foreskin keratinocytes to construct an epidermis [263]. For this, the dermal and epidermal layers were printed in two steps. Firstly, the vascularised dermis was bioprinted with endothelial cells and fibroblasts and cultured for four days to stimulate vascularisation; secondly, the epidermis was bioprinted with keratinocytes on day four and cultured in a skin differentiation medium. As in human skin, the bioprinted skin models showed positive Ki67 and CK14 expression in the epidermis, indicating regular keratinocyte proliferation. They also found that by allowing the endothelial cells to self-assemble into vessels, a complex structure similar to natural tissues is formed. In fact, it was crucial to have an in vitro maturation time to obtain an SE with an arrangement equivalent to human skin. This shows that, despite 3D-bioprinted skin still being in its early stages and requiring improvement in many respects, it provides a potent new alternative for developing human skin replicates for tissue engineering.
Four-Dimensional Printing
Stimuli-responsive materials represent an emerging type of material employed for wound healing. Recently, Municoy et al. summarised how a variety of stimuli, such as magnetic fields, temperature, redox state, pH, and light, have been employed to change a material's structure, dimensions, and properties for tissue engineering and drug delivery [264]. In a step forward, stimuli-responsive materials have been employed with 3D bioprinting technology in so-called 4D bioprinting, where printed objects change their structure or properties with time when an external stimulus is applied [265]. Different 4D bioprinting strategies can be employed to produce these 4D-bioprinted structures that undergo shape or functional transformations over time [266]. This disruptive technology, which allows printing responsive materials that can change their shape, or materials that can reorganise through cellular self-organisation, has broadened the applications of 4D bioprinting in various biomedical fields, such as tissue engineering and drug delivery [267][268][269].
The principle of dynamic movement was recently achieved employing hydroxybutyl methacrylated chitosan as a temperature-responsive polymer. The expansion of the 4D structure was provoked by the uptake of water at a low temperature. On the contrary, when the temperature rose, deswelling and, consequently, a decrease in volume occurred [270]. Biomedical 4D scaffolds were developed with renewable plant oils (soybean oil-epoxidised acrylate). The material fixed a temporary shape at a low temperature (−18 °C) and fully recovered its original shape at 37 °C. In addition, it supported cell adhesion and proliferation, which confirms its great potential for biomedical applications [271]. Similarly, 4D-printed hierarchical scaffolds with high biocompatibility, a microporous structure, and a tunable shape-recovery speed for tissue engineering applications were reported [272].
A multifunctional ionic skin was fabricated by the 3D printing of a thermoresponsive hydrogel (composed of n-octadecyl acrylate and poly(dimethylacrylamide)) into a capacitor circuit [273] for the monitoring of body temperature, finger touch, and bending motion. The proposed hydrogel exhibited elastic activity, with a volume phase transition temperature around 30 °C, and is ionically conductive in the presence of salt solutes. The decrease in the hydrogel's viscosity upon heating allowed its 3D printing using an ink-extrusion system. A skin-like capacitive sensor was constructed by surrounding a dielectric polyethylene layer with two grid-structured hydrogel films with sub-millimetre resolution. These films exhibited an enhanced capacitive response with increasing temperature compared to the bulk hydrogel, and this response depended on the film area. The fabricated sensor offered wearability and was transparent when deposited onto human skin. The capacitive response was sensitive to temperature and compressive pressure changes, with reversible behaviour. When the sensor was subjected to changes in both temperature and pressure, it did not show a linear response towards these parameters over a wide range, which, according to the authors, could be improved by tuning the structure of the sensor.
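The readout described above ultimately rests on the parallel-plate relation C = ε0·εr·A/d, which is why the response scales with film area. The sketch below illustrates that scaling; all dimensions and the relative permittivity are hypothetical illustration values, not parameters from the study.

```python
# Hedged sketch of the parallel-plate relation C = eps0 * eps_r * A / d that
# underlies the capacitive readout described above. All numbers here (film
# areas, dielectric thickness, relative permittivity) are hypothetical.

EPS0 = 8.854e-12  # F/m, vacuum permittivity

def capacitance(area_m2: float, thickness_m: float, eps_r: float) -> float:
    """Ideal parallel-plate capacitance in farads."""
    return EPS0 * eps_r * area_m2 / thickness_m

eps_r = 2.3        # assumed relative permittivity (typical for polyethylene)
thickness = 50e-6  # assumed 50 um dielectric layer

for side_mm in (5, 10, 20):  # hypothetical square hydrogel electrodes
    area = (side_mm * 1e-3) ** 2
    c_pf = capacitance(area, thickness, eps_r) * 1e12
    print(f"{side_mm:>2} mm electrode -> {c_pf:6.1f} pF")

# Consistent with the text: the response grows with film area; the temperature
# sensitivity enters through the hydrogel films themselves, not this geometry.
```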
Conclusions
The increasing demand for tissue engineering scaffolds cannot be met by traditional technologies such as natural scaffolds or tissue donors. In this sense, a combination of materials enhances properties such as biocompatibility, biodegradability, tensile strength, and design flexibility for additional cell seeding. In this regard, Liang et al. recently reported developments in advanced, functional hydrogel dressings with outstanding properties, such as antioxidant, anti-inflammatory, antimicrobial, therapeutic-delivery, self-healing, stimuli-responsive, conductive, and wound-monitoring properties [274]. In parallel, Guo et al. highlighted the advantages and limitations of haemostatic materials that aid wound healing [275]. Haemostasis is an important step in the wound-healing process, and great advances involving natural and synthetic polymers, silicon-based materials, and metal-containing materials in the form of particles, fibres, sponges, and hydrogels have been reported. Even so, the growing global demand requires a breakthrough in scaffold fabrication techniques. In this vein, 3D printing technology is a promising tool due to its versatility and capacity to offer different synthesis strategies with a wide variety of materials and their combinations. It is important to combine traditional knowledge with the current new technologies to give rise to multifunctional developments.
3D bioprinting technologies have enormous potential in tissue engineering, regenerative medicine, and drug development. The last few years have seen how 3D bioprinting technologies have evolved and become more sophisticated to fabricate specific human organs and tissues such as skin. However, as the goals for printing more complex tissues progress, new challenges arise, including bioprinting of soft materials, printing resolution, and speed and reproducibility of the printing process to develop high-throughput 3D bioprinting. Another important field of research is the development of bioinks with suitable properties and characteristics of the desired tissue.
The lack of shelf availability and the time between scaffold bioprinting and use raise various concerns. In this sense, a recent work described the possibility of printing and, at the same time, freezing the biomaterials. This cryobioprinting method, developed by Zhang and colleagues, allows the direct fabrication and in situ freezing of tissue constructs, maintaining the functionality of the cells and making the constructs shelf-available [276].
The speed of 3D printing is highly influenced by the complexity of the structure and the number of required voxels, since most 3D printers produce materials point-by-point or layer-by-layer. To overcome this issue, Yang et al. employed wavelength-sensitive photoresins, which can be cured simultaneously by employing visible and UV sources in a tomographic volumetric printing process, to offer fast 3D printing [277].
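A rough scaling calculation shows why the voxel count dominates here: serial writing scales with the number of voxels, layer-wise exposure with the number of layers, while a tomographic volumetric exposure addresses the whole volume at once. All throughput figures and dimensions below are hypothetical, for intuition only.

```python
# Hedged scaling sketch: why tomographic volumetric printing can be fast.
# Construct size, voxel pitch, and all throughput rates are hypothetical.

voxel_pitch_um = 50
side_mm = 10                                   # hypothetical 10 mm cube
n_side = int(side_mm * 1000 / voxel_pitch_um)  # voxels per edge
n_voxels = n_side ** 3
n_layers = n_side

point_rate_vox_s = 1e4    # assumed voxels per second, serial (point-by-point)
layer_time_s = 2.0        # assumed seconds per layer (layer-by-layer exposure)
volumetric_time_s = 30.0  # assumed single whole-volume tomographic exposure

print(f"voxels: {n_voxels:,} in {n_layers} layers")
print(f"point-by-point : {n_voxels / point_rate_vox_s / 60:.0f} min")
print(f"layer-by-layer : {n_layers * layer_time_s / 60:.1f} min")
print(f"volumetric     : {volumetric_time_s:.0f} s")
```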
Finally, since the bioprinting process involves many complexities, a future goal could be the application of machine learning (ML), a collection of computational methods that build mathematical models of the real world from historical data. In this sense, ML could overcome the complexity of converting biological tissue models from tissue images into a 3D tissue model with cellular resolution and tissue properties, and the compatibility of the different materials used could be predicted [278,279]. In addition, the combination of ML with Big Data related to modern clinical images could help solve multiscale and multiparameter complexities when the number of varying parameters becomes too large during processing and post-processing. In this vein, Big Data sources for 3D bioprinting could be diagnostic images, experimental data, and the scientific literature [280]. As a concluding remark, 3D bioprinting is a promising tool in tissue engineering that could be further improved with the addition of ML.
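As a minimal illustration of the kind of ML surrogate envisioned here, the sketch below fits a least-squares model that maps ink parameters to a printability score on synthetic data; the features, data, and score are entirely hypothetical and stand in for real experimental records.

```python
# Minimal hedged sketch of an ML surrogate for bioprinting: fit a linear
# least-squares model mapping ink parameters to a printability score.
# All data here are synthetic; real work would use experimental records.
import numpy as np

rng = np.random.default_rng(0)
n = 200
viscosity = rng.uniform(0.1, 10.0, n)    # Pa.s, hypothetical ink viscosities
polymer_pct = rng.uniform(1.0, 20.0, n)  # % (w/v), hypothetical concentrations
nozzle_um = rng.uniform(100, 800, n)     # nozzle diameter in um, hypothetical

# Synthetic "ground truth" used only to generate training data for the demo.
score = (0.4 * np.log(viscosity) + 0.05 * polymer_pct
         + 0.001 * nozzle_um + rng.normal(0, 0.1, n))

# Design matrix with an intercept column, solved by ordinary least squares.
X = np.column_stack([np.log(viscosity), polymer_pct, nozzle_um, np.ones(n)])
coef, *_ = np.linalg.lstsq(X, score, rcond=None)

new_ink = np.array([np.log(3.0), 8.0, 400.0, 1.0])  # hypothetical candidate ink
print(f"predicted printability score: {new_ink @ coef:.2f}")
```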
Funding: This research received no external funding.
Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.
Acknowledgments: Pablo E. Antezana is grateful for his doctoral fellowship granted by the Universidad de Buenos Aires. The authors would like to acknowledge the Universidad de Buenos Aires and grants from the Universidad de Buenos Aires, UBACYT 20020150100056BA and PIDAE 2019 (Martín F. Desimone), which supported this work. Gorka Orive wishes to thank the Spanish Ministry of Economy, Industry, and Competitiveness (PID2019-106094RB-I00/AEI/10.13039/501100011033) and technical assistance from the ICTS NANBIOSIS (Drug Formulation Unit, U10) at the University of the Basque Country. We also appreciate the support from the Basque Country Government (Grupos Consolidados, No ref: IT907-16).
Conflicts of Interest:
The authors declare no conflict of interest.
"year": 2022,
"sha1": "c70c70f21529261d04f62f0a5a65ce14b9c6b8db",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-4923/14/2/464/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d69fb338f9433e5175a3a51f32862ca9ca6fd348",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
The Cartesian evil demon and the impossibility of the monstrous lie
Creative Commons Attribution 4.0 International. ABSTRACT: In this paper, I address the issue of whether the evil demon could have caused the idea of God. In order to determine the capabilities of the evil demon, I perform a thought experiment in which I reaffirm the conclusion that an imperfect being could never have caused an idea of perfection and infinitude, i.e., the idea of God. The article is divided into five sections and a conclusion. While the first section is introductory, the second looks at the problem of God and the certainty of knowledge. Elucidating how reality is gradual according to Descartes, in the third section I address the distinction between objective, formal, and eminent reality. In turn, in the fourth section, I argue that if the objective reality of God exists, that is, an idea of perfection, the imperfect evil demon could never have caused it. The last section examines the reverse of the fourth section's argument, viz., whether God could have caused the existence of evil and imperfection.
18-20). In contrast, and to reach an indubitable truth, Descartes provides a standard of truth that is twofold in the Meditations. In the first Meditation, he claims that all doubtful ideas are to be considered false. The rationale for this proposal is that doubtful ideas are not reliable insofar as they may deceive us, and what has deceived us once could deceive us again. The latter, which can be named Mr. Distrust's principle, implies that it is not necessary to show that all doubtful ideas are false, since proving their falsity is technically impossible. Indeed, one would need more than a lifetime, because doubtful ideas are countless. Rather, one can show that the source of doubtful ideas is unreliable. For example, perceptual beliefs are doubtful because the senses have deceived us once. According to Mr. Distrust's principle, then, all perceptual beliefs are doubtful and, accordingly, they are considered false by Descartes, at least in the first Meditation. 1 In the third Meditation, Descartes completes his standard of truth. According to the French philosopher, In this first item of knowledge there is simply a clear and distinct perception of what I am asserting; this would not be enough to make me certain of the truth of the matter if it could ever turn out that something which I perceived with such clarity and distinctness was false. So I now seem to be able to lay down as a general rule that whatever I perceive clearly and distinctly is true (Descartes, 2008, p. 24, AT VII 35, my emphasis).
Nevertheless, the main theme of the Third Meditation is not the standard of truth; rather, the main focus is upon the a posteriori proof of the existence of God. As it becomes more evident with the a priori and the a posteriori proofs of the existence of God, 2 Descartes adopts the vocabulary of scholastic philosophers such as Aquinas. Within the Summa Theologica, Aquinas distinguishes between demonstrations a priori versus those that proceed a posteriori. In particular, Aquinas asserts that, Demonstration can be made in two ways: One is through the cause, and is called "a priori," and this is to argue from what is prior absolutely. The other is through the effect, and is called a demonstration "a posteriori"; this is to argue from what is prior relatively only to us. When an effect is better known to us than its cause, from the effect we proceed to the knowledge of the cause. And from every effect the existence of its proper cause can be demonstrated, so long as its effects are better known to us; because since every effect depends upon its cause, if the effect exists, the cause must pre-exist. Hence the existence of God, in so far as it is not self-evident to us, can be demonstrated from those of His effects which are known to us (Aquinas, 1981, p. 14, I, Q. 2, Art. 3).
This passage shows that one can prove the existence of God by proceeding from the effect to the cause. But one can also prove the existence of God by proceeding from the cause to its effect. Here I will concentrate only upon the a posteriori proof (the so-called cosmological argument), because it carries key assumptions for the other sections of this paper. Even so, I briefly mention the core of the latter, a priori, proof: the notion of a clearly perceived substance requires existence; God's perfection can be clearly and distinctly perceived qua substance; therefore, God must exist. In this sense, the a priori proof in the Meditations is crucial to grasping the relation between God's perfection, qua substance, and his necessary existence. In other words, the existence of God is inseparable from his being as a substance (Descartes, 2008, p. 46, AT VII 67). Or, the existence of God is included in our concept of God (Descartes, 2007, p. 197, AT VIIIa 10).
1 Surprisingly, the reader will discover in the sixth Meditation that Descartes considers that the hyperbolic doubts of the past days are laughable, especially those regarding the senses (Descartes, 2008, p. 61, AT VII 89).
2 It is worth noting that the a priori proof is a complement of the a posteriori proof, in the sense that Descartes has to rule out that the idea of God could be fictive. For this reason, in the fifth Meditation Descartes aims at proving that the idea of God necessarily implies existence. In other words, God's perfection is not separable from his existence, because the idea of a supremely perfect being is not grasped as a fiction of the intellect, but as part of his very nature (Descartes, 2008, p. 85, AT VII 120). In fact, the idea of God can be compared to mathematical truths in that such truths and the idea of God are necessarily true and eternal. I am grateful to an anonymous reviewer for the opportunity to clarify this point.
In contrast, according to the a posteriori proof, the idea of God is an effect of a cause which is more real and powerful than the very idea: God himself. The a posteriori argument, which I take from Brecher (1976, p. 418-419), can be summarized as follows:

Assumptions:
1) Something cannot come from nothing.
2) There cannot be more reality in the effect than in the cause.
3) The objective reality of any idea will be adequate to or correspond to the formal reality of the thing of which it is the idea.

Proof:
1) I have an idea of God (an infinite, eternal, unchanging, etc., substance).
2) Something must have caused that idea (ass. 1).
3) I cannot be the cause of that idea (being finite), nor can any other finite thing (ass. 2).
4) So then some existent thing equally powerful to this idea must have been the cause of it, for otherwise we obtain an infinite regress (ass. 3).
5) But such a thing is God.
6) Therefore, God exists.

However, some commentators on the Meditations simply assume that appealing to the existence of God is alien to the so-called order of reasons provided by the French philosopher (see for further discussion Gueroult, 1984, p. 209). As I have shown elsewhere (González, 2017), it is the very order of reasons that has a central objective, namely, the refutation of the Pyrrhonic skeptic as well as the atheist. Furthermore, it is not possible to ensure the certainty of knowledge without proving the existence of God. The Cartesian rationale is simple: if the existence of God is not proved, the evil demon could deceive us about everything, including clear and distinct ideas. That is, the evil demon could deceive us not only about ambiguous ideas (such as perceptual beliefs), but also about clear and distinct mathematical truths. For example, the evil demon could have arranged my cognitive apparatus so that I always compute 2+3=6, when in fact it is 5. And, likewise, the evil demon could have arranged that I conceive triangles with more than three sides. Can the demon deceive us concerning the existence of God? (I will return to this issue in the following sections).
Even though mistakes such as 2+3=6 and the triangle with four sides are blatantly contradictory, they cannot be ruled out if the existence of God is not proved and, furthermore, if it cannot be shown that God is perfect and benevolent. Take, for example, this passage, in which Descartes suggests that, although certain truths can be clear and distinct, they can be a deceit caused by the evil demon: Yet when I turn to the things themselves which I think I perceive very clearly, I am so convinced by them that I spontaneously declare: let whoever can do so deceive me, he will never bring it about that I am nothing, so long as I continue to think I am something; or make it true at some future time that I have never existed, since it is now true that I exist; or bring it about that two and three added together are more or less than five, or anything of this kind in which I see a manifest contradiction. And since I have no cause to think that there is a deceiving God, and I do not yet even know for sure whether there is a God at all, any reason for doubt which depends simply on this supposition is a very slight, and so to speak, metaphysical one. But in order to remove even this slight reason for doubt, as soon as the opportunity arises, I must examine whether there is a God, and, if there is, whether he can be a deceiver. For if I do not know this, it seems that I can never be quite certain about anything else (Descartes, 2008, p. 25, AT VII 36). This explains why Descartes offers the a posteriori and the a priori proofs of the existence of God. For if knowledge is to be shown to be certain, God must exist. Moreover, if He does not exist, the evil demon could deceive us about everything.
This has inspired Newman (2016) to put forward the so-called non-atheistic-knowledge-thesis, which holds that certainty depends upon the existence of God. In relation to the difference between an atheist's awareness of truths and genuine knowledge of truths, Descartes claims that: The fact that an atheist can be "clearly aware [clare cognoscere] that the three angles of a triangle are equal to two right angles" is something I do not dispute. But I maintain that this awareness [cognitionem] of his is not true knowledge [scientiam], since no act of awareness [cognitio] that can be rendered doubtful seems fit to be called knowledge [scientia]. Now since we are supposing that this individual is an atheist, he cannot be certain that he is not being deceived on matters which seem to him to be very evident (as I fully explained) (Descartes, 2008, p. 101, AT VII 141).
Thus, there is a very important difference between being clearly aware of a truth, a state in which the atheist can be, and having indefeasible knowledge about truth. The next section focuses upon the objective reality of God, and his formal reality, which contains necessary elements of his perfect existence.
Objective, formal, and eminent reality
Substances are real as independent, separable things. According to Descartes, reality is not binary; that is, it is not the case that reality can simply be either attributed to or denied of something. On the contrary, there are degrees of reality. In fact, the French philosopher clarifies the way reality has various degrees as follows: VI. There are various degrees of reality or being: a substance has more reality than an accident or a mode; an infinite substance has more reality than a finite substance. Hence there is more objective reality in the idea of an infinite substance than in the idea of a finite substance (Descartes, 2008, p. 117, AT VII 165f).
He insists on the same ideas in the Third Set of Objections with Replies: […] I have also made it quite clear how reality admits of more and less. A substance is more of a thing than a mode; if there are real qualities or incomplete substances, they are things to a greater extent than modes, but to a lesser extent than complete substances; and, finally, if there is an infinite and independent substance, it is more of a thing than a finite and dependent substance. All this is completely self-evident (Descartes, 2008, p. 130, AT VII 185).
The first type of reality of a thing is objective reality. Don Quixote has objective reality, because he exists, qua substance, in our intellect. Don Quixote has no existence outside our intellect, nor independently from it. Thus, Don Quixote is something that is represented by an idea, i.e., a thought, or an immediate perception which makes me aware of the thought (Descartes, 2008, p. 113, AT VII 160-161).
Many things that belong to our minds are objective realities, that is, they have objective being in the intellect (see for further clarification Descartes, 2008, p. 74, AT VII 102). For example, the sun is an objective reality, even though it is a formal reality as well. When it comes to the sun's objective reality, what counts is whether the sun exists qua substance in our minds as an idea.
By contrast, when it comes to the formal reality of a thing, it needs to have existence independently from our intellect. Interestingly, the sun is both an objective and a formal reality, because it exists in our intellect, but can also exist independently from it.
On the other hand, truth lies in what can be perceived clearly and distinctly. In the case of the sun, we have two ideas. One of them comes from the senses, depicting a bright circle in the sky, which is hard to see directly with the naked eye. The other, in turn, is an idea that comes from astronomy, namely, from the reasons that make us believe that the sun is different from the bright circle in the sky. Which idea of the sun is truer? Obviously, as Descartes is a rationalist, he argues for the truth of the reasons of astronomy (Descartes, 2008, p. 27, AT VII 39). That is, there is a great disparity between the idea and the object. The sun is what astronomy determines it to be, despite what our senses perceive. There are, then, two ideas of the sun: one that originates in our senses, which is different from the thing itself, and one that is related to the reasons of astronomy, which pins down what the thing really is. But now that the objective reality of an entity is relatively clear, the question that arises is: what is formal reality?
Although Descartes does not refer to the term very often, he gives the following definition: "IV. Whatever exists in the objects of our ideas in a way which exactly corresponds to our perception of it […]" (Descartes, 2008, p. 114, AT VII 161). He also adds what follows about eminent reality: "Something is said to exist eminently in an object when, although it does not exactly correspond to our perception of it, its greatness is such that it can fill the role of that which does so correspond" (Descartes, 2008, p. 114, AT VII 161). God, for example, is an eminent reality.
However, an example offered by Descartes clarifies the crucial difference between objective and formal reality: Descartes' own image (Descartes, 2008, p. 201-202, AT VII 289). Descartes' image can be looked at either in a mirror or in a painting; either way, the image is fundamentally linked to causes. More precisely, in the case of the mirror, Descartes' semblance is the cause, while in the case of the painting, the painter is the cause of the image on the canvas. Now, if Descartes' image is transmitted to other people, to their eyes and intellects, Descartes is the primary cause of the idea formed in the minds of those people, even if those minds reduce or amplify the image. What applies to Descartes can be applied to any other external object. But what reality can be attributed to Descartes' image?
In view of this example, it is clear that the formal reality of Descartes is a kind of substance that flows from him to other people's minds. In particular, Descartes' substance serves to fashion an idea of him in the intellect of other people. By contrast, the objective reality is "nothing but the representation or likeness of me which the idea carries, or at any rate the pattern according to which the parts of the idea are fitted together so as to represent me" (Descartes, 2008, p. 202, AT VII 290). This idea does not seem to be anything real, because it is a relation of various parts, i.e., between Descartes' parts and himself; that is, objective reality is a mode of the idea's formal reality, "in virtue of which it has taken on this particular form" (Descartes, 2008, p. 202, AT VII 290). Note that formal reality is the cause of the idea of Descartes' body (which has various parts). Since this example can be applied to any other external object, formal reality, which is a whole substance, causes the idea, viz., the objective reality of the image. Now, recall that, according to the self-evident second assumption of the a posteriori argument, there cannot be more reality in the effect than in the cause. More precisely, as something cannot come from nothing, the effect cannot have more reality than the cause. When applied to the difference between objective and formal reality, it is clear, then, that the formal reality, Descartes himself, is the cause of the idea one only has in the intellect. But it cannot be that the objective reality causes the existence of a formal reality, since the former is a mode of the latter and, as analyzed above, modes or accidents have less reality than substances, i.e., Descartes' idea in the intellect has less reality than Descartes himself. In the case of the objective reality of God, there must be a cause of it, which has more reality than the idea in Descartes' mind. The cause is the formal reality of God, which causes the objective reality of the idea of God; thus, God exists. In this sense, the attributes of God, such as his omniscience, omnipotence, infinitude, eternity, immutability, and perfection, cannot have been caused by Descartes. Hence, the French philosopher concludes that he is not alone in this world (Descartes, 2008, p. 29, AT VII 42).
Briefly put, the idea of God is an objective reality, an idea that cannot have been caused by Descartes. Why? Because this very idea of God cannot have come from nothing, and the cause must have had more reality than the effect. Thus, the very idea of God must have been caused by the supreme God, who is the formal reality that contains all the elements of anything whatever. For this reason, God not only exists; He also creates and maintains everything in the world.
In the next section, I offer a thought experiment according to which the existence of God cannot be a lie of the evil demon. That is, even if God did not exist, it would still not have been possible for the evil demon to lie about the existence of God.
How evil can the Cartesian evil demon be?
In 1605, Miguel de Cervantes y Saavedra published The Ingenious Gentleman Don Quixote of La Mancha. Briefly, the plot revolves around a nobleman from La Mancha, Don Alonso Quijano, who goes insane after reading too many chivalric romances and forgoing sleep to do so. According to the plot, Don Alonso Quijano decides to become a knight-errant in order to revive chivalry, as Don Quixote de la Mancha. Sancho Panza, a simple farmer who is his squire, employs an earthy wit in dealing with Don Quixote's monologues on knighthood, which were totally old-fashioned at that time. Although Don Quixote believes that he is in a knightly story, he and Sancho embark on several adventures. One of them, perhaps the most well-known, is the one about the windmills that Don Quixote mistook for giants. This is part of the dialogue between him and Sancho Panza in the adventure of the windmills: -'Fortune disposes our affairs better than we ourselves could have desired; look yonder, friend Sancho Panza, where you may discover somewhat more than thirty monstrous giants, with whom I intend to fight, and take away all their lives: with whose spoils we will begin to enrich ourselves; for it is lawful war, and doing God good service to take away so wicked a generation from off the face of the earth.' -'What giants?' said Sancho Panza. -'Those you see yonder,' answered his master, 'with those long arms; for some of them are wont to have them almost of the length of two leagues.' -'Consider, Sir,' answered Sancho, 'that those which appear yonder, are not giants, but windmills; and what seem to be arms are the sails, which, whirled about by the wind, make the millstone go.' (Cervantes, 2008, p. 59).
The adventure, which is hilarious, depicts Don Quixote's crash with the sails of a windmill. Owing to the crash, Don Quixote rolls over the plain. After being told off by Sancho, he retorts that an evil sage named Freston metamorphosed the giants into windmills in order to deprive him of the glory of vanquishing the giants. The sage is regarded as mighty by Don Quixote, as he can supposedly deceive him and Sancho in many different senses. Did Descartes read Don Quixote and the story of the evil sage who supposedly enchants Don Alonso Quijano from time to time? Does the sage, Freston, have powers similar to those of the Cartesian evil demon? Although it is likely that Descartes knew Cervantes' novel (cf. Ihrie, 1982; Cascardi, 1984; and, especially, Wagschal, 2012), it seems that Freston and the Cartesian evil demon have different powers. While the former intervenes in reality mostly with regard to time and space (Allen, 1969), changing certain scenarios, the latter exercises his power in relation to Descartes' senses, memory and, especially, beliefs (Nadler, 1997). For this reason, I regard the Cartesian evil demon as a liar: he distorts reality as though Descartes were in a dream. All beliefs are to be considered unreliable due to the evil demon's powers.
Descartes' evil demon plays the main role in a thought experiment aimed at finding out whether indubitable truth exists. In this thought experiment, Descartes supposes that no God exists, but rather an evil demon who deceives us about everything. As such, the evil demon is a liar, a deceiver of supreme power, who deliberately deceives us (Descartes, 2008, p. 17, AT VII 25). But, according to the French philosopher, how can the evil demon deceive us? The evil demon can cause us to believe that 2+3 is more or less than 5, that triangles have more or less than three sides, that the sky is red when in fact it is blue, that the trees are blue when in fact they are green and brown, and so on. And, as examined above, even though there is only a slight reason to assume that the evil demon exists and deceives us, nothing is certain until it is shown that the deceiver cannot exist. Consequently, in this thought experiment Descartes invites the reader to suppose that there is an evil demon who is a deceiver and a liar.
As Descartes suggests to the reader, we should be afraid of the evil demon for this reason: "Will you guarantee that I need have no fear or apprehension or worry about the evil demon? Even if you give me every possible reassurance I am still exceedingly afraid of coming down here" (Descartes, 2008, p. 368, AT VII 539). Then, we should be afraid of a demon who can deceive us about everything. For example, he could deceive us about our senses, about our memory, about our body, and even about mathematical and necessary truths. The evil demon is, indeed, quite dangerous from an epistemological point of view: he seems to have enough power to undermine any evidence for any belief, so no certainty is guaranteed.
One may change Descartes' thought experiment slightly, though, to make the evil demon even more cunning and powerful. In fact, one may suppose that no God exists, and that the evil demon has a supreme power, one that is enough to make us believe a monstrous lie; that is, the evil demon may cause us to believe that God does exist when in fact He does not. Why would the evil demon want to cause us to believe that God exists when in fact He does not? The demon could become an infamous big liar in order to be as evil as he could be. By causing us to believe that God exists when in fact He does not, we would be caused to believe in heaven, when there is no heaven, and in benevolence and the good, when these are mere illusions. By doing so, the evil demon would accomplish the most evil of tasks, namely, he would tell a monstrous lie: he would laugh at us by making people believe that God exists when He does not. Could the evil demon accomplish such a feat? It seems that, in so far as he is cunning and powerful, he could deceive us, even regarding the existence of God.
Perhaps a key to determining whether the feat can be accomplished by the evil demon lies in a careful examination of the third section. As analyzed, the idea of God in our intellect, the objective reality, needs to be caused by a cause that has more reality than its effect. Since the idea of God entails infinitude and perfection (among other attributes), and we human beings are finite and imperfect, the idea of God cannot have been caused by the human being. Nor can it have been caused by the evil demon. In fact, the cause of the idea must be more real than the idea itself. Therefore, only God can be the cause, the formal reality, of the idea that we have about the existence of God, the objective reality. In other words, only God, the formal reality, may have caused the idea of God, the objective reality.
Surely, a cause with less power than God cannot have produced the idea of God. But the evil demon, although powerful and cunning, cannot be infinite and perfect. Indeed, the evil demon is mighty and powerful, but he is imperfect. How do we know this? If the demon were perfect, he would not be a liar. Indeed, lies and errors are imperfections, according to Descartes: 2+3=6 is an imperfection, like a lie, for example (Descartes, 2008, p. 258, AT VII 376). By contrast, mathematical truths and eternal truths are the creation of a perfect and supreme being. Against this background, it turns out that the evil demon cannot have us wrongly believe that God exists, when in fact He does not. In order to be the ultimate liar, the evil demon would need to be perfect, but since he is not perfect, as a liar, he cannot have caused us to believe that God exists. Therefore, there is only one lie that the demon cannot tell, viz., that God exists, when He does not. In view of such imperfection, the following question arises: Can God have created evil and imperfection, if He is supreme and perfect? The last section will deal precisely with this problem, which is the mirror argument of the fourth section.
The other side of the coin: has God caused the existence of evil and imperfection?
According to a self-evident moral principle, which is like the principle of non-contradiction, good is to be done and evil must be avoided. However, "even a perfectly virtuous person is afflicted by a proneness to evil, for which the medicine of suffering is still necessary and important" (Kretzmann; Stump, 1993, p. 263). Despite this view, the creation of evil has always been a puzzle, especially as to whether a perfect being is the primary cause of everything in the universe. It seems difficult to fathom and grasp how a perfect being could have created evil. Furthermore, it seems difficult to explain how sin is a possibility and, even worse, how capital sins such as pride, envy, gluttony, greed, lust, sloth, and wrath exist. As I will examine in this section, it is hard to elucidate how God, as the primary cause of everything, could be the creator of evil. The analysis carried out here will focus upon the mirror argument of the fourth section. Recall that there it was examined whether the evil demon could have caused the idea of God. Here I will explore whether God can be the cause of evil, given the conceptual apparatus of the third Meditation. Traditionally, the notion of evil as a defect comes from various sources, for example, from Neoplatonists such as Plotinus (2018, p. 111-112, I, 8, 3) and Stoics such as Epictetus (1984, p. 111). All these views aim at debunking the myth that evil is as real as the good. However, the most well-known defense of this thesis is found in Aquinas, who claims that a good thing is desirable, and desirability is a consequence of perfection, because things always desire perfection. In addition, the argument for the existence of evil is as follows: every created thing is an entity, and since evil is something created, evil is an entity (Aquinas, 2003, p. 55). How does Aquinas respond to this argument? Briefly put, evil is a defect, like the limp of a person with a defective leg (being healthy is good; having a defective leg is unhealthy). Defects are not entities. Therefore, evil is not an entity.
For the present discussion, I will mainly focus upon the reverse argument of what I analyzed in the fourth section: whether the evil demon could have caused the idea of God in our minds. Here I analyze what Descartes might have held in relation to God and evil. Apparently, God created evil, because He has created everything. This is especially important: although God is perfection, he created the human being with a will that is prone to evil, sin and imperfection. But there is a Cartesian solution to the puzzle. This passage summarizes how the problem could be tackled by the French philosopher: The more skilled the craftsman the more perfect the work produced by him; if this is so, how can anything produced by the supreme creator of all things not be complete and perfect in all respects? There is, moreover, no doubt that God could have given me a nature such that I was never mistaken; again, there is no doubt that he always wills what is best. Is it then better that I should make mistakes than that I should not do so? As I reflect on these matters more attentively, it occurs to me first of all that it is no cause for surprise if I do not understand the reasons for some of God's actions; and there is no call to doubt his existence if I happen to find that there are other instances where I do not grasp why or how certain things were made by him. For since I now know that my own nature is very weak and limited, whereas the nature of God is immense, incomprehensible and infinite, I also know without more ado that he is capable of countless things whose causes are beyond my knowledge […] It also occurs to me that whenever we are inquiring whether the works of God are perfect, we ought to look at the whole universe, not just at one created thing on its own. For what would perhaps rightly appear very imperfect if it existed on its own is quite perfect when its function as a part of the universe is considered. (Descartes, 2008, p. 38-39, AT VII 55-56).
Thus, evil is a theological problem more than a philosophical one. Still, Descartes briefly touches on the subject in a letter to Mesland in May, 1644:
I do not know that I laid it down that God always does what he knows to be the most perfect, and it does not seem to me that a finite mind can judge of that. But I tried to solve the difficulty in question, about the cause of error, on the assumption that God had made the world most perfect, since if one makes the opposite assumption, the difficulty disappears altogether […] The moral error which occurs when we believe something false with good reason -for instance because someone of authority has told us -involves no privation provided it is affirmed only as a rule for practical action, in cases where there is no moral possibility of knowing better. (Descartes, 1997, p. 232-233, AT IV 113, 115).
Descartes, then, seems to prefer to suspend judgement to avoid the puzzle of how evil exists. His attitude towards theological problems can be summarized in the view according to which God's perfection and infinitude cannot be fathomed and grasped by the human mind, because several attributes of God cannot be fully understood by imperfect beings. He declares in several passages, before and after the Meditations, that the human mind can know what God is, but cannot fathom and conceive his perfection and infinitude. Take, for example, this passage from a letter to Mersenne in May, 1630: I do not conceive them [the eternal truths] as emanating from God like rays from the sun; but I know that God is the author of everything and that these truths are something and consequently that he is the author. I say that I know this, not that I conceive or grasp it; because it is possible to know that God is infinite and all powerful although our soul, being finite, cannot grasp or conceive him. In the same way we can touch a mountain with our hands, but we cannot put our arms around it as we could put them around a tree or something else not too large for them (Descartes, 1997, p. 25, AT I 151-152).
Thus, Descartes proceeds to suspend judgement on certain theological problems, especially those that involve comparing God's perfection with our own. If asked whether God is the cause of evil, Descartes would completely avoid the problem: it would remain problematic how evil and sin exist. Oddly enough, Descartes was influenced by the Stoics, the Epicureans, and Fideism concerning the role of philosophy as the pursuit of right judgement (Rutherford, 2016, p. 2), which seems for Descartes a means to achieve happiness (Cottingham, 1998; Gueroult, 1984; Pereboom, 1994). This may explain why grasping God's perfection is a problem with which we should not deal. Evil, then, seems fundamentally related to our imperfection and, especially, to our inability to grasp perfection. However, this should not discourage the human being from seeking happiness. Despite appearances, and in opposition to some Christian thinkers, Descartes' ethics takes philosophy's practical goal to be the realization of a happy life: one in which a human being can hope to pursue happiness in this life. This can be achieved by loving life without fearing death (Descartes, 1997, p. 131, AT IV 480), and by seeking virtue as a perfected power of judgement. Indeed, Descartes thinks that virtue is sufficient for happiness in the form of tranquility. Although I cannot further analyze Descartes' ethics in this essay, one thing can be said, namely, that the way to achieve happiness, by practicing virtue, requires the recognition that God is the creator of all things, and that the soul is immortal. Without these two beliefs, the perfect contentment of mind (see, for example, Descartes, 1997, p. 256-258, AT IV 264) and the satisfaction that accompanies virtue cannot exist. The evil demon, then, represents a fundamental peril for the exercise of free will and the perfection of the soul. For this imperfect being endangers a crucial condition for happiness and tranquility, namely, the way designed by God to master ourselves through a rational free will.
Conclusion
In this paper, I have offered an argument about the Cartesian evil demon, viz., the impossibility of his lying about the existence of God. In order to provide a theoretical framework in which the argument makes sense, I analyzed diverse topics related to Cartesian philosophy.
Firstly, I looked at how Descartes introduces the problem of the existence of God in the Meditations on First Philosophy. In particular, I reviewed the way in which God guarantees the certainty of knowledge: if He does not exist, and there is an evil demon who deceives us about everything, no knowledge claim can be certain. Thus, according to Descartes, the existence of God needs to be proved, since no certainty is guaranteed otherwise.
Secondly, I mentioned that Descartes offers two proofs of the existence of God: the a priori proof and the a posteriori proof (or the cosmological argument). In view of the argument discussed in the fourth section, about the impossibility of the evil demon's succeeding in deceiving us about the existence of God, I elucidated crucial notions and concepts such as objective, formal, and eminent reality. All of them are fundamental to grasping how it is impossible that an imperfect and finite being, the human being for example, could have caused the idea of God.
Thirdly, I dealt with the issue of how evil the evil demon can be. More precisely, I offered an argument according to which, since the evil demon is an imperfect being, he cannot be the cause, the formal reality, of the idea of God (which is an objective reality). In other words, even if the evil demon intended to deceive the human being by making them believe that God exists, when in fact He does not, the evil demon would completely fail. The rationale of the argument is that the evil demon, as a liar, cannot be a perfect being, because lies and errors are imperfections.
Finally, I considered what would occur if the argument of the fourth section were reversed. The mirror argument makes us question what would occur if God were the creator, the very cause, of evil. In particular, I gathered textual evidence to show that Descartes tends to avoid theological problems by assuming that we humans, who are finite and imperfect, cannot fully grasp what perfection really implies. That is, even though we know about the existence of God, attempting to fathom what His perfection implies is like trying to embrace a mountain with our arms. A caveat is necessary here, however: Descartes thinks that errors and lies are imperfections or defects; thus, his view seems akin to Plotinus's and Aquinas's views on evil. However, I insist that the French philosopher claims that no complete understanding of the divine can be attained.
The last analysis concerned the evil demon and Cartesian ethics. Since Descartes' ethics holds that the goal of wisdom is happiness, and virtue requires knowledge and a rational free will, the evil demon represents a great hindrance. If God did not exist and the evil demon were a liar, there would be no room for certain knowledge or a rational free will. In other words, there would be no room for a free rational will if we were prey to the evil demon's lies, as truth would be concealed and misrepresented by him. Therefore, truth and knowledge are central for Descartes, which explains why the proofs of the existence of God are so crucial in the Meditations and the Principles of Philosophy. For, unquestionably, God's benevolence allows us to exercise our free rational will, the foundation stone of a virtuous, tranquil, and content life.
"year": 2021,
"sha1": "5c228b930b98ae1941c908f8be1fc686dcdecacd",
"oa_license": "CCBY",
"oa_url": "http://revistas.unisinos.br/index.php/filosofia/article/download/20718/60748712",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4f8cc8974a244558a075bbb03ab820fa4bbf8f08",
"s2fieldsofstudy": [
"Philosophy"
],
"extfieldsofstudy": []
} |
158378244 | pes2o/s2orc | v3-fos-license | Characterization of Southern Illinois Water Treatment Residues for Sustainable Applications
Abstract: Although they are abundantly available, the specific applicability of water treatment residues (WTRs) is dictated largely by the favorability of their physicochemical properties and mineralogical composition. We have suggested that WTRs have a high potential for remediation applications. In addition, the relevant properties that define the beneficial reuse of WTRs may be widely variable due to the influence of the dose, the type of coagulant/softening agent, and the quality of the source water. This study investigated the physical, chemical, agronomic, and mineralogical characteristics of three different types of WTRs that were collected from treatment plants in the Midwestern U.S., in order to compare and assess their suitability for remediating impacted ecosystems, such as abandoned mine lands (AML). An analysis of the results showed that the differences in the properties of the WTR samples were significant. The total metal concentrations determined by inductively coupled plasma mass spectrometry (ICP-MS) revealed the abundance of Fe, Al, Mn, Cu, and other co-occurring metals. The leachabilities of the metal(loid)s regulated under the Resource Conservation and Recovery Act (the RCRA 8 metals) were below their respective US Environmental Protection Agency (EPA) allowable limits of 5.0, 100, 1.0, 5.0, 5.0, 0.2, 1.0, and 5.0 mg/kg, indicating that the WTRs were non-hazardous to the environment. Comparatively, the Al-WTR showed a significant release of arsenic (As), possibly from livestock waste and pesticide application from farms in the catchment area of the raw water source. The WTRs were alkaline (potential of hydrogen [pH] 7.00-9.10), which suggested a high acidity-neutralizing potential. The Ca:Mg ratio was between 1:7 and 1:1.5 (meq basis), which contributed to a cation exchange capacity (CEC) range of 4.6-16.2 meq/100 g. The WTRs also showed an adequate capability to supply relevant plant nutrients, such as Zn, Ca, Mg, S, Cu, and Fe, although readily available concentrations of NO3-N, P, and K were generally low. Thus, the alkalinity, significant CEC, low metal concentration, and the presence of X-ray diffraction amorphous phases and calcites suggested that WTRs could be safely applied as low-cost, sustainable alternatives for soil improvement and for remediating contaminants such as metal(loid)s in AML.
Introduction
The treatment processes that are utilized in the production of potable water include coagulation-filtration, granular activated carbon, precipitative softening, ion exchange, and membrane separation. Conventional coagulation-filtration drinking water treatment processes use inorganic chemicals (coagulants), such as alum (KAl(SO4)2·12H2O) or ferric salts (FeCl3 or Fe2(SO4)3), to remove the suspended solid matter and natural organic matter in water by charge neutralization, sweep flocculation, and adsorption onto amorphous metal hydroxide precipitates [1]. Furthermore, precipitative softening plants primarily reduce hardness by raising the potential of hydrogen (pH) in order to enhance the precipitation of calcium carbonate (CaCO3) and magnesium hydroxide (Mg(OH)2), using lime or soda ash. These treatment processes result in the production of byproducts known as water treatment residuals (WTRs) [2].
Mechanically dewatered residues are mostly processed for useful applications or disposed of by landfilling. In addition to landfilling as the major disposal method, residues may be discharged into rivers and oceans, injected into deep wells, or diverted to on-site waste impoundments. Major setbacks to landfill disposal include tightened environmental regulations, declining public acceptance of landfill solutions, increased disposal costs, and decreased landfill capacity. Direct discharge into rivers and oceans is highly discouraged because of the potential impact on the aquatic environment. Deep-well injection, on the other hand, is comparatively cost-prohibitive and can also impact underground sources of drinking water. The current high water quality requirements and high pollutant levels in source waters are responsible for the high-dose application of coagulants/softeners. As a result, over 2 million metric tons of treatment residuals are produced daily from drinking water treatment plants in the U.S. [3]. The local availability of WTRs is an advantage for sustainable applications [4].
The major components of WTRs are soil separates, organic materials, and hydrous metal hydroxides, the quantities of which depend on the type and dose of coagulant and/or liming agent used and on source water quality. Hence, WTRs exhibit major variabilities in physical, chemical, biological, and mineralogical composition [5][6][7][8][9][10][11]. Knowledge of the WTR material properties is, therefore, essential for decision making about reuse. Over the last years, relentless efforts have been made toward the beneficial application of regenerated WTRs in order to help reduce the disposal volume. The current beneficial uses of WTRs typically include land application for acidity correction, for agricultural or horticultural benefits, and, to a lesser extent, other engineering applications [12][13][14][15]. Recent studies have focused on WTRs for the removal of organic and inorganic contaminants from soil and aqueous systems. They are applied in aqueous solutions as sorbents of perchlorate, phosphate, dichromate, dyes, mercury, hydrogen sulfides, and heavy metal and metalloid ions [5,[16][17][18][19][20][21][22][23][24][25][26]. The effect of WTR application on the attenuation and mobility of phosphorus, heavy metals, and the metalloid arsenic in agricultural soils has also been studied extensively [3,6,7,[27][28][29][30]. However, the application of WTRs for remediating mining-impacted ecosystems has not received much attention. The current technologies used for soil and water remediation (such as vitrification, soil washing, total encapsulation, grouting, air stripping, precipitation, thermal desorption, reverse osmosis, and ion exchange) are costly, labor-intensive, and require large amounts of energy resources.
Hence, this study considers the sustainable recycling and reuse of WTRs as a cost-effective and efficient alternative material for the remediation of soil and water at identified abandoned mine sites. This would also provide a safe and beneficial disposal route for what is otherwise a solid waste regulated under the framework of the Resource Conservation and Recovery Act (RCRA) public law. This study has suggested that WTRs have a high potential for remediation application due to particular inherent properties and that, although WTRs may share common properties, a composition analysis of samples collected from different treatment facilities would reveal significantly different levels of the relevant parameters that define beneficial reuse. WTR samples were, therefore, collected from three drinking water treatment plants in the Southern Illinois region in order to explore their potential for remediation application. Currently, to the best of our knowledge, no such assessments have been conducted on WTRs in the region, or on samples from the facilities considered. The study represents a useful addition to the literature by providing important information on regenerative alternatives to primary resources used in soil and water remediation projects.
WTR Sample Collection and Preparation
WTR samples were collected across the length of the dewatering ponds from three community drinking water treatment facilities in Southern Illinois in the United States, including the City of Carbondale Water Treatment Plant in Jackson County, IL; Rend Lake Conservancy District Water Treatment Plant in Franklin County, IL; and Saline Valley Conservancy District Water Treatment Plant in Saline County, IL. The City of Carbondale Water Treatment Plant utilized an alum coagulant to remove part of the total suspended solids and biochemical oxygen demand from the raw water. The Rend Lake Conservancy District Water Treatment Plant utilized a FeCl3 coagulant with CaO for precipitation and softening purposes. The Saline Valley Conservancy District Water Treatment Plant treated groundwater and produced a softening residue with high concentrations of CaO. Figure 1 shows the county map of the State of Illinois with sample sources identified by their counties.
For identification purposes, the alum, ferric salt/lime, and lime residues were represented by their dominant constituent element as Al-WTR, Fe-WTR, and Ca-WTR, respectively. The sampling and characterization were performed during the summer of 2015. The collected individual samples were air-dried at a room temperature of ~25 °C, on paper laid on a clean concrete floor, prior to the subsequent chemical analysis. This was because oven drying of the samples at temperatures higher than 49 °C could possibly affect the results of the extractable cation analysis [31]. The air-dried WTR samples were crushed, homogeneously mixed, and sieved through a 2 mm mesh (as shown in Figure 2) for storage and subsequent analysis. The samples were stored at an ambient temperature, and the pH and electrical conductivity (EC) analyses were conducted not more than 4 weeks after sampling. The nutrient and other elemental analyses were conducted within 8 weeks of sampling.
Analytical Methods
The dry WTR samples were analyzed for total ionic strength or electrical conductivity (EC), and pH using calibrated probes connected to a PASCO advanced water quality sensor (PS-2230). The pH was measured in deionized (D.I.) water using a two-point calibrated Pasco pH probe (PS-2102) with an accuracy of ±0.1 and resolution of 0.01. A WTR:D.I. water ratio of 1:2.5 (w/v) was used as an extract. The mixture was stirred with a glass rod and the WTR was allowed to settle during the 45 min contact time before measuring from the supernatant. The EC of the WTR samples was measured with a Pasco conductivity probe (10X PS-2571), in a 1:5 (w/v) WTR:D.I. water solution at 25 °C, in order to determine the concentrations of salts [32].
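As an illustrative aside (not part of the original study), the extract preparation above reduces to a simple proportion. The following Python sketch, with a hypothetical function name and sample masses, computes the deionized-water volume for a given sample mass at the two w/v ratios used:

```python
def extract_water_volume_ml(sample_mass_g: float, ratio: float) -> float:
    """Volume of D.I. water (mL) for a w/v extract; e.g., ratio=2.5 gives 1:2.5.

    Assumes ~1 g/mL for D.I. water, so w/v ratios map grams directly to mL.
    """
    return sample_mass_g * ratio

# Hypothetical example: 10 g of air-dried WTR.
print(extract_water_volume_ml(10.0, 2.5))  # 25.0 mL for the 1:2.5 pH extract
print(extract_water_volume_ml(10.0, 5.0))  # 50.0 mL for the 1:5 EC extract
```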
The specific gravity of the WTR particles was determined with a standard test method for solids, using a water pycnometer (ASTM D 854-00), where the mass of a solid sample was determined by weighing, and the volume was determined by calculation from the mass and density of the water displaced by the sample. A high-resolution particle size analysis of the fine-grain WTR samples was performed by laser diffractometry using the patented Microtrac® Tri-Laser Technology Bluewave particle size analyzer. The particle size classification was based on the Wentworth scale [33]. Extractable cations in the WTR samples were measured by the neutral ammonium acetate (NH4OAc) extraction method in order to assess the amount of plant-available potassium, magnesium, calcium, and sodium in each sample. Although Na availability is generally not essential for plant growth [34], the test was required in order to diagnose sodic and sodic-saline conditions of the samples. The method involved shaking the sample with a solution of 1 M NH4OAc, which was adjusted to a neutral pH, after which the filtrate was analyzed for K+, Na+, Ca2+, and Mg2+ by inductively coupled argon plasma (ICAP) detection. The cation exchange capacity (CEC), which measures the quantity of readily exchangeable cations that neutralize negative charges in the soil, was then estimated through the summation of the extractable cations. It must, however, be mentioned that errors are most likely to occur in the "sum of bases" method, which is used for calculating the CEC of alkaline materials like WTRs, which contain significant amounts of free CaCO3 and high concentrations of soluble Ca. The release of calcium carbonate from the WTRs into the ammonium acetate solution limited the saturation of exchange sites by the ammonium ion, therefore resulting in an artificially low CEC. Hence, the reported values were treated as indexes of the relative minimum CEC rather than as quantitative measures of CEC in the WTRs.
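The two computations just described reduce to simple formulas: ASTM D854 infers particle volume from the mass of displaced water, and the "sum of bases" index converts extractable cation concentrations to milliequivalents. The sketch below is our illustration (the values and function names are assumptions, not data from this study), using the standard equivalent weights of the base cations:

```python
# Equivalent weights (mg per meq) of the exchangeable base cations.
EQ_WEIGHT = {"Ca": 20.04, "Mg": 12.15, "K": 39.10, "Na": 22.99}

def specific_gravity(m_solids_g: float, m_pyc_water_g: float,
                     m_pyc_water_solids_g: float) -> float:
    """ASTM D854: Gs = Ms / (mass of water displaced by the solids)."""
    displaced_water_g = m_solids_g + m_pyc_water_g - m_pyc_water_solids_g
    return m_solids_g / displaced_water_g

def cec_sum_of_bases(extractable_mg_per_kg: dict) -> float:
    """'Sum of bases' CEC index (meq/100 g) from NH4OAc-extractable cations.

    mg/kg -> meq/kg by dividing by the equivalent weight, then /10 for
    meq/100 g. As the text cautions, free CaCO3 and soluble Ca bias this
    index for calcareous WTRs, so treat the result as a relative index only.
    """
    return sum(conc / EQ_WEIGHT[ion] / 10.0
               for ion, conc in extractable_mg_per_kg.items())

# Hypothetical extract concentrations (mg/kg), for illustration only:
print(cec_sum_of_bases({"Ca": 1800, "Mg": 250, "K": 120, "Na": 60}))  # ~11.6
```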
The organic matter content (% OM) in the WTR samples was determined by the loss on ignition (LOI) method, by comparing the weight difference before and after combustion in a furnace. Representative samples finer than 2 mm were oven-dried at 105 °C for 24 h. The oven-dried samples were then weighed into crucibles and combusted for 1 h at 375 °C. Representative samples of the WTRs were analyzed for concentrations of the RCRA metals and other metals, including Cu, Mn, Ni, Zn, Co, Li, Fe, and Al, by inductively coupled plasma mass spectrometry (ICP-MS). For this purpose, ~0.5 g samples were digested in aqua regia (a mixture of HNO3 and HCl, optimally in a 1:3 molar ratio) at about 90 °C in a microprocessor-controlled digestion block for 2 h. The suite of metal analyses was performed by the Acme Labs (Bureau Veritas) commercial laboratory (http://acmelab.com). The standard deviation for the repeated analyses was consistently <8% of the mean for most of the elements, except for Hg, Ag, Ni, Li, and Al (15-28%).
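The LOI calculation above is a simple mass difference expressed as a percentage of the oven-dry mass. A minimal sketch (ours, with hypothetical weighings) follows:

```python
def loi_percent_om(mass_105c_g: float, mass_375c_g: float) -> float:
    """Organic matter (%) by loss on ignition: mass lost between the 105 °C
    oven-dry weighing and the weighing after 1 h of combustion at 375 °C,
    as a fraction of the oven-dry mass (crucible tare already subtracted)."""
    return (mass_105c_g - mass_375c_g) / mass_105c_g * 100.0

# Hypothetical weighings: 20.00 g oven-dry, 18.70 g after ignition.
print(f"{loi_percent_om(20.00, 18.70):.1f}% OM")  # -> 6.5% OM
```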
Plant-available micronutrients, including Zn, Mn, Fe, and Cu, were determined by extraction with diethylenetriaminepentaacetic acid (DTPA), a chelating compound [35], using ICAP detection. Boron was extracted using DTPA/sorbitol with ICAP detection. Percentage carbon and percentage nitrogen were determined by combustion, using a Thermo Scientific Delta V Plus isotope ratio mass spectrometer (IRMS) with a dual inlet and Conflo IV interface connected to a Costech 4010 elemental analyzer (EA) at the Southern Illinois University analytical chemistry laboratory. The water-soluble, inorganic nitrate-nitrogen (NO3-N) was determined by the reduction of nitrate in a segmented flow analysis (SFA) system. The phosphorus in the WTRs was determined by the Olsen (or bicarbonate) and the Bray extraction tests. In the Olsen technique, the dry WTR samples were extracted with a weak solution of NaHCO3 adjusted to pH 8.5, whereas the Bray extraction test used a mild acid with ammonium fluoride, followed by colorimetric detection. Technically, the Olsen and Bray approaches only extract a portion of total sample P and were therefore treated as indexes of relative WTR sample P availability rather than as quantitative measures of P content [36].
The bulk mineralogical composition of randomly oriented WTR powder samples was determined (e.g., as described by the authors of [37]) using X-ray diffraction (XRD) patterns from a Rigaku Ultima IV X-ray diffractometer with Cu-Kα radiation; intensity data were collected in the 2θ angular range of 2°-60° with a scanning step of 0.02°. The identification of mineral phases was conducted by comparing calculated d-spacing values with published crystal structure data [38].
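Matching measured peaks to published d-spacings rests on Bragg's law, λ = 2d sin θ. The short sketch below (ours; the wavelength is the standard Cu-Kα1 value, and the example peak position is chosen for illustration) converts a 2θ peak into a d-spacing:

```python
import math

CU_KALPHA_ANGSTROM = 1.5406  # standard Cu-Kα1 wavelength

def d_spacing(two_theta_deg: float,
              wavelength: float = CU_KALPHA_ANGSTROM) -> float:
    """Bragg's law with n = 1: d = wavelength / (2 sin(theta)), theta = 2θ/2."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta_rad))

# A peak near 2θ = 29.4° gives d ≈ 3.04 Å, consistent with the (104)
# reflection of calcite, one of the phases noted in this study's abstract.
print(f"{d_spacing(29.4):.3f} Angstrom")  # -> 3.035
```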
Physicochemical Analysis
The analytical results of the selected properties of the WTR samples are shown in Table 1. The measured pH of the WTRs ranged from 7.0 to 9.1. Specifically, the Ca-WTR and Fe-WTR samples were higher than the typically reported pH ranges of WTRs, mostly between 5.10 and 8.00 [11,39,40]. The alkaline pH of the Ca-WTR (pH ~9.1) and Fe-WTR (pH ~8.4) produced a high acid-neutralizing potential. The effect of drying on the particle size characteristics of the collected Al-WTR sample, in particular, was found to be significant, as dried samples formed hard, stable, and coarse aggregates. Hence, the particle sizes of the Al-WTR were not determined, as they would not have been representative. The textural analysis (shown in Figure 3) of the Ca- and Fe-WTRs showed 76.1% and 75% silt, respectively, in the Wentworth grain size range of 0.39-63 microns. The Ca-WTR recorded 12.5% fine sand particles, more than the 5.0% recorded for the Fe-WTR. Since coagulants enhance the settling of fine particles out of solution, the residues were expected to exhibit high clay and silt fractions. However, fine particles coagulated into larger aggregates (flocs), which became stable and coarse once dried, through the influence of interparticle bonds. The particle size analysis was not performed for the Al-WTR because of the formation of a coarse texture when air-dried, although it is likely that the dried Al-WTR would disintegrate into its constituent fractions when in contact with water for a longer period. Such scenarios might lead to the clogging of soil pores and result in reduced hydraulic conductivity and water retention in soils amended with Al-WTR. According to Titshall and Hughes [11], this might increase the reactive sites for the release of potentially toxic elements from the WTR into the soil system. The EC of the WTRs (0.32-0.67 dS/m) was within the reported range of 0.22-1.10 dS/m for 17 Oklahoma WTRs [41]. The samples were also analyzed for soluble salts, a measure of the salt concentration that the WTRs could induce in a solution. Soluble salts in the WTRs were between 0.5 and 0.9 dS/m, with the Fe-WTR indicating the highest, primarily as a result of the ferric salt coagulants used in the water treatment process at the plant that produced the Fe-WTR. However, the effect of salinity can be considered negligible if the soluble salts value is less than 1.0 dS/m. It is important to ensure that the amendment of WTRs to soil does not result in soluble salts that exceed 1.0 dS/m, so as to avoid effects on salt-sensitive plants, or 2.0 dS/m, above which salt-tolerant plants would be required. The WTRs, therefore, did not pose potential problems that would compromise plant health and yield when applied to soils.
Total Metal Analysis
The quantitative determination of the major elemental composition of the WTRs is shown in Table 2. There were significant variabilities in the chemical composition of the WTRs, as was suggested. The analytical results of the metal concentrations of the three WTR samples indicated Ba and As as the dominant regulated RCRA 8 metals, whereas Hg and Ag were found in negligible amounts. Heavy metals in significant abundance in the WTRs included Fe, Mn, Al, and Cu. Comparatively, the concentrations of heavy metals in the Ca-WTR were found to be the lowest. Traces of some rare earth elements, including La, Sc, Nb, Y, and Ce, were also observed in all of the WTRs. Table 2. Total heavy and trace metal concentrations in the water treatment residue samples ± S.D. (standard deviation). RCRA: Resource Conservation and Recovery Act.
Toxicity Analysis
The total concentrations of the Resource Conservation and Recovery Act (RCRA) metals, especially As and Ba, were significant for the WTRs. However, only a fraction of the total concentration was estimated to leach from the WTRs. The Toxicity Characteristic Leaching Procedure (TCLP) estimate was used to predict whether the hazardous components of the WTR were likely to leach out of the waste under simulated landfill conditions, becoming a threat to public health or the environment. The results shown in Table 3 indicated that, although significant amounts of the RCRA 8 metals could be released from the WTRs, none was above the allowed USEPA Part 503 concentration limits. However, the release of significant amounts of As, especially from the Al-WTR, raised concerns for land application. Arsenic (As) can be found in the atmosphere, soils, rocks, and natural waters, among other sources. It is mostly introduced into the environment through natural processes, such as weathering reactions and biological activity, as well as through anthropogenic routes, including mining, combustion of fossil fuels, pesticides, herbicides, and the use of arsenic as an additive to animal feed. All drinking water treatment technologies that remove arsenic create residuals with concentrated arsenic and other co-occurring contaminants. The comparatively high concentration of As in the Al-WTR, produced at a facility that treated water from a reservoir, might have been because of point sources of As contamination within the catchment area, such as crop, livestock, and poultry farms. The observed concentration in the Fe-WTR might have come from similar sources to the Al-WTR, since that facility treated water from a lake close to many farms. Observed concentrations of As in the Ca-WTR might have also been a result of transport from a nearby abandoned mine close to the Saline Valley water treatment facility.
Analysis of WTR Nutrient Compositions
The percent organic matter measured the amount of plant and animal (organic) residue in the WTRs. Since the Ca-WTR was produced as a byproduct of groundwater treatment, it contained very low amounts of coagulated organic sediments and, hence, recorded the least organic matter content of 1.3%. The Al-WTR indicated a very high organic matter content and was less dense, showing a particle density of 1.54 g/mL compared with 2.47 g/mL for the Ca-WTR and 2.32 g/mL for the Fe-WTR. Organic matter served as a reserve for many essential nutrients, especially nitrogen, which was made available to the plant through bacterial activity. The CEC of a WTR indicated the capability of the WTR to hold cationic nutrients, such as K, Mg, Ca, and Na. The measure of CEC was dependent on the concentration of the exchangeable divalent Mg and Ca, and on the amount of clay minerals and organic matter present. Exchangeable Ca of 8.54 meq/100 g in the Ca-WTR was comparable to the concentration of 8.47 meq/100 g in the Fe-WTR; however, it was lower in the Al-WTR. Extractable K and Na were low for all of the WTRs. Exchangeable Ca and Mg dominated the CEC of the WTRs, which ranged from 4.6 to 16.2 meq/100 g WTR. The Ca:Mg ratios of the Al-, Ca-, and Fe-WTRs were 1:7, 1:2, and 1:1.5 (meq basis), respectively. Although the Ca-WTR showed very high concentrations of total Ca (Table 2), it was clear that the Ca was not in a readily available form, as suggested in Table 4. In all of the WTRs, however, extractable Ca and Mg were present in adequate plant-available concentrations. Although percentage C in both the Fe- (1.01%) and Ca-WTR (1.68%) was relatively low, it did not deviate significantly from reported values for similar samples. Basta [40] reported values ranging from 0.8% to 6.5%, and the authors of [41] reported values from 1.7% to 14.9%. A total organic carbon content of 3.0% was recorded by Elliott et al. [42] from a composition analysis of WTRs from seven Pennsylvania water treatment facilities. Percentage N was 0.14% for the Fe-WTR and did not exceed 0.03% for the Ca-WTR. Nitrate-N, which was used to indicate directly plant-available nitrogen, was low, ranging between 1.0 and 5.0 mg/kg (loading rate of 2 to 9 lbs/ac). Generally, plant-available nitrogen, phosphorus, and potassium were found to be low in the WTRs. However, when amended into nutrient-poor soils, the WTRs might still help to improve soil conditions and enhance plant growth.
Influence of WTR Application on Phosphorus
Many research findings have indicated that WTR application might present disadvantages because of the potential adsorption of plant-available P, which might result in reduced yields of maize [43], tomato [44], and more recently, wheat [45]. However, other researchers have contested earlier findings of WTR adsorption of available P in amended soils [29]. Geertsema et al. [46] found no significant differences in bioavailable and total P concentrations following 30 months of WTR application for growing pine. Mahdy et al. [47] also applied WTR to three soil types at varying field-equivalent rates. Although they found decreased P concentrations at higher WTR rates (>67 Mg/ha) in calcareous and clay soils, P concentrations otherwise continued to increase proportionally with increased application rates. Although much interest has been expressed in studying the impact of WTR application on soil P with respect to crop productivity, previous researchers have reached differing conclusions.
Mineralogical Analysis
The identification of mineral phases is shown in Figure 4. Generally, the samples showed the presence of clay and non-clay minerals, as well as a significant amount of amorphous material. The dominant minerals in the WTR samples were mostly calcite, quartz, and feldspar. Although it was not recorded in the Al-WTR, calcite was the most dominant mineral in the Fe-WTR and, obviously, the Ca-WTR, which resulted from the lime used at the respective treatment facilities. The presence of quartz in the WTR samples was possibly a result of the varying amounts of suspended sand particles removed by the coagulants. The Ca-WTR, which was collected from the facility that treated groundwater, did not show any quartz in the XRD spectra. However, the Al- and Fe-WTRs, collected from the facilities that treated surface water, showed various peaks for quartz. Aside from the quartz in the Al-WTR, its XRD pattern consisted largely of disordered materials, possibly amorphous hydrous metal oxides (Al(OH)3 from the reaction of alum in water with bicarbonate) and organic fractions. This finding was consistent with Titshall and Hughes [11], and Ippolito et al. [48]. A similar reaction was reported for iron-salt coagulants. The less-crystalline nature of the WTRs, the presence of metal (hydr)oxides, and their porous structure were major characteristics that influenced their metal ion adsorption capacities. Hence, WTRs could be presented as competitive options for remediating heavy metal contaminated aqueous solutions and soils.
DTPA Extractable Micronutrients
The use of a total element content did not give the correct indication of the toxicity of waste materials because it did not reflect the labile or available fractions. Hence, the DTPA extracted micronutrients were used to assess the ease of the release of some heavy metals and boron under acidic conditions. The DTPA concentrations, shown in Figure 5, raised some concerns with regard to the relatively high exchangeable Mn and Fe in the WTRs. Although both the Fe- and Al-WTRs showed elevated concentrations of exchangeable Mn compared with the Ca-WTR, the Mn concentration of the Al-WTR was exceptionally high. Manganese was considered an essential plant nutrient that helped in metabolic processes, such as photosynthesis, and as an enzyme antioxidant cofactor [49]. The inadequate or excess manganese that was available to plants could have affected various metabolic processes. Manganese toxicity was mainly observed through the reduction of biomass and photosynthesis. Bioavailability of Mn in soils was influenced by soil pH and redox conditions [50]. The most available form, Mn(II), became more mobile in the soil at an acidic pH, whilst enhanced Mn adsorption onto soil particles at a higher soil pH decreased its availability to plants [51]. In that regard, the high pH of the WTRs, particularly the Ca- and Fe-WTRs, when amended with soil at optimum amounts, might have helped to immobilize excess Mn, whilst creating a near-neutral soil pH for adequate plant growth. Achieving this would have maintained the favorability of the Fe- and Ca-WTRs for safe land applications. Under similar conditions, the direct use of the Al-WTR would present some challenges, primarily because of its comparatively low acid neutralizing potential.
The concentration of the DTPA exchangeable Fe was studied in the WTRs. Again, the Fe-WTR released considerable amounts of exchangeable Fe, followed by the Ca-WTR and the Al-WTR, which released the least amount under the test conditions. This suggested that an appreciable amount of the total Fe concentration in the Ca- and Fe-WTRs could have been made available to plants under acidic conditions. Excess plant-available Fe might have been injurious to plant health and growth (Peña-Olmos et al., 2014), although the relationship between the severity of Fe toxicity and yield had not been established. Identifiable symptoms linked to Fe(II) included browning or yellowing of the leaves and subsequent growth inhibition, which might have resulted in reduced productivity in some crops [52]. Since excess iron toxicity was associated with low soil pH values [53], the high pH of the Ca- and Fe-WTRs might have been able to maintain a near-neutral pH, in order to reduce the mobility of the Fe ions in soil.
The DTPA extracted Cu concentrations in the Fe-WTR were slightly higher compared with the concentrations in the Ca- and Al-WTRs. There was an estimated 109% difference between the Cu concentration in the Fe-WTR and that in the Ca-WTR, and it was about 78% higher than in the Al-WTR. This increment could possibly have been because of the composition of the Fe-salt used for the water purification process. There was, however, very little variation in B and Zn concentrations in the WTRs. Irrespective of the presented possibilities of the WTRs to counteract the potential releases of high levels of Fe and Mn, the release of metals under sustained acidic conditions represented a major concern, which needed proper monitoring prior to application. Although it was not an essential element for either plants or animals, aluminum (Al) was highly abundant in the WTRs in terms of total concentrations. Research indicated that Al toxicity was related to the amount of Al3+ in the soil solution [54]. The test results from Mehlich-3 and other stronger extractants, such as DTPA, however, did not have a strong relationship to Al3+. Furthermore, the forms of Al extracted by large volumes of salt solutions were not necessarily equivalent to available forms in the soil, because the Al was well mixed to form precipitates with ligands, such as fluoride (F), sulfate (SO4 2-), and oxalate and, hence, became non-toxic [54]. Therefore, the pH of the WTRs was probably the single most important factor controlling the amount of Al3+ in the WTR solution. The amount of soluble Al increased as the pH dropped below pH 5.0. The high pH of the WTRs and their high neutralizing capability meant that, when amended to acidic soil to create pH conditions above 5.0, Al toxicity would rarely become a problem.
Concluding Remarks
This work has demonstrated the favorable potential for the beneficial reuse of WTRs through composition analysis of relevant physical, chemical, agronomic, and mineralogical properties. It was also confirmed that the properties of the WTRs were mostly site-specific, owing to the significant variabilities in the properties, which were mostly influenced by the source water quality and the type of coagulant or softener used.
The quantitative determination of the major elements showed varying, acceptable concentrations of the RCRA 8 metals in the WTRs. Nevertheless, the concentrations of DTPA extractable Mn and Fe raised concerns of possible contamination under sustained acidic conditions, which would require frequent monitoring when applied. The WTRs, however, showed an adequate capability to supply the relevant plant nutrients, such as Zn, Ca, Mg, K, S, Cu, and Fe, although readily available concentrations of NO3-N, P, and K were generally low. The X-ray diffraction analysis of the samples showed a significant amount of amorphous phases, mostly metal hydroxides. The Ca-WTR was almost entirely composed of calcite, whilst the Al-WTR contained mostly quartz. The Fe-WTR contained a mix of calcite and quartz. The minor occurrence of clay minerals in mixed layers was observed from the diffractograms. Overall, the alkaline nature, relatively high CEC, and satisfactory concentrations of heavy metals, as well as of the RCRA 8 metals, in the WTRs suggested the potential for safe and beneficial land application. The generally amorphous and porous nature of the WTRs also suggested good metal ion adsorption capabilities in soil and aqueous solution.
"year": 2018,
"sha1": "1943219e2e965c32c04bf043e8fb07f52e7c2073",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2071-1050/10/5/1374/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "89927154a5fe859c50c9afa701bfe6fea947f86b",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Economics"
]
} |
Three-state Majority-Vote Model on Barabási-Albert and Cubic Networks and the Unitary Relation for Critical Exponents
We investigate the three-state majority-vote model with noise on scale-free and regular networks. In this model, an individual selects an opinion equal to the opinion of the majority of its neighbors with probability $1 - q$ and opposite to it with probability $q$. The parameter $q$ is called the noise parameter of the model. We build a network of interactions where $z$ neighbors are selected by each added site in the system, yielding a preferential attachment network with degree distribution $k^{-\lambda}$, where $\lambda \sim 3$. In this work, $z$ is called the growth parameter. Using finite-size scaling analysis, we show that the critical exponents associated with the magnetization and the magnetic susceptibility add up to unity when a volumetric scaling is used, regardless of the dimension of the network of interactions. Using Monte Carlo simulations, we calculate the critical noise parameter $q_c$ as a function of $z$ for the scale-free networks and obtain the phase diagram of the model. We find that the critical noise is an increasing function of the growth parameter $z$, and we define and verify numerically the unitary relation $\upsilon$ for the critical exponents by calculating $\beta/\bar\nu$, $\gamma/\bar\nu$ and $1/\bar\nu$ for several values of the network parameter $z$. We also obtain the critical noise and the critical exponents for the two- and three-state majority-vote model on cubic lattice networks, where we illustrate the application of the unitary relation with a volumetric scaling.
Introduction
Regular networks and random graphs have been used to study and describe the topology of diverse systems investigated in condensed matter physics, but they fail to capture several behaviors of real networks found in nature. [1][2][3][4][5][6] Using complex networks, physicists have studied a wide variety of physical systems such as the internet, the world wide web, cellular networks, protein-protein interaction networks, the scientific collaboration network, airline networks, and economic and financial markets, among others. [7][8][9][10][11][12][13][14][15] Many real systems are ordered in networks that present a universality of topology, showing the same architectures of assembly. Among the most investigated kinds of networks present in real-world systems are the scale-free networks, or Barabási-Albert networks. [1][2][3] These networks can be built by starting with an initial number of interconnected nodes; newly added nodes have a higher probability of attaching to the more connected nodes, in a mechanism known as preferential attachment. In this process, highly connected nodes acquire more links than those that have fewer connections, yielding sites with a high number of connections, the hubs of the network. The degree distribution of these networks presents a power-law decay with exponent λ ∼ 3. In Fig. 1 we illustrate the preferential attachment algorithm for a network where each newly added site connects to others with growth parameter z = 3.
The three-state majority-vote model with noise defined on a regular square lattice is a system of spins, where each one is allowed to be in one of three states only. [16][17][18][19][20] In this three-state model, each spin assumes the state of the majority of its neighboring spins with probability (1 − q) and an opposite state with probability q, which is known as the noise parameter of the model. Increasing the parameter q promotes the formation of opposite-opinion configurations in the model, and q acts as a social temperature, disordering the opinion system. When the social interactions of the model are placed on a regular square lattice network, the model presents an order-disorder phase transition at the critical value q_c ∼ 0.118 of the noise parameter.
Figure 1. The sequence shows five subsequent steps of the Barabási-Albert model for z = 3. Empty circles mark the newly added node to the network, and dashed lines represent its new links, which are placed using preferential attachment.
In this work, we investigate the influence of a network with preferential attachment on the three-state majority-vote model with noise.
We use Monte Carlo simulations and standard finite-size scaling techniques to determine the critical noise parameter q_c and to obtain the phase diagram of the system, as well as the critical exponents for several values of the growth parameter z of the networks investigated. We propose a unitary relation to verify the criticality of the system obtained by a volumetric scaling. We conjecture that this relation is universal regardless of the network of social interactions. We also perform simulations for the three-state majority-vote model on cubic networks, which confirm our results, and we obtain the critical noise and critical exponents for this system. This work is organized as follows. In section II we describe the non-equilibrium three-state majority-vote model with noise and the network construction process, and introduce the relevant quantities used in our simulations. Section III contains our results on complex and regular networks, along with a discussion. In section IV we present our conclusions and final remarks.
The Model
The Barabási-Albert Network
The three-state majority-vote model with noise consists of a set of spin variables {σ_i} with i = 1, 2, ..., N, where each spin can assume one of the values σ = 1, 2, or 3, representing the opinion of an individual. We place the individuals on the nodes of a scale-free network with N sites. In this context, we build our network from a core of z fully connected nodes, to which we add new nodes, one at a time, each with z free links that are connected by preferential attachment to the existing nodes of the network. In other words, the probability Π(k_i) that a link of the new node j connects to node i depends on the degree k_i of node i. Thus, for Barabási-Albert networks with linear preferential attachment we write

$\Pi(k_i) = \frac{k_i}{\sum_j k_j}$,    (1)

where the summation runs over the degrees of all existing nodes in the network. We keep adding nodes to the network until it reaches a total of N sites, where a double connection to the same site is forbidden. In Fig. 2 we show a Barabási-Albert network representation for N = 100 sites with growth parameter z = 5 (left). Note that some nodes have a high number of connections, despite the small average number of connections, or degree, per site in the network. We also present the histogram of the degree of the nodes for networks with N = 20000 nodes and different values of the growth parameter z (right). We verify that our scale-free networks present the characteristic power-law decay with exponent λ ∼ 3, even for different values of the average degree per node z, as expected [2].
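As an illustration of the growth rule in Eq. (1), a minimal Python sketch is given below. It uses the standard repeated-nodes trick, where each node appears in a pool list once per unit of degree, so that uniform sampling from the pool realizes Π(k_i) = k_i / Σ_j k_j. The function name and interface are illustrative assumptions (this is not the authors' code); a library routine such as networkx.barabasi_albert_graph implements the same construction.

```python
import random

def barabasi_albert(n_total, z, seed=None):
    """Grow a Barabasi-Albert network: a fully connected core of z nodes,
    then each new node attaches to z distinct existing nodes chosen with
    probability proportional to their degree (Eq. 1). Assumes z >= 2."""
    rng = random.Random(seed)
    edges = {(i, j) for i in range(z) for j in range(i + 1, z)}
    # Node i appears k_i times in the pool, so uniform sampling from the
    # pool is equivalent to linear preferential attachment.
    pool = [node for edge in edges for node in edge]
    for new in range(z, n_total):
        targets = set()
        while len(targets) < z:          # double connections are forbidden
            targets.add(rng.choice(pool))
        for t in targets:
            edges.add((t, new))
            pool.extend((t, new))        # update the degrees of both ends
    return edges

network = barabasi_albert(100, 5, seed=1)   # N = 100, z = 5, as in Fig. 2
```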
Dynamics and Numerical Quantities
The dynamics of the system consists of a generalization to three states of the two-state majority-vote model. [21][22][23][24] For each randomly selected spin σ_i we determine the opinion of the majority of the spins that are linked to it. With probability 1 − q the selected spin adopts the same opinion as the majority of its neighbors, and with probability q it adopts an opposite opinion. In the case of a tie between all three states, the selected spin σ_i changes to any opinion with the same probability, equal to 1/3. For the case of a tie between two majority opinions, σ_i assumes one of these tied opinions with probability (1 − q)/2, and the minority opinion with probability q. Finally, for the case of a single majority opinion, σ_i assumes each of the two minority opinions with equal probability q/2, and the majority opinion with probability 1 − q. That is, if n_α is the number of neighbors of the spin σ_i in a given state α = 1, 2, 3, we can write the following probabilities for σ_i to assume the opinion 1:

$P(\sigma_i \to 1) = \begin{cases} 1 - q, & n_1 > n_2 \text{ and } n_1 > n_3 \\ (1 - q)/2, & n_1 = n_2 > n_3 \text{ or } n_1 = n_3 > n_2 \\ 1/3, & n_1 = n_2 = n_3 \\ q, & n_2 = n_3 > n_1 \\ q/2, & \text{otherwise (opinion 1 is a single minority)} \end{cases}$    (2)

These transition rules present the C_{3v} symmetry with respect to the simultaneous change of all opinions, and the probabilities for the other states σ = 2 and 3 are obtained by the symmetry operations of the C_{3v} group. The total number of individuals connected to σ_i is n = n_1 + n_2 + n_3. The probability q is the noise parameter of the model, and all probabilities satisfy

$\sum_{\alpha=1}^{3} P(\sigma_i \to \alpha) = 1$.    (3)

To investigate the critical behavior of the three-state majority-vote model, we first calculate the average opinion, defined in analogy to the three-state Potts model, whose normalized components are given by

$m_\alpha = \sqrt{\frac{3}{2}}\, \frac{1}{N} \sum_{i=1}^{N} \left[ \delta(\alpha, \sigma_i) - \frac{1}{3} \right]$,    (4)

where the sum is over all sites in the network of social interactions and δ(α, σ_i) is the Kronecker delta function, with the order parameter

$m = \sqrt{m_1^2 + m_2^2 + m_3^2}$.    (5)

In this way, to study the critical behavior of the model we consider the magnetization M, the magnetic susceptibility χ, and the Binder's fourth-order cumulant U defined by

$M(q, z, N) = \langle \langle m \rangle_t \rangle_c$,    (6)

$\chi(q, z, N) = N \left[ \langle \langle m^2 \rangle_t \rangle_c - \langle \langle m \rangle_t \rangle_c^2 \right]$,    (7)

$U(q, z, N) = 1 - \frac{\langle \langle m^4 \rangle_t \rangle_c}{3 \langle \langle m^2 \rangle_t \rangle_c^2}$,    (8)

where q is the noise parameter, z is the growth parameter of the network, N is the total number of sites, ⟨...⟩_t denotes time averages taken in the stationary regime and ⟨...⟩_c stands for configurational averages. The critical behavior of the model is investigated by performing computer simulations and using finite-size scaling analysis.
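To make the transition rules of Eqs. (2) concrete, a minimal single-spin update in Python could read as follows. The data structures (a list sigma of opinions and an adjacency mapping neighbors) are illustrative assumptions, not the authors' implementation; the branch probabilities reproduce the cases of Eqs. (2) exactly.

```python
import random

def update_spin(sigma, neighbors, i, q, rng=random):
    """One update of spin i under the three-state majority-vote rule
    with noise q, following the transition probabilities of Eqs. (2)."""
    counts = {1: 0, 2: 0, 3: 0}
    for j in neighbors[i]:
        counts[sigma[j]] += 1
    top = max(counts.values())
    majority = [s for s in (1, 2, 3) if counts[s] == top]
    minority = [s for s in (1, 2, 3) if counts[s] < top]
    if len(majority) == 3:                    # tie between all three states
        sigma[i] = rng.choice((1, 2, 3))      # each opinion: 1/3
    elif len(majority) == 2:                  # tie between two majorities
        if rng.random() < 1.0 - q:
            sigma[i] = rng.choice(majority)   # each tied opinion: (1 - q)/2
        else:
            sigma[i] = minority[0]            # the single minority: q
    else:                                     # a single majority opinion
        if rng.random() < 1.0 - q:
            sigma[i] = majority[0]            # the majority: 1 - q
        else:
            sigma[i] = rng.choice(minority)   # each minority: q/2
```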
The three-state majority-vote model evolves in time according to the probability rules given by Eqs. (2) and eventually reaches a steady state that can be of two types. For q = 0 the system exhibits an ordered steady state characterized by the predominance of individuals in one of the possible opinions. Assuming that σ_i = 1 for all sites, one can write m_1 = √(2/3), m_2 = m_3 = −1/√6 and m = 1, yielding M = 1 for q = 0. The upper limit for q is obtained when the probability of a given spin agreeing with the majority of its neighbors is equal to the probability of it agreeing with any of the other two minority states, thus 1 − q = q/2 ⇒ q = 2/3. In this case, any state σ = 1, 2 or 3 can be found with equal probability [Eqs. (2)], leading to m_1, m_2, m_3 → 0 and M = 0 for q = 2/3 in the thermodynamic limit N → ∞.
Monte Carlo Simulations
We perform Monte Carlo simulations on Barabási-Albert networks with sizes ranging from N = 1000 to 20000. For each value of the pair q and z, we set all spins to point to one opinion, i.e., σ_i = 1 for all i in the network. We next select a randomly chosen individual and update its opinion in accordance with the rules given by Eqs. (2). A Monte Carlo step (MCS) is accomplished after updating all N spins. We skip 10^5 MCS in the simulation to overcome transients and allow the system to reach a steady state. The time averages were estimated from the next 2 × 10^5 MCS, and we generated at least 100 independent random samples in order to calculate the configurational averages. Starting from different initial opinion configurations, we find that the final steady state of the system is either ordered or disordered, depending on q, z and N. In the ordered phase the majority of opinions are found in one of the possible states 1, 2 or 3. In the disordered phase, the three opinions are equally distributed in the network of social interactions.
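A skeleton of this measurement protocol, reusing the update_spin sketch above, might look as follows; the loop lengths mirror the values quoted in the text, and the observables assemble M, χ and U from Eqs. (4)-(8). This is a hedged sketch of the procedure as described, not the authors' production code, and pure Python would be far too slow for the quoted system sizes in practice.

```python
import math
import random

def order_parameter(sigma, N):
    """m from Eqs. (4)-(5): Potts-like components combined in quadrature."""
    comp = [math.sqrt(1.5) * (sigma.count(a) / N - 1.0 / 3.0) for a in (1, 2, 3)]
    return math.sqrt(sum(c * c for c in comp))

def run_sample(neighbors, q, skip=10**5, meas=2 * 10**5, seed=0):
    """One independent sample: discard transients, then accumulate the
    stationary time averages <m>_t, <m^2>_t and <m^4>_t."""
    rng = random.Random(seed)
    N = len(neighbors)
    sigma = [1] * N                            # ordered initial configuration
    mom = [0.0, 0.0, 0.0]
    for step in range(skip + meas):
        for _ in range(N):                     # one MCS = N spin updates
            update_spin(sigma, neighbors, rng.randrange(N), q, rng)
        if step >= skip:
            m = order_parameter(sigma, N)
            mom[0] += m; mom[1] += m * m; mom[2] += m**4
    return [x / meas for x in mom]

# Averaging over independent samples gives the configurational averages:
# M = <<m>>, chi = N*(<<m^2>> - <<m>>**2), U = 1 - <<m^4>>/(3*<<m^2>>**2).
```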
In Fig. 3 we illustrate the effect of the network of interactions with preferential attachment on the consensus (order) of the system. We plot the magnetization M(q, z, N), the susceptibility χ(q, z, N) and the Binder's fourth-order cumulant U(q, z, N) as functions of the noise parameter q for N = 20000 and z = 2, 3, 4, 5, ..., 10. We find that for small values of the noise parameter q the system presents an ordered state where M(q, z, N) ≫ 0. As q increases, the magnetization continuously decreases to zero near a critical value q_c, denoting the second-order phase transition of the system. The magnetic susceptibility χ(q, z, N) exhibits a peak near the critical value of the noise parameter q_c where the transition occurs, which is also signaled by the rapid decrease of the Binder's fourth-order cumulant U(q, z, N). These results show that the critical noise parameter q_c that drives the system to a disordered state is an increasing function of the growth parameter z of the network.
Next we consider the finite-size effects on our measured quantities. In Fig. 4 we show (a) the magnetization M(q, z, N), (b) the susceptibility χ(q, z, N) and (c) the Binder's fourth-order cumulant U(q, z, N) versus the noise parameter q for z = 10 and different system sizes N. Note that M(q, z, N) ≠ 0 even for high values of the noise parameter q, due to finite-size effects. The susceptibility χ(q, z, N) exhibits a sharper peak as we increase the system size, and the position of its maximum on the horizontal axis depends on N. Thus, we write the pseudocritical noise as q_c(z, N). In Fig. 4(c) we show the Binder's fourth-order cumulant U(q, z, N) as a function of q for different system sizes. The critical noise parameter of the system, q_c(z), can be estimated as the point where the curves of U(q, z, N) for different sizes N intercept each other. We estimate for this set of parameters q_c = 0.513(1). Figure 5(a) shows the dependence of the Binder's fourth-order cumulant on the noise parameter q for z = 5 and different values of N. Note that the curves for different system sizes intercept when 0.43 < q < 0.44. Figure 5(b) shows a magnification of the Binder cumulant data and a third-order polynomial fit in the region near the curve interception, where the critical noise does not depend on the system size and is found to be q_c = 0.4326(4). We calculate the cumulant U(q, z, N) for other values of z, and in Fig. 5(c) we show the phase diagram obtained for the three-state majority-vote model on Barabási-Albert networks. The orange region denotes the ordered phase of the system, where one of the three opinions is the majority state of the system. In this result, the error bars are smaller than the thickness of the line.
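One way to automate this crossing estimate is to fit each Binder cumulant curve with a third-order polynomial near the intersection, as in Fig. 5(b), and solve for the point where two fits coincide. The sketch below assumes arrays of measured U(q, N) values for two system sizes; the data names and the bracketing interval are placeholders, not the authors' actual analysis script.

```python
import numpy as np
from scipy.optimize import brentq

def binder_crossing(q, u_a, u_b, bracket):
    """Estimate q_c from the crossing of the Binder cumulant curves of
    two system sizes, using third-order polynomial fits of U(q)."""
    p_a = np.polynomial.Polynomial.fit(q, u_a, deg=3)
    p_b = np.polynomial.Polynomial.fit(q, u_b, deg=3)
    return brentq(lambda x: p_a(x) - p_b(x), *bracket)

# q = np.linspace(0.42, 0.45, 10)      # noise values scanned near q_c
# u_small, u_large = ...               # measured U(q, N) for two sizes N
# qc = binder_crossing(q, u_small, u_large, bracket=(0.42, 0.45))
```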
The Unitary Relation and Scaling Results
To obtain the critical exponents in complex networks, we propose that near the critical noise q_c the correlation length ξ scales with the actual volume of the system [25, 26] as

$\xi \sim N$.    (9)
Thus, the pseudocritical noise q_c(N), the magnetization M(q, z, N), the susceptibility χ(q, z, N), and the Binder cumulant U(q, z, N) satisfy the finite-size scaling relations

$M(q, z, N) = N^{-\beta/\bar\nu}\, \widetilde{M}(\varepsilon N^{1/\bar\nu})$,    (10)

$\chi(q, z, N) = N^{\gamma/\bar\nu}\, \widetilde{\chi}(\varepsilon N^{1/\bar\nu})$,    (11)

$q_c(N) = q_c + b N^{-1/\bar\nu}$,    (12)

$U(q, z, N) = \widetilde{U}(\varepsilon N^{1/\bar\nu})$,    (13)

where ε = q − q_c is the distance to the critical noise, b is a constant, and $\widetilde{M}$, $\widetilde{\chi}$, and $\widetilde{U}$ are scaling functions that only depend on the scaled variable x = εN^{1/ν̄}. For regular networks, we recall that N = L^d, where d is the effective dimension of the network and L is an effective linear size of the system. In this case, we obtain for the magnetization

$M(q, z, N) = L^{-d\beta/\bar\nu}\, \widetilde{M}(\varepsilon N^{1/\bar\nu})$,    (14)

and for the magnetic susceptibility

$\chi(q, z, N) = L^{d\gamma/\bar\nu}\, \widetilde{\chi}(\varepsilon N^{1/\bar\nu})$.    (15)

We use the notation ν̄ instead of ν since we changed the correlation length scaling relation from the usual linear scaling ξ ∼ L to the volumetric scaling ξ ∼ L^d. In this case, the hyperscaling relation now reads 2βd/ν̄ + γd/ν̄ = d. Thus, we obtain

$\frac{2\beta}{\bar\nu} + \frac{\gamma}{\bar\nu} = 1$,    (16)

regardless of the effective dimension d of the network. This result allows us to remark that the hyperscaling relation cannot be used to estimate the dimension of these networks when using the volumetric scaling ξ ∼ L^d, in contrast to the results of previous studies [18, 22, 23, 27]. Nevertheless, the unitary relation (Eq. 16) was verified in these works for random graphs and scale-free networks. In this context, we rewrite the unitary relation by denoting a new exponent upsilon υ, defined as

$\upsilon = \frac{2\beta}{\bar\nu} + \frac{\gamma}{\bar\nu}$,    (17)

where we conjecture that υ = 1 for any network under the condition of the volumetric scaling of Eq. (9). In this work, we denote the equation υ = 1 as the unitary relation for critical exponents. We validate the consistency of this result through comparison with the numerical findings for the critical exponents β/ν̄ and γ/ν̄ on regular and complex networks. By calculating the logarithm of Eqs. (10), (11) and (12) at the critical point q_c, we obtain explicit relations involving the critical exponents, the measured quantities and the system volume N,

$\ln M(q_c, z, N) = -\frac{\beta}{\bar\nu} \ln N + \text{const}$,    (18)

$\ln \chi(q_c, z, N) = \frac{\gamma}{\bar\nu} \ln N + \text{const}$,    (19)

$\ln \left[ q_c(N) - q_c \right] = -\frac{1}{\bar\nu} \ln N + \ln b$,    (20)

and we use Equations (18), (19) and (20) to obtain the critical exponents of the system. Figure 6 shows the logarithm of (a) the magnetization, (b) the susceptibility and of the distance between the pseudocritical noise and the critical noise [q_c(N) − q_c] versus the logarithm of the volume of the system N, where q is set equal to the critical noise q_c(z). Table 1 provides the critical noise, the critical exponents and the unitary relation values for each growth parameter investigated in the model. Note that the critical exponent of the magnetization (susceptibility) is a decreasing (an increasing) function of the growth parameter z. The critical noise q_c increases with z, while the critical exponent 1/ν̄ decreases with z. For all values in the table, we obtain υ ∼ 1 as expected. Table 1. The critical noise q_c, the critical exponents β/ν̄, γ/ν̄ and 1/ν̄, and the unitary relation υ, for the three-state majority-vote model on Barabási-Albert networks with growth parameter z.
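In practice, Eqs. (18)-(20) amount to straight-line fits on log-log scale. A minimal sketch of that step is given below; the input arrays are placeholders for the quantities measured at q = q_c, and the final line checks the unitary relation of Eq. (17).

```python
import numpy as np

def fss_exponents(N, M_c, chi_c, qc_N, qc):
    """Slopes of the log-log fits of Eqs. (18)-(20) give beta/nu_bar,
    gamma/nu_bar and 1/nu_bar under the volumetric scaling xi ~ N."""
    lnN = np.log(np.asarray(N, dtype=float))
    beta_nu = -np.polyfit(lnN, np.log(M_c), 1)[0]
    gamma_nu = np.polyfit(lnN, np.log(chi_c), 1)[0]
    inv_nu = -np.polyfit(lnN, np.log(np.asarray(qc_N) - qc), 1)[0]
    upsilon = 2.0 * beta_nu + gamma_nu        # unitary relation, Eq. (17)
    return beta_nu, gamma_nu, inv_nu, upsilon

# For instance, the exponents quoted below for one growth parameter give
# upsilon = 2*0.300 + 0.44 = 1.04, consistent with upsilon ~ 1.
```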
By using the critical exponents, we obtain the unitary line of the model shown in Figure 7. Here, we plot the values of the critical exponents γ/ν̄ versus 2β/ν̄. The linear fit of the data yields y = −0.96(1)x + 1.02(1), with an averaged unitary exponent υ = 1.02(1) for the three-state majority-vote model on Barabási-Albert networks. We conjecture that this line is universal, regardless of the geometric structure of the network used in the model. Thus, one can use the volumetric scaling ξ ∼ N with the unitary relation (17), and Equations (18), (19) and (20), to obtain the critical exponents and the unitary line for a spin model under investigation, with or without a clearly defined linear system size. For the data collapse shown, we used β/ν̄ = 0.300, γ/ν̄ = 0.44 and 1/ν̄ = 0.45 with q_c = 0.5282. Other growth parameter values z exhibit the same features and the same qualitative results for the data collapse of the magnetization, susceptibility, and Binder cumulant. From our simulation results and analysis, we conclude that the three-state majority-vote model defined on Barabási-Albert networks and on the Erdős-Rényi random graphs belong to different universality classes when the volumetric scaling ξ ∼ N is used [18].
Unitary Relation on Regular Networks
To confirm the validity of our statements, we performed Monte Carlo simulations for the majority-vote model with two and three states on regular square lattices and on cubic networks. We also obtain the critical noise and the critical exponents for the three-state majority-vote model on cubic networks.
For the three-state majority-vote model on cubic networks, we simulate lattices with L = 10, 20, 30 and 40, with periodic boundary conditions. We use 3 × 10^5 MCS to average our quantities and 10^5 MCS as thermalization time, and we generated 100 independent random samples in order to calculate the configurational averages. Figure 9 shows our results for (a) the magnetization M(q, L), (b) the susceptibility χ(q, L) and (c) the Binder cumulant U(q, L) versus the noise parameter q, where N = L^3. We observe the familiar behavior, such as M(q, L) → 0 for q > q_c with L → ∞, and χ(q, L) again exhibits a sharper peak as we increase L. From the Binder parameter, we find the critical noise of the model q_c = 0.25230(2).
By performing numerical simulations for the majority-vote model with two and three states on regular square lattices and on cubic networks, we build Table 2 with the critical exponents for the magnetization and susceptibility. We also list the unitary relation υ and the effective dimension d obtained for each model with volumetric (ξ ∼ N) and linear (ξ ∼ L) scalings, respectively. We conclude that the critical exponents with linear and volumetric scalings are related by β/ν = d(β/ν̄) and γ/ν = d(γ/ν̄), as expected by recalling that ξ ∼ L^d. Table 2. The critical noise q_c, the critical exponents volumetrically rescaled for ξ ∼ L^d, the unitary relation υ, the regular critical exponents β/ν and γ/ν, and the effective dimension d when ξ ∼ L, for the majority-vote model with two and three states on regular networks. In Figure 10 we plot the logarithm of the magnetization and of the magnetic susceptibility, showing the critical exponents obtained for the majority-vote model with two and three states on square lattice and cubic networks with the volumetric scaling. Our results confirm that the unitary relation holds for this model on these networks, and they indicate that the effective dimension obtained by previous works with the majority-vote model on random graphs and on Barabási-Albert networks might not be equal to unity [18, 22, 23, 27].
"year": 2019,
"sha1": "0c9a839077fd0807e1e7d4db44d27a39543e3a11",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "1679ce8b8c927ed1672ae5cbc08a6db668b4ffd3",
"s2fieldsofstudy": [
"Physics",
"Mathematics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Are Vegetation Dynamics Impacted from a Nuclear Disaster? The Case of Chernobyl Using Remotely Sensed NDVI and Land Cover Data
There is a growing interest among scientists and society to acquire deep knowledge on the impacts from environmental disasters. The present work deals with the investigation of vegetation dynamics in the Chernobyl area, a place widely known for the devastating nuclear disaster on the 26th of April 1986. To unveil any possible long-term radiation effects on vegetation phenology, the remotely sensed normalized difference vegetation index (NDVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS) was analyzed within the 30 km Exclusion Zone, where all human activities ceased at that time and public access and inhabitation have been prohibited ever since. The analysis comprised applications of seasonal trend analysis using two techniques, i.e., pixel-wise NDVI time series and spatially averaged NDVI time series. Both techniques were applied to each of the individual land cover types. To assess the existence of abnormal vegetation dynamics, the same analyses were conducted in two broader zones, i.e., from 30 to 60 km and from 60 to 90 km away from the Chernobyl area, where human activities were not substantially altered. Results of both analyses indicated that vegetation dynamics in the 30 km Exclusion Zone correspond to increasing plant productivity at a rate considerably higher than that of the other two examined zones. The outcome of the analyses presented herein attributes greening trends in the 30 km and the 30 to 60 km zones to a combination of climate, minimized human impact and a consequent prevalence of land cover types which seem to be well adapted to increased radioactivity. The vegetation greening trends observed in the third zone, i.e., the 90 km zone, are indicative of the combination of climate and increasing human activities. Results indicate the positive impact from the absence of human activities on vegetation dynamics, as far as vegetation productivity and phenology are concerned, in the 30 km Exclusion Zone, and to a lower extent in the 60 km zone. Furthermore, there is evidence that land cover changes evolve into the prevalence of woody vegetation in an area with increased levels of radioactivity.
Introduction
Human casualties and associated increases in human mortality rates from natural and human-induced disasters have been extensively studied [1,2]. Nuclear disasters, in particular, have resulted in lasting impacts on human populations in the areas surrounding the disaster sites [3]. Those impacts are related to the acute radioactivity dose over a relatively short period, usually a few months, transforming into chronic exposure to decreasing levels of radioactivity, which are, however, lower than the lethal dose [4]. Furthermore, during the 35-year period since the nuclear accident in Chernobyl, numerous research articles have been published, aiming to highlight impacts from such an unprecedented event on humans as well as on plant and animal species. Results indicated the severity of impacts on people exposed to acute radioactivity immediately after the accident, whereas there is still no indisputable conclusion on the chronic effects of exposure to lower radioactivity doses on life [4,5].
Several studies conducted within a decade of the accident found considerably higher mutation rates in both mice and human populations [6,7]. However, later studies contradict these earlier findings, showing inconclusive results on the long-term effects of such catastrophic events on biota [3,8]. Scientists who initially concluded that many organisms suffered devastating effects have gradually changed their position and acknowledge the lack of sound scientific data that could support reliable results for science and society [4]. Recently, researchers have demonstrated that the Chernobyl ecosystem supports a highly diverse and efficient vertebrate scavenging community, which is an indicator of abundant wildlife populations [9]. Surprisingly, recent research has documented thriving wildlife in exclusion zones, such as those associated with the Chernobyl and Fukushima nuclear disasters, and it has been shown that rewilding of affected ecosystems can happen, which is attributed, to a major extent, to the absence of humans after disastrous events [10][11][12][13]. Concerning plant species, there are considerably fewer published works, mainly demonstrating that the effects of chronic low-dose exposure in places like Chernobyl, affected by a serious radioactive accident, can destabilize the genetic structure of plants [14]. An interesting finding is that plants seem to have developed a stress response and a defense strategy that prevents genome instability, which allows them to survive in extreme environments, like those contaminated by nuclear accidents [15].
While all previously mentioned works focus on field experiments and the collection of in situ data, either for human or for animal and plant impact studies, there is great potential for remotely sensed information to contribute to environmental impact assessment. Currently, numerous studies are being published on environmental monitoring and impact assessment using mainly satellite-derived remotely sensed data. Environmental remotely sensed information has been available to scientists for almost fifty years, e.g., the Landsat missions, or the Terra and Aqua satellites from NASA's Earth Observing System Data and Information System (EOSDIS), among many others. It is, however, only after the advances of information technology, and especially the evolution of the internet, gave scientists easy access to sufficiently long time series of environmental data at a global scale, that regional and global assessments became possible concerning climate change [16][17][18][19], hydrology [20][21][22][23][24], and vegetation productivity and changes [25][26][27][28][29]. The aim of the present work is to identify whether vegetation productivity has been affected by the catastrophic failure of the Chernobyl Power Plant on the 26th of April 1986, i.e., exactly 34 years prior to the present work. Vegetation productivity was assessed using MODIS remotely sensed normalized difference vegetation index (NDVI) data from 2000 to 2020, in three distinct zones in the Chernobyl Power Plant area, i.e., the 30 km Exclusion Zone and two surrounding zones, i.e., the 30 to 60 km zone (hereafter the 60 km zone) and the 60 to 90 km zone (hereafter the 90 km zone). The magnitude and direction of vegetation productivity trends (increasing or decreasing) were computed by applying seasonal trend analysis at the pixel level and to spatially averaged time series of NDVI values, in the various land cover types in each of the three zones. Results were also evaluated considering the dominant land cover changes that took place in each zone.
Concerning Chernobyl's vegetation monitoring and assessment, an interesting work is presented by the authors of [30], where vegetation conditions in the Chernobyl Exclusion Zone are studied before and after the accident using Landsat observations; it concluded that NDVI was independent of the measured radiation values and that the increase in NDVI values long after the accident is attributed mostly to land abandonment. Analogous conclusions, also using Landsat data, were reported in [31]. The present work, however, is considerably different from those two publications, since it focuses on comparing vegetation productivity trends in Chernobyl's Exclusion Zone with those of two surrounding zones where human activities were not interrupted after the accident. It also analyzes the NDVI time series of those three zones making use of the MODIS NDVI and land cover products. It does not aim to determine pre- and post-accident vegetation conditions, as in the works of [30,31].
Study Area Description
The Chernobyl area is located in the northern part of Ukraine, bordering Belarus. The region is a rural area, covered mostly by woodlands and marshlands. The major urban centers used to be the cities of Chernobyl and Pripyat, which have been uninhabited for the last 35 years. Chernobyl is widely known for the nuclear accident that occurred on 26 April 1986. The disaster resulted in the evacuation of 116,000 individuals from the area. The wildlife that remained in the abandoned area was further affected by exposure to very high doses of radioactivity immediately following the accident. Studies related to the long-term effects of radiation on mammal abundance in the Chernobyl area did not support evidence of a negative influence of radiation [10], whereas the need for further research on the ecological consequences of radiation on specific vertebrate and invertebrate species is highlighted in relevant studies [3]. Many of the people who evacuated the Chernobyl area moved to a new city 50 km east of Chernobyl, namely Slavutych. The Chernobyl accident released radioactive isotopes into the atmosphere, polluting a major portion of land in Europe [3]. Immediately after the accident, a 30 km radius area around the Chernobyl Nuclear Power Plant was established by the Soviet Union, known as the Chernobyl Exclusion Zone (Figure 1). The study area is the bounding region of approximately 32,000 km² around the Chernobyl Power Plant. It comprises the 30 km Exclusion Zone and two additional bounding zones, i.e., the 60 km zone, formed as a region covering the area from 30 to 60 km around the Chernobyl site, and the 90 km zone, covering the area from 60 to 90 km from Chernobyl (Figure 1). Land cover types, based on the MODIS 2018 MCD12Q1 product [32] (MODIS/Terra+Aqua Land Cover Type Yearly L3 Global 500 m SIN Grid V006, DOI: 10.5067/MODIS/MCD12Q1.006), in the three examined zones can be seen in Figure 2. Land cover classes are defined according to the International Geosphere-Biosphere Program (IGBP) classification scheme. Table 1 provides information on the area occupied by the individual land cover types in each zone. Vegetation dynamics were determined in all three individual zones for the different land cover types, and the results for the 30 km Exclusion Zone (an abandoned area where no human activities take place) were compared against the 60 and 90 km zones (zones where human activities did not cease).
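As a simple illustration of how the three ring-shaped zones can be constructed programmatically, the sketch below buffers the plant location at 30, 60 and 90 km and takes set differences. It uses the shapely library; the plant coordinates are illustrative placeholders and, in a real analysis, the location would first be projected to a metric coordinate system (e.g., UTM zone 36N).

```python
from shapely.geometry import Point

# Illustrative easting/northing (meters) for the Chernobyl Power Plant in
# a metric projection; a real analysis would reproject lon/lat first.
plant = Point(297000.0, 5697000.0)

zone_30 = plant.buffer(30_000)                        # 30 km Exclusion Zone
zone_60 = plant.buffer(60_000).difference(zone_30)    # 30 to 60 km ring
zone_90 = plant.buffer(90_000).difference(plant.buffer(60_000))  # 60 to 90 km

for ring in (zone_30, zone_60, zone_90):
    print(round(ring.area / 1e6))                     # areas in km^2
```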
NDVI and Land Cover Change Analyses
NDVI is a non-dimensional index calculated from remotely sensed reflectance measurements in the spectral regions of red (visible) and near-infrared, i.e., NDVI = (ρNIR − ρRed)/(ρNIR + ρRed), and is a measure of vegetation greenness, but also of health and productivity. It ranges from −1 to 1 [33]. Negative or very low positive NDVI values (<0.05) correspond to barren land, desert, snow or the presence of water bodies. Moderate and high positive NDVI values correspond to various types of vegetation. The highest NDVI values correspond to locations where the most photosynthetically active vegetation is present, with NDVI > 0.6 typically highlighting areas of dense forests or tropical rainforests [34,35]. Various ecosystem changes have been studied at local and global scales, such as plant growth [34], greening and browning patterns in specific areas [36][37][38], seasonal vegetation productivity [39], detecting and predicting vegetation anomalies [40], assessment of human impact on vegetation [41], monitoring of forest conditions [42], drought monitoring [43] and changes in the length of growing seasons [44]. Known limitations of satellite-derived NDVI are cloud cover, the presence of very steep topographic features, the presence of snow or ice, and adverse atmospheric conditions [25].
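For clarity, the index itself is a one-line computation on the red and near-infrared reflectance bands; the short Python sketch below, with illustrative reflectance values, shows the calculation.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red); values close to 1 indicate dense,
    photosynthetically active vegetation, values near 0 or below indicate
    barren land, snow or water."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + 1e-12)   # small term avoids 0/0

print(ndvi([0.45, 0.10], [0.08, 0.09]))        # ~0.70 (forest), ~0.05 (bare)
```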
Various satellite-derived NDVI products are available today. Among them, the NDVI from the Global Inventory Monitoring and Modeling System (GIMMS) project ("National Center for Atmospheric Research Staff (Eds). Last modified 14 Mar 2018. 'The Climate Data Guide: NDVI: Normalized Difference Vegetation Index-3rd generation: NASA/GFSC GIMMS.' Retrieved from https://climatedataguide.ucar.edu/climate-data/ndvi-nor") has been widely used due to its long temporal coverage of approximately 30 years. GIMMS NDVI is produced from the Advanced Very High Resolution Radiometer (AVHRR) and provides high-quality NDVI data at a 2-weekly temporal and 8 km spatial resolution [45]. High-quality NDVI products are also released from the Earth Observing System (EOS) Terra and Aqua platforms carrying the MODIS instrument. MODIS NDVI products have been released at 16-day time steps at spatial resolutions from 1 km to 250 m, from 2000 until today. MODIS NDVI products have been widely used and tested in numerous areas all over the world and are nowadays considered a "state-of-the-art" standard dataset [46,47]. The high spatial resolution, the long coverage period and the extended validation over a wide range of representative conditions place MODIS NDVI products among the best NDVI datasets.
In the present work, the NDVI was retrieved from Collection 6 of MOD13Q1 [48], a product generated every 16 days at 250 m spatial resolution. Each MODIS NDVI product is released with pixel-level metadata in binary encoding [49], i.e., the VI quality layer, describing the quality, usefulness, atmospheric conditions, presence of clouds, snow or ice, and presence of shadow or other conditions that constitute questionable pixel quality. In this work, the VI quality layer was used to filter out pixels that were covered with clouds, snow or ice, or whose quality was reported as low due to atmospheric conditions. More specifically, as the filtering procedure is conducted using the 16-bit flags provided in the VI quality layer of the MOD13Q1 product, information from the quality assurance bits/fields in the VI quality assessment science datasets table (Table 5 from [48]) was used. Regarding Bits 0-1 of the VI quality layer, pixels flagged as produced but most probably cloudy, or as not produced for reasons other than clouds, were excluded. Regarding Bits 2-5, only the highest-quality pixels were retained, while for Bits 6-7, pixels with high aerosol quantities were excluded. For Bit 8, only pixels with no adjacent cloud detected were kept, while for Bit 9, all values were accepted. For Bit 10, only pixels with no mixed clouds were kept, while for Bits 11-13, only pixels describing moderate or continental ocean and deep ocean were excluded (not present in the study area anyway). Bit 14 describes the possible presence of snow/ice; therefore, pixels with a value of 1 (possible snow/ice) were excluded. For Bit 15, only pixels with a value of 0 were kept, i.e., no possible shadow present. The period of the NDVI data analyzed in this work spanned from 5 March 2000 to 18 February 2020.
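A compact way to apply this screening is to mask the 16-bit VI quality integers with bitwise operations. The Python sketch below mirrors, bit for bit, the rules listed above; it assumes the MOD13Q1 bit layout of Table 5 of [48] (Bits 0-1 MODLAND QA, 2-5 usefulness, 6-7 aerosol, 8 adjacent cloud, 9 BRDF, 10 mixed clouds, 11-13 land/water, 14 snow/ice, 15 shadow) and is a sketch rather than the processing chain actually used.

```python
import numpy as np

def bits(qa, start, n):
    """Extract an n-bit field starting at bit position `start`."""
    return (qa >> start) & ((1 << n) - 1)

def good_pixels(qa):
    """Boolean mask implementing the VI-quality screening described above."""
    qa = np.asarray(qa, dtype=np.uint16)
    keep = bits(qa, 0, 2) <= 1      # Bits 0-1: drop cloudy / not produced
    keep &= bits(qa, 2, 4) == 0     # Bits 2-5: highest usefulness only
    keep &= bits(qa, 6, 2) != 3     # Bits 6-7: drop high aerosol
    keep &= bits(qa, 8, 1) == 0     # Bit 8: no adjacent cloud detected
                                    # Bit 9 (BRDF correction): all accepted
    keep &= bits(qa, 10, 1) == 0    # Bit 10: no mixed clouds
    keep &= bits(qa, 11, 3) < 6     # Bits 11-13: drop ocean classes 6 and 7
    keep &= bits(qa, 14, 1) == 0    # Bit 14: no snow/ice
    keep &= bits(qa, 15, 1) == 0    # Bit 15: no possible shadow
    return keep
```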
Land cover changes in the three examined zones were assessed using the MODIS MCD12Q1 land cover product [50], and the dominant changes in all three zones were highlighted. MCD12Q1 comprises 5 legacy classification schemes, among which the IGBP classification provides the most categories, classifying Earth's surface into 17 land cover types (Table 1). MCD12Q1 is provided globally at an annual time step from 2001 to 2018, with 500 m spatial resolution. The latest collection, i.e., Collection 6 of the MCD12Q1, was used in the present work.
Data Analysis
To determine the direction and magnitude of statistically significant vegetation trends, pixel-wise harmonic regression incorporating trend and seasonality was implemented, which is analogous to Fourier analysis and is known as seasonal trend analysis; it can also handle unequally spaced data in time [39,43,45,51]:

NDVI(t) = α0 + α1·t + Σ_{j=1}^{n} a_j sin(2πjt/f + φ_j)    (1)

In Equation (1), t is time (expressed as the observation index) and f is the frequency of the NDVI time series (i.e., in the present case of 16-day temporal resolution, f = 23 observations per year), a_1, ..., a_n are the amplitudes and φ_1, ..., φ_n are the phases (i.e., seasons), α0 is the intercept of the NDVI series and α1 is the trend (slope). Phase values range from 0° to 359°, with 30° representing a shift of approximately 30 days, i.e., one month. In the present case, three harmonic terms were employed and therefore n = 3. This is found to robustly describe variations of the MODIS NDVI time series [43]. As normality of the NDVI time series is a requirement for applying Equation (1), a Box-Cox transformation [52] was applied to those time series where a normal distribution was not evidenced. The significance of the trend is estimated from a t-test, with rejection of the null hypothesis of zero trend being reported at a significance level of 5% (p-value < 0.05). Only pixels that did not experience land cover change during the period of analysis were used in the computational process. To avoid any bias in computed means due to different sample sizes in the three zones, equal areas (equivalent to the zone with the least area with significant trends in the pixel-wise analysis) in each land cover type were selected randomly, and their spatial means of NDVI trends in each land cover zone were evaluated.
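Although the authors performed all computations in R, the least-squares core of this seasonal trend analysis can be sketched in a few lines of Python for illustration. Each harmonic a_j sin(2πjt/f + φ_j) is expanded into a sine/cosine pair so the model of Equation (1) becomes linear in its coefficients; the significance test of the trend coefficient is omitted here.

```python
import numpy as np

def seasonal_trend(y, f=23, n_harm=3):
    """Ordinary least-squares fit of Equation (1) to one NDVI series y,
    sampled at f observations per year (16-day MODIS steps -> f = 23)."""
    y = np.asarray(y, dtype=float)
    t = np.arange(y.size, dtype=float)          # observation index
    cols = [np.ones_like(t), t]                 # alpha_0 (intercept), alpha_1
    for j in range(1, n_harm + 1):
        w = 2.0 * np.pi * j * t / f
        cols += [np.sin(w), np.cos(w)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), y, rcond=None)
    s, c = coef[2::2], coef[3::2]
    amplitudes = np.hypot(s, c)                       # a_j
    phases = np.degrees(np.arctan2(c, s)) % 360.0     # phi_j in [0, 360)
    return coef[1] * f, amplitudes, phases            # trend per year, a_j, phi_j
```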
Only a small portion of pixels in the examined areas demonstrates statistically significant trends [38], especially in areas of high altitude or high latitude where snow cover is frequent. To overcome this problem, the spatial means of NDVI values were extracted for each date in each individual land cover category, excluding pixels that had undergone land cover change during the analysis period. Equal numbers of pixels in each land cover category were selected randomly in each of the three examined zones, with the total area in each land cover category set to the smallest area among the three examined zones (lowest value in each line of Table 1). Thus, for each land cover category, a time series of spatial means of the NDVI was computed and analyzed using Equation (1) in each of the three examined zones.
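The aggregation step can likewise be sketched in a few lines of R with the raster package; the object names below (ndvi_stack for the quality-filtered NDVI stack, lc_stable for the map of pixels with unchanged land cover) are hypothetical placeholders, not objects from the original analysis.

library(raster)

set.seed(42)  # reproducible random sampling
mean_ndvi_series <- function(ndvi_stack, lc_stable, class_id, n_px) {
  cells <- which(values(lc_stable) == class_id)  # unchanged pixels of this class
  cells <- sample(cells, n_px)                   # equal sample size across zones
  vals  <- extract(ndvi_stack, cells)            # matrix: pixels x 16-day dates
  colMeans(vals, na.rm = TRUE)                   # one spatial mean per date
}

The resulting series per land cover class is then passed to the harmonic fit of Equation (1).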
Input data were continuous MODIS NDVI images, at a 16-day time step for the period 2000 to 2020, i.e., the complete time series of MODIS NDVI products, after applying a filter based on the 250 m, 16-day VI quality layer to exclude problematic pixels, as described in Section 2.2. Admittedly, the year 2000 is 14 years after the Chernobyl disaster, so the analysis presented herein does not represent the conditions immediately after the accident; that period is beyond the scope of the present work. The methodology could have been applied by blending multisource NDVI data, e.g., the GIMMS NDVI dataset or the NDVI from Landsat satellites. However, the different products offer different spatial resolutions and time coverages, and although data blending techniques have been developed [53], they all introduce errors to some extent. Therefore, in the present work, to keep the data consistent, a single data source was used, even though it cannot capture the full progress of the phenomenon. The goal, however, is to detect differences in vegetation dynamics and observe whether vegetation has recovered in the same manner in the 30 km Exclusion Zone and the two surrounding zones, i.e., the 60 and 90 km zones. This can be safely accomplished using the 20-year dataset of MODIS NDVI products.
All computations were conducted with the open-source R software for statistical computing (The R Project for Statistical Computing, https://www.r-project.org/). Spatial analysis was performed with the raster R package [54].
Land Cover Change Analysis
Analysis of all 18 annual land cover type images revealed that approximately 20% of the 30 km Exclusion Zone changed land cover type. Dominant land cover changes are identified in Savannas (tree cover 10-30%, canopy >2 m) with an increase of approximately 5%, followed by Mixed Forests with an increase of approximately 2.5%, Woody Savannas (tree cover 30-60%, canopy >2 m) with an increase of ~1.5% and Evergreen Needleleaf Forests with an increase of approximately 1%. A considerable decrease is observed in Grasslands, which lost ~10% of the area, whereas the remaining land cover changes each affected <1% of this specific zone (Figure 3). It seems, thus, that land abandonment in this zone has led to an expansion of dense and sparse forest areas at the expense of grassland areas.
Pixel-Wise NDVI Time Series Analysis
Pixel-wise vegetation trend analysis in the three distinct zones provided statistically significant results at the level of 5% (p-values < 0.05) for an area of 6556 km², approximately 20% of the total 32,000 km² of the study area. This is expected, as remotely sensed NDVI acquisitions are restricted by factors such as cloud cover or atmospheric conditions [45]. As the study area lies in a high-latitude region, cloud- and snow-covered days are frequent, resulting in loss of data; consequently, the areas where statistically significant trends in vegetation (either increasing or decreasing) can be detected are considerably smaller than in mid- and low-latitude regions. Table 2 provides means of the statistically significant NDVI trends in different land cover categories in the three distinct zones, as well as mean NDVI trends from equal areas selected randomly in each examined zone in the various land cover types. Only land cover types common to all three zones are included in Table 2; Water Bodies and Permanent Wetlands were excluded from this analysis. The weighted average (using equal areas of each land cover type as the weighting factor) of NDVI trends over the 30 km Exclusion Zone is 6.38 × 10⁻³ yr⁻¹, in the 60 km zone it is 4.11 × 10⁻³ yr⁻¹ and in the 90 km zone it is 2.61 × 10⁻³ yr⁻¹.
All three zones demonstrate both increasing and decreasing vegetation dynamics. In the 30 km Exclusion Zone, approximately 45% of the area showed statistically significant trends, whereas this percentage drops to approximately 27.5% in the 60 km zone and reaches its lowest value in the 90 km zone, with 11.2% of the area demonstrating significant pixel-wise NDVI trends.
Time Series Analysis with Spatial Aggregates of NDVI
To estimate the regression line for each zone, and also to highlight changes in seasonality in each land cover category in the three examined zones, the spatially averaged NDVI time series were extracted for the different land cover types as described in Section 2.3. The observed and modeled (using Equation (1)) time series are presented in Figures 6-13, together with the trend lines and a 95% confidence interval. Trends were considered significant at the level of 5% (p-values < 0.05), rejecting the null hypothesis of zero slope of the trend line. Table 3 shows the slopes of the trend lines in each land cover type and the associated examined areas. Averages of the trend-line slopes of Table 3, weighted by the area of each land cover type in each examined zone, were evaluated and found to be 5.77 × 10⁻³ yr⁻¹ for the 30 km Exclusion Zone, 1.64 × 10⁻³ yr⁻¹ for the 60 km zone and 1.10 × 10⁻³ yr⁻¹ for the 90 km zone. Concerning seasonality, all land cover types demonstrated analogous seasonal patterns of the NDVI in the three examined zones.
Discussion
Comparing the NDVI time series trends estimated with the two approaches outlined above, i.e., the pixel-wise approach and the spatially averaged NDVI, in the different land cover types of the 30, 60 and 90 km zones around Chernobyl, similar results are extracted for the 30 km Exclusion Zone, whereas lower NDVI trends were estimated with the spatial averaging technique for the other two zones. Considering that the spatially averaged NDVI time series over the three examined zones make use of more NDVI data over the extent of the study area, whereas the results of the pixel-wise technique are based only on pixels with statistically significant trends (found to amount to ~20% of the study area), it is believed that the assessments using the spatially averaged NDVI are closer to reality. It is worth mentioning here that although the Urban and Built-up Lands did not demonstrate statistically significant trends in any of the three zones examined at the pixel level (Table 2), they provided NDVI greening trends with the spatial averaging technique, albeit at slightly higher p-values (Table 3). Both approaches, however, documented a clear difference in the NDVI trends of the 30 km Exclusion Zone compared to the other two zones examined.
In both approaches, the land cover type with the highest greening trends in the 30 km Exclusion Zone is the Woody Savannas, followed by Deciduous Broadleaf Forests. Deciduous Broadleaf Forests demonstrate the highest greening trend in the 60 km zone, followed by Mixed Forests and Woody Savannas in both approaches. In the 90 km zone, the highest greening pattern is observed in Deciduous Broadleaf Forests in both approaches, followed by Woody Savannas and Mixed Forests in the pixel-wise analysis, whereas Mixed Forests showed slightly higher greening trends than Woody Savannas in the spatially averaged NDVI time series approach. Browning vegetation patterns were detected in Grasslands and Croplands with the spatially averaged NDVI time series approach; however, those results were based on very limited areas, as indicated in the results presented in Tables 2 and 3. From Tables 1-3, it can be seen that Woody Savannas is the land cover type occupying the greatest part of the 30 and 60 km zones and the second greatest part of the 90 km zone, after Croplands (the land cover type with the largest area in the 90 km zone).
Combining the analysis of NDVI time series in the various land cover types with the results on the evolution of land cover changes in the three zones, one may conclude that the land cover categories covering most of the area in the 30 km Exclusion Zone, i.e., Woody Savannas and Mixed Forests, both gain area, expanding at the expense of declining Grasslands, and they also demonstrate substantial greening trends. The same situation can be observed in the 60 km zone, where Woody Savannas and Mixed Forests are the dominant land cover types, both expanding and demonstrating increasing NDVI trends.
Croplands, Woody Savannas, Mixed Forests and Grasslands are the dominant land cover types in the 90 km zone, with Croplands expanding steadily during the study period, Mixed Forests and Woody Savannas maintaining their areas and Grasslands losing area. Grasslands and Croplands in the 90 km zone exhibit browning vegetation trends, which might be related to human activities in the area and seem to reduce the overall increase of vegetation productivity in that zone.
In all three examined zones, the overall NDVI trend seems to be controlled by the land cover categories that gained area. For the 30 km Exclusion Zone, the expansion of Woody Savannas and Mixed Forests might be attributed to the absence of human activities, as afforestation is a typical fate of abandoned agricultural areas, with savanna-type vegetation being an intermediate stage of this process. The prevalence of woody vegetation types may also indicate their resilience to increased radiation levels. In the 60 km zone, human activities were not totally halted but remained at a low level, as indicated by the low coverage of Urban and Built-up Lands and Cropland areas. The fact that the same two land cover categories also expand in the 60 km zone supports the conclusion that the greening trends in both the 30 and 60 km zones can be attributed to a combination of climate, minimized human impact and the prevalence of land cover types resilient to increased radioactivity. The third zone, i.e., the 90 km zone, is indicative of the combination of climate and increasing human activities, with Croplands but also Urban and Built-up Lands expanding. This area could be considered indicative of the vegetation conditions that would have been present in the 30 km Exclusion Zone if the nuclear accident had not happened. Although the present work documents land cover changes in the 30 and 60 km zones around Chernobyl corresponding to a transition to woody vegetation, it cannot provide evidence of or document changes in species composition and dominance over the study period, and this is an interesting topic for further research.
Previous studies on NDVI trends revealed a mean global greening trend of 0.46 × 10⁻³ per year from 1982 to 2012 [55], which is close to the values found with the spatially averaged NDVI time series in the present work for the 60 and 90 km zones. The 30 km Exclusion Zone demonstrates a considerably higher vegetation greening pattern. The temperate continental zones to which the study area belongs are reported [55] to show a greening trend of 2.08 × 10⁻³ per year during 1982-1994, followed by a browning period with annual NDVI trends of −0.39 × 10⁻³ during 1995-2004 and a greening period with an annual NDVI trend of 1.352 × 10⁻³ from 2005 to 2012. It should be noted, however, that the study of [56] was conducted using the GIMMS NDVI dataset, which has a slightly coarser spatial resolution and covers a different period compared to the MODIS NDVI products. Another analogous work is that of [57], focusing, however, on trends of the maximum annual NDVI in North West Siberia from 2000 to 2014, using the same MODIS product (MOD13Q1) used herein as well. That work found varied annual NDVImax trends in different biomes of North West Siberia, with approximately 70% of the Tundra and 82% of the Forest Tundra regions demonstrating NDVImax changes from 3 × 10⁻³ to 6 × 10⁻³. In the Northern and Middle Taiga, most areas demonstrate a mixed character of annual NDVImax trends, ranging from −3 × 10⁻³ to 3 × 10⁻³. Analogous results indicating increasing plant growth at high latitudes were reported in earlier studies [34] and were associated with a lengthening of the active growing season. In any case, the results presented herein indicate that vegetation in the affected area of Chernobyl demonstrates a surprising productivity and an expansion of woody land cover types, i.e., Woody Savannas and Forests, as a result of land abandonment, which seem to be resilient to radioactivity. Therefore, the results of both approaches presented herein did not indicate any abnormal behavior of vegetation productivity as far as its phenological properties are concerned.
This study can be considered complementary to previous works that focused on various animal and plant species [4,5,9,10,15], highlighting the impacts of the nuclear accident many years later. A general outcome is the surprising resilience of plants and animals to chronic exposure to radiation. The results of the present study are in agreement with the findings of [30,31], which both showed an increase in NDVI values long after the accident, while highlighting that humanity has not yet reached a full understanding of the consequences of the Chernobyl nuclear accident. An explanation of the remarkable resilience of plants comes from [58], which describes the various mechanisms that allow plants to replace dead cells or tissues more easily than animals, irrespective of the source of damage, i.e., attack by an animal or radiation. Rewilding has also been demonstrated in both the Chernobyl and Fukushima Exclusion Zones and is, in both cases, related to the absence of human activities in those areas [10,12], highlighting the dominant role of human activities on the environment. However, radiation does harm life and may shorten the lives of individual plants and animals. Nevertheless, if the harm is not fatal and the resources that support life are plentiful, then life will flourish again [58], and this argument is supported by the findings of the present work as well.
Conclusions
The present work deals with the determination of vegetation dynamics over three zones around the Chernobyl area, i.e., the 30 km Exclusion Zone, the 60 km zone and the 90 km zone, almost 35 years after the nuclear disaster caused by the failure of one reactor of the Chernobyl Power Plant. Remotely sensed MODIS NDVI data for a twenty-year period (2000 to 2020) at a 16-day temporal and 250 m spatial resolution were analyzed using pixel-wise and spatially averaged NDVI seasonal trend analysis. Both techniques were applied in the individual land cover types of the three zones. Results of both analyses indicated greening vegetation trends in all three zones, with the 30 km Exclusion Zone demonstrating considerably higher greening vegetation dynamics, which can be attributed to a combination of climate, the absence of human activities and the consequent expansion of woody land cover types, which seem to be resilient to radioactivity. The results presented herein agree with previous works demonstrating the rewilding of the Chernobyl Exclusion Zone a few decades after the accident, indicating a surprising resilience of plants and animals to chronic exposure to radiation. Further, the primary role of human presence and associated activities on ecosystem dynamics is highlighted.
"year": 2020,
"sha1": "c584f16091356bc93b9259eb464ff8f4a9fa3ece",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-445X/9/11/433/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "88bc0883bd85aff4cde71f08498b1e5b594f52a0",
"s2fieldsofstudy": [
"Environmental Science",
"Mathematics"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Metronidazole and Tacrolimus Interaction in a Kidney Transplant Recipient
We present a 56-year-old male kidney transplant recipient who was hospitalized for acute kidney injury due to gastroenteritis-related volume depletion and whose kidney function deteriorated secondary to rising plasma levels of tacrolimus after metronidazole administration. The tacrolimus dose was decreased and adjusted daily under close drug-level monitoring. His creatinine returned to its baseline level after metronidazole was stopped and his tacrolimus levels were kept in the target range. Tacrolimus is metabolized primarily in the liver by CYP3A enzymes, and drugs that affect CYP3A function, such as metronidazole, can cause elevated tacrolimus plasma levels that result in nephrotoxicity. This case teaches us that calcineurin inhibitor levels should be closely monitored in kidney transplant recipients during metronidazole treatment.
Introduction
Tacrolimus, which acts via calcineurin inhibition, is an important immunosuppressive drug used to prevent rejection in the kidney transplant recipient population. Tacrolimus is metabolized in both the liver and the intestine by the cytochrome P450 enzyme system and P-glycoprotein. CYP3A in particular plays an important role in the tacrolimus metabolism pathway [1,2].
Tacrolimus is used not only in transplantation but also in other diseases such as myasthenia gravis, autoimmune disorders, atopic dermatitis and inflammatory bowel disease. Because patients prescribed tacrolimus are usually on multiple drugs, and because the CYP3A enzyme system can be inhibited or activated by many of them, tacrolimus-related adverse events are seen frequently. Many drugs have well-known interactions with tacrolimus through their effects on CYP3A enzymes, including antifungal agents (voriconazole, itraconazole, fluconazole, posaconazole), erythromycin, diltiazem, mibefradil and telaprevir [3].
Blood samples for tacrolimus level assessment must be taken immediately before the next dose. Levels higher than 20 ng/mL are associated with toxicity, and in the first three months following transplantation it is suggested to keep the plasma level between 10 and 15 ng/mL [4].
Herein we present a male kidney transplant recipient who had acute kidney injury due to tacrolimus nephrotoxicity after metronidazole administration.
Case Report
A 56-year-old male kidney transplant recipient was admitted to our emergency department with the complaint of watery diarrhea, which had started after antibiotic use for a wound infection secondary to inguinal hernia surgery. His stool frequency was 4 to 6 per day and he did not suffer from tenesmus. He reported mucus, but no blood, in his stool. Acute kidney injury was detected by his blood biochemistry results (creatinine level: 2 mg/dL). Although the patient was not oliguric, renal ultrasound was performed; a 35 mm × 37 mm collection compatible with hematoma, localized anterior to the bladder, was detected and post-renal causes were excluded. He was hospitalized to investigate the etiology of the diarrhea and to monitor his renal function.
His past medical history revealed that he had been diagnosed with end-stage renal disease in 2013. His renal ultrasound had been consistent with chronic kidney disease, with bilateral atrophic kidneys. The cause of the chronic kidney disease could not be found, and he had been treated with hemodialysis for 6 months until March 2014, when he received a kidney transplant from his son. After transplantation, he was discharged from hospital with a creatinine level of 1.2 mg/dL, on an immunosuppressive regimen consisting of prednisolone, tacrolimus and mycophenolate mofetil. His medical follow-up continued at another hospital until his last emergency department admission. It was learnt that NODAT (new-onset diabetes after transplantation) had occurred, that insulin treatment had been started and that his creatinine level had slowly progressed to 1.5 mg/dL over this period.
His physical examination was normal except for decreased skin turgor. Fractional sodium excretion (FeNa) was calculated as <1%. There was no microscopic hematuria or pyuria in his urine examination. There was 1+ proteinuria by dipstick, and the spot urine protein-to-creatinine ratio was 1600 mg/g, while his concurrent serum albumin level was 3.5 g/dL. He was taking tacrolimus 1.5 mg twice a day and his tacrolimus level was 7 ng/mL. As the clinical condition was judged, in light of the physical examination and laboratory findings, to be primarily prerenal acute kidney injury due to diarrhea, intravenous saline infusion was started.
Microscopic evaluation of the stool sample revealed numerous leukocytes and erythrocytes, suggesting bacterial gastroenteritis. There were no amoebae or parasite ova on microbiological evaluation, and the Clostridium difficile toxin PCR result was negative. A stool sample was sent for culture and metronidazole was started. Because the patient's past medical history included similar diarrhea attacks, mycophenolate mofetil was switched to azathioprine and the prednisolone dose was increased to 10 mg per day. Although his creatinine level initially decreased slightly from the admission level while he was receiving adequate enteral and parenteral fluids and his stool frequency was decreasing, his creatinine rose progressively to 4.2 mg/dL within two days of metronidazole administration.
There were no schistocytes in his peripheral blood smear and his complete blood count was normal; therefore, hemolytic uremic syndrome was excluded. As his kidney function deteriorated after metronidazole administration, calcineurin inhibitor nephrotoxicity was suspected as the causative factor. His tacrolimus level was found to have risen to 15.3 ng/mL from the level of 7.6 ng/mL measured before metronidazole initiation. Tacrolimus was stopped, the drug level was monitored daily, and the drug was reintroduced at a lower dose when its level decreased below the target range. His diarrhea stopped and his creatinine returned to the baseline level over the following days. The patient was discharged after his metronidazole treatment course had been completed.
Discussion
Rejection is an important and one of the most common causes of graft loss after kidney transplantation. In most centers, a triple immunosuppressive drug combination regimen, usually consisting of corticosteroids, a calcineurin inhibitor and an antiproliferative agent, is preferred for rejection prevention. This regimen has yielded very low rejection rates. There are two calcineurin inhibitors (tacrolimus and cyclosporine) used in kidney transplantation. Clinical trials have shown that tacrolimus is better for rejection prevention than cyclosporine [5].
Gonwa et al. randomized deceased donor recipients into three immunosuppressive regimens (all including corticosteroids): tacrolimus plus azathioprine, tacrolimus plus mycophenolate mofetil, and microemulsion cyclosporine plus mycophenolate mofetil. Although acute rejection rates were similar in each group (≤20%), the incidence of corticosteroid-resistant rejection was lower in the tacrolimus arms. A 3-year follow-up found no statistically significant difference in renal function, patient or overall graft survival, but improved graft survival in recipients with delayed graft function in the tacrolimus arms [6]. The ELITE-Symphony trial also demonstrated that low-dose cyclosporine is not as effective as low-dose tacrolimus for rejection prevention [7]. As a result of these trials, the KDIGO Clinical Practice Guidelines suggest that tacrolimus should be the preferred calcineurin inhibitor for renal transplant recipients (level of recommendation 2A) [8].
Metronidazole, a 5-nitroimidazole compound, is effective against protozoal and anaerobic bacterial infections. It is widely used to treat gastroenteritis suspected to have an infectious origin. Metronidazole is associated with gastrointestinal (nausea, vomiting, metallic taste in the mouth, etc.) and neurological (headache, optic neuritis, peripheral neuropathy, etc.) adverse events when it is used for a long period or administered at high doses [9].
Metronidazole is metabolized in the liver via the CYP450 enzyme system, but the specific isoforms involved in the drug's metabolism are not known. The CYP2C9-inhibiting effect of metronidazole is well described in the literature; however, although there are some case reports describing increased serum levels of CYP3A substrates during metronidazole use, inhibition of CYP3A enzymes by metronidazole has not yet been demonstrated by basic or clinical research. Interactions between CYP3A substrates and metronidazole must therefore involve mechanisms other than direct inhibition of the enzyme system by metronidazole [10].
Conclusion
To conclude, as occurred in our case, clinicians should be cautious when administering metronidazole to a kidney transplant recipient who is under treatment with a calcineurin inhibitor-based immunosuppressive regimen, and should monitor the drug level and kidney function closely.
"year": 2017,
"sha1": "86a91781da4224c52896e1ddf7cfac437ae3089b",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2165-7920.1000953",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ffdf821cc94e720616c8e9696783097ab2d12eb0",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Fluctuating Nature of Light-Enhanced $d$-Wave Superconductivity: A Time-Dependent Variational Non-Gaussian Exact Diagonalization Study
Engineering quantum phases using light is a novel route to designing functional materials, where light-induced superconductivity is a successful example. Although this phenomenon has been realized experimentally, especially for the high-$T_c$ cuprates, the underlying mechanism remains mysterious. Using the recently developed variational non-Gaussian exact diagonalization method, we investigate a particular type of photoenhanced superconductivity by suppressing a competing charge order in a strongly correlated electron-electron and electron-phonon system. We find that the $d$-wave superconductivity pairing correlation can be enhanced by a pulsed laser, consistent with recent experiments based on gap characterizations. However, we also find that the pairing correlation length is heavily suppressed by the pump pulse, indicating that light-enhanced superconductivity may be of fluctuating nature. Our findings also imply a general behavior of nonequilibrium states with competing orders, beyond the description of a mean-field framework.
I. INTRODUCTION
Understanding, controlling, and designing functional quantum phases are major goals and challenges in modern condensed matter physics [1,2]. Among the few successful examples, light-induced superconductivity, above the original transition temperature, has been a great surprise and is believed to be a promising route to room-temperature superconductors. Although this novel phenomenon has been realized experimentally in various materials including cuprates, fullerides, and organic salts [3-7], its mechanism remains controversial. The underlying physics is particularly intriguing, yet mysterious, for the enhanced d-wave superconductivity observed in pumped charge-ordered high-T_c cuprates La1.675Eu0.2Sr0.125CuO4 and La2−xBaxCuO4 near 1/8 doping [8-14], partially due to their originally high critical temperatures at equilibrium. As illustrated in Fig. 1(a), the occurrence of Cooper pairs above T_c (reflected by the Josephson plasma resonance in experiments), as a signature of light-induced superconductivity, is observed when these materials are stimulated by a near-infrared pulsed laser.
Motivated by the experiments, various theories and numerical simulations have been put forward to explain the observed light-induced phenomena. In the context of conventional BCS superconductivity, simulations with both mean-field and many-body models have demonstrated the feasibility of manipulating and enhancing local Cooper pairs (i.e., s-wave superconductivity) [15-24]. The understanding becomes more challenging for the unconventional d-wave superconductivity in cuprates, due to the two-dimensional geometry and the strongly correlated nature. Insightful theoretical perspectives have been proposed in the context of phenomenological or steady-state theory, including the suppression of a competing charge order [25], Floquet engineering of the Fermi surface [26], and (for a terahertz pump) parametric amplification [27]. In contrast to the studies of s-wave superconductivity, rigorous nonequilibrium simulations of the nonlocal d-wave pairing instability in microscopic models have been limited to undoped systems with truncated phonon modes [28], distinct from the conditions of existing cuprate-based experiments. The extension to a doped system has been hindered by the difficulty of treating both strong electronic correlation and electron-phonon coupling in a quantum many-body simulation. In particular, the spatial fluctuations of phonons and bosonic excitations in 1/8-doped cuprates are expected to be important due to the lack of a nesting momentum. Therefore, a microscopic theory for light-induced superconductivity in a doped cuprate remains an open question.
Moreover, recent experiments have revealed the presence of fluctuating superconductivity above the transition temperature T_c in overdoped cuprates and FeSe [29-32]. In this regime, the superconducting gap remains open, but long-range order and zero resistance are absent due to the reduction of the correlation length. Recent ultrafast experiments also showed that a pump pulse can destroy the coherence of Cooper pairs and render an existing superconductor fluctuating [33-35]. Thus, the resonance and gap in the transient optical conductivity may cause a misassignment of long-range superconductivity [36-38]. Both observations raise the necessity of further investigating the coherence of superconductivity induced by light in a quantum many-body model.
For this purpose, we study the photoinduced dynamics and superconductivity in a light-driven Hubbard-Holstein model relevant to cuprates. To overcome the numerical difficulties of simulating many-body dynamics with strong electronic correlations and electron-phonon coupling, we build on the recently developed variational non-Gaussian exact diagonalization (NGSED) method and develop its time-dependent extension (see Sec. III). This hybrid method leverages the merits of both the numerical many-body solver and the variational non-Gaussian solver: the former is necessary to unbiasedly tackle strong electronic correlations, while the latter avoids the phonon's unbounded Hilbert-space problem and has been demonstrated to be efficient in describing the ground and excited states of systems with Fröhlich-type electron-phonon coupling [39-42]. As an extension of the equilibrium NGSED [43], this time-dependent method provides an accurate description of far-from-equilibrium states through the Krylov-subspace method and the Kählerization of the solvers. These advances allow the simulation of light-induced dynamics in quantum materials with both electronic correlations and electron-phonon couplings.
As summarized in Fig. 1(b), our results suggest that the d-wave pairing correlation can be dramatically enhanced by a pulsed laser when the system is charge dominant and close to a phase boundary, consistent with optical experiments. However, we also find that the pairing correlation length is heavily suppressed by the pump pulse. Therefore, light-enhanced d-wave superconductivity may be of fluctuating nature. In addition to existing ultrafast reflectivity measurements, our theoretical findings also predict various observations verifiable by future transport and photoemission experiments on pumped La1.675Eu0.2Sr0.125CuO4 and La2−xBaxCuO4.
The organization of this paper is as follows. We introduce our microscopic model for the cuprate system and light-matter interaction in Sec. II. Next, we briefly introduce the assumptions and framework of the timedependent NGSED method in Sec. III, while the detailed derivations and benchmarks are shown in Appendix A and B. The main results about light-enhanced d-wave superconductivity and the fluctuating nature are presented in Secs. IV and V. We then discuss the frequency dependence of these observations in Sec. VI. Finally, we conclude our paper and discuss relevant experimental predictions in Sec. VII.
II. PROTOTYPICAL MODELS
Unconventional superconductivity is believed to emerge from the intertwined orders of strongly correlated systems, where both spin and charge instabilities exist [44,45]. Spin fluctuation, induced by electron correlation, is widely believed to be a viable candidate for providing the pairing glue for d-wave superconductivity [46-49]. The minimal model to represent this strong correlation is the single-band Hubbard model [50]. Based on rigorous numerical simulations, many important experimental discoveries have been reproduced using this model, such as antiferromagnetism [51], stripe phases [52-55], strange metallicity [56-58], and superconductivity [59-61]. However, increasing experimental evidence reveals the significant role of phonons in high-T_c superconductors [62-66], in addition to the strong electronic correlation. Together with the crucial lattice effects observed in pump-probe experiments [3-5], we believe that the minimal model to describe light-induced d-wave superconductivity must involve both interactions.
The Hubbard-Holstein model is the prototypical model for describing correlated quantum materials with both electron-electron interaction and electron-phonon coupling [67,68]. Its Hamiltonian is written as

H = −t_h Σ_{⟨ij⟩,σ} (c†_{iσ} c_{jσ} + H.c.) + U Σ_i n_{i↑} n_{i↓} + g Σ_{i,σ} n_{iσ} (a_i + a†_i) + ω_0 Σ_i a†_i a_i .  (1)

Here, c_{iσ} (c†_{iσ}) annihilates (creates) an electron at site i with spin σ, and a_i (a†_i) annihilates (creates) a phonon at site i. To reduce the number of model parameters, we restrict the hopping integral t_h to the nearest neighbor, the electron-electron interaction to the on-site Coulomb (Hubbard) repulsion U, the electron-phonon coupling to the on-site electrostatic (Holstein) coupling g, and the phonon energy to the dispersionless ω_0. The electron-electron interaction and electron-phonon coupling can be characterized by the dimensionless parameters u = U/t_h and λ = g²/ω_0 t_h, respectively. Here, we set ω_0 = t_h in accord with common choices [69-74]. Throughout the paper, we focus on 12.5% hole doping simulated on an N = 4 × 4 cluster, corresponding to the La1.675Eu0.2Sr0.125CuO4 and La1.875Ba0.125CuO4 experiments [8-14,75].
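For orientation (our own arithmetic, not a statement from the original), the dimensionless coupling translates into a bare coupling strength via

λ = g²/(ω_0 t_h)  ⟹  g = √(λ ω_0 t_h) = √λ t_h  (for ω_0 = t_h),

so the value λ = 4 used below corresponds to g = 2t_h, i.e., an electron-phonon coupling twice the hopping scale.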
In an undoped system with a well-defined nesting momentum, one can restrict the phonons to only the q = (π, π) mode [28,73].
However, in a doped Hubbard-Holstein model without commensurability, phonon modes at all momenta should be considered. As shown in Ref. [43], the phase diagram of a doped Hubbard-Holstein model is dominated by a regime with strong charge susceptibility and a regime with strong spin susceptibility, although neither is ordered, unlike the half-filled case. Unlike the striped order demonstrated in the Hubbard model in the thermodynamic limit [52-55], our system size does not support such a period-8 instability. Therefore, to increase the charge correlation and mimic the charge-dominant cuprate system, we exploit relatively strong electron-phonon couplings.
The light-driven physics is described (on the microscopic level) through the Peierls substitution

t_h c†_{iσ} c_{jσ} → t_h e^{iA(t)·(r_i − r_j)} c†_{iσ} c_{jσ} ,  (2)

where the vector potential A(t) of the external light pulse enters the many-body Hamiltonian Eq. (1). In this paper, we simulate the pump pulse with an oscillatory Gaussian vector potential

A(t) = A_0 e^{−t²/2σ_t²} cos(Ωt) ê_pol ,  (3)

and fix the polarization along the diagonal, ê_pol = (x̂ + ŷ)/√2, and the pump frequency as Ω = 4t_h (close to an 800-nm laser for t_h = 350 meV). In a strongly correlated model like the Hubbard model or the extended Hubbard model, the ultrafast pump pulse couples to the electrons and may manipulate the delicate balance between different competing orders [76-79]. When a finite-frequency phonon is involved, the photomanipulated competition of phases also relies on the retardation effect of the phonons.
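As a quick consistency check (our own arithmetic), the chosen pump frequency indeed lies in the near infrared:

ħΩ = 4t_h = 4 × 0.35 eV = 1.4 eV ,  λ_pump = hc/ħΩ ≈ 1240 eV·nm / 1.4 eV ≈ 886 nm ,

close to the 1.55 eV (800 nm) pulses used in the cuprate experiments.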
Throughout this paper, we restrict ourselves to the two-dimensional Hubbard-Holstein model and transient equal-time correlation functions. The relation between the in-plane Cooper pairs and the c-axis Josephson plasmon resonance has been studied through semiclassical simulations [80-82]. Our simulation does not consider specific experimental conditions, including the aforementioned probe schemes, material-specific matrix elements, and finite temperature; nevertheless, it helps explain the observed phenomena in general. Therefore, our correlation-function analysis in Secs. IV-VI aims to address a matter-of-principle question: what happens when the balance between a charge-density wave (CDW) and superconductivity in a strongly correlated material is altered by a pulsed laser.
III. TIME-DEPENDENT VARIATIONAL NON-GAUSSIAN EXACT DIAGONALIZATION METHOD
The simulation of nonequilibrium quantum many-body systems requires either Green's-function or wavefunction methods. The Green's-function methods, constructed on the Keldysh formalism and represented by dynamical mean-field theory [83,84], have achieved great success in solving correlated materials and pump-probe spectroscopies in the thermodynamic limit [85,86]. However, when multiple instabilities compete in systems with strong electronic correlations and electron-phonon couplings, accuracy cannot be guaranteed by perturbative treatments, which may lead to a biased solution. On the other hand, wavefunction methods, such as exact diagonalization (ED) and the density-matrix renormalization group, have well-controlled numerical errors, but are restricted to small systems or low dimensions [87,88]. This issue becomes even more severe when the electron-phonon coupling is non-negligible, due to the unbounded phonon Hilbert space [89,90].
To tackle the strong and dynamical electron-phonon coupling and overcome the issue of the phonon Hilbert space, we develop a time-dependent extension of the NGSED method. The idea of this method is based on the following observations for electrons and phonons, respectively. Electrons have a complicated form of interactions (four-fermion terms) and intertwined instabilities, and thereby have to be treated by an accurate solver. However, with the Pauli exclusion principle, the electronic Hilbert-space dimension is relatively small, which allows an exact solution for finite-size or low-dimensional systems. In contrast, the phonon-phonon interaction (anharmonicity) is usually weak and the electron-phonon coupling has a linear form, with the complexity coming instead from the unbounded local phonon Hilbert space. Therefore, if one can find an entangler transformation involving a general form of entanglement between electrons and phonons, the solution of the correlated Hamiltonian can be mapped (after the transformation) to a factorizable wavefunction, a product state consisting of electrons and phonons separately. Using this trick, we can take advantage of the distinct properties mentioned above and solve the two subsystems using different techniques.
In equilibrium, such an efficient entangler transformation has been found and benchmarked, in terms of the generalized polaron transformation [39]. This leads to the wavefunction ansatz [43]

|Ψ⟩ = U_plrn |ψ_e⟩ ⊗ |ψ_ph⟩ ,  U_plrn = exp[ −(i/√N) Σ_q λ_q ρ_q p_{−q} ] ,  (3)

with the electron density operator ρ_q = Σ_{iσ} n_{iσ} e^{−iq·r_i} and the phonon momentum operator p_q = i(a†_{−q} − a_q)/√2, where N is the number of lattice sites in the calculation. Here, the right-hand side is a direct product of electron and phonon states: the electronic wavefunction |ψ_e⟩ is treated as a full many-body state, while the phonon wavefunction |ψ_ph⟩ is treated as a coherent Gaussian state (see Appendix A for details). The entangler transformation involves momentum-dependent variational parameters λ_q, describing the polaronic dressing. Physically, a larger dressing λ_q ∼ g/ω_0 accurately describes the phonon and coupling energies, known as the Lang-Firsov transformation [91], while a smaller dressing allows a precise solution for the electron energy. Thus, the variational parameters λ_q are optimized numerically as a balance between these two effects. Note that all λ_q are independent real numbers and entangle the phonon momentum with the electron density. This entangler transformation has been demonstrated to be sufficient in equilibrium [39,42]. For systems with both strong electron-phonon coupling and electronic interactions, the electronic part of the wavefunction |ψ_e⟩ can be solved by ED, and the above framework becomes the NGSED method. The application of NGSED to the equilibrium 2D Hubbard-Holstein model successfully reveals novel intermediate phases with superconducting instability [43,92,93], consistent with recent DQMC-AFQMC simulations in the thermodynamic limit [94].
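To make the role of λ_q concrete, consider the fully dressed limit λ_q ≡ g/ω_0, where U_plrn reduces to the standard Lang-Firsov transformation. The following textbook algebra (our own illustration, with S = (g/ω_0) Σ_{iσ} n_{iσ}(a†_i − a_i)) shows how the coupling term of Eq. (1) is eliminated at the price of dressing the hopping:

e^S H e^{−S} = −t_h Σ_{⟨ij⟩σ} (c†_{iσ} c_{jσ} X†_i X_j + H.c.) + (U − 2g²/ω_0) Σ_i n_{i↑} n_{i↓} − (g²/ω_0) Σ_{iσ} n_{iσ} + ω_0 Σ_i a†_i a_i ,

with X_i = e^{(g/ω_0)(a_i − a†_i)}. Full dressing thus captures the phonon and coupling energies exactly, but exponentially suppresses the effective hopping; the optimal λ_q interpolates between this limit and the undressed one, which treats the electronic kinetic energy best.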
In the real-time evolution, the electron density is driven by the pump laser, the consequence of which is a varying force acting on the lattice and a finite phonon momentum. To characterize the electron-density-dependent phonon momentum, we introduce an extra cubic coupling between the phonon position and the electron density in the non-Gaussian transformation,

U_NGS = exp[ −(i/√N) Σ_q λ_q ρ_q ( p_{−q} cos ϕ_q + x_{−q} sin ϕ_q ) ] ,  (4)

to construct the variational ansatz

|Ψ(t)⟩ = U_NGS(t) |ψ_e(t)⟩ ⊗ |ψ_ph(t)⟩ .  (5)

Here, the additional phase parameter ϕ_q controls the ratio of position and momentum displacements. The time dependence of λ_q and ϕ_q allows dynamical fluctuation of the polaronic dressing effect. At the same time, these two sets of parameters, as a whole, disentangle the electrons and phonons with the Fröhlich-type coupling and allow a relatively accurate description of the transformed wavefunction via the direct-product state on the rightmost side of Eq. (5). The price one pays for the transformation is the complication of the effective electronic Hamiltonian [see Eq. (A9) in Appendix A], which is solved by time-dependent ED with well-controlled numerical errors [see discussions in Appendix B]. As a minimal extension of the equilibrium wavefunction ansatz Eq. (3), the structure of Eq. (4) involves an implicit Kählerization, which guarantees the minimization of errors throughout the evolution (see discussions in Appendix A). In each numerically small step of the time evolution, i.e., |Ψ(t)⟩ → |Ψ(t + δt)⟩, we simultaneously evolve both the variational parameters and the full electronic wavefunction |ψ_e(t)⟩, as explained in Appendixes A and B, respectively.
IV. LIGHT-ENHANCED PAIRING CORRELATIONS
Figure 2(a) shows the square of the local moment, m²_z = Σ_i ⟨(n_{i↑} − n_{i↓})²⟩/N, calculated for different u and λ of the 12.5% doped Hubbard-Holstein model, where N is the system size. m²_z provides an indication of the overall spin moment and spin correlation, and an abrupt change of m²_z with varying u and/or λ can signal a phase transition. In the large-u regime, the system is dominated by the spin correlations inherited from the Hubbard model. In this regime, d-wave superconductivity is identified in a quasi-1D system [61,95], although in the 2D thermodynamic limit its existence is not fully established [96]. In contrast, in the large-λ regime, the electrons are bound with phonons as bipolarons, exhibiting a local moment lower than that of noninteracting electrons. This is also reflected in the average double occupation n_d = Σ_i ⟨n_{i↑} n_{i↓}⟩/N presented in Fig. 2(b). There is an intermediate regime between these two limits, where the charge correlation slightly dominates (reflected by the comparison to the noninteracting-electron local moment m²_z|_0), while the spin correlations remain finite. Based on recent studies using NGSED and QMC methods, this intermediate regime persists in the thermodynamic limit and reflects the realistic phases in a competing-order system [43,94]. We focus on this intermediate regime, which we believe corresponds to the situation in the cuprate experiments [8-14,28].
We select two sets of model parameters inside this intermediate regime. Set 1 (u = 6.6 and λ = 4) lies at the center of the intermediate regime, while Set 2 (u = 7 and λ = 4) lies close to the boundary. We first focus on Set 1 and examine the light-induced dynamics of the charge structure factor N(q) = ⟨ρ_{−q} ρ_q⟩/N and the spin structure factor S(q) = ⟨ρ^s_{−q} ρ^s_q⟩/N, where ρ^s_q = Σ_i (n_{i↑} − n_{i↓}) e^{−iq·r_i}. We focus on the momentum q = (π, π), due to the important role of spin fluctuations at this momentum for superconductivity; data at other momenta are presented in Appendix C.

[Figure 2 caption, panels (c1)-(e2): evolution of the charge correlation N(π, π) after a linear pump of various strengths A_0 ranging from 0.1 to 1 (denoted by different colors) with λ = 4 and (c1) u = 6.6 and (c2) u = 7, respectively; (d1)-(d2) the same for the spin correlation S(π, π) and (e1)-(e2) for the d-wave pairing susceptibility P^(d). The pump pulse is sketched above.]

Figures 2(c1) and 2(d1) show the pump dynamics of N(π, π) and S(π, π) for Set 1. As in the half-filled case [28], the pump pulse suppresses the charge correlations and enhances spin fluctuations, which possibly serve as the pairing glue, at short times (t < 0). This leads to the rise of the d-wave superconductivity instability [see Fig. 2(e1)], characterized by the pairing susceptibility P^(d) = ⟨Δ_d† Δ_d⟩, where the d-wave pairing operator reads

Δ_d = (1/√N) Σ_k (cos k_x − cos k_y) c_{−k↓} c_{k↑} .

As Set 1 is relatively far from the spin-dominant regime, the spin fluctuation is unstable, manifest as the drop of S(π, π) for strong pumps (A_0 > 0.5) at longer timescales. As a consequence, the d-wave pairing susceptibility stops increasing and starts to decrease for these strong pump conditions. This nonmonotonic behavior reflects that the transient d-wave pairing emerges from the delicate balance among charge, spin, and electronic itinerancy. While the latter two can be induced by light and enhance the d-wave pairing, their internal competition may reduce the enhancement if the pump strength keeps increasing. Noticeably, when the nonequilibrium state finally melts into such a "metallic" state, lattice distortions are released. In these cases, the dynamics exhibit a period ∼ 2π/ω_0. This is reflected in the corresponding Fourier spectra (see also the Appendixes).
The situation becomes different if one drives the system with Set 2 (u = 7 and λ = 4), where the system resides in proximity to a phase boundary. Because of the existing strong magnetic fluctuations, we find that the d-wave pairing correlation is easier to enhance. As shown in Figs. 2(c2)-2(e2), the pump pulse increasingly enhances S(π, π) and P^(d) after suppressing the charge fluctuations. Although the enhancement of S(π, π) saturates at large pump strengths, the spin fluctuations are more robust against the increase of the pump, due to the proximity to a quantum phase boundary; therefore, they never drop below the equilibrium values. This change of spin fluctuations causes the saturation of the d-wave pairing correlations in Fig. 2(e2). Similarly, this enhanced d-wave pairing concurs with a melting of the competing CDW order, which leads to an oscillation of the phonon energy. Since the CDW order is already very weak near the phase boundary, the intensity of the amplitude mode is accordingly weak.

The increase of the pairing correlation reflects the formation of Cooper pairs induced by the pump pulse. The experimental reflection of this correlation is the opening of a gap (or Josephson plasma) characterized by ultrafast optical reflectivity [8-14]. However, we emphasize that the opening of a gap does not necessarily reflect the onset of superconductivity. As recently shown in cuprate and FeSe experiments, above T_c there exists a fluctuating superconductivity phase that exhibits a superconducting gap but also a finite resistance, reflecting preformed Cooper pairs [29-32]. This is a signature of strong correlations in quantum materials distinct from mean-field notions, arising from strong thermal or quantum fluctuations. For the nonequilibrium superconductivity in this paper, we focus on the zero-temperature dynamics, and fluctuations may arise from quantum instead of thermal origins.
V. FLUCTUATING NATURE OF LIGHT-ENHANCED SUPERCONDUCTIVITY

To better clarify the nature of this photoinduced many-body state, we further consider the spatial fluctuation through the FFLO pairing correlation at finite momentum [97,98],

P^(d)_k = ⟨Δ_d†(k) Δ_d(k)⟩ ,

where Δ_d(k) generalizes Δ_d to Cooper pairs with center-of-mass momentum k. For k = 0, it recovers the BCS pairing correlation, reflecting the total number of Cooper pairs. The correlation length ξ can be estimated through

ξ ≈ (1/|k|) √( P^(d)_{k=0} / P^(d)_k − 1 ) ,

based on the definition of the correlation length satisfying Δ(r) ∼ e^{−|r|/ξ}. Figures 3(a) and 3(b) present the evolution of ξ, estimated using the smallest momentum accessible in our cluster, k = (π/2, 0), and normalized by the equilibrium correlation length ξ_0. For the u = 6.6 system, where the charge order is robust [Fig. 3(a)], the correlation length is dramatically suppressed for A_0 < 0.5, where P^(d) is enhanced in Fig. 2(e1). When the pump strength increases to A_0 ≥ 0.5, ξ even drops to zero at the center of the pump, indicating the destruction of local Cooper pairs, consistent with the drop of P^(d) in Fig. 2(e1). This overall drop of the correlation length can be visualized in the real-space distribution of Δ shown in Fig. 3(c): the Cooper pairs become strongly localized and spatially decoherent after the pump. What is more interesting is the system near the phase boundary (u = 7), where the BCS pairing correlation P^(d) always increases after the pump, as discussed in Fig. 2(e2). However, this enhancement of the total Cooper pairs is always accompanied by a drop of the correlation length, as shown in Fig. 3(b). This means that the photoinduced Cooper pairing is more fluctuating than the equilibrium one. Different from the u = 6.6 system, here the decrease of ξ saturates at a moderate value, about half of the equilibrium ξ_0. To filter out the oscillatory dynamics, we extract the average enhancement of the pairing correlation P^(d)(t)/P^(d)(−∞) and the suppression of ξ(t)/ξ(−∞) during the time window 10 t_h⁻¹ < t < 30 t_h⁻¹, long after the pump pulse disappears. As already shown in Fig. 1(b), the relative changes (suppression or enhancement) in these two quantities are comparable for all pump strengths. Considering that long-range correlations asymptotically approach Δ_{ij} ∼ P^(d) e^{−|r_i−r_j|/ξ}, our simulation reflects an enhancement of short-range superconductivity but a suppression of the long-range one [99].
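The estimator for ξ used above follows from assuming an Ornstein-Zernike (Lorentzian) form for the finite-momentum pairing correlation, which is the Fourier transform of an exponentially decaying real-space correlation (a standard assumption, stated here as our own reasoning step):

P^(d)_k ∝ 1/(ξ⁻² + |k|²)  ⟹  P^(d)_{k=0} / P^(d)_k = 1 + ξ²|k|²  ⟹  ξ = (1/|k|) √( P^(d)_{k=0} / P^(d)_k − 1 ) .

With the smallest cluster momentum k = (π/2, 0), this yields ξ in units of the lattice constant, so ratios such as ξ/ξ_0 are insensitive to the proportionality constant.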
The difference between a short-range enhancement and a long-range suppression can be visualized via the spatial distribution obtained by the Fourier transform of P^(d)_k. As shown in Fig. 3(c), the local pairing correlation increases and the range where the correlation is visible remains finite after the pump. Both facts suggest that this type of short-range superconductivity is visible in pump-probe spectroscopies, albeit with a finite broadening. Thus, the fluctuating nature of light-enhanced superconductivity does not conflict with existing optical experiments in La1.675Eu0.2Sr0.125CuO4 and La1.875Ba0.125CuO4 [8-14]. Such a suppression of the coherence is associated with the nonlocal nature of d-wave Cooper pairs, in contrast to the light-enhanced strength and coherence of local pairing [16,17,22,23]. This anticorrelation between strength and coherence is also distinct from previously studied light-induced competing orders [76-79], possibly resulting from the fact that superconductivity arises as a third instability emergent from the competition.
VI. FREQUENCY DEPENDENCE
We further investigate the impact of the pump frequency on superconductivity and the competing orders. With a fixed pump amplitude A_0 = 0.8, Figs. 4(a)-4(c) present the dynamics of the various quantities discussed above for different frequencies.
As Ω increases from t_h (350 meV) to 10t_h (3.5 eV), the suppression of the charge correlations becomes weaker. This can be attributed to the loss of resonance with the phonon dynamics, which plays the crucial role in the melting of the charge order. Interestingly, these suppressions of the CDW do not always translate into a monotonic increase of the d-wave superconductivity. As shown in Fig. 4(b), the low-frequency pump with Ω = 2t_h leads to a slightly weaker enhancement and a dramatic decrease of the correlation length ξ. The transient correlation length is suppressed to zero at the center of the pump [see Fig. 4(c)], indicating the loss of superconductivity. We tentatively attribute this anomaly to the fact that the parametric driving condition Ω ∼ 2ω_0 is reached [18,100], leading to strong fluctuations of the phonons and overwhelming the d-wave pairing [74,101].
Except for the frequencies corresponding to parametric driving, the enhancement of the d-wave pairing susceptibility drops monotonically with the pump frequency, consistent with the melting of the charge correlations. We stress that such a phenomenon reflects dynamics driven by light-induced quantum fluctuations instead of a thermal effect, as the fluence increases (instead of decreases) with the pump frequency. Although some nonequilibrium effects are simply attributed to the sudden heating of the electronic states caused by the pump pulse, this is not the situation we find for the light-enhanced d-wave superconductivity. A more rigorous discrimination of thermal effects relies on the calculation of pump-probe spectroscopies and a fit with the thermal distribution function [35,102-105], which is beyond the scope of this paper. Figures 4(d) and 4(e) summarize the frequency dependence of the pairing susceptibility and the correlation length, averaged over the time window t = 10-30 t_h⁻¹ after the pump, similar to the pump-strength dependence shown in Fig. 1. Except for the situation close to the phonon parametric resonance, we find that the maximal superconductivity enhancement is reached at approximately 1.4 eV and rapidly decreases for higher frequencies. As this frequency is close to the Mott-gap resonance (approximately 4t_h ∼ 1.4 eV) for U = 8t_h, the enhancement can be attributed to the generation of spin fluctuations across the Mott gap. This observation is consistent with the wavelength-dependence study in LBCO, where an 800-nm laser was found to enhance superconductivity more efficiently than a 400-nm laser [11]. Although other mechanisms beyond the single-band model have been proposed to address this wavelength dependence, such consistency strengthens the connection between our microscopic simulation and the real experiments.
VII. SUMMARY AND OUTLOOK
Altogether, our study provides a novel perspective for interpreting the light-induced d-wave pairing in the context of strong correlation and electron-phonon coupling. Although it does not rule out other interpretations, our result suggests that the light-induced Cooper pairs may be spatially local. Therefore, the Josephson plasma resonance and the optical gap may coexist with a small Drude weight, which reflects the superfluid density and cannot be directly resolved in ultrafast reflectivity experiments. Fortunately, this mystery was recently addressed by a state-of-the-art ultrafast transport measurement in another type of light-induced superconductor (K3C60) [106]. Such a transport measurement directly provides the transient resistivity of the material, which reflects the superconducting long-range order. Therefore, a similar transport measurement in La1.875Ba0.125CuO4 would help to distinguish fluctuating from ordered d-wave superconductivity.
In addition to optical and transport properties, the single-particle electronic structure measurable by photoemission is frequently employed to characterize d-wave superconductors. In the single-particle context, the equilibrium fluctuating superconductivity translates into a (single-particle) gap opening above the transition temperature T_c [30-32], while the distinction from long-range ordering may hide in the quantitative spectral shape (e.g., the quasiparticle width and the temperature dependence of the Fermi-surface spectral weight) and is still an ongoing research topic. Identifying this fluctuation would be even harder, but also promising, for a nonequilibrium state measured by time-resolved angle-resolved photoemission spectroscopy (ARPES). The findings of our simulations indicate that light-driven La1.875Ba0.125CuO4 should exhibit a (single-particle) superconducting gap, but its quasiparticle peak should be damped by the fluctuations. Complementary to regular ARPES, future developments in two-electron ARPES measurements [107,108], at ultrafast timescales, are promising directions to distinguish long-range superconductivity from a fluctuating one.
Alternatively, a recent attempt has investigated the inverse process by measuring the coherence of CDW states using x-ray scattering [109]. Light-induced CDW order has also been observed recently in LaTe3, a system with two competing CDW orders [110]. Here, we investigate specifically light-driven cuprates, but appropriate modifications [111,112] of our study and microscopic model can provide a general platform to examine light-driven fluctuations in intrinsically competing orders, as pointed out in a recent phenomenological study [38]. The fluctuating nature of the enhanced quantum phase suggests that the nonequilibrium states of competing-order systems are beyond the description of a mean-field framework.
To tackle the dynamics of systems with strong electronic correlation and electron-phonon coupling, we develop a new technique by generalizing the variational non-Gaussian exact diagonalization framework to out-of-equilibrium dynamics.
This method overcomes the issue of an unbounded phonon Hilbert space, while keeping numerical accuracy through the exact diagonalization of the (transformed) electronic problem. Although we restrict the application to the Hubbard-Holstein model relevant for cuprate d-wave superconductivity, our derivation is rather general and can be applied to dispersive phonons, extended electronic interactions, and more complicated electron-phonon couplings. The formalism can also be applied to multiple phonon branches as long as the phonon-phonon interaction can be ignored, but the generalization to multiband electrons is restricted by the Hilbert-space size in exact diagonalization. Future applications of this method also include the evolution of pump-probe spectroscopies [102,104,113-115].

To obtain the equations of motion for the variational parameters in Eq. (5), where the phonon state |ψ_ph⟩ is a Gaussian state, we notice that the generalization from U_plrn to U_NGS involves a rotation of the phonon operators in phase space. Therefore, U_NGS can be expressed as a composite transformation, leading to an equivalent form of Eq. (5), in which the rotation is reflected in the unitary (Gaussian) transformation U_rot and η_q is a variational matrix. In the derivation of the variational equations of motion, it is usually convenient to employ the linearized form S̃_q to parametrize the rotational transformation, i.e., U_rot† R_q U_rot = S̃_q R_q. Thus, ignoring the redundant degrees of freedom, which can be absorbed into |ψ_ph⟩, we obtain the following relation between φ_q and S̃_q:

$$\tilde{S}_q(t) = e^{i\sigma_y \eta_q} = \begin{pmatrix} \cos\varphi_q(t) & -\sin\varphi_q(t) \\ \sin\varphi_q(t) & \cos\varphi_q(t) \end{pmatrix} \qquad (A4)$$

In the above wavefunction prototype, Δ_R (a vector), ξ_q (a matrix), and λ_q, φ_q (scalars) are variational parameters, which are allowed to evolve during the dynamics.
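As a numerical sanity check of Eq. (A4) — an addition of ours, not part of the original derivation — one can verify that the matrix exponential of iσ_y times an angle is a real, orthogonal (pure rotation) matrix. A minimal NumPy/SciPy sketch, with an arbitrary test angle:

```python
import numpy as np
from scipy.linalg import expm

sigma_y = np.array([[0.0, -1.0j], [1.0j, 0.0]])  # Pauli matrix sigma_y
phi = 0.3  # arbitrary test angle (stands in for the variational eta_q)

S = expm(1j * sigma_y * phi)  # the transformation e^{i sigma_y phi}

# S is real and orthogonal, i.e., a rotation in phase space; the sign of the
# rotation angle depends on the convention relating eta_q and varphi_q
assert np.allclose(S.imag, 0.0)
assert np.allclose(S.real @ S.real.T, np.eye(2))
print(S.real)
```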
To evaluate the nonequilibrium dynamics within the hybrid wavefunction ansatz, we project the Schrödinger equation i∂_t|Ψ(t)⟩ = H(t)|Ψ(t)⟩ onto the tangential space of the variational manifold, which gives equations of motion for both the variational parameters and the electronic wavefunction |ψ_e(t)⟩. In contrast to an energy minimization (imaginary-time evolution), the algebra of this projection in a real-time evolution may be ill-defined. For the general time-dependent variational principle, the real and imaginary parts of the projection of the Schrödinger equation onto the tangential space,

$$\langle \xi_\alpha | \left( i\partial_t - H \right) | \Psi(t) \rangle = 0, \qquad (A5)$$

with |ξ_α⟩ denoting any tangential vector, may give rise to two sets of incompatible equations of motion, corresponding to the Dirac-Frenkel and McLachlan variational principles, respectively [116]. If and only if the variational manifold is a Kähler manifold do the two principles result in the same equations of motion. By designing the wavefunction ansatz in Eqs. (5) and (4), we Kählerize the variational manifold such that the two principles are compatible and the derivations below are self-consistent. Physically, the Kählerity guarantees that the variational principle Eq. (A5) minimizes the error with respect to the true time-dependent wavefunction.
Taking advantage of the composite representation of U_NGS, it is convenient to define and solve the Schrödinger equation in a rotating frame, i∂_t|Ψ′(t)⟩ = H̃|Ψ′(t)⟩, where the rotated Hamiltonian H̃ takes the standard form H̃ = U†HU − iU†(∂_t U) for the frame-defining unitary U. Here, we denote the general electron-electron interaction terms as H_e-e and assume that they commute with the electronic density operator n_i, which is satisfied in the Hubbard-Holstein model, Eq. (1). This assumption substantially simplifies the derivation and leads to Eq. (A9) below.
On the left-hand side of the rotated Schrödinger equation we obtain Eq. (A8). Here, S_q = e^{iσ_y ξ_q} is the linearized transformation matrix of U_GS, and the effective Hamiltonian, Eq. (A9), is obtained for the product state |ψ_ph⟩ ⊗ |ψ_e⟩. Here, α takes the x and y directions, while δ_α denotes the vector pointing to the nearest neighbors along the corresponding direction. After transforming the Hamiltonian to the basis of product states, the electron-phonon dressing effects are reflected in both the interaction and the kinetic energy. For the effective electronic interaction, the dressing of phonons mediates an additional V_q on top of the original H_e-e. (Note that we employed the assumption that H_e-e commutes with n_i and, therefore, with U_plrn.) For the kinetic energy, the phonon-dressed effective hopping term can be reformulated in a closed form when taking the expectation value with respect to the phonon Gaussian state |ψ_ph⟩.

Now, with both the time derivative and the effective Hamiltonian transformed into the product-state basis, we can project Eqs. (A8) and (A9) onto the various tangential vectors sequentially. For the zeroth-order tangential vector, which is proportional to the phonon vacuum state |0⟩_ph, we obtain the equation of motion for the electronic state. The (phonon-traced) H_eff, together with the scalar terms in the square brackets, governs the time evolution of the electronic wavefunction |ψ_e⟩. In the NGSED framework, we treat |ψ_e⟩ as a full many-body state and evaluate this evolution stepwise by the Krylov-subspace method (see Appendix B). Moreover, the equations of motion for (the variational parameters of) |ψ_ph⟩ can be obtained by projecting Eqs. (A8) and (A9) onto the first-order tangential vector (R_0^T S_0^T|0⟩_ph ⊗ |ψ_e⟩) and the second-order tangential vector (R_q† R_q|0⟩_ph ⊗ |ψ_e⟩); after some algebra, these projections yield Eqs. (A13) and (A14), respectively. Here, ρ_f = N_e/N is the average electron density, and the renormalized phonon energy matrix enters these equations. As we are interested in equal-time measurements (see Sec. IV), the gauge degrees of freedom are redundant. Therefore, we employ the gauge-invariant covariance matrix Γ_q = S_q S_q† and track the evolution of Γ_q instead of S_q, rewriting Eq. (A14) accordingly. Finally, the equations of motion of λ_q and φ_q, which appear in U_NGS and entangle phonons and electrons, are obtained by projecting onto the third-order tangential vector a_q† ρ_q|0⟩_ph ⊗ |ψ_e⟩; in these equations, the modulated electronic correlation enters together with the density correlation C_q = ⟨ρ_{−q} ρ_q⟩.

With the variational wavefunction Eq. (5), we can express all (equal-time) observables through analytical expressions formed by the variational parameters and the correlations of fermionic operators evaluated in the electronic wavefunction |ψ_e⟩ [43]. The most important observable in this manuscript is the d-wave pairing correlation, whose expression contains the pairing correlation in the electronic part of the wavefunction weighted by the d-wave form factor −4e^{iq·r} cos(q_x/2) cos(q_y/2). Here, r̃ in the last summation denotes the half-unit-cell-shifted coordinate r̃ = r + x̂/2 − ŷ/2. Thus, we have now extended the NGSED framework to nonequilibrium dynamics. Each step of the time evolution is achieved by evaluating the above coupled differential equations, together with a large-scale Krylov-subspace method (for the electronic wavefunction).
To benchmark the accuracy, we compare the simulation results with ED in a 2D Hubbard-Holstein model with g = t_h and U = 8t_h. The ED simulation is exact, except for the finite phonon occupation, which is truncated at M = 10 in our calculation. To allow the ED simulation with acceptable computational complexity and accuracy, we choose a small eight-site cluster and a relatively high phonon frequency ω_0 = 5t_h. For a medium pump strength (A_0 = 0.4) with frequency Ω = 5t_h, the relative errors for the two dominant correlations, i.e., the spin structure factor S(π, π) and the d-wave pairing susceptibility P^(d), are both below 1% even in the center of the pump pulse (see Fig. 5). At the same time, the relative error for the s-wave pairing susceptibility P^(s) reaches approximately 5%. This relatively larger deviation mainly originates from the fact that the variational wavefunction tends to capture primarily the dominant correlations with a limited number of parameters, leaving a slightly larger deviation for other correlations. In addition, the truncation error of the phonon Hilbert space in the ED simulations is more prominent out of equilibrium, which also contributes to the numerical error shown in Fig. 5.
Appendix B: Krylov-Subspace Method for Time-Evolution
With the instantaneous variational parameters, the Hamiltonian can be reduced to an effective electronic one, H_eff(t) (Eq. B1). The single-step evolution of the electronic state then becomes

$$|\psi_e(t + \delta t)\rangle \approx e^{-i H_{\rm eff}(t)\,\delta t}\, |\psi_e(t)\rangle \qquad (B2)$$

Here, the large Hilbert-space dimension (>10^8) requires a stable and efficient evaluation of the propagated wavefunction, which in this study is based on the Krylov-subspace method. The Krylov subspace for an instantaneous wavefunction |ψ_e(t)⟩ and Hamiltonian H_eff(t) is defined as

$$\mathcal{K}_n(t) = \mathrm{span}\{\, |\psi_e(t)\rangle,\ H_{\rm eff}(t)|\psi_e(t)\rangle,\ \cdots,\ H_{\rm eff}(t)^{n-1}|\psi_e(t)\rangle \,\}.$$
A widely used Krylov-subspace generator is the Lanczos algorithm, with which the propagated wavefunction can be approximated as [117-120]

$$|\psi(t + \delta t)\rangle \approx \exp\left[-i\, U_n(t)\, T_n(t)\, U_n(t)^\dagger\, \delta t\right] |\psi(t)\rangle \qquad (B3)$$

Here, U_n(t) is the basis matrix of the Krylov subspace K_n(t), and T_n(t) is the tridiagonal matrix generated by the Lanczos algorithm. Since the projected matrix has dimension n, much smaller than the Hilbert-space dimension, the evaluation of Eq. (B3) is much cheaper. The propagation error is well controlled,

$$\epsilon_n \leq 12\, e^{-(\rho\,\delta t)^2/16n} \left( \frac{e\,\rho\,\delta t}{4n} \right)^{n},$$

given a Krylov dimension n ≥ ρδt/2 and spectral radius ρ = |E_max − E_min| [119]. Depending on the simulated pump strengths, we adopt n = 50-100 in this paper.
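To make the propagation step concrete, the following is a minimal, self-contained Python sketch of a Lanczos-based single time step for a dense toy Hamiltonian. It is our illustration, not the production NGSED code: the actual simulations act on sparse many-body Hamiltonians of dimension >10^8, and refinements such as reorthogonalization and adaptive Krylov dimension are omitted here.

```python
import numpy as np

def lanczos_step(H, psi, n=20, dt=0.05):
    """Propagate psi by exp(-i H dt) projected onto an n-dim Krylov space."""
    dim = psi.size
    U = np.zeros((dim, n), dtype=complex)   # Krylov basis vectors (columns)
    alpha = np.zeros(n)                     # diagonal of the tridiagonal T_n
    beta = np.zeros(n - 1)                  # off-diagonal of T_n
    U[:, 0] = psi / np.linalg.norm(psi)
    for j in range(n):
        w = H @ U[:, j]
        alpha[j] = np.real(np.vdot(U[:, j], w))
        w -= alpha[j] * U[:, j]
        if j > 0:
            w -= beta[j - 1] * U[:, j - 1]
        if j < n - 1:
            beta[j] = np.linalg.norm(w)
            if beta[j] < 1e-12:             # invariant subspace reached
                U, alpha, beta = U[:, :j + 1], alpha[:j + 1], beta[:j]
                break
            U[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # exp(-i T dt) applied to the first Krylov unit vector, mapped back
    evals, evecs = np.linalg.eigh(T)
    small = evecs @ (np.exp(-1j * evals * dt) * evecs[0].conj())
    return np.linalg.norm(psi) * (U @ small)

# Toy usage: random Hermitian H and a random initial state
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200)) + 1j * rng.normal(size=(200, 200))
H = (A + A.conj().T) / 2
psi0 = rng.normal(size=200) + 1j * rng.normal(size=200)
psi1 = lanczos_step(H, psi0)
```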
Appendix C: Spin and Charge Dynamics for Other Momenta
In Fig. 2 in the main text, we show the time evolution of the charge and spin structure factors at only a single momentum, q = (π, π), which plays the dominant role in d-wave superconductivity. In this section, we also present the calculations for other momenta.
The upper two rows in Fig. 6 show the dynamics for u = 6.6 and λ = 4 (Set 1). In contrast to the suppression of N(π, π), the charge structure factors are always enhanced for the other momenta, due to the light-induced melting of the dominant order. The dynamics of the spin structure factors are more complicated: they are enhanced during the pump, but for large pump strengths (A_0 ≥ 0.5) the structure factors decrease rapidly after the pump pulse. As discussed in the main text, this means that a strong pump further suppresses the spin fluctuations and is thereby unfavorable for d-wave superconductivity.
In contrast to the dynamics with parameter Set 1, the evolution with parameter Set 2 (lower rows in Fig. 6) suggests unchanged spin correlations after the pump. Together with the dominant (π, π) momentum discussed in the main text, these spin fluctuations give rise to d-wave superconductivity after the melting of the charge-ordered state.
Using the momentum distribution of the spin and charge structure factors, we can also obtain the evolution of the correlation length. As shown in Fig. 7, the charge correlation length decreases rapidly once the pump pulse arrives. Together with the reduction of N(π, π) shown in Fig. 2, this reflects the light-induced melting of the charge instability, which is otherwise dominant at equilibrium. This trend is distinct from the light-driven dynamics of the d-wave pairing correlation, whose intensity is enhanced while its correlation length is reduced.
Appendix D: Frequency Distribution of Dynamics
Besides the overall increase and decrease of the correlations, the postpump dynamics reflect some underlying physical quantities. Figure 8 presents the Fourier spectra of the charge and spin dynamics shown in Fig. 2 in the main text. For the model parameter Set 1, the dynamics have a low-energy periodicity of approximately 2π/ω_0, reflecting the role of phonons in forming and melting a charge-ordered state. For the model parameter Set 2, close to the phase boundary, the phonon energy becomes strongly renormalized [73]. Therefore, the low-energy spectrum becomes complicated and dependent on the pump strength. This strength dependence is also reflected in the dynamics at other momenta in Fig. 6.
FIG. 7. Evolution of the correlation length ξ of the charge structure factor renormalized by its equilibrium value ξ0, obtained for (upper) u = 6.6 and (lower) u = 7, respectively.
"year": 2021,
"sha1": "a04412afb16399240d19f4024e6ce6fa1f5e386b",
"oa_license": "CCBY",
"oa_url": "http://link.aps.org/pdf/10.1103/PhysRevX.11.041028",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "a04412afb16399240d19f4024e6ce6fa1f5e386b",
"s2fieldsofstudy": [
"Physics",
"Education"
],
"extfieldsofstudy": [
"Physics"
]
} |
Giant Thermomechanical Bandgap Engineering in Quasi-two-dimensional Tellurium
Mechanical-strain-induced bandgap modulation in two-dimensional (2D) materials has so far been confined to volatile and narrow modulation due to substrate slippage and poor strain transfer. We report the thermomechanical modulation of the inherent bandgap in quasi-2D tellurium nanoflakes (TeNFs) via non-volatile strain induction during hot-press synthesis. We leveraged the coefficient of thermal expansion (CTE) mismatch between TeNFs and the growth substrates by maintaining a high-pressure-enforced non-slip condition during thermal relaxation (623 to 300 K) to achieve an optimal biaxial compressive strain of −4.6% in TeNFs/sapphire. This resulted in an enormous bandgap modulation of 2.3 eV, at a rate of up to ~600 meV/%, which is two-fold larger than the previously reported modulation rates. Strained TeNFs display robust band-to-band radiative excitonic blue photoemission with an intrinsic quantum efficiency (IQE) of ca. 79.9%, making them promising for energy-efficient blue LEDs and nanolasers. Computational studies reveal that biaxial compressive strain inhibits exciton-exciton annihilation by evading van Hove singularities, hence promoting radiative recombination. Bandgap modulation by such nonvolatile straining is scalable to other 2D semiconductors for on-demand nano(opto)electronics.
Introduction
The gapless nature of graphene 1 has sparked huge interest in discovering novel two-dimensional (2D) semiconductors with tailored bandgaps. 2 For the latter, introducing a slight disorder in the lattice symmetry by mechanical straining may drastically influence the electronic configuration of 2D semiconductors, resulting in massive bandgap modulation.
TeNFs/sapphire exhibited significantly reduced Γ values, indicating improved crystallinity and a substantial reduction in the native defect density.
To study the homogeneity of the compressive strain in TeNFs/sapphire, we performed Raman mapping of the phonon modes, as shown in Fig. 1l. The bright spots represent the phonon modes at different points, labeled P1-P5. The inset shows the blue shift Δω = −5.47 cm⁻¹ of the A1 phonon mode. The µ-Raman spectra recorded from points P1-P5 are displayed in Fig. 1m, indicating strong spatial localization of the compressive strain in the TeNFs. The inset shows the distribution of the points on the TeNF, where the scale bar is 5 µm. After establishing the strain localization, it was important to gauge the strain gradient along the Te flakes. The thermomechanically induced compressive strain gradient in TeNFs was validated using COMSOL Multiphysics simulations (Fig. 1n). The strain gradient at the TeNFs/sapphire interface was predominantly introduced during the relative thermal contraction and is displayed from left to right in a systematic manner. The top row represents the strain gradient viewed from above, while the bottom row depicts its cross-sectional view at the TeNFs/sapphire interface. Owing to the maximal thermal expansion, the interfacial compressive strain is at its minimum when the hot press reaches its maximum temperature (350 °C). The CTE mismatch then causes c-sapphire and the TeNF to contract at dissimilar rates throughout the temperature relaxation to Troom, resulting in a strain gradient from the flake edges to the core at room temperature. 41 We calculated the relaxation constant R = (ωmax. − ω2D Te)/(ωmax. − ωbulk Te), where ωmax., ωbulk Te, and ω2D Te are the peak positions (in cm⁻¹) of the A1 phonon mode of the maximally strained TeNF, the bulk (relaxed) Te crystal, and a TeNF of a specific thickness, respectively. Thickness-dependent results (detailed in Text S4 and shown in Fig. S3(a, b, & c), Supporting Information) reveal that R is zero at a thickness of 12 nm but reaches unity at a thickness of 60 nm, where dislocations occur and plastically relax the strain in their proximity.
Strain estimation with electron microscopy
We employed high-resolution transmission electron microscopy (HRTEM) as a direct approach to visualize the compressive strain in TeNFs. Tellurium crystallizes in a hexagonal lattice with every fourth atom exactly above another atom in its chain, resulting in an equilateral triangle when projected on a plane perpendicular to the chain direction. An infinite helical rotation of 120° along the longitudinal [0001] armchair direction (c-axis) covalently connects these atoms. 43 The weak van der Waals force along the zigzag direction binds all turns into hexagonal bundles in the radial direction. Figs. 2(a, b) show, respectively, the transmission electron microscopy (TEM) and HRTEM images of a thicker flake reminiscent of unstrained bulk Te and provide reference values of its lattice constants along both the zigzag and armchair axes. Additional HRTEM analysis is shown in Fig. S4b, Supporting Information.
Optical determination of bandgap modulation and exciton dynamics in TeNFs
Strain induction has a significant impact on the band structure of TeNFs. To investigate potential changes in the band structure from the biaxial compressive strain, we performed both CW and time-resolved photoluminescence (PL), absorbance, and emission spectroscopy over a single 12-nm-thick TeNF, using a variety of excitation wavelengths spanning from the UV to the NIR (Supporting Information). The red-highlighted peak X0 at 2.62 eV corresponds to band-edge recombination of electron-hole (e-h) pairs. 45 The origin of the XD peak (highlighted in green), centered at 1.78 eV, could be attributed to radiative e-h recombination via native defect states, as indicated by the peak's large width. However, the origin of the peak XInt. at 1.3 eV remains unknown. We hypothesize that it may be attributable either to loosely bound surface-oxide species on the TeNF surfaces or to emission from the interfacial states of tellurium and sapphire; further investigation is required to provide a definitive explanation for the genesis of this PL peak. We excited TeNFs/sapphire with a variety of excitation wavelengths, including 325 nm, 410 nm, 532 nm, 633 nm, and 785 nm, to investigate the interband transitions. We obtained a PL emission peak in response to all the excitation sources, which indicates the opening of sub-band states within the modulated bandgap of TeNFs (Fig. 3d). However, the PL intensity decreased significantly at longer excitation wavelengths, as shown in Fig. S6b, Supporting Information. A schematic illustration of the interband transitions in TeNFs is presented in Fig. 3e. The substrate-dependent comparative PL emission spectra of TeNFs/sapphire (blue), TeNFs/ITO (orange), TeNFs/Si (wine), and bulk Te (cyan) are shown in Fig. 3f. Evidently, the ITO substrate also played a key role in significant bandgap opening in TeNFs; however, it showed additional enhanced and broad peaks that include contributions from native defects and, potentially, TeNFs/ITO interfacial states.
We performed thickness-dependent PL studies, which revealed a monotonic decrease in PL intensity (by ~two orders of magnitude) with flake thickness decreasing from 200 to 10 nm (Fig. 3g). The reason for the drastic reduction in PL is the reduced absorption and, as a result, the reduced number of e-h pairs created by the excitation laser. This dependence is well described by an absorption-limited expression in which the reference is the PL intensity of the TeNF with h = 200 nm. 46 An absorption coefficient α = 5000 cm⁻¹ was used. 47 We also observed an overall blue shift of 170 meV in the PL peak energy with decreasing flake thickness from 200 to 10 nm, which is a signature of quantum-size effects in strained TeNFs.
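The functional form of this thickness dependence did not survive extraction above. As our own illustration — assuming a standard absorption-limited (Beer-Lambert) form, which is our guess consistent with the surrounding discussion rather than the authors' stated equation — a minimal Python sketch of the normalized PL ratio is:

```python
import numpy as np

alpha = 5000e-7      # absorption coefficient: 5000 cm^-1 expressed in nm^-1
h_ref = 200.0        # reference flake thickness (nm)

def relative_pl(h_nm):
    """Assumed absorption-limited PL ratio I(h)/I(h_ref) ~ (1 - e^{-alpha h})."""
    return (1 - np.exp(-alpha * h_nm)) / (1 - np.exp(-alpha * h_ref))

for h in (10, 50, 100, 200):
    print(f"h = {h:3d} nm -> I/I_200 = {relative_pl(h):.3f}")
```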
The η of TeNFs was calculated to be 79.9%, indicating a strong radiative recombination process for photo-generated excitons. The compressive strain causes a splitting between the van Hove singularity (VHS) resonance and the exciton transition energy, resulting in considerable suppression of exciton-exciton annihilation (EEA) and promoting radiative recombination. 50
Calculations of strain-induced band gap modulation
Density functional theory (DFT), as implemented in the Vienna ab initio simulation package (VASP), was used to perform the electronic and optical calculations. 51 The valence configurations used for the Te pseudoatoms were 5s2 and 5p4. The compressive strain on monolayer Te is applied by changing the cell parameters (a and b, equally), as shown in Fig. 4a. The in-plane xy atomic orbitals overlap significantly more when additional compressive strain is applied, which has a major impact on the energy shift of the inner states, as illustrated in Fig. 4 and Fig. S9(a, b), Supporting Information.
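As an illustration of how such an equal in-plane rescaling can be scripted, the snippet below applies biaxial compressive strains to a monolayer cell using ASE. This is our sketch rather than the authors' workflow: the structure file name and the strain grid are hypothetical, and the actual calculations in the paper were performed with VASP.

```python
import numpy as np
from ase.io import read, write

atoms0 = read("Te_monolayer.vasp")           # hypothetical POSCAR-style input
for strain_pct in (0.0, -2.0, -4.6):         # hypothetical strain grid (%)
    atoms = atoms0.copy()
    s = 1.0 + strain_pct / 100.0
    cell = np.array(atoms.get_cell())        # 3x3 lattice-vector matrix
    cell[0] *= s                             # scale in-plane vector a
    cell[1] *= s                             # scale in-plane vector b (equally)
    atoms.set_cell(cell, scale_atoms=True)   # keep fractional coordinates
    write(f"POSCAR_strain_{strain_pct:+.1f}", atoms, format="vasp")
```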
Conclusion
In this work, we used a simple hot-pressing method to thermomechanically mold Te nanoparticles into ultrathin Te nanoflakes by sandwiching them between c-sapphire, ITO, and Si substrates. Because of the significant adhesion-assisted CTE mismatch between the substrate and the TeNFs, the hot-pressed TeNFs endured a compressive biaxial strain, resulting in persistent, substantial bandgap modulation (0.35-2.65 eV). Quantitatively, an optimum strain of −4.6% in TeNFs/sapphire was obtained, whereas the bandgap modulation rate in TeNFs/ITO was as high as 600 meV/%. The strong blue photoemission in TeNFs is due to strain-induced suppression of exciton-exciton annihilation (EEA), which enhances radiative carrier recombination. This report establishes hot-pressing as an efficient technique for introducing hereditary bandgap engineering in quasi-2D materials during their synthesis by manipulating the substrate-induced lattice strain.
To obtain moisture-free surfaces of c-sapphire before mass loading, the substrates are placed in a furnace maintained at 70 °C for 30 minutes. A commercial hot-press system (AH-4015, 200 V-20 A, Japan) is used for the thermomechanical squashing of TeMPs to fabricate ultrathin TeNFs. During thermomechanical squashing, the temperature of the hot press is raised from RT to 350 °C, while the pressure is gradually increased from atmospheric pressure to 1 GPa.
Mass loading
The sapphire-TeMPs-sapphire assembly is hot-pressed for 30 min at 350 °C before being thermally relaxed to RT, while a constant pressure of 1 GPa is maintained throughout the relaxation process. The obtained sample is studied without any post-fabrication annealing.
Material characterizations
The morphology of the TeNFs is studied using field-emission scanning electron microscopy (FESEM).
Pressure Calculation
The pressure applied during hot-pressing for mechanically squashing TeMPs into ultrathin TeNFs is estimated from the applied force divided by the substrate contact area, P = F/A.
Text S1: Detailed Fabrication Mechanism of strained TeNFs
We fabricated TeNFs by thermally assisted mechanical molding of agglomerates of tellurium microparticles (TeMPs) onto various substrates (c-sapphire, ITO, silicon) using the hot-pressing method (schematic diagram shown in Fig. 1a, main text). Tellurium chunks were thoroughly ground to obtain microparticles for making a Te/ethanol dispersion (1 mg/25 mL). 20 µL of the dispersion was drop-cast with a pipette gun onto highly polished and cleaned substrates with dimensions of 10 mm × 10 mm × 0.5 mm, in an argon-filled glove box to maintain the lowest possible exposure to oxygen. After drying, the substrates were left with non-uniformly scattered agglomerates of TeMPs, which were then capped with a dimensionally identical substrate to form a sandwich-like assembly; this assembly was placed between the heated steel plates (jaws) of a commercial hot press. The substrates between the jaws of the hot press were heated from room temperature (Troom) to 350 °C, while the pressure was gradually elevated from atmospheric (uniaxial) pressure to a range of about 0.74-1 GPa, depending on the type and nature of the substrate. After 30 min at 350 °C, the whole assembly was thermally relaxed to Troom, and TeNFs with different lateral sizes and thicknesses were obtained on the bottom c-sapphire substrate, while a fraction of the flakes was torn away by the top substrate.
Text S2: Thickness-dependence of Relaxation Constant (R)
We used µ-Raman studies to quantify the degree of plastic strain relaxation as a function of flake thickness by determining the relaxation constant R (0 ≤ R ≤ 1), which reflects the connection between the film thickness and the degree of plastic strain relaxation. When R equals one, the lattice structure is regarded as completely relaxed; when R equals zero, it is considered fully strained. Any value between 0 and 1 implies a partially relaxed/strained structure. 1 We calculated R using the formula R = (ωmax.strained − ω2D Te)/(ωmax.strained − ωbulk Te), where ωmax.strained, ωbulk Te, and ω2D Te are the peak positions (in cm⁻¹) of the A1 phonon modes of the maximally strained TeNF, the bulk (relaxed) Te crystal, and the TeNF with a specific thickness, respectively. At a thickness of 12 nm, R is zero; at a thickness of 60 nm, R grows to one as dislocations emerge, plastically relaxing the strain in their proximity.
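As a worked example of this formula (our addition), the snippet below computes R from hypothetical A1 peak positions; the numbers are illustrative placeholders, not measured values from the paper.

```python
def relaxation_constant(w_flake, w_max_strained, w_bulk):
    """R = (w_max - w_flake) / (w_max - w_bulk); 0 = fully strained, 1 = relaxed."""
    return (w_max_strained - w_flake) / (w_max_strained - w_bulk)

w_bulk = 120.4          # hypothetical bulk (relaxed) Te A1 peak (cm^-1)
w_max_strained = 125.9  # hypothetical maximally strained TeNF peak (cm^-1)

for thickness, w in [(12, 125.9), (30, 123.0), (60, 120.4)]:  # illustrative
    R = relaxation_constant(w, w_max_strained, w_bulk)
    print(f"{thickness} nm flake: R = {R:.2f}")
```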
Text S3: Detailed XPS studies of polymorphic TeNFs:
To confirm whether the large PL signal originated purely from the Te NFs, rather than from tellurium oxide species (i.e., TeO, TeO2, etc.), the chemical stoichiometry of the as-prepared Te NFs was probed by X-ray photoemission spectroscopy (XPS). The comparative XPS spectra of bulk Te and as-prepared Te NFs are presented in Figure 6a.
Text S4: COMSOL Simulation Details
The solid-mechanics model solves the static equilibrium equation ∇·σ = 0 with the constitutive relation σ = C : (ε(u) − εph), where σ is the stress tensor, C is the fourth-order elasticity tensor, εph is the temperature-dependent thermal strain, and u is the displacement. External forces and initial stresses are zero.
The thermal strains at different cool-down temperatures were found by sweeping 0 °C ≤ T ≤ 340 °C, using εph = α(T)(T − Tref). The reference temperature was Tref = 350 °C, and α(T) is the coefficient of thermal expansion.
Quadratic serendipity shape functions were used for the simulations, and MUMPS was the solver of choice. The tellurium flakes were approximated as discs with a 40-nm radius (three different thicknesses: 5, 10, and 20 nm). Similarly, the underlying sapphire wafer was approximated as a cylinder with a height of 100 nm and a 50-nm radius. The mesh element sizes ranged between 5 and 10 nm.
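To illustrate the thermal-mismatch strain such a sweep produces, here is a stand-alone Python sketch (ours) evaluating the CTE-mismatch strain between a pinned film and its substrate over the cool-down range. The CTE values are illustrative placeholders chosen only to reproduce the sign convention (compressive film strain on cool-down when the substrate's effective CTE exceeds the film's), not the temperature-dependent curves used in the COMSOL model.

```python
import numpy as np

T_ref = 350.0                      # hot-press reference temperature (deg C)
T = np.linspace(0.0, 340.0, 35)    # cool-down temperature sweep (deg C)

alpha_film = 3e-6       # placeholder CTE of the Te film (1/K); illustrative
alpha_substrate = 8e-6  # placeholder CTE of sapphire (1/K); illustrative

# Free thermal strains of each material relative to the reference temperature
eps_film = alpha_film * (T - T_ref)
eps_substrate = alpha_substrate * (T - T_ref)

# Under the non-slip condition the film must follow the substrate, so its
# mechanical strain is the mismatch (negative = compressive on cool-down)
eps_mismatch = eps_substrate - eps_film

print(f"film strain at ~25 deg C: {np.interp(25.0, T, eps_mismatch):.3%}")
```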
Text S5: Detailed Computational studies:
All first-principles calculations are performed using the projector augmented-wave method as implemented in the Vienna ab initio simulation package (VASP) 6, coupled with the generalized gradient approximation (GGA) to the electron-electron interaction in the form of the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional 7. The geometry of the system is optimized until the Hellmann-Feynman forces converge to 10⁻⁴ eV/Å and the energy difference converges to 10⁻⁶ eV 8,9. The frequency-dependent dielectric matrix was calculated within the independent-particle approximation. 16 Local-field effects and many-body effects were not considered, which has been shown to be adequate to quantify the optical contrast of monolayer Te. 17 The linear optical characteristics of semiconductors can be determined from the frequency-dependent complex dielectric function ε(ω) = ε1(ω) + iε2(ω), where ε1(ω) and ε2(ω) are its real and imaginary parts and ω is the photon frequency. The absorption coefficient α(ω) can be calculated from the real ε1(ω) and imaginary ε2(ω) parts:

α(ω) = (√2 ω/c) [ (ε1(ω)² + ε2(ω)²)^{1/2} − ε1(ω) ]^{1/2}.  (2)

To disentangle the role of the matrix elements, we computed the Joint Density of States.
"year": 2023,
"sha1": "b5f179f864da1c27a31b4ec49f04bdd0a0b03e8e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b5f179f864da1c27a31b4ec49f04bdd0a0b03e8e",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Ionization Distributions in Outflows of Active Galaxies: Universal Trends and Prospect of Future XRISM Observations
The physics behind the ionization structure of outflows from black holes is yet to be fully understood. Using archival observations with the Chandra/HETG gratings over the past two decades, we measured an absorption measure distribution for a sample of outflows in nine active galaxies (AGNs), namely the dependence of the outflow column density, $N_H$, on the ionization parameter, $\xi$. The slope of $\log(N_H)$ vs. $\log\xi$ is found to be between 0.00 and 0.72. We find an anti-correlation between the log of the total column density of the outflow and the log of the AGN luminosity, and none with the black hole mass or accretion efficiency. A major improvement in the diagnostics of AGN outflows will potentially occur with the launch of the XRISM/Resolve spectrometer. We study the ability of Resolve to reveal the outflow ionization structure by constructing the absorption measure distribution from simulated Resolve spectra, utilizing its superior resolution and effective area. Resolve constrains the column density as well as HETG does, but with much shorter observations.
INTRODUCTION
Outflows are prevalent in Active Galactic Nuclei (AGNs). In the X-rays, one can observationally categorize three types of outflows. First are the relatively slow, ∼100 km s⁻¹, outflows detected through their numerous absorption lines in the soft X-ray spectra of Seyfert galaxies (e.g., Kaastra et al. 2000). Second are the ultra-fast outflows (∼0.1c), detected mostly via highly blue-shifted Fe-K lines (Tombesi et al. 2010; Nardini et al. 2015). Third are the high column-density outflows that almost completely obscure the soft X-ray AGN emission (Kaastra et al. 2014a). The present work focuses on the Seyfert outflows and on their unique property of a broad distribution of ionization, which has not been identified in the other two types.
Seyfert outflows are known for their broad ionization distribution, from the most highly charged species to neutral and near-neutral species (e.g., Behar et al. 2001). The ionization of a steady-state photo-ionized outflow can be described by its ionization parameter, which represents the balance between ionizing photons and recombining electrons. In the X-rays, one usually uses ξ = L/(n r²), where L is the ionizing luminosity between 1-1000 Ryd, n denotes the hydrogen number density, and r is the distance of the absorber from the ionizing source at the AGN center. ξ therefore has dimensions; in cgs units, erg s⁻¹ cm. Seyfert outflows span a few orders of magnitude in ξ; henceforth we refer to −1 < log ξ < 4 in these units. Previous works found several discrete ionization components in Seyfert outflows and discussed whether they represent a continuous distribution. Blustin et al. (2005) published a survey of 23 AGNs to obtain insights into the absorbers and found that most outflows have multiple ionization phases, but reached no conclusion as to whether the ionization structure is discrete or continuous. The average number of ionization components in their sample is two, but two AGNs with high-quality spectra were modeled with three components. Detmers et al. (2011) studied the outflow of Mrk 509 with a model that spans four orders of magnitude in ξ, but preferred discrete ξ components over a continuous distribution. McKernan et al. (2007) analyzed X-ray spectra of 10 Seyfert outflows. Their best fits had two or more ξ-components, and five of the sources were modeled with three or more components. Laha et al. (2014) fitted XMM-Newton/RGS spectra of 17 Seyfert outflows using 1-3 ξ-components for each, to obtain an ionization distribution common to all Seyfert outflows.
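For a sense of scale — an illustration of ours, not a measurement from the paper — the following minimal Python snippet evaluates ξ = L/(n r²) for typical Seyfert-like placeholder values:

```python
import math

L = 1e44        # ionizing luminosity (erg/s); illustrative value
n = 1e6         # hydrogen number density (cm^-3); illustrative value
r = 3.1e18      # absorber distance from the source (cm), roughly 1 pc

xi = L / (n * r**2)
print(f"xi = {xi:.2e} erg s^-1 cm, log xi = {math.log10(xi):.2f}")
```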
The column density N_H = ∫n dr of the outflow, distributed over different ξ values, is termed the Absorption Measure Distribution (AMD; Holczer et al. 2007): AMD ≡ dN_H/d(log ξ). Since the AMD involves n and r, its measurement can be invaluable for obtaining the density profile n(r) of the outflow (Behar 2009), which is not otherwise accessible via absorption measurements. This profile, in turn, may hold the crucial hint to the launching mechanism of Seyfert outflows, whether magnetic, radiative, or thermal (or a combination thereof). Different launching mechanisms could produce different AMDs, to be compared with the observed ones, for example by parameterizing the AMD as a power law, AMD ∝ ξ^a, with slope a. Magnetohydrodynamic (MHD) accretion-disk winds predict an AMD slope that depends on the scaling of the magnetic field with radius (along the line of sight), B ∼ r^{q−2}, leading to a = (2 − 2q)/(2q − 1), or q = (2 + a)/(2 + 2a) (Fukumura et al. 2010). Another physical effect is radiation compression of a hydrostatically held slab of photo-ionized gas (Stern et al. 2014), which results in a broad and flat (a ≈ 0) AMD. As the radiation is absorbed in the slab, its pressure drops, and thus the gas pressure increases. Since the temperature also drops (less radiative heating), the density rises sharply, hence compression. Finally, numerical models of thermally driven winds have been able to produce AMDs that are flat, at least above log ξ = 2 (Waters et al. 2021, Fig. 13 therein).
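To make the MHD scaling concrete, this short sketch (ours) evaluates q = (2 + a)/(2 + 2a) over the range of AMD slopes measured later in the paper; the resulting q ≈ 0.79-1.00 reproduces the approximate B ∼ 1/r scaling quoted in the Discussion.

```python
def mhd_q(a):
    """Magnetic-field index q in B ~ r^(q-2), from the AMD power-law slope a."""
    return (2.0 + a) / (2.0 + 2.0 * a)

for a in (0.0, 0.3, 0.72):
    q = mhd_q(a)
    print(f"AMD slope a = {a:.2f} -> q = {q:.2f} (B ~ r^{q - 2:+.2f})")
```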
In the present work, we model the spectra of a sample of nine Seyfert galaxies observed with the Chandra/HETG spectrometer, seeking patterns in their outflow AMDs and aiming to find a universal AMD profile. The specific goal is to study AMDs with commonly available spectral-analysis tools, unlike the custom method of Holczer et al. (2007). Our secondary objective is to assess the abilities of the future calorimeter spectrometer XRISM/Resolve (Tashiro et al. 2020) for studying outflow AMDs. We examine the advantages of the superior sensitivity of XRISM/Resolve compared to Chandra/HETG. A comprehensive view of AGN outflows with the Hitomi calorimeter, similar to XRISM/Resolve, was published by Kaastra et al. (2014b).
Sample and Data
In order to study the AMD of AGN outflows, we selected all of the observations in the Chandra/HETG archive with visibly detectable absorption lines that can be ascribed to outflows. The targets and observations used in the present paper are detailed in Table 1. For each object, we used only observations from the same year in order to minimize variability of the absorber. We checked one such example, the NGC 5548 HETG observations from 2000, 2002, and 2019, all with relatively high photon counts. All AGNs in the sample have a Chandra/HETG exposure of at least 190 ks and at least 160,000 source counts. The rightmost column in Table 1 lists previously published works with multi-ξ models for these observations and for each target. Despite the variety of approaches, the models share some common attributes, as follows. They include 2-6 ξ-components (e.g., Netzer et al. 2003). Most of the models identify at least two kinematic components (e.g., Steenbrugge et al. 2005a); half of the objects have a slow outflow with velocity 100-500 km s⁻¹ and a fast one with velocity >1000 km s⁻¹ (e.g., Silva et al. 2016). Of these four objects with fast outflows, three show them only in the high-ξ components. Two objects have no high-ξ components nor fast outflows (Gupta et al. 2013; Detmers et al. 2011). In contrast with this variety of models, the present analysis uses a fixed set of ionization components, in order to obtain a uniform AMD structure. Table 2 lists the physical parameters of the AGNs. The unabsorbed X-ray luminosity L_X in the 0.5-10 keV band is measured from the present data, while the distance and black-hole mass M_BH are taken from the literature. It can be seen that all AGNs are at low z ≤ 0.034, yet M_BH spans two orders of magnitude, from 1.6 × 10⁶ M⊙ to 1.4 × 10⁸ M⊙, as does the accretion efficiency, 0.003 ≤ L_X/L_Edd ≤ 0.3. Therefore, if the outflow AMD depends on any of these, we expect to find hints of that dependence in our sample. The simultaneous fitting of all spectra of each target assumes the absorber did not vary during the temporal span of the observations (or else it yields results for a mean outflow of that target). Since the overwhelming majority of observations for a given target were obtained within a year, or even much less, this seems to be a reasonable assumption for Seyfert outflows, also supported by the previous analyses. The unresolved (narrow) absorption lines suggest the absorber is relatively far from the nucleus and would not vary over such short time scales. However, there are reports of absorber variability (Krongold et al. 2005, 2007).
Spectral Model
In each observation, the first diffraction orders (±1) of both the HEG and the MEG gratings were used. The total exposure time is at least 190 ks for each target, and the total photon counts between 1.5 and 20 Å exceed 166,320 for all objects. We prepared different XSTAR (Bautista & Kallman 2001) tables for each target, with the appropriate power-law spectral index Γ, a soft-excess component when needed, and solar abundances. We use a turbulent velocity of 100 km s⁻¹, except for the broad lines of NGC 3516 (Holczer & Behar 2012) and the log ξ = 4 component of MCG -6-30-15 (Holczer et al. 2010). The grid includes 7 logarithmic steps in N_H between 10¹⁸ and 10²⁴ cm⁻², and 9 logarithmic steps in log ξ from −1 to 4.
We fitted the spectra using the Xspec implementation of the C-statistic (Cash 1976), since some of the bins in the data contain only a few photons. The above tables were sufficient to fit well (C-stat/dof ∼ 1) all spectra with components describing the continuum and several ionization absorption components N_H(ξ).
We fitted the spectra of each target between 1.5-20 Å using Xspec (Arnaud 1996). The continua are modeled as a power law with a soft excess (when needed) that rises above the power law around 15 Å. This soft excess is commonly modeled as a black-body component, which is satisfactory for our purpose of characterizing the absorber. The neutral Galactic absorption was also taken into account with a fixed absorption component (Table 2). We modeled the absorber with six components at fixed log ξ_i values of −1, 0, 1, 2, 3, and 4. The exception is NGC 3783, where we added two more components, at 0.5 and 2.5, required by its superb spectrum.
The velocity of each ξ_i-component was fitted individually. The outflows of NGC 3516 and MCG -6-30-15 have a more complex kinematic structure. NGC 3516 requires at least two kinematic components: one between −650 and −50 km s⁻¹, and a second at −1650 ± 50 km s⁻¹. Both kinematic components include all six ξ_i-components. The turbulent velocity of each outflow was kept fixed for all ξ_i-components. The outflow of MCG -6-30-15 also has two kinematic components; the log ξ = 4 component is at −1800 ± 80 km s⁻¹, while the lower-ξ components are between −350 and −130 km s⁻¹.
Finally, we allowed the fit to include the following narrow emission lines: 1.78 Å (Fe Lyα), 1.87 Å (Fe+24 f), 1.94 Å (Fe Kα), 13.45 Å (Ne+8 r), and 13.70 Å (Ne+8 f). NGC 4151 included the following emission features as well: 7.13 Å (Si Kα), 9.17 Å (Mg+10 r), 9.31 Å (Mg+10 f), 12.13 Å (Ne+9 Lyα), 16.01 Å (O+7 Lyβ), and 16.8 Å (O+6 RRC). The best-fit column densities N_H(ξ_i) of each AGN are subsequently used to build the AMD, by plotting log N_H as a function of log ξ_i. For each target, we obtain the AMD slope by fitting a linear regression in log space to log N_H vs. log ξ_i, taking into account the N_H(ξ_i) uncertainties, which are calculated by Xspec at the standard 90% confidence. For NGC 3516, we built the AMD based on the values of the slower outflow velocity. The uncertainty on each N_H(ξ_i) for this purpose is taken as the mean of the lower and upper statistical uncertainties extracted from the spectral fit. The total column density, N_H^tot = Σ_i N_H(ξ_i), is obtained for each AGN. The reported uncertainty of N_H^tot reflects the maximal lower and upper limits.
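The linear regression described here can be reproduced in a few lines; below is a minimal Python sketch (ours) of a weighted least-squares fit of log N_H versus log ξ_i. The column densities and uncertainties are placeholders, not values from Table 3.

```python
import numpy as np

log_xi = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
log_nh = np.array([20.2, 20.5, 20.9, 21.2, 21.6, 22.1])   # illustrative only
sigma  = np.array([0.15, 0.10, 0.10, 0.08, 0.12, 0.30])   # illustrative only

# Weighted linear fit: log N_H = a * log(xi) + b
(a, b), cov = np.polyfit(log_xi, log_nh, deg=1, w=1.0 / sigma, cov=True)
print(f"AMD slope a = {a:.2f} +/- {np.sqrt(cov[0, 0]):.2f}, intercept b = {b:.2f}")

# Total column density, summed over the ionization components
n_tot = np.sum(10.0 ** log_nh)
print(f"log N_H^tot = {np.log10(n_tot):.2f}")
```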
AMD Reconstruction
We subsequently take the best-fit Chandra/HETG model and simulate XRISM/Resolve spectra using its anticipated response matrix (Ishisaki et al. 2018). We simulate a standard exposure time of 100 ks for all targets. For Ark 564, Mrk 509, IC 4329A, NGC 4051, NGC 4151, and NGC 5548, the highest-ξ component, log ξ = 4, is not well constrained with Chandra/HETG. Since the simulation randomly draws photons, it is meaningless to simulate column densities so low that the spectrometer cannot detect them. Thus, in the XRISM/Resolve simulations of these targets we adopt the HETG upper limit. In order to gauge the ability of XRISM/Resolve to constrain the AMD, we then fitted the same model to the simulated spectra and constructed an AMD for each object. Next, we carried out the linear regression and calculated N_H^tot, to be compared with the Chandra/HETG results. Figure 2 shows the same part of the spectrum in the XRISM/Resolve simulation, plotted in energy instead of wavelength to best demonstrate the performance of the future spectrometer.
Column Densities
The best-fit column densities for each ξ_i-component of each target, as well as N_H^tot, are listed in Table 3. The best-fit continuum parameters and C-stat/dof values are listed in Table 4. As expected, the best-fit column densities of the Chandra/HETG observations and the XRISM/Resolve simulations agree to within 60% and are consistent within the uncertainties. However, the fractional uncertainties of the XRISM/Resolve fits are generally smaller, by up to a factor of 4, especially for the highest-ξ components. The improvement of XRISM/Resolve is most noticeable in the log ξ = 4 component of NGC 4151 and NGC 5548. We also examined the difference between Resolve simulations when taking the best-fit HETG value versus using the upper limit for log ξ = 4. The results are very similar for Ark 564 and NGC 4051, where the column density of the log ξ = 4 component is constrained neither in HETG nor in Resolve. For Mrk 509 and IC 4329A, the column density of the log ξ = 4 component is again not constrained when the original column-density value is used, as opposed to the upper-limit value.
For the most part, our results for N_H^tot and the outflow velocities approximately agree with previous works, apart from the two following outstanding exceptions. Gupta et al. (2013) fitted the outflow of Ark 564 with two ξ-components, both at a velocity of ∼ −100 km s⁻¹, at log ξ = 0.39 ± 0.03 and −0.99 ± 0.13. Their total columns, at log N_H (cm⁻²) = 20.94 and 20.11, respectively, are much lower than what we find, likely because Gupta et al. (2013) fitted only the spectra above ∼9 Å, therefore finding no high-ξ components. The XMM-Newton/RGS spectrum of the outflow of NGC 4051 was fitted by Silva et al. (2016) with four ionization components, log ξ = 0.37 ± 0.03, 2.60 ± 0.10, 2.99 ± 0.03, and 3.70 ± 0.04, corresponding to outflow velocities of, respectively, −340 ± 10, −530 ± 10, −4260 ± 60, and −5770 ± 30 km s⁻¹. We do not identify the three high-velocity components in the HETG spectra. For NGC 3783, we found two velocity ranges among the ξ_i-components, within the range of ionic velocities reported in Kaspi et al. (2002).
AMDs
Figure 3 presents all the AMDs and their fitted slopes for both spectrometers, including the N_H(ξ_i) uncertainties. It can be seen from the figure and Table 3 that the highest column densities occur in the high-ξ_i components. In some AGNs it is difficult to identify these components in the HETG observations; therefore, Resolve spectra could offer a meaningful advantage. The values of the AMD slopes are detailed in Table 5. The HETG slopes range between 0.00 and 0.72 (−0.07 and 0.85 including the uncertainties). Apparently, a slowly increasing AMD is a common property of Seyfert outflows. Apart from Mrk 509, all slopes are tightly constrained, to ∼0.1 or better. The anomalously high N_H^tot of NGC 4151 is due to its intermittent obscuration (see Section 1). Its high N_H(ξ_i) > 10²² cm⁻² values at log ξ = −1 and log ξ = 1, which none of the other AGNs features (see Table 3), likely have little to do with the outflow.
The consistency between the AMD slope values of HETG and Resolve (Table 5) implies that one can accurately constrain N_H(ξ_i) and reconstruct the AMD based on XRISM/Resolve with observation times half as long as those of Chandra/HETG, or shorter. Our results in Table 3 show that XRISM/Resolve has the sensitivity to measure N_H(ξ_i = 10⁴) down to ∼10²¹ cm⁻² with a 100 ks exposure. Column densities of the lower-ξ_i components can be constrained even down to ∼10²⁰ cm⁻². In order to demonstrate the effect of the different ξ_i-components on the XSTAR-based model, we plot each of them individually in Figure 4 for the model of NGC 3783, which has an abundance of resolved features. The figure reveals several attributes of the model. The lower-ξ_i components absorb mainly the continuum, without many absorption lines; this is most evident for log ξ = −1. Higher ξ values absorb the continuum less and less; the log ξ = 4 component absorbs virtually no continuum. The column density of each ξ_i-component is predominantly determined by the imprint of its continuum slope (Figure 4). The Fe-M UTA, at 16-17 Å, appears mainly in the log ξ = 1 component. Apparently, this XSTAR model does not properly fit the conspicuous UTA of NGC 3783 (see residuals in Figure 1). Since the lines of adjacent components overlap, the fit has much freedom to lower or raise the column densities of adjacent ξ_i-components, which is reflected in the uncertainties. The main driver of the C-stat minimization is therefore the continuum.
AMD and other AGN Parameters
It remains to be understood what drives the outflow properties. We examine the connection between the physical properties of the AGN and N_H^tot, i.e., the sum of the column densities of all log ξ_i components. In Figure 5 we show the relation between log N_H^tot and log L_X, which appear to be anti-correlated. Pearson's correlation coefficient r and Spearman's rank coefficient r_s are −0.22 and −0.38, with p-values of 0.56 and 0.31, respectively. However, removing NGC 4051 improves the anti-correlation dramatically (−0.83 and −0.83, with p-values of 0.01 and 0.01, respectively). Note that the two main groups of AGNs in Fig. 5 differ by their log ξ = 4 column density (high N_H^tot, low L_X) or lack thereof (low N_H^tot, high L_X). This separation may turn out to be a smooth transition once XRISM/Resolve better constrains this component. Conversely, there seems to be no clear relation between M_BH and N_H^tot, as can be seen in Figure 6; the coefficients there are r = 0.17 and r_s = 0.22, with p-values of 0.66 and 0.58, respectively. Consequently, there is also no clear relation between N_H^tot and L_X/L_Edd (∝ L_X/M_BH); see Figure 7, where the coefficients are r = −0.40 and r_s = 0.36, with p-values of 0.29 and 0.36, respectively. We also examined the connection between the AMD slope and the above AGN parameters. Since the slopes span a narrow range, we do not expect to find a strong relation; indeed, in both cases we find no significant relation between the AMD slope and these AGN parameters.
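Correlation coefficients of the kind quoted in this section are straightforward to compute with SciPy; the sketch below (ours) shows the pattern on placeholder arrays, whereas the actual log N_H^tot and log L_X values come from Tables 2 and 3.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

log_lx = np.array([42.8, 43.1, 43.4, 43.6, 43.9, 44.1, 44.3, 44.5])  # placeholders
log_nh = np.array([22.4, 22.6, 22.1, 21.9, 21.6, 21.5, 21.2, 21.0])  # placeholders

r, p_r = pearsonr(log_lx, log_nh)
rs, p_rs = spearmanr(log_lx, log_nh)
print(f"Pearson r = {r:.2f} (p = {p_r:.2f}); Spearman r_s = {rs:.2f} (p = {p_rs:.2f})")
```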
DISCUSSION
Using archival Chandra/HETG grating observations of nine AGNs, we constructed the AMDs of their outflows using at least six pre-defined ξ_i-components ranging over −1 ≤ log ξ_i ≤ 4. For Mrk 509, three components are constrained while the other three only provide upper limits; NGC 3783 requires seven components and one upper limit. This is a somewhat broader range than reported by McKernan et al. (2007). The reason is that we assumed all pre-defined components, while McKernan et al. (2007) sought the minimal number of components that provided a satisfactory fit.
The best-fit slopes of the various AMDs in the Chandra/HETG spectra span a range of 0.00-0.72 (−0.07-0.85 with uncertainties), which is consistent with the range of 0.0-0.4 reported in Behar (2009). Steenbrugge et al. (2005b) found a slope of 0.40 ± 0.05 for the outflow of NGC 5548, which is inconsistent with our results. This may be because we used later observations of the outflow in addition to the ones used by Steenbrugge et al. (2005b). In a comprehensive linear regression over the ξ-components of 17 different Seyfert outflows (including all of the present ones except NGC 4151), Laha et al. (2014) find an AMD slope of 0.31 ± 0.06, which is consistent with the present slopes. This suggests a ubiquitous AMD shape for AGN outflows, with a shallow positive slope (<0.72).
The present slope range of 0.00-0.72 corresponds, in the MHD self-similar solutions with B ∼ r^{q−2} (Fukumura et al. 2010), to q = 0.79-1.0, or approximately B ∼ 1/r in all outflows. Our results are marginally consistent with those of Stern et al. (2014), a ≈ 0.03. XRISM/Resolve spectra will allow us to measure the high-ξ components with smaller uncertainties, providing a more definitive AMD slope value. The MHD outflow model is scalable with q, but other models will be confronted with these refined AMDs.
The anomalously high column densities of NGC 4151 (N_H(ξ_i) > 10²² cm⁻²) in the low-ξ components are greater than those of any other AGN (Fig. 3). The soft X-rays of NGC 4151 are often heavily absorbed (George et al. 1998; Kraemer et al. 2005) by K-edges of light elements that are likely not related to the steady outflow. We analyzed the 2002 spectra, in which NGC 4151 was in a high flux state. Nevertheless, residual continuum absorption results in these high N_H(ξ_i) values for the low-ξ_i components, although there are no clear absorption lines above 10 Å (see also Kraemer et al. 2005).
Previous works (Holczer et al. 2007; Laha et al. 2014) find a gap in the AMD between log ξ = 0.5 and 1.5, and suggest this could be a universal feature due to thermal instability. Waters et al. (2021) show that in thermally driven winds, the buoyancy of gas clumps and their disintegration within thermally unstable regions can remove this gap from the AMD. The present method of a rigid ξ-grid does not provide unambiguous evidence for this gap, although marginal evidence can be seen in the AMDs of NGC 3516, NGC 3783, and NGC 4151 (Fig. 3).
Since the AMD slopes are relatively similar between AGNs, they point to a basic physical attribute of the outflows that is universal. On the other hand, the large dispersion in N_H^tot allows us to correlate it with the fundamental AGN properties. We find that log N_H^tot plausibly anti-correlates with log L_X. A similar anti-correlation is found in the SUBWAY quasar sample between N_HI and L_bol (Mehdipour et al., in preparation). Conversely, there is no correlation between N_H^tot and M_BH or L_X/L_Edd. Radiatively driven winds are actually expected to drive more mass with luminosity. The anti-correlation with L_X thus might suggest that the X-ray flux moves gas out of the line of sight, leading to lower column densities. An alternative explanation is that high-L_X AGNs fully ionize the wind, thus hiding its most ionized components. However, the similar AMDs of all AGNs, and specifically the lack of an increasing AMD slope with luminosity, suggest that outflows of low-L sources are as ionized as those of high-L ones. Blustin et al. (2005) found a possible, weak correlation between N_H^tot and the bolometric luminosity (cf. Fig. 5 in Laha et al. 2016), which hangs on four luminous quasars. Two of them, PG 0844+349 and PG 1211+143, have ultra-fast velocities, quite different from the Seyfert outflows (e.g., Laha et al. 2014). The two other sources, IRAS 13349+2438 (L_X ∼ 6 × 10⁴⁴ erg s⁻¹ and N_H^tot = 1.2 ± 0.3 × 10²² cm⁻², Holczer et al. 2007) and MR 2251-178 (L_X = 1.7-5.2 × 10⁴⁴ erg s⁻¹ and N_H^tot = 3.2-6.3 × 10²¹ cm⁻², Kaspi et al. 2004), would strengthen the anti-correlation of Fig. 5.
CONCLUSIONS
Following the uniform analysis of a sample of nine Seyfert outflows, we reach the following conclusions: • The AMD slope, a proxy of the ionization distribution in the outflow, is relatively flat. This slope is found to be a universal characteristic of the outflows, indicating a common wind-launching mechanism, or micro-physics that is not related to global properties of the AGN.
• The log of the total column density in the outflow, N_H^tot, anti-correlates with the log of the X-ray luminosity L_X, perhaps indicating that high-L_X sources clear absorbing gas from the line of sight.
"year": 2022,
"sha1": "9a5b7b3bace2ee0045f4f582420a468341264dc2",
"oa_license": "CCBY",
"oa_url": "https://iopscience.iop.org/article/10.3847/1538-4357/ac7c6b/pdf",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "e381c240b940b5c1f31ed752f5112dd8ef2dec4d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Percutaneous Coronary Intervention Complexity and Risk of Adverse Events in relation to High Bleeding Risk among Patients Receiving Drug-Eluting Stents: Insights from a Large Single-Center Cohort Study.
Methods
Between January 2013 and December 2013, 10,167 consecutive patients undergoing PCI were prospectively enrolled in the Fuwai PCI Registry. Complex PCI was defined as having at least one of the following characteristics: 3 vessels treated, ≥3 stents implanted, ≥3 lesions treated, bifurcation with 2 stents implanted, total stent length >60 mm, treatment of chronic total occlusion, unprotected left-main PCI, in-stent restenosis target lesion, and severely calcified lesion. The primary ischemic endpoint was major adverse cardiovascular events (MACE; a composite of cardiac death, myocardial infarction, definite/probable stent thrombosis, and target lesion revascularization), and the primary bleeding endpoint was Bleeding Academic Research Consortium (BARC) type 2, 3, or 5 bleeding.
Results
The median duration of follow-up was 29 months. In the adjusted Cox regression analysis, patients undergoing complex PCI procedures experienced a higher risk of MACE (hazard ratio (HR): 1.63; 95% confidence interval (CI): 1.38-1.92; P < 0.001) compared with noncomplex PCI. In contrast, the risk of clinically relevant bleeding was statistically similar between the 2 groups (HR: 0.86 [0.66-1.11]; P = 0.238). There was no statistical interaction between HBR (PARIS bleeding score ≥8 or <8) and complex PCI with regard to MACE (adjusted P_interaction = 0.388) or clinically relevant bleeding (adjusted P_interaction = 0.279).
Conclusions
Patients who underwent complex PCI experienced substantially more ischemic events, without an increase in clinically relevant bleeding risk, and these associations did not appear to be modified by HBR status. More intensive antiplatelet therapy may be beneficial for patients undergoing complex percutaneous coronary revascularization procedures.
Introduction
Owing to advances in interventional techniques and technologies [1,2], percutaneous coronary interventions (PCIs) are increasingly performed in complex clinical and anatomical subsets of patients, although literature data show steady declines in population-wide rates of coronary revascularization over the past decade [3]. Given the association of the anatomical complexity and functional severity of CAD with future cardiovascular events [4,5], the concepts of complex PCI and of a higher-risk population indicated for revascularization have recently been proposed [6,7]. However, there is no universal definition of complex PCI in terms of angiographic and lesion characteristics, which in turn has resulted in a variety of clinical outcomes reported in previous studies [7-12]. Although procedural complexity emerges as a correlate of ischemic events, controversial results have been reported in studies evaluating the adverse impact of complex PCI procedures on bleeding events [7-12]. For instance, some reports suggested an increased risk of bleeding events in patients with high-risk features for stent-related ischemic events [10-12], whereas others refuted this association [7-9,13]. Meanwhile, concomitant high bleeding risk (HBR) may be present, making clinical decisions on the duration and intensity of dual antiplatelet therapy (DAPT) after complex PCI challenging. Currently, limited data are available regarding the effect of complex percutaneous coronary revascularization procedures on clinical outcomes in a real-world population, especially in East Asian patients.
In clinical practice, patients with HBR who undergo PCI experience high rates of both ischemic and bleeding events and represent an overall high-risk population [14,15]; whether complex PCI procedures exert a similar or differential impact on thrombotic and bleeding complications among those with and without HBR is therefore uncertain. Hence, determining the optimal strategy for DAPT in patients after complex PCI procedures requires individualized assessment of the patient's risk of ischemia and bleeding. To date, the PARIS bleeding risk score is a 6-item scoring system developed to estimate the bleeding risk in patients who receive DAPT after drug-eluting stent (DES) implantation [16], which drove its endorsement by the 2016 ACC/AHA DAPT guidelines [17]. In the derivation and validation study of the PARIS score [16,18], the absolute risk difference between coronary thrombosis and major bleeding with prolonged dual antiplatelet therapy was largely negative for patients with a high PARIS bleeding risk score, particularly for those at low or intermediate thrombotic risk. Specifically, it is known that complex PCI and HBR are intertwined and unfavorably affect prognosis after PCI, but the impact of HBR, estimated by the PARIS bleeding risk score, on the occurrence of ischemic and bleeding events in the setting of complex PCI procedures with DES implantation is not well established.
Accordingly, we sought to (1) describe the ischemic and bleeding events of patients who underwent complex PCI procedures compared with noncomplex PCI and (2) examine whether HBR, as defined by the PARIS bleeding risk score, affects the association between procedural complexity and clinical outcomes differently in an unselected real-world population receiving PCI with DES.
Patient Population.
This was a retrospective analysis of prospectively collected data. Between January 2013 and December 2013, a total of 10,724 consecutive patients who underwent PCI for CAD were prospectively enrolled from Fuwai Hospital, National Center for Cardiovascular Diseases, Beijing, China. For the present study, exclusion criteria were treatment by balloon angioplasty alone without stent placement, implantation of bioresorbable scaffolds, or bare-metal stents. Finally, 10,167 patients were selected for this analysis. Demographic and clinical characteristics, angiographic and procedural information, and follow-up data were systematically and prospectively collected in our dedicated PCI registry by independent research personnel. The study was conducted based on the principles of the Declaration of Helsinki, and its protocol was approved by the Institutional Review Board. All patients provided written informed consent for prospective follow-up before the intervention. Details of the clinical and laboratory analysis and procedures are contained in the Supplementary material method.
Patient Follow-Up.
After index PCI, patients were followed up at 1, 6, and 12 months and annually thereafter. Follow-up data were collected through medical records, telephone communications, or clinical visits by well-trained cardiologists who were blind to the purpose of the present study. Patients were advised to return for coronary angiography if indications of ischemic events occurred. The median follow-up duration was 881 days (interquartile range (IQR): 807 to 944 days).
Definitions and Clinical
Outcomes. Complex PCI was defined as the presence of at least 1 of the following characteristics: 3 coronary vessels treated, ≥3 stents implanted, ≥3 lesions treated, bifurcation with 2 stents implanted, total stent length >60 mm, treatment of chronic total occlusion (CTO), unprotected left main PCI, in-stent restenosis target lesion, and severely calcified lesion (requiring a rotablator system). This definition of complex PCI is an extended version of that proposed by Giustino et al. [7] and used in the ESC DAPT guidelines [19]; it additionally includes PCI for unprotected left main disease, in-stent restenosis, and heavily calcified lesions (using rotablation). Notably, these high-risk features are well recognized to predispose to higher rates of thrombotic events [20][21][22], but they were exclusion criteria in a retrospective analysis using the pooled patient-level data of 6 randomized controlled trials [7]. Validation of the PARIS bleeding score and instructions for its calculation are described elsewhere [16,18,23]. Patients were deemed at HBR for scores ≥8 and non-HBR for scores <8.
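To make the two study flags concrete, the following minimal sketch shows how a patient record could be labeled according to the definitions above. The field names are hypothetical and are not part of the registry's actual data dictionary.

```python
COMPLEX_PCI_CRITERIA = (
    "three_vessels_treated", "ge3_stents_implanted", "ge3_lesions_treated",
    "bifurcation_two_stents", "total_stent_length_gt_60mm",
    "cto_treated", "unprotected_left_main", "in_stent_restenosis_target",
    "severely_calcified_lesion",
)

def classify_patient(record: dict) -> dict:
    """Label a patient record as complex PCI (>=1 criterion) and HBR
    (PARIS bleeding score >= 8), per the definitions in the text."""
    return {
        "complex_pci": any(record.get(c, False) for c in COMPLEX_PCI_CRITERIA),
        "hbr": record.get("paris_bleeding_score", 0) >= 8,
    }

# Example: classify_patient({"ge3_stents_implanted": True,
#                            "paris_bleeding_score": 9})
# -> {"complex_pci": True, "hbr": True}
```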
The primary ischemic outcome was MACE, defined as a composite of cardiac death, MI, definite or probable ST, and target lesion revascularization (TLR). The primary bleeding outcome was clinically relevant bleeding defined as the Bleeding Academic Research Consortium (BARC) type 2, 3, or 5 [24]. Secondary outcomes were all-cause death, cardiac death, MI, TV-MI, definite/probable ST, any repeat revascularization, target vessel revascularization (TVR), TLR, stroke, and any bleeding. Detailed information on endpoint definitions is presented in the Supplementary material method.
Statistical Analysis.
Continuous variables are expressed as mean ± SD or median (interquartile range) and were compared with Student's t-test or the Mann-Whitney U test, respectively. Categorical data are reported as numbers and percentages and were compared using the chi-square or Fisher's exact test as appropriate. Cumulative event rates for ischemic and bleeding events were constructed using the Kaplan-Meier method among those with and without PCI complexity and after substratifying all subjects by both PCI complexity and HBR. Event rates were compared across groups using the log-rank test. Hazard ratios (HRs) with 95% confidence intervals (CIs) were calculated using a Cox proportional hazards regression model. A multivariable Cox regression model was used to compare the risks of adverse cardiac events between the complex PCI and noncomplex PCI groups using the following covariates: age, sex, current smoking, body mass index, hypertension, diabetes mellitus, chronic kidney disease, left ventricular ejection fraction, prior MI, prior revascularization (percutaneous coronary intervention and/or coronary artery bypass graft), acute coronary syndrome, mean stent diameter, hemoglobin, platelet count, type of DES implanted, and DAPT duration (as a time-adjusted covariate). "Complex PCI" was also assessed as either a categorical (0, 1 to 2, and ≥3) or a continuous (per increase in the number of complex PCI features) covariate in the Cox model. In addition, each complex PCI procedure component was included as a separate predictor in the multivariable Cox regression analysis to calculate individual predicted probabilities for MACE and clinically relevant bleeding. The consistency of the effect of undergoing complex PCI procedures according to HBR (HBR vs. non-HBR) was evaluated by formal interaction testing. Exploratory sensitivity analyses were performed to evaluate the consistency of our overall findings, including using three bleeding risk categories (low risk: 0 to 3, moderate risk: 4 to 7, and high risk: ≥8 points) of the PARIS bleeding risk score and defining HBR according to the PRECISE-DAPT score (i.e., ≥25). All tests were two-sided, and a P value of <0.05 was considered statistically significant. All analyses were performed with SAS version 9.4 (SAS Institute, Cary, NC, USA).
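As a rough illustration of this modeling setup, the sketch below fits a Cox proportional hazards model with a complex-PCI-by-HBR interaction term using the Python lifelines package on a small synthetic data frame. The variable names and toy values are purely illustrative; the actual analysis was performed in SAS.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Toy patient-level data: follow-up time (days), MACE indicator, and flags.
df = pd.DataFrame({
    "time": [881, 640, 910, 300, 870, 944, 512, 799],
    "mace": [0, 1, 0, 1, 0, 1, 1, 0],
    "complex_pci": [1, 1, 0, 1, 0, 0, 1, 0],
    "hbr": [0, 1, 0, 1, 1, 0, 0, 1],
    "age": [58, 71, 55, 66, 62, 49, 60, 57],
})
df["complex_x_hbr"] = df["complex_pci"] * df["hbr"]  # interaction term

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="mace")
# exp(coef) gives the HR with its 95% CI; the p value for "complex_x_hbr"
# plays the role of the formal interaction test described above.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%",
                   "exp(coef) upper 95%", "p"]])
```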
Clinical and Procedural Characteristics.
Of 10,167 patients (mean age: 58.3 ± 10.3 years) with available angiographic characteristics, 3651 (35.9%) underwent complex PCI. The baseline and procedural characteristics according to PCI complexity are presented in Table 1. Patients who underwent complex PCI were more likely to be elderly and male, with a high prevalence of diabetes mellitus and hypertension. The complex PCI group had a higher proportion of stable CAD as an indication for PCI, previous MI, and previous myocardial revascularization with either PCI or CABG. PARIS thrombotic risk scores were higher in the complex PCI group, without a difference in PARIS bleeding risk score levels. Procedurally, the complex PCI group had a greater number of treated vessels and lesions and more stents implanted, leading to a greater total stent length. Subjects with complex PCI were more likely to display involvement of a thrombotic lesion, a type B2/C lesion, and higher SYNTAX scores. The prevalence of the complex PCI components in the overall population is illustrated in Supplementary Figure 1, and the overlap of these high-risk features is summarized in Supplementary Table 1. As expected, ≥3 stents implanted and ≥3 lesions treated frequently overlapped with other high-risk procedural characteristics.
By including complex PCI as a continuous variable within the same multivariable models, the risk of MACE tended to be greater as the number of high-risk procedural characteristics increased (per increase in the number of complex PCI features, adjusted HR: 1.16, 95% CI: 1.09-1.23; P < 0.001) (Supplementary Table 2). Of note, the complex PCI score was not associated with a greater risk of clinically relevant bleeding (adjusted HR: 0.91, 95% CI: 0.82-1.02; P = 0.107). Thus, the number of complex PCI features was associated with greater risk of the primary ischemic endpoint, whereas, conversely, there was a numerically gradual decrease in the risk of clinically relevant bleeding (0: 2.9%; 1 to 2: 2.4%; ≥3: 2.2%; P = 0.223) as the number of high-risk features increased. Adjusted risk for MACE and clinically relevant bleeding according to each component of high-risk procedural features is illustrated in Supplementary Table 3. Individual high-risk features, such as ≥3 stents implanted, bifurcation with 2 stents, >60 mm total stent length, in-stent restenosis target lesion, and severely calcified lesion, were independent predictors for MACE but not for clinically relevant bleeding.
Clinical Outcomes in relation to Complex PCI Procedures and HBR.
Indeed, subjects with HBR had significantly greater rates of ischemic and bleeding events compared with subjects without HBR (Supplementary Table 4). As shown in Figure 2, the rates of MACE among subjects with both HBR and complex PCI, HBR alone, complex PCI alone, or neither HBR nor complex PCI were 8.9%, 6.9%, 7.8%, and 4.4%, respectively (P < 0.001). Similar patterns of higher risk were observed for cardiac death, MI, and definite/probable ST. The rate of clinically relevant bleeding was higher among subjects with HBR (P = 0.003), whereas complex PCI was associated with numerically lower, nonsignificant rates of major bleeding. Clinically relevant bleeding rates across these same 4 groups were 5.6%, 4.6%, 2.2%, and 2.9%, respectively.
Adjusted HRs for ischemic and bleeding events associated with complex PCI procedures and stratified by the presence or absence of HBR are shown in Table 3. The HRs of every endpoint were similar in direction and magnitude between the HBR and non-HBR groups, with no evidence of statistical interaction (all P interaction > 0.05), suggesting a consistent effect of complex PCI. There was no significant interaction (P = 0.388) in the adverse effect of complex versus noncomplex PCI for MACE between patients with HBR (adjusted HR: 1.13, 95% CI: 0.57-2.25) and without HBR (adjusted HR: 1.66, 95% CI: 1.40-1.97). The unadjusted rates of cardiac death, MI, definite/probable ST, and TLR were higher in HBR subjects with complex PCI relative to HBR subjects without complex PCI; however, after multivariable adjustment, the HRs were not significantly different, as the analysis of subjects with HBR is limited by its small sample size. It is worth noting that the risk of clinically relevant bleeding associated with complex PCI was not increased in participants with HBR.
Discussion
The present study of more than 10,000 real-world patients undergoing PCI, predominantly with new-generation DES, is the first to address the association between complex PCI procedures, HBR, and the occurrence of adverse events in a large all-comers PCI cohort. The main findings of this analysis can be summarized as follows: (1) Compared with noncomplex PCI, PCI complexity was associated with a considerably higher risk of adverse ischemic events, with no higher risk of clinically relevant bleeding in multivariable analyses over a median of 29 months of follow-up. The ischemic risk tended to be greater for progressively higher degrees of procedural complexity. (2) The independent impact of complex PCI on thrombotic events was substantial and uniform irrespective of HBR status, and there was no interaction between complex PCI and HBR (i.e., PARIS bleeding score ≥8) on clinically relevant bleeding. (3) Together, these findings indicated that, regardless of HBR, complex PCI was an independent driver of adverse ischemic outcomes without an excess of bleeding events, suggesting that the use of potent P2Y12 inhibitors may be beneficial to patients who underwent complex percutaneous revascularization. We observed that PCI complexity exerted an adverse impact not only on MACE, proportional to the number of complexity criteria present, but also on all individual endpoints including cardiac death, MI, definite/probable ST, and TLR, findings that corroborate results from previous reports [7][8][9][10][11][12]. Intriguingly, subjects with complex PCI did not experience a significantly increased risk of clinically relevant bleeding compared with the noncomplex PCI group. In this regard, three analyses, from the DAPT study, the PROMETHEUS study, and a pooled patient-level dataset from six RCTs, showed comparable clinically relevant bleeding risks between complex and noncomplex PCI groups, consistent with our findings [7][8][9][13]. In contrast to these observations, the results of the ADAPT-DES registry and the GLOBAL LEADERS trial showed that complex PCI criteria were correlated with a higher incidence of bleeding [10,12]. Analogously, in the Bern PCI Registry consisting of 10,236 post-PCI patients, Ueki et al. also demonstrated a significant relationship of high-risk features for stent-related ischemic events with BARC 3-5 bleeding [11].
These conflicting results regarding the impact of complex PCI on bleeding events may be attributable to differences in the definition of "complex PCI," the intensity of DAPT (clopidogrel vs. more potent P2Y12 inhibitors), and the bleeding risk of the study population. Specifically, our own definition of complex PCI was an extended version of that proposed by Giustino et al. and included unprotected left main PCI, in-stent restenosis target lesions, and rotational atherectomy use for heavily calcified lesions. Given that these high-risk features are established risk factors for thrombotic complications [20][21][22], such patients, who were excluded from the patient-level pooled dataset from six RCTs, need to be taken into account in real-world practice. Under this scenario, it is unsurprising that the proportion of patients receiving complex procedures was markedly higher (35.9%) in our study than in two previous pooled patient-level databases from RCTs, where it ranged from 17.9% to 29.6% [7,12,13]. Furthermore, the high-risk features in the Bern PCI Registry mainly comprised CKD (47.6%) [11], which has emerged as a common contributor to both ischemic and bleeding complications [16,25]. They found that CKD was an independent predictor for both the device-oriented composite endpoint (DOCE) and BARC 3-5 bleeding; however, ≥3 lesions treated or ≥3 stents implanted were the only independent predictors for DOCE, but not for bleeding. In contrast, in the present study, no component of the complex PCI procedures was associated with clinically relevant bleeding. Thereby, the high proportion of CKD in the "complex PCI" group of the Bern PCI Registry likely predisposed that cohort to bleeding events. Additionally, one explanation for these differences could lie in the use of P2Y12 inhibitors. Prior reports involved the use of more potent antiplatelet agents such as ticagrelor and prasugrel, which cause more bleeding events despite lowering residual ischemic risk [7][8][9][12], whereas the patients in our current study were treated only with clopidogrel, because P2Y12 inhibitors other than clopidogrel were unavailable in China during the study period. Moreover, another hypothetical explanation for this negative result regarding clinically relevant bleeding may reside in the bleeding risk of the study population. The mean PARIS bleeding risk score in our cohort was relatively lower than in the Bern PCI Registry (3.7 ± 2.1 vs. 4.3 ± 2.5). Meanwhile, comparisons of triple therapy (oral anticoagulant (OAC), aspirin, and a P2Y12 inhibitor) versus double therapy (OAC plus a P2Y12 inhibitor) demonstrated that triple therapy significantly increased the risk of bleeding [26,27], an important consideration given that up to 8.3% of patients were treated with triple therapy (any DAPT and an oral anticoagulant) at discharge in the Bern PCI Registry, compared with 0.2% in our PCI registry.
Although the risks of ischemic events are greater than those of major bleeding in most patients undergoing PCI, the predicted long-term probabilities of ischemic (composite of cardiac death, MI, and definite/probable ST) and bleeding (moderate/severe bleeding) outcomes may share common risk factors and have a strong correlation [28]. Given the mutual and possibly competing roles of high ischemic and bleeding risk features, we postulated that HBR may differentially affect the risk of adverse events after complex PCI compared with noncomplex PCI. Nevertheless, the relevance of complex PCI procedures to longitudinal outcomes in the setting of HBR remains less clear. In the present investigation, we found that the presence of HBR further amplified the underlying thrombotic risk of complex PCI, because subjects with both abnormalities were at the highest risk for ischemic events and those without complex PCI and without HBR were at lower ischemic risk; however, the association between complex PCI and ischemic outcomes was similar in direction and magnitude among those with and without HBR, with no evidence of statistical interaction. In other words, HBR increased the risk of adverse ischemic events to a similar extent after complex PCI and noncomplex PCI. Similarly, no evidence of an interaction between complex PCI and HBR with regard to the risk of clinically relevant bleeding was observed, although the unadjusted rates of bleeding complications were numerically higher for HBR patients with vs. without complex PCI. These observations suggest that more intense and longer antiplatelet therapy may improve outcomes after complex PCI procedures. Nonetheless, the potential implications of our findings should all be considered hypothesis-generating and require randomized trials for validation. This study needs to be interpreted in the context of certain limitations. First, this study was a post hoc analysis of an observational, albeit large, prospective study, which precluded causal inferences; as such, it has to be considered hypothesis-generating. Second, the patients who underwent complex PCI were not randomly assigned; the procedures were decided at the operator's discretion. Although the major results were consistent after multivariable adjustment, we did not correct for all possible and unmeasured confounders. Third, significant differences in the adjusted rates of clinical outcomes in HBR patients with complex PCI vs. noncomplex PCI were not observed, in contrast to what was seen in the non-HBR cohort and the entire study population. This was likely due to the relatively modest sample size of HBR patients. No significant interactions were present between complex PCI and HBR versus non-HBR status for the risk of ischemic and bleeding events, indicating that the primary results of the study related to the influence of complex PCI on clinical outcomes apply to HBR patients as well. Fourth, as all patients in our study received only clopidogrel as the P2Y12 inhibitor, the results may not be generalizable to those receiving more potent antiplatelet agents, such as ticagrelor or prasugrel. Despite these limitations, the current study had the advantages of inclusion of an all-comers population with minimal exclusion criteria, full-scale procedural complexity definitions, and a relatively long follow-up duration.
Conclusions
Patients undergoing complex PCI procedures, compared with those undergoing noncomplex PCI, were at a substantially higher risk of ischemic events, with no higher risk of clinically relevant bleeding, irrespective of HBR status. HBR further increased the risk of long-term adverse events after both complex and noncomplex PCI to a comparable degree, whereas bleeding risk did not increase to the same extent as ischemic risk after complex PCI procedures within the HBR group, suggesting that intensified antiplatelet therapy may be beneficial for patients with complex PCI.
Data Availability
The data used to support the findings of this study are available from the corresponding author upon reasonable request. | 2020-03-19T10:40:48.157Z | 2020-03-01T00:00:00.000 | {
"year": 2020,
"sha1": "ddd7e55c40e85e98090cc539eef867367ef9ffc7",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/jitc/2020/2985435.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f008f7f81cdee8e0d2115e4af54d357ecc18a88b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253663566 | pes2o/s2orc | v3-fos-license | Network Architecture for Optimizing Deep Deterministic Policy Gradient Algorithms
The traditional Deep Deterministic Policy Gradient (DDPG) algorithm has been widely used in continuous action spaces, but it still suffers from the problems of easily falling into local optima and large error fluctuations. Aiming at these deficiencies, this paper proposes a dual-actor-dual-critic DDPG algorithm (DN-DDPG). First, on the basis of the original actor-critic network architecture of the algorithm, a critic network is added to assist the training, and the smaller Q value of the two critic networks is taken as the estimated value of the action in each update, reducing the probability of the local optimum phenomenon. Then, the idea of a dual-actor network is introduced to alleviate the value underestimation generated by the dual-critic network, and the action with the greater value from the two actor networks is selected for each update to stabilize the training process. Finally, the improved method is validated on four continuous action tasks provided by MuJoCo, and the results show that the improved method can reduce the fluctuation range of the error and improve the cumulative return compared with the classical algorithm.
Introduction
As artificial intelligence continues to thrive, reinforcement learning (RL), which is a learning process that combines exploration and action, has been well developed in discrete action spaces focusing on decision control. By letting agents learn continuously in a trial-and-error manner, RL pursues the overall maximum return while seeking the optimal action policy [1,2]. However, when high-dimensional inputs or continuous action tasks are involved, traditional RL that relies on maximizing expected returns by performing trial and error may not work well. To tackle these kinds of problems, the concept of deep reinforcement learning (DRL) has been presented. In 2013, DeepMind proposed a method of using deep neural networks to play Atari games. It is the first successful and versatile DRL algorithm, although its scope of application is still limited to low-dimensional discrete action spaces. Topics dealing with continuous action tasks have become a new set of research interests [3,4].
The basic idea of deep reinforcement learning is to fit the value function and policy function of reinforcement learning through a neural network. Typical algorithms include Deep Q-Network (DQN) [5], based on discrete action tasks, and Deep Deterministic Policy Gradient (DDPG) [6], based on continuous action tasks. DDPG and DQN have very high similarity in their algorithms. The main difference is that DDPG introduces a policy network to output continuous action values. DDPG can thus be understood as an extension of DQN to continuous actions. The DDPG algorithm has been studied extensively, with a series of outcomes obtained. Mnih et al. [7] proposed the concept of a two-layer BP neural network and hence improved the DDPG algorithm. The search efficiency of the BP network was improved by using the Armijo-Goldstein criterion and the BFGS method [8]. Nikishin et al. [9] reduced the influence of noise on the gradient by averaging methods under the premise of random weights. Parallel actor networks and prioritized experience replay were used and tested in the continuous action space of bipedal robots [10]. The experimental results show that the revised algorithm can effectively improve the training speed. In addition, the storage structure of experience in DDPG was optimized, which improves the convergence speed of the DDPG algorithm through a binary tree [11][12][13].
To sum up, the above methods propose improvements to address the shortcomings of DDPG, and all have achieved good results. Although the performance of the improved algorithms has been significantly improved, the flaws of local optimal solutions and large error fluctuations still need to be further addressed.
The main content of this paper is as follows: Firstly, the basic principle of DDPG is introduced, and then, combined with a description of the network structure and its associated parameters, the existing shortcomings are analyzed. Secondly, an improved algorithm is proposed to tackle the shortcomings of DDPG. The improvement is divided into two aspects. First, in order to reduce the probability of local optimal solutions, a critic network is added to assist training, and the smaller Q value of the two critic networks is taken as the estimated value of the action. Second, the dual-critic network will select the suboptimal Q value for the update in each round, and the suboptimal Q value also corresponds to a suboptimal action, which leads to continuous underestimation of the action value of the agent. In response to this problem, this work introduces a dual-actor network based on the dual-critic network architecture; that is, the most valuable action from the two actor networks is selected for training under the minimum Q value, so as to improve the robustness of the network structure. Finally, the effectiveness of the improved method is verified in eight simulated experimental environments.
The rest of this paper is organized as follows: The basics of DDPG are introduced in Section 2. In Section 3, the idea of improving the algorithm is elaborated. Section 4 includes experimental results and analysis. Section 5 summarizes the work and refers to future works.
Deep Deterministic Policy Gradients
The problem that reinforcement learning needs to solve is how to let the agent learn what actions to take in an environment so as to obtain the maximum sum of reward values [12][13][14]. The reward value is generally associated with the task goal defined for the agent. The DDPG algorithm is used to solve the reinforcement learning problem in continuous action spaces [6,[15][16][17]. The main process is as follows: Firstly, the experience data generated by the interaction between the agent and the environment is stored in the experience replay mechanism. Secondly, the sampled data is learned and updated through the actor-critic architecture, and finally the optimal policy is obtained. The structure of the DDPG algorithm is shown in Figure 1 [15].
Based on the deterministic policy gradient, the DDPG algorithm uses a neural network to simulate the policy function and the Q function and combines deep learning methods to complete the task training [16]. The DDPG algorithm continues the organizational structure of the DQN algorithm and uses actor-critic as the basic architecture of the algorithm [17]. The combination of the online-network and target-network concepts of the DQN algorithm with the actor-critic method gives both the actor and critic modules in DDPG the structure of an online network and a target network [6,18,19].
During the training process, the agent in the current state S decides the action A to be performed through the current actor network, and then the Q value of the current action and the expected return value y_i = R + γQ′ are calculated according to the current critic network. Then, the actor target network selects the optimal action A′ among the actions that can be performed according to the previous learning experience, and the Q′ value of the future action is calculated by the critic target network. The parameters of the target network are periodically updated from the online network parameters of the corresponding module.
DDPG adopts a "soft" method to update the target network parameters; that is, the magnitude of each update of the network parameters is very small, which improves the stability of the training process [20][21][22]. With the update coefficient denoted as τ, the "soft" update method can be expressed as

θ′ ← τθ + (1 − τ)θ′. (1)

DDPG makes the decision of using action a_t by the deterministic policy π. It approximates the state-action function via a value network, with the target function defined as the accumulated reward with a discount factor [23,24], as shown in the following equation:

J(θ^μ) = E[Σ_{t=1}^{T} γ^{t−1} r_t]. (2)

In the online network of the critic, the update of the network parameters is based on minimizing the mean square error of the loss function [10], which can be expressed as

L = (1/N) Σ_i (y_i − Q(s_i, a_i | θ^Q))². (3)

For the actor online network, the network parameters are updated according to the policy gradient [10], as shown in the following equation:

∇_{θ^μ} J ≈ (1/N) Σ_i ∇_a Q(s, a | θ^Q)|_{s=s_i, a=μ(s_i)} ∇_{θ^μ} μ(s | θ^μ)|_{s=s_i}. (4)
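For illustration, a minimal PyTorch-style sketch of the "soft" update (1) might look as follows, assuming the networks are torch.nn.Module instances; τ = 0.01 corresponds to the soft update coefficient used later in the experiments.

```python
import torch

@torch.no_grad()
def soft_update(target_net, online_net, tau=0.01):
    # theta' <- tau * theta + (1 - tau) * theta'  (Eq. (1))
    for t_p, o_p in zip(target_net.parameters(), online_net.parameters()):
        t_p.copy_(tau * o_p + (1.0 - tau) * t_p)
```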
Error Analysis. It is an inevitable problem for Q-Learning to tend to overestimate errors [25][26][27][28]. In Q-Learning, the update of the estimated value of an action is conducted by the ε-greedy policy with target y_t = r + γ max_{a_{t+1}} Q(s_{t+1}, a_{t+1}); hence, the actual maximal value of an action is usually smaller than the estimated maximal value of this action, as shown in the following equation:

E_ε[max_a (Q(s, a) + ε)] ≥ max_a Q(s, a). (5)
In the actor-critic structure, the update of the actor policy depends on the critic value function [31][32][33]. Given the online network parameter ϕ, let ϕ_approx denote the updated parameter of the actor network calculated from the estimated value function Q_θ(s, a), and let ϕ_true denote the parameter obtained by using the actual value function Q^π(s, a), where Q^π(s, a) is unknown during the training process and represents the value function in an ideal state. Then ϕ_approx and ϕ_true can be expressed as

ϕ_approx = ϕ + (α/Z₁) E[∇_ϕ π_ϕ(s) ∇_a Q_θ(s, a)|_{a=π_ϕ(s)}],
ϕ_true = ϕ + (α/Z₂) E[∇_ϕ π_ϕ(s) ∇_a Q^π(s, a)|_{a=π_ϕ(s)}]. (6)

In Equation (6), Z₁ and Z₂ normalize the gradients, i.e., Z⁻¹‖E[·]‖ = 1. Without gradient normalization, overestimation would still be guaranteed under slightly stricter conditions [34,35].
Since the gradient is updated in the direction of a local maximum, there exists a sufficiently small number k₁ such that, when the learning rate of the neural network is less than k₁, the parameter π_approx based on ϕ_approx and the parameter π_true based on ϕ_true converge to the local optimum of the corresponding Q function; at this point, the estimated value under π_true is restricted to be below that under π_approx, as shown in the following equation:

E[Q_θ(s, π_approx(s))] ≥ E[Q_θ(s, π_true(s))]. (7)
If the training efect of the critic network is satisfying, the estimation of the policy value will be at least similar to the actual value of φ true as shown in the following equation: At this time, if the learning rate of the network is smaller than the smaller one of k1 and k2, we know by combining Equations (8) and (9), the action value will be overestimated as shown in the following equation:
The existence of these errors leads to inaccurate estimation of the action value, causing a suboptimal policy to be taken as the optimal policy output of the online network, thereby affecting the performance of the algorithm.
Dual-Actors and Dual-Critics Network Structure
Due to the existence of the overestimation error, the estimate of the value function can be regarded as an approximate upper limit of the estimated value of the future state. If there is some error in every Q value update, the accumulation of errors will result in a suboptimal policy. Aiming at this kind of problem, an additional critic network is used in this work. The smaller Q value of the two networks is taken as the estimated value of the action in each update, so as to reduce the adverse effect of the overestimation error.
The process of obtaining the smaller Q value via the dual-critic network is shown in the following equation:

y = r + γ min_{i=1,2} Q_i′(s_{t+1}, μ′(s_{t+1} | θ^{μ′})). (11)

Although the dual-critic network can reduce the overestimation error of the algorithm and reduce the probability of generating a locally optimal strategy, in the actual training process it is rare for the learning rate of the neural network to be less than the minimum of k₁ and k₂; combined with the analysis in Section 3.1, this means the probability of overestimation is very low. The dual-critic network will select the suboptimal Q value for the update in each round; the suboptimal Q value also corresponds to a suboptimal action, which leads to continuous underestimation of the action value of the agent and in turn affects the rate of convergence of the critic network [36][37][38].
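A sketch of the target computation (11) in PyTorch could read as follows; the actor/critic target networks are assumed callables, and the (1 − done) terminal mask is a common practical addition not written explicitly in Equation (11).

```python
import torch

def dn_ddpg_target(r, s_next, done, actor_t, critic1_t, critic2_t, gamma=0.99):
    # y = r + gamma * min(Q1'(s', mu'(s')), Q2'(s', mu'(s')))  (Eq. (11))
    with torch.no_grad():
        a_next = actor_t(s_next)
        q_min = torch.min(critic1_t(s_next, a_next), critic2_t(s_next, a_next))
        return r + gamma * (1.0 - done) * q_min
```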
Aiming at the problem of underestimation in the dual-critic network, a dual-actor network is presented in this work for training on the basis of the dual-critic network architecture. The network selects the action with the higher value among the two candidate actions under the minimum Q value, which reduces the influence of the Q value underestimation and improves the robustness of the network structure.
The network structure of the dual-actors and dual-critics is shown in Figure 2.
For a dual-actor network, training is subject to the same issues when the same sample data and processing methods are used. In order to ameliorate this kind of problem, the parameters of the two actor networks are updated based on different policy gradients, which helps to reduce the coupling between the two actors and further improves the convergence rate of the algorithm [39,40].
If the policies of the two actors are defined as π₁ and π₂, and the parameters of the dual-critic network are θ₁ and θ₂, we obtain two actions a₁ = μ(s | π₁) and a₂ = μ(s | π₂); the action with the maximal value based on this dual-actor network is then selected using the following equation:

a = argmax_{a ∈ {a₁, a₂}} min_{i=1,2} Q_{θᵢ}(s, a). (12)

Two Arm environments used in this work are shown below.

(1) Arm_easy. A 400 × 400 2-dimensional space is constructed in the Arm environment. One end of a robot arm is fixed in the middle of the environment. The goal of the training is to make the other end of the robot arm find the blue target point, as shown in Figure 3.
(2) Arm_hard. This is similar to the Arm_easy environment; the only difference is that the target point is randomly generated in each round.
Two classical continuous control tasks used in this work are shown below.
(1) Pendulum. The pendulum starts at a random position; the aim is to swing it upwards and keep it upright.
(2) Mountain Car Continuous. This task is to drive a car to reach the top of a hill; however, the power of the car is not sufficient to drive it directly to the top, so it needs to move back and forth between the left and right slopes repeatedly to accumulate momentum to reach the top. It is shown in Figure 4.
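For reference, a minimal rollout of these two classic tasks with a random policy could be written as below. This sketch assumes the classic 4-tuple Gym API (gym < 0.26) and the environment IDs registered in recent Gym versions; the 300-step cap matches the per-round limit used in the experiments.

```python
import gym

for env_id in ["Pendulum-v1", "MountainCarContinuous-v0"]:
    env = gym.make(env_id)
    obs, total = env.reset(), 0.0
    for _ in range(300):  # max steps per round, as in the experiments
        obs, reward, done, _ = env.step(env.action_space.sample())
        total += reward
        if done:
            break
    print(env_id, "episode return:", total)
```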
The 4 MuJoCo continuous control tasks include:
(1) Half Cheetah. Train a bipedal agent to learn running as shown in Figure 5.
(2) Humanoid. Train a 3-dimensional humanoid agent to walk forward without falling over.
(3) Hopper. Train a one-legged hopping agent to move forward as fast as possible.
(4) Walker2d. Train a 3-dimensional bipedal agent to walk forward as fast as possible.
The complete DN-DDPG training procedure is summarized below (a code sketch of the per-step action selection is given after this list):
(1) Randomly initialize the online actor-critic networks with parameters θ₁^Q, θ₁^μ and θ₂^Q, θ₂^μ
(2) Initialize the target networks Q′ and μ′, and copy the online network parameters to the target networks
(3) Initialize the experience replay buffer D, the noise coefficient N_t, and the discount rate γ
(4) For each episode in the external loop, episode number = 1, ..., M:
(5) Initialize state S as the current state, and obtain the start state s₁
(6) For each step in the internal loop, step number = 1, ..., T:
(7) Select action a_t: a_t = argmax_a [Q₁(s_t, a_t, θ₁^μ), Q₂(s_t, a_t, θ₂^μ)] + N_t
(8) Execute action a_t, and obtain the reward r_t and the new state s_{t+1}
(9) Save the experience data (s_t, a_t, r_t, s_{t+1}) in the experience pool
(10) Randomly select a certain number of samples (s_i, a_i, r_i, s_{i+1}) from the experience pool
(11) Calculate the target value Q: y₁ = r_{t+1} + γQ₁′(s_{t+1}, μ′(s_{t+1}|θ^{μ′})), y₂ = r_{t+1} + γQ₂′(s_{t+1}, μ′(s_{t+1}|θ^{μ′})), y = min(y₁, y₂)
(12) Calculate the square error of the loss function and update the critic networks: J(w) = (1/m) Σ_{j=1}^{m} (y_j − Q(ϕ(S_j), A_j, w))²
(13) Update the actor networks via the gradients of the sample data
(14) Regularly update the parameters of the target networks
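The per-step action selection of step (7), combined with the dual-critic scoring of Equation (12), could be sketched as follows. PyTorch modules are assumed, and the exploration noise is Gaussian here, which is one common choice for N_t.

```python
import torch

def select_action(s, actor1, actor2, critic1, critic2, noise_std=0.1):
    # Each actor proposes an action; each proposal is scored by the
    # minimum of the two critics, and the better proposal is kept (Eq. (12)).
    with torch.no_grad():
        a1, a2 = actor1(s), actor2(s)
        v1 = torch.min(critic1(s, a1), critic2(s, a1))
        v2 = torch.min(critic1(s, a2), critic2(s, a2))
        a = a1 if v1.mean() >= v2.mean() else a2
    return a + noise_std * torch.randn_like(a)  # exploration noise N_t
```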
This work compares the performance of DN-DDPG with the original DDPG algorithm. In order to study the improvement effect of the dual-critic network and the dual-actor network separately, the DCN-DDPG algorithm, i.e., the single-actor and dual-critic network, is included for comparison. The outcomes of the comparison are shown intuitively through experiments.
Parameter Setting.
To ensure the accuracy and fairness of the experimental results, the common parameter values of the different algorithms are the same. The training rounds for both the Arm environments and the two Gym classic control tasks are set to 2000, and the maximum number of training steps per round is 300. The training rounds of the 4 MuJoCo continuous control tasks are set to 5000, and the number of training steps per round is the maximum number of round steps in the Gym environment. The agent continuously learns and explores in the environment. If the preset task in the environment is successfully completed or the number of training steps per round exceeds the maximum, the scene is reset and a new round is started. Some parameters of the MuJoCo tasks are shown in Table 1, including:
Neuron number in 2nd layer: 300
Experience pool volume: 100000
Batch data size: 256
Soft update coefficient: 0.01
Action reward discount rate: 0.99
Critic net output distribution low limit: −20
Target net parameters update round number
DN-DDPG differs from DCN-DDPG by the addition of an extra actor network to optimize training. The comparison of these three algorithms gives a more intuitive display of the two improvements presented in this article: dual-critics and dual-actors. The experimental results are shown in Figure 6. The shaded part in the figure represents the standard deviation during training; that is, when using the same hyperparameters and network model, different random number seeds are used to achieve random exploration. The shaded upper limit is the optimal result. The x-axis represents the number of rounds of agent training, the y-axis represents the cumulative reward obtained per round, and the experiment recorded the average reward value per 100 rounds.
In the environments of Arm_easy and Arm_hard, the average rewards from the three algorithms stay around the same value. In some cases, the rewards from both DCN-DDPG and DDPG are superior to that of DN-DDPG. However, from the point of view of the overall training effect, DN-DDPG performs better than the other two algorithms, while DCN-DDPG is slightly better than DDPG. In the Pendulum experiment, the overall performance of DN-DDPG is the best, which is due largely to the fact that the dual-critic network is able to reduce the error while the dual-actor network selects the action of higher value. In Mountain Car Continuous, the average rewards from the three algorithms tend to be the same; however, within the first 200 time steps, DN-DDPG has a better convergence speed than the other two algorithms. In addition, in Half Cheetah, Humanoid, Hopper and Walker2d, DN-DDPG has a worse starting performance than DCN-DDPG and DDPG, which could be due to the fact that DCN-DDPG and DDPG have relatively simpler network structures, able to deal with complex environments more easily than DN-DDPG. DN-DDPG needs a period of training, and after this initial training period the average reward from DN-DDPG becomes obviously better than those from the other two algorithms. Again, the overall performance of DCN-DDPG is better than DDPG. Finally, the shaded areas of the different algorithms are compared, with the outcome that the area of DN-DDPG is smaller than those of DCN-DDPG and DDPG, which reflects that the training of DN-DDPG is more stable. From the experimental results in Figure 6, the dual-critic method is able to increase the performance of the DDPG algorithm, but to a limited extent. By introducing the dual-actor method, the DN-DDPG network, built on DCN-DDPG, is able to further increase the overall performance and training stability of the algorithm. Hence, compared to the original DDPG, the DN-DDPG, which is based on dual-actors and dual-critics, shows the best performance improvement.
Conclusion
A deep deterministic policy gradient algorithm based on a dual-actor and dual-critic network is proposed. In order to reduce the overestimation error in the original actor-critic network, a dual-critic target network is introduced into the algorithm, and the minimum action estimate generated by the two networks is selected to update the policy network. In order to alleviate the problem of underestimation caused by the dual-critic network, a dual-actor network is added on the basis of the original network, and the action with the higher value among the two actions generated by the dual-actor network is selected. The experimental results show that, compared with the original DDPG algorithm and the DDPG algorithm based on a single-actor and dual-critic network, the novel DN-DDPG algorithm based on the dual-actor and dual-critic network achieves a higher cumulative reward and a smaller standard deviation during training.
There is more to be explored in future work. First, in order to improve the optimization ability of the algorithm, more suitable deep learning methods can be explored and applied to the neural networks. Second, for the experience replay mechanism in the DDPG algorithm, it is viable to explore whether there is a better method to determine the sample priority to improve the convergence speed during training.
Data Availability
The dataset can be accessed upon request.
Conflicts of Interest
The authors declare that there are no conflicts of interest to report regarding the present study. | 2022-11-18T15:45:13.040Z | 2022-11-18T00:00:00.000 | {
"year": 2022,
"sha1": "147aa8f0b6d64e9a35019075ca07a8ea19a42a0e",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/cin/2022/1117781.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6531670d673b9f914b0488687ff951f444e04a4e",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
204641125 | pes2o/s2orc | v3-fos-license | Statistical analysis of control quality of MPC using testing hypothesis
Methods of statistical induction have a significant role in quantitative research. In a wide spectrum of research areas, methods based on testing hypotheses have been frequently used. However, in the area of process control, hypothesis testing has not been widely considered as an established tool for signal analysis, although signals in control loops are suitable for analysis by quantitative statistical methods due to their stochastic character. In particular, a statistical paired comparison can be applied to the analysis of control quality achieved with different control algorithms. This comparison can be based on a paired comparison of corresponding signals obtained with different or modified control algorithms. The aim of this paper is a proposal for incorporating hypothesis testing into the analysis of control quality. The analysis was performed at a strictly defined significance level of 0.001, which is a standardly used value in technical applications. As an example, the analysis of control quality achieved with two versions of a predictive controller is demonstrated. Finally, the results of the paired comparison using hypothesis testing are discussed.
Introduction
One of the main aims in process control [1] is the achievement of suitable control quality. The quality of control is often examined in order to evaluate which control algorithm reaches the best results in a particular control problem. The control algorithms which yield appropriate control results are often complex and computationally demanding. Therefore, there is an effort to propose modifications of the control algorithms in order to simplify them. These simplifications are obviously at the expense of control quality. The quality of the control is then examined in order to evaluate whether the modified control algorithms are still suitable for a particular control problem or not. In this paper, this case was considered.
As a suitable example of improving a control algorithm with regard to decreasing computational complexity, multivariable Model Predictive Control (MPC) will be considered. Model predictive control is one of the currently utilized modern control methods, as can be seen, e.g., in [2]-[4].
With regard to the high computational requirements caused by the multivariability of the controlled process, higher prediction and control horizons, and the number of constraints on variables, modifications of the MPC control algorithm are advantageous, as proposed, e.g., in [5]. A significant part of constrained MPC is the optimization task, which is characterized by higher computational complexity. Therefore, reduction of the computational complexity of the optimization methods in MPC has been widely researched [6]. An application of an appropriate numerical method is necessary to obtain the vector of future increments of the manipulated variables given by solving the optimization problem, which incorporates a suitable cost function with the considered constraints. The Hildreth method has been frequently used as the numerical method for solving the optimization problem in MPC [7]. Besides modifications of the numerical optimization methods, general simplified optimization strategies used in MPC controllers have been proposed [6].
The control quality is mostly analyzed using general control quality criteria based on sums of powers of control increments and on sums of powers of control errors [8]. These criteria yield only descriptive attributes of control quality. Therefore, on the basis of particular values of the criteria, it is not possible to determine whether the control quality achieved with one algorithm is statistically significantly different from the control quality achieved with another algorithm.
The aim of this paper is to examine the control quality with the use of testing hypotheses [9]. If the analyzed data have a normal probability distribution character, then the Paired T-test has to be applied. In the opposite case, the Wilcoxon Paired test should be used. An important assumption for testing hypotheses is the declaration of a significance level α, which is considered as strictly defined at the value 0.001 in technical applications. The analysis was then performed at this strictly defined significance level. In this paper, hypothesis testing is applied in the comparison of control quality between a standard MPC algorithm and a particular modified MPC algorithm [11], where a modification of an iterative numerical optimization method was proposed. The modification consists in the addition of a new termination condition in the iterative algorithm. In this paper, the obtained results of the analysis using hypothesis testing are more mathematically supported than the simple descriptive values of the control quality criteria.
Multivariable MPC analyzed by standard control quality criteria
In MPC, the control quality criteria J1 (1) and J2 (2) will be evaluated, as well as the number of floating point operations. The floating point operations (flops) are measured both for the MPC algorithm without the modifications (F) and for the modified MPC algorithm (F*), which was proposed in [11] (Section 3). The number of flops is determined using rules published in [12]. This standard analysis will be compared with our new proposal of the analysis of signals using hypothesis testing with regard to control quality in Section 4. A model with two inputs and two outputs (TITO) was further considered in the framework of MPC [1]-[4] in this paper.
The TITO system can be further expressed by (3)-(5) in the form of a general transfer matrix.
A widely used model in model predictive control is the CARIMA model, which is obtained by adding a disturbance model (10)-(11), where the polynomial matrix C is further supposed to be equal to the identity matrix.
The difference equations (10) of the CARIMA model are used for the computation of predictions in predictive control. These equations can be further written in matrix form (12)-(15).
It is necessary to directly compute three steps-ahead predictions by substituting previous predictions into later predictions. Three past values of the system output are required for the computation. Computation of the predictions can be divided into recursion of the free response and recursion of the matrix of the forced response. If the control horizon is lower than the prediction horizon, the number of columns in the matrix is reduced. Predictions can be written in a compact matrix form (26).
The computation of the control law of MPC is based on the minimization of the quadratic cost function (27), related to quadratic programming [13].
where w is the vector of the reference trajectory. The criterion can be rewritten using expression (29) into the form (32)-(33).
where g denotes the gradient and H the Hessian matrix of the cost function.
Modified optimizer in MPC
The modification of Hildreth's method presented in [11] was used in this paper for purposes of analysis and comparison of control quality.
This modification of the dual Hildreth's method proposes a new exit condition for the iterative numerical algorithm, which is based on a comparison of the two last iteration vectors (35) in the context of the dual optimization method [13] (36)-(38), using the condition (39).
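To make the idea concrete, the following sketch implements the classical Hildreth procedure for the QP min 0.5 x'Hx + g'x subject to Ax <= b, with an additional termination condition that stops the iteration once two successive multiplier vectors (almost) coincide, in the spirit of condition (39). The exact form of the authors' condition may differ.

```python
import numpy as np

def hildreth_qp(H, g, A, b, max_iter=200, tol=1e-9):
    """Dual Hildreth method for min 0.5 x'Hx + g'x s.t. Ax <= b."""
    H_inv = np.linalg.inv(H)
    P = A @ H_inv @ A.T           # dual Hessian
    d = b + A @ H_inv @ g         # dual linear term
    lam = np.zeros(len(b))        # Lagrange multipliers
    for _ in range(max_iter):
        lam_prev = lam.copy()
        for i in range(len(b)):   # cyclic coordinate-wise projection step
            w = -(d[i] + P[i] @ lam - P[i, i] * lam[i]) / P[i, i]
            lam[i] = max(0.0, w)
        # extra exit condition: successive multiplier vectors coincide
        if np.linalg.norm(lam - lam_prev) < tol:
            break
    return -H_inv @ (g + A.T @ lam)  # primal minimizer
```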
Proposal of analysis of control quality using testing hypotheses
For the purposes of evaluating control quality by means of the analysis of discrete signals (controlled variables, increments of manipulated variables, control errors), hypothesis testing can be utilized. The testing hypothesis can be based on a comparison of paired values of the discrete signals. A particular type of paired-values comparison corresponds to an individual type of mathematical hypothesis. It can be solved either by the Paired T-test or by the Wilcoxon Paired test. The type of test is chosen according to the normality property of the tested data. If the data fulfil normality, the Paired T-test is used. Otherwise, the Wilcoxon Paired test is applied.
An important initial part of testing hypotheses is the declaration of the significance level α. In technical applications, it is generally considered as strictly defined at the value of 0.001. This value of the significance level was also applied in this paper. The second part of testing hypotheses is the declaration of the null hypothesis. The null hypothesis is given by the following proposition: "There are not statistically significant differences between pairs of values". The alternative hypothesis is then defined as: "There are statistically significant differences between pairs of values". The results obtained from the testing hypothesis are considered in the form of p values [9]. The null hypothesis fails to be rejected if the p value of the testing hypothesis is greater than or equal to α. The null hypothesis is rejected in favour of the alternative hypothesis if p is lower than α.
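A compact sketch of this decision procedure in Python/SciPy might read as follows; the signal names are hypothetical, and the computations in the paper itself were done in MATLAB.

```python
from scipy import stats

def paired_comparison(x, y, alpha=0.001):
    """Paired comparison of two equally sampled discrete signals,
    e.g., controlled variables from two versions of an MPC algorithm."""
    diff = [a - b for a, b in zip(x, y)]
    if stats.shapiro(diff).pvalue >= alpha:   # normality not rejected
        p = stats.ttest_rel(x, y).pvalue      # Paired T-test
    else:
        p = stats.wilcoxon(x, y).pvalue       # Wilcoxon Paired test
    # reject the null hypothesis of no difference only when p < alpha
    return p, p < alpha
```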
Results
Using the analysis based on testing hypotheses, the influence of the simplifications performed in the MPC algorithm was evaluated. The paired-comparison statistical methods were applied. The results of testing the normality of the data by the Shapiro-Wilk and Anderson-Darling tests were a prerequisite for the selection of an appropriate statistical method. The discrete signals in multivariable MPC with constraints were analysed by hypothesis testing with regard to the quality of control.
The modified MPC was compared with the nonmodified MPC by simulation of the constrained optimization problem in MATLAB. The following TITO model (40)-(41) was controlled. The setting of the MPC parameters (42) and conditions (43) were considered. The matrix I is an identity matrix and E is a unit matrix. As can be seen in Table 1, the values of the control quality criteria (1)-(2) were slightly decreased, and a lower number of floating point operations was measured in MATLAB using the rules of [12]. Using the proposed approach based on the analysis of signals between the two realized versions of MPC, results (Table 2) in the form of p-values were obtained using the corresponding statistical tests according to the data-normality condition.
The null hypothesis was given by the following proposition: "There are not statistically significant differences between pairs of signal values". The alternative hypothesis was then defined as: "There are statistically significant differences between pairs of signal values". The significance level was considered as 0.001. According to the results in Table 2, each null hypothesis failed to be rejected at the significance level 0.001. Therefore, statistically significant differences between pairs of signals were not identified at the strictly declared significance level. This corresponds with the results of the control quality criteria in Table 1, where only slight differences can be observed.
Conclusions
For the purpose of applying hypothesis testing to the field of signal analysis, the control quality achieved by two different algorithms was analyzed and compared using testing hypotheses. In this hypothesis testing, individual signal values were analyzed in each sampling period of MPC. It would not be possible to assess the statistical significance of differences in achieved control quality only from the descriptive attributes given by standardly used control quality criteria. By using methods of testing hypotheses on the existence of statistically significant differences between two discrete signals, the analysis of control quality was successfully complemented by more mathematically supported results. The paired comparison was performed by the Wilcoxon paired test at the strictly defined significance level 0.001. Therefore, the achieved results had relevant informational value based on mathematical statistics. It was proved that the difference between the control quality of the modified and original methods of optimization in predictive control was not statistically significant.
"year": 2019,
"sha1": "90a39eae0aef32a1d154ddcd9cec744935f001d5",
"oa_license": "CCBY",
"oa_url": "https://www.matec-conferences.org/articles/matecconf/pdf/2019/41/matecconf_cscc2019_01037.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "25d7fd9d257a7eb66e649371f1e702032f379bc9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
38828303 | pes2o/s2orc | v3-fos-license | Building Time-Affordable Cultural Ontologies Using an Emic Approach
Recently, studies about culturally-aware systems have arisen to address digitized culture. Among these systems, enculturated systems driven by cultural knowledge embed culture in their design. To deal with the specifics of cultural groups, the development of machine-readable cultural knowledge representations can provide substantial help. In this research we present a process to build time-affordable, emic, conceptually-sound and machine-readable cultural representations. These representations originate from Cognitive Anthropology. They follow a three-step methodology: ethnographic sampling, individuals' personal knowledge elicitation and cultural consensus analysis. We use lexico-semantic relation extraction as a means to automatically elicit knowledge structures. Their formalisation is achieved through Ontology Engineering. We conducted experiments to build three cultural ontologies in order to assess the whole process. It turned out that with the lexico-semantic relation extraction technique, the best representations we can obtain are consensually-limited, incomplete and contain some errors. However, many clues indicate that these problems should be solved by using higher-quality elicitation techniques.
Introduction
Interest in cultural awareness grows as globalisation becomes a vector of increasing cultural diversity. With the rapidly expanding web, culture is digitized, and computer systems are now the entities most exposed to its diversity. Culture shapes users' behaviors and thus impacts the performance of many systems and applications. That is why these systems have to develop cultural awareness.
Blanchard et al. [ ] define culturally-aware systems as "any system where culture-related information has had some impact on its design, runtime or internal processes, structures, and/or objectives". They present three types of systems: enculturated systems, runtime cultural adaptation systems and cultural data management systems. Enculturated systems are systems whose design meets the cultural requirements of given cultures [ ]. Runtime cultural adaptation systems aim to artificially reproduce cultural intelligence through two steps: understanding and adaptation. In other words, by identifying one's culture, a culturally-intelligent system can provide the right enculturation, as presented by Rehm [ ].
The enculturation of a system is constrained by the cultural knowledge available to the system or its designer. That is why machine-readable representations providing understanding about cultures could effectively support the development of these systems.
Two approaches can be used to produce representations of cultures. The etic approach aims to find cultural universals; it is an outsider's view of culture. In contrast, the emic approach tries to identify the specifics of a culture, such as its concepts and behaviors; insight is gained from inside. Currently, the cultural knowledge representations used to support the development of enculturated systems are etic-based. Their main appeal is that they are ready-to-use representations easily applicable to any culture [ , ]. However, these representations are coarse-grained and limit the understanding of the cultures they describe [ ]. Therefore, finer-grained emic-based representations are more relevant for developing enculturated systems.
While emic-based representations solve the problem associated with the lack of granularity, their creation is time-consuming. Most of the methodologies used in practice by ethnographers require intensive human intervention (from the ethnographers or the participants) in the knowledge elicitation process. The latter is therefore hardly scalable, and thus impractical for dealing with the diversity of cultures. As such, the process supporting the construction of emic-based cultural representations must be largely automatic.
In this paper we present a process, applicable to any cultural domain, to build time-affordable, emic, conceptually-sound and machine-readable cultural knowledge representations. To construct these representations we followed a methodology coming from Cognitive Anthropology. It is composed of three steps leading to the acquisition of culturally-relevant information: ethnographic sampling, individuals' personal knowledge elicitation and cultural consensus analysis. The time-affordable elicitation of knowledge and its formalisation are similar to what already exists in other ontology engineering works such as SPRAT [ ] or DYNAMO (https://www.irit.fr/dynamo/) [ ]. We follow Hearst's [ ] method to automatically extract hypernym/hyponym relations from texts. As for the formalisation of the representations, we rely on the Resource Description Framework (RDF) formal language. Therefore, this research is about the emic and automatic generation of cultural ontologies from texts. Our plan is as follows. We begin by introducing the methodology, starting with the creation of the cultural knowledge representations and ending with their formalisation. Then, we present our process and the associated design choices. We end by extensively testing our process on the public safety domain with police forces coming from Australia, the USA and England. Having obtained encouraging results, we conclude this study.
Emic-based Cultural Knowledge Representations
Ethnography is the process of collecting, recording and searching for patterns to describe the culture of a people. In other words, ethnography is about discovering cultural knowledge, leading to the production of cultural knowledge representations. "New ethnography", ethnoscience or Cognitive Anthropology are founded on the premise that culture is a "conceptual mode underlying human behavior" [ ]. The cognitive theory of culture situates culture in the mind as a system of learnt and shared knowledge [ , ].
This theory shaped a number of methodologies to produce cultural representations which are intrinsically emic. "Ethnographers must discover the organizing principles of a culture-the semantic world of the natives-while avoiding the imposition of their own semantic categories on what they perceive" [ ].
To our knowledge, there is no clearly defined methodology for creating cultural representations. Most of those developed in the literature are based on the ethnographers' experiences. However, these methodologies share three main steps: ethnographic sampling, individuals' personal knowledge elicitation and cultural consensus analysis [ ].
Ethnographic Sampling
The ethnographic sampling step is based on the idea that cultural knowledge is socially-constructed. It aims to capture a representative number of individuals likely to share the same culture and thus similar knowledge. This task is generally achieved through the identification of a community, a set of individuals with long-term, strong, direct, intense, frequent and positive relations [ ].
Once the ethnographic sample is determined, the knowledge of each participant needs to be elicited.
Individuals' Personal Knowledge Elicitation
Concepts and lexico-semantic relations [ , ] constitute the core of any conceptualisation. As such, individuals' knowledge elicitation is mainly about acquiring concepts and lexico-semantic relations.
After eliciting the personal knowledge structures of each individual constituting the sample, their distribution has to be analysed to determine its cultural dimension.
Cultural Consensus Analysis
Cultural consensus analysis quantifies the sharedness of the elicited knowledge across the sample in order to separate cultural knowledge from purely personal knowledge. Together, the three steps of the methodology lead to the production of cultural knowledge representations. However, as such, they cannot be used for the development of enculturated systems, as computer systems are not yet able to make sense of them. To be understandable, they have to be formalised.
Formal Cultural Knowledge Representations
The cultural representations are composed of knowledge structures. The formalisation of such structures is studied in the field of Knowledge Engineering, more precisely the Ontology Engineering subfield. Therefore, methodologies to build ontologies could be used to formalise the cultural representations.
Ontologies
Gruber defined an ontology as "an explicit specification of a conceptualisation" [ ]. The term 'explicit' in Gruber's definition means that the knowledge must be specified unambiguously, constraining its interpretation. The principal components of an ontology are labels, concepts, relations and axioms. Axioms are rules associated with the relations in order to embed the logic necessary for reasoning.
Borst [ ] added to the former definition that the specification had to be formal and the conceptualisation shared. Indeed, it is necessary that the conceptualisation results from a consensual agreement to ascertain that the embedded knowledge is coherent and consistent within a specific context. This task is called an 'ontological commitment'. This aspect is ensured by the shared dimension of the cultural representations. The formalisation of the specification is needed for interoperability, re-usability and, especially, for enculturated systems to read the cultural representations.
There are different levels of formalism depending on the language used to express the ontology, ranging from informal, mostly written in natural languages, to formal, based on machine-readable languages. Formal languages like RDF (Resource Description Framework) or OWL (Web Ontology Language) support the semantic web. RDF is a language based on entities (resource, property, value) which constitute triples of the form (subject, predicate, object). Resources are concepts described by a Uniform Resource Identifier (URI), which makes sense since ontologies are non-ambiguous specifications. Properties can be attributes or any other kind of relations, most likely semantic ones. Values are literals pointing either to a symbol or to another resource. The common syntax for serializing RDF is XML, called RDF/XML. Ontologies written in RDF can be queried by machines through the SPARQL Protocol and RDF Query Language (SPARQL).
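To make the triple structure and the SPARQL querying concrete, here is a minimal sketch using the Python rdflib library. This is not the tooling used in this work, and the namespace and concept names are invented for illustration.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/culture/")  # hypothetical namespace

g = Graph()
# One (subject, predicate, object) triple per statement.
g.add((EX.HateCrime, RDF.type, RDFS.Class))
g.add((EX.HateCrime, RDFS.label, Literal("hate crime")))

# Retrieve every labeled resource with a SPARQL query.
q = """
SELECT ?s ?label
WHERE { ?s <http://www.w3.org/2000/01/rdf-schema#label> ?label }
"""
for s, label in g.query(q):
    print(s, label)

# Serialize to the RDF/XML syntax mentioned above.
print(g.serialize(format="xml"))
```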
METHONTOLOGY
Methodologies for creating ontologies are mostly based on experience [ ]. METHONTOLOGY is a proven framework describing the general steps to build an ontology [ ]. The common steps are specification, conceptualisation, formalisation, implementation and evaluation.
The specification consists in planning the production and exploitation of an ontology. At a minimum, it defines its primary purpose, level, granularity and scope. These specifications are mainly guidelines for the conceptualisation. Typically, the conceptualisation step is carried out by a group of domain experts. The goal is to discover the significant concepts and associated relations related to a domain [ ]. The formalisation step expresses the conceptualisation in formal languages. It is often manually supervised by knowledge engineers or supported by software like Protégé. Mapping techniques can also be used to automatically transpose informal to formal knowledge [ ]. The implementation step addresses the technical and practical aspects associated with the usage of an ontology by a computer system. The evaluation step validates each step against the specifications.
Following the METHONTOLOGY, we are able to produce formal cultural ontologies by considering cultural knowledge representations as conceptualisations. Finally, these ontologies are readable by computer systems and can provide a significant amount of understanding about the cultures they represent.
Building Time-Affordable, Formal and Emic-based Cultural Representations
The design of our process was driven by METHONTOLOGY, whose conceptualisation step consists in the methodology coming from Cognitive Anthropology. Among the other choices required to build the process, we decided to use lexico-semantic relation extraction to make the elicitation as automatic as possible.
Selecting Individuals based on Shared Social Criteria
Typically, cognitive anthropologists select their sample through shared socially-related criteria such as genders, religions, jobs or areas (working places [ ], towns [ ] or regions [ ]). While the strength of this method comes from its ease of use and speed, its weakness is that it cannot fully guarantee that the selected individuals actually represent a community. Effective but costly techniques to identify communities can be found in the social sciences, such as the community detection algorithms coming from social network analysis.
In this study, samples are created following the traditional technique, as a number of studies have proved its efficiency.
Automatically Eliciting Individuals' Knowledge from Texts
Automatically extracting individuals' knowledge structures from texts is an indirect elicitation technique [ ]. It is composed of two tasks. The first consists in collecting a sufficient amount of textual data for a given individual. The second aims to retrieve the latter's knowledge (i.e., significant concepts and/or relations) by analyzing the data.
Collecting Web Data
Ethnographic data are mainly textual and most of the time collected through interviews or observations. Besides being costly in time, recording data through these means also biases the data to some extent. The safest and fastest technique for collecting data is to gather already existing raw data. Nowadays, the web provides a large amount of freely available textual data about many individuals. In our process, the data were retrieved directly from websites. Textual data collection was achieved with HTTRACK (http://www.httrack.com/), a tool that can mirror the content of a website by crawling and downloading its files.
The automation of the data collection came with an additional constraint during the sampling step. Indeed, it became necessary to verify that the individuals composing the sample had accessible online data.
Textual Data Analysis
The goal of the data analysis is to retrieve the conceptualisation of an individual [ ]. This part of our process consists in acquiring knowledge structures by mining significant concepts and their relations. It required several preprocessing steps. We started by cleaning the data, followed with natural language processing and ended by annotating the lexico-semantic relations to extract.
Preprocessing
The web nature of the collected data drove the cleaning operations. Web data can come in various file formats (.doc, .odt, etc.). Text extraction from these files was achieved with Apache Tika. We handled language heterogeneity by identifying the language of each document with the LangDetect API [ ]; we only kept English documents. OpenNLP was used to detect sentences. We decided to work at the sentence level rather than the document level mainly to avoid data redundancy, by ensuring that the sentences were unique. For example, documents coming from websites are often distinct from each other while containing duplicate content such as menus, Twitter or Facebook feeds and so on.
Then, we used the Stanford CoreNLP API to support common natural language processing operations: tokenization, Part-of-Speech (PoS) tagging and lemmatization. Eventually, the nominals which constitute the main concepts of conceptualisations were found using simple pattern matching based on the PoS tags of the tokens.
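The pipeline above relies on Java tools (OpenNLP, Stanford CoreNLP, GATE). As a purely illustrative equivalent, the Python sketch below performs the same steps (sentence detection, deduplication, PoS tagging, lemmatization and a simple PoS-based selection of nominals) with spaCy; the sample text is invented.

```python
import spacy

# Assumes the model was installed with: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

text = "A constable is a police officer. Police officers enforce the law."

seen, nominals = set(), []
for sent in nlp(text).sents:          # sentence detection
    if sent.text in seen:             # keep sentences unique, as described above
        continue
    seen.add(sent.text)
    for token in sent:                # each token carries a PoS tag and a lemma
        if token.pos_ in ("NOUN", "PROPN"):
            nominals.append(token.lemma_.lower())

print(nominals)
```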
After having cleaned and preprocessed the textual data, the results were stored as annotations in a 'serial data store' using GATE (General Architecture for Text Engineering). This last operation was required to easily retrieve and mine the data.
Discovering Important Concepts
Finding significant concepts in content is based on the idea that the number of occurrences of a token and its importance are correlated. Thus, term frequency is often used to weight and rank terms. Other metrics, such as TF-IDF (Term Frequency/Inverse Document Frequency), can achieve similar results.
In our process, the important concepts were selected by coupling the quantification of nominals with a rough filter on their total occurrences.
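A minimal sketch of this selection step, assuming a flat list of extracted nominals; the data and the occurrence threshold are illustrative, not the study's values.

```python
from collections import Counter

# Hypothetical lemmatized nominals extracted from one individual's texts.
nominals = ["crime", "officer", "crime", "law", "crime", "officer", "dog"]

counts = Counter(nominals)
MIN_OCCURRENCES = 2  # illustrative cut-off

significant_concepts = {term for term, n in counts.items() if n >= MIN_OCCURRENCES}

print(counts.most_common())   # term-frequency ranking
print(significant_concepts)   # {'crime', 'officer'}
```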
Finding Significant Relations
In this study, we use the most popular method for finding lexico-semantic relations. Introduced by Hearst [ ], it relies on handwritten syntactic patterns indicative of semantic relations. For example, in the sentence 'A dog is an animal', the syntactic pattern 'is a' indicates that there is a hypernym/hyponym relation between 'animal' and 'dog'. Therefore, hypernym/hyponym relations can be discovered through a simple mapping, using the expression Y is a X, with Y and X two nominals. Thereafter, many researchers confirmed the relevance of Hearst's methodology by applying it to other lexico-semantic relations [ ].
Like Wang et al. [ ], the implementation of the lexico-semantic relation extraction was achieved through the Java Annotation Patterns Engine (JAPE), which is specific to GATE. The syntactic patterns we used are summarized in the table. The final set of extracted lexico-semantic relations is constituted by filtering them according to the significance of their pairs of concepts.
(Table: syntactic patterns used for the extraction of lexico-semantic relations.)
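As a toy illustration of the 'Y is a X' pattern, the following regex sketch extracts a hypernym/hyponym pair from a single sentence; it is deliberately simplistic compared with the PoS-based JAPE patterns actually used.

```python
import re

# Toy surface pattern for "Y is a/an X"; production systems (such as the JAPE
# grammars mentioned above) match on PoS-tagged tokens instead.
PATTERN = re.compile(r"\b(\w+) is an? (\w+)\b", re.IGNORECASE)

sentence = "A dog is an animal."
for hyponym, hypernym in PATTERN.findall(sentence):
    print((hypernym.lower(), "hypernym", hyponym.lower()))
# -> ('animal', 'hypernym', 'dog')
```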
At the level of individuals, we are able to elicit their personal knowledge. However, we cannot yet determine which part of it is cultural. To this end, we have to analyze the 'sharedness' of these distributed knowledge structures.
Aggregating Concepts and Lexico-Semantic Relations
To analyze the cultural consensus of the sample, the elicited personal knowledge (concepts and lexico-semantic relations) of each individual was aggregated. This led to a mixed representation composed of knowledge ranging from personal to cultural (similarly to Vuillot et al. [ ]). To obtain a valid cultural representation, it is necessary to evaluate the knowledge and filter it based on its distribution. At this stage we are able to create a cultural representation from an ethnographic sample. However, these representations cannot yet be implemented in enculturated systems and thus remain unusable: they have to be formalised.
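A minimal sketch of this aggregation and filtering, assuming each individual's elicited relations are stored as a set of triples; the data and the agreement threshold are invented for illustration.

```python
from collections import Counter

# Hypothetical relation sets elicited from three individuals of a sample.
individual_relations = [
    {("animal", "hypernym", "dog")},
    {("animal", "hypernym", "dog"), ("crime", "hypernym", "theft")},
    {("crime", "hypernym", "theft")},
]

agreement = Counter()
for relations in individual_relations:
    agreement.update(relations)       # one vote per individual per relation

MIN_AGREEMENT = 2  # illustrative consensus threshold
cultural_relations = {r for r, n in agreement.items() if n >= MIN_AGREEMENT}
print(cultural_relations)             # relations shared by >= 2 individuals
```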
Ontologizing Concepts and Lexico-Semantic Relations
In our process, we used the "ontologizing" technique [ ]. After the consensus analysis, we mapped the concepts and hypernym/hyponym relations onto RDF classes and RDFS subclass relations. The formalisation constitutes the last step of our process, which is summarized in the figure. It starts by selecting individuals based on shared social criteria. Then, web data about each individual of the sample are collected. These data are analysed through text-mining techniques to automatically elicit their respective personal knowledge (embodied in the conceptual structures). By quantifying the sharedness of individuals' personal knowledge, we are able to determine the cultural consensus. This analysis enables the production of a cultural representation. Finally, by ontologizing the conceptual structures, a formal cultural ontology is created. Having described the whole process to produce formal, time-affordable cultural representations, the next section consists of experiments to assess its performance.
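A minimal sketch of the mapping, reusing the consensual relations from the previous step; the namespace is hypothetical and the exact class-modelling choices of the study are not documented here.

```python
from rdflib import Graph, Namespace
from rdflib.namespace import RDF, RDFS

EX = Namespace("http://example.org/culture/")  # hypothetical namespace
g = Graph()

cultural_relations = {("crime", "hypernym", "theft")}  # from the consensus step
for hypernym, _, hyponym in cultural_relations:
    parent = EX[hypernym.replace(" ", "_")]
    child = EX[hyponym.replace(" ", "_")]
    g.add((parent, RDF.type, RDFS.Class))
    g.add((child, RDF.type, RDFS.Class))
    g.add((child, RDFS.subClassOf, parent))   # hyponym as subclass of hypernym

print(g.serialize(format="xml"))              # the ontology in RDF/XML
```

Modelling hyponyms with rdfs:subClassOf mirrors the class/subclass mapping described above while keeping the output queryable with SPARQL.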
Experiments
The public safety domain was chosen for our experiments for two main reasons: the amount of available data and the current social context. After a description of the settings, we present and discuss the results associated with the three formal cultural representations we tried to produce.
Settings
We constituted three samples with culturally different police forces coming respectively from Australia, the United States and England (see the table below). Considering agencies as individuals may not be the best choice for carrying out our experiments. However, this decision was driven by the necessity of being able to collect a large amount of textual data about a single domain for a substantial number of 'individuals'.
(Table: samples with their respective number of individuals: Australian Police Forces, American State Police Forces, English Police Forces.)
While collecting data from the web, the content of some websites could not be retrieved due to robot-exclusion protections or other factors. We therefore excluded these police forces from our samples.
After retrieving the data, we preprocessed it. We cleaned the textual data and kept well-formed sentences whose length fell within a fixed character range. We removed police forces left with fewer sentences than a set threshold, used to separate out the individuals possessing too few data. The table provides updated information about our samples. Appendix A provides the detailed list of individuals we had at the initial stage of the process and their respective samples. Details about the remaining police forces and the associated number of sentences are available in Appendix B.
Then, we quantified the nominals and extracted the lexico-semantic relations for each individual. For each sample, the nominals were ranked by their average position. We arbitrarily kept the top-ranked nominals and filtered the hypernym/hyponym relation candidates accordingly in order to create the various domain conceptualisations. At this point, we were able to produce cultural representations for the Australian, American and English police forces.
Evaluation
The evaluation of our experimental results relied on a semi-automatically constituted gold standard. Three gold standards were constituted with labeled lexico-semantic relations, one for each sample. Because all the police forces belong to Western culture, we were able to use WordNet [ ], which possesses a similar cultural bias, to automatically obtain assessments of the elicited lexico-semantic relations. Our raw results showed a lower average precision than reported in previous work. According to Cederberg and Widdows, discrepancies in precision are mainly due to differences in dataset quality: Hearst used Grolier's Encyclopedia, Maynard et al. used Wikipedia, and they themselves used the British National Corpus. In contrast, we used sources of poorer quality, as our data came directly from website pages. We believe this explains our lower initial precision.
We observed the potential cultural representations while increasing the required number of agreements. We expected highly consensual representations to have higher precision but lower relation coverage than mixed ones. Our hypothesis was that, to obtain the best cultural representations, it is necessary to properly manage this trade-off between precision and loss. We computed the loss for each representation accordingly. Based on these observations, we conclude that the main problem is the high loss, which could be explained by three factors. The first concerns the high number of relations specific to individuals, such as (partner, hypernym, northumbria police); this does not constitute a problem, as we are not interested in those. The second factor corresponds to the cultural domain: many extracted relations are related to but do not strictly belong to the public safety domain, like (resource, hypernym, goods). As with the first factor, this loss does not matter. The third factor concerns the scarcity of the syntactic patterns enabling the extraction of the lexico-semantic relations. Their low recall means that the discovery of a relation in a corpus is largely a matter of luck. This last factor is truly problematic.
This issue is directly linked to the knowledge elicitation technique used in our study. Indeed, lexico-semantic relation extraction relying on syntactic patterns can provide neither the quantity nor the quality required to properly support our process for producing cultural representations. In fact, no existing hypernym/hyponym relation mining technique using large corpora may be able to achieve this task, so we were expecting these results.
Nevertheless, with little effort we were able to produce a relevant partial cultural ontology for the English Police Forces composed of hypernym/hyponym relations. We used Gephi (https://gephi.org/) to visualize the end result. In the figure, we focused on the concept 'crime'. We observe common hypernym/hyponym relations as well as an interesting contextual relation between 'hate crime' and 'issue'. Such relations are really meaningful in a cultural context. In fact, the focus on hate crimes by English police forces comes from the enforcement of the Equality Act. It also becomes obvious that many relations are missing, but we believe that this representation provides a coherent foundation to support further improvements.
Conclusion
We should recall that our goal was to build time-affordable, emic, conceptually-sound and machine-readable cultural representations. We introduced a methodology coming from Cognitive Anthropology to build emic-based cultural conceptualisations. In addition, we explained their formalisation through Ontology Engineering. Then, we presented a process to produce the representations mostly automatically. Using lexico-semantic relation extraction, the best we can obtain with this technique are representations that are consensually limited, incomplete and contain some errors. However, in the future, these problems could be solved by using higher-quality elicitation techniques.
To date, culturally-intelligent systems have been developed using etic-based cultural representations. While facilitating cross-cultural mediation, these coarse-grained representations are not fitted for the development of systems requiring a deep understanding of cultural aspects [ ]. We believe that the production of fine-grained cultural ontologies, obtained through an emic approach, is a first step toward the development of a new generation of artificial cultural awareness supporting these systems.
"year": 2015,
"sha1": "a4b5bb73b8d4194908ed7645a65a242f2223da78",
"oa_license": "CCBY",
"oa_url": "https://hal.inria.fr/hal-01626988/file/447996_1_En_8_Chapter.pdf",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "bfcf1859fcd2321c052954648ecf161a0cb41049",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Computer Science",
"Sociology"
]
} |
Beta Oscillations Distinguish Between Two Forms of Mental Imagery While Gamma and Theta Activity Reflects Auditory Attention
Visual sensory processing of external events decreases when attention is internally oriented toward self-generated thoughts, and differences in attenuation have been shown depending on the thought's modality (visual or auditory). The present study aims to assess whether such modulations also occur in the auditory modality. In order to investigate auditory sensory modulations, we compared a passive listening condition with two conditions in which attention was internally oriented as part of a task: a visual imagery condition and an inner speech condition. The EEG signal was recorded from 20 participants while they were exposed to auditory probes during these three conditions. ERP results showed no differences in the N1 auditory response across the three conditions, reflecting maintenance of evoked electrophysiological reactivity in the auditory modality. Nonetheless, time-frequency analyses showed that gamma and theta power in frontal regions was higher for passive listening than for the internal attention conditions. Specifically, the reduced amplitude in the early gamma and theta bands during both inward attention conditions may reflect reduced conscious attention to the current auditory stimulation. Finally, a different pattern of beta band activity was observed only during visual imagery, which may reflect cross-modal integration between the visual and auditory modalities and can distinguish this form of mental imagery from inner speech. Taken together, these results show that attentional suppression mechanisms in the auditory modality differ from those in the visual modality during mental imagery. Our results on oscillatory activity also confirm the important role of gamma oscillations in auditory processing and the differential neural dynamics underlying visual and auditory/verbal imagery.
INTRODUCTION
When attention is internally oriented toward self-generated thoughts (SGT), the sensory/cognitive processing of external events decreases, which is known as perceptual decoupling (Smallwood and Schooler, 2006). This decoupling can be observed as impaired performance during demanding tasks as well as reduced amplitude of the ERP components related to the sensory response (Smallwood and Schooler, 2015). Studies focused on visual response reduction during SGT have shown consistent results regarding this sensory attenuation. For instance, they have shown similar ERP amplitude reductions in the P1 and P300 components, regardless of the experimental paradigm (Smallwood et al., 2008; Baird et al., 2014; Barron et al., 2011; Kam et al., 2011).
However, for auditory processing during SGT the evidence is rather inconclusive. On the one hand, studies have shown reductions in the auditory N100 during mind wandering contrasted with on-task conditions (Kam et al., 2011). Nonetheless, the same line of work also found maintenance of auditory reactivity toward deviant stimuli, as an adaptive mechanism to respond to the environment even during a mind-wandering episode. On the other hand, Braboszcz and Delorme (2011) showed attenuation of the auditory ERP during mind wandering in a different time window, over 200 ms post-stimulus.
This inconclusive evidence leaves open the question of whether the auditory modality uses different attentional suppression mechanisms compared to the visual modality, and whether auditory sensitivity is maintained when attention is internally oriented. Furthermore, if there is actually an attentional suppression process, it is not yet clear whether specific mental states can modulate auditory attention during SGT. For instance, visual processing has been shown to be differentially affected depending on the thought's modality, i.e., visual or verbal/auditory thoughts, for both ERPs and alpha oscillations (Villena-González et al., 2016). Assessing whether the thought's modality (visual or auditory) affects the sensory response in the visual and auditory cortices in the same way would provide important information about supra-modal or modality-specific mechanisms of attentional suppression, and would help to better understand to what extent SGT might affect our daily attentional performance.
Given that brain oscillations can provide an understanding of levels of brain function different from ERPs (Cohen, 2014a), previous research has also investigated the functional association between SGT and brain oscillations, showing changes in different frequency bands (alpha, beta, theta) when comparing SGT with external tasks (Cooper et al., 2003; Engel and Fries, 2010; Braboszcz and Delorme, 2011). On the other hand, gamma band activity has classically been shown to index auditory stimulus processing (Tiitinen et al., 1993; Cervenka et al., 2011).
The present work aims to assess whether the auditory response is attenuated when attention is oriented toward internal thoughts, and also whether there is any differential perceptual processing associated with the modality of the thought (visual or auditory/verbal). For this purpose, we used the auditory N1 component of the ERP and time-frequency measures in order to evaluate auditory sensory processing across three conditions: one in which participants passively listened to auditory stimulation, one in which they performed a visual imagery task, and a last one in which they were asked to perform an inner speech task. We hypothesized that during the passive listening condition there should be greater processing of auditory stimuli than during internal thoughts, which would be reflected as a larger auditory N1 and more spectral power in the gamma band during passive listening than in the other conditions. When both imagery conditions are compared (inner speech vs. visual imagery), we hypothesized greater processing of auditory stimuli during visual imagery than during inner speech, given that during inner speech the auditory processing resources are being used by the internal task. This hypothesis is based on the theoretical framework proposed by Villena-González et al. (2016), in which there would be a competition for processing resources if the modality of thought matches the incoming stimulation. This would be reflected as a larger auditory N1 and more spectral power in the gamma band during visual imagery than during the inner speech condition.
Participants
Twenty volunteers (9 women) were recruited for the study (mean age = 23, range = 18-30). All participants had normal or corrected-to-normal vision and reported no color-vision deficiency. Participants had no history of drug abuse or of neurological or psychiatric conditions. The protocol was approved by the Ethics Committee of Pontificia Universidad Católica de Chile. All subjects gave written informed consent in accordance with the Declaration of Helsinki. All experiments were performed at the Neurodynamic Laboratory of the School of Psychology of this University.
Stimuli and Procedure
All stimuli were presented on a computer screen with a gray background situated 57 cm away from the participant. Psychopy software (Peirce, 2007) was used to design the experiment and display the stimuli. The task is a modified version of the task used in Villena-González et al. (2016). The task was composed of 2 blocks; each block had 30 self-administered trials. Before each block participants were asked to fix their eyes on a fixation cross in the middle of the screen. Participants started each trial by pressing a button. After this, either the word "imagine" or "speech" came up in the middle of the screen (Figure 1). Afterwards, a colored circle (2 cm radius) appeared, indicating whether attention should be allocated to the external stimuli (red circle) or to their thoughts (green circle). When the word "imagine" appeared followed by the green circle, participants were instructed to think about anything they wanted, provided it had a strictly visual quality, avoiding auditory or phonological elements of any form. This condition will be named throughout the text "visual imagery." If the word "speech" was followed by a green circle, participants were instructed to think using their inner speech without using any sort of mental image. This condition will be referred to as "inner speech." Finally, if either of the two words was followed by a red circle, participants were instructed to covertly attend to the appearing tone beeps. This condition will be referred to as "passive listening." After the circle appeared, during the passive listening or imagery period, 15 auditory probes (beep tone; 1200 Hz, 75 dB SPL) were presented through headphones, each with a duration of 100 ms. The inter-stimulus interval was randomly jittered between 1000 and 1500 ms. Color circles were counterbalanced, i.e., during the second block the red circle cued the thought-related tasks and the green circle cued the auditory-related task.
FIGURE 1 | Schematic of the experimental paradigm. Each trial started with a button press triggering a fixation cross presented for 1 to 1.5 s. When the word "imagine" appeared followed by the green circle, participants were instructed to think of any mental image, avoiding auditory elements. If the word "speech" was followed by a green circle, participants were instructed to think using only their inner speech. Finally, if either of the two words was followed by a red circle, participants were instructed to passively hear the auditory stimuli. Color cues were counterbalanced between blocks. Auditory probes were presented in each condition (1200 Hz, 75 dB SPL) with a duration of 100 ms. The inter-stimulus interval was randomly jittered between 1000 and 1500 ms.
EEG Recording
EEG data were obtained using 64 electrodes (BioSemi ActiveTwo) arranged according to the extended international 10/20 system. Horizontal and vertical eye movements were monitored using four external electrodes. Horizontal EOG was recorded bipolarly from the outer canthi of both eyes, and vertical EOG was recorded from above and below the participant's right eye. Two additional external electrodes were placed on the right and left mastoids to be used for later re-referencing.
EEG Data Pre-processing
Data pre-processing was performed using Matlab 7.8.0 (The MathWorks, Inc.) with the EEGLAB v7.1.7.18b toolbox (Delorme and Makeig, 2004). The signal was down-sampled off-line to 1024 Hz. Because of hardware setup constraints, all electrodes were referenced to CMS and DRL during acquisition, but offline re-referenced to averaged mastoids. Horizontal and vertical EOG were calculated as the difference between the left-right electrodes and the above-below electrodes, respectively. For ERP analysis, a 2nd-order infinite impulse response (IIR) Butterworth filter was used for band-pass filtering the continuous EEG data, with half-amplitude cut-off frequencies of 0.05 and 30 Hz. For frequency analyses, we band-pass filtered the epoched EEG data with half-amplitude cut-off frequencies of 0.5 and 80 Hz using the Fieldtrip software (Oostenveld et al., 2011). Artifact detection was performed on segmented data (see epoch segmentation details below) by manual inspection blind to condition. All epochs with artifacts were rejected.
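For concreteness, here is a sketch of the ERP band-pass step in SciPy; the study used MATLAB/EEGLAB, the data here are synthetic, and whether the original filtering was zero-phase is not stated, so the forward-backward call below is an assumption.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Band-pass parameters follow the text (ERP branch): 2nd-order Butterworth,
# cut-offs at 0.05 and 30 Hz, 1024 Hz sampling rate.
fs = 1024.0
sos = butter(2, [0.05, 30.0], btype="bandpass", fs=fs, output="sos")

rng = np.random.default_rng(0)
eeg = rng.normal(size=(64, 10 * int(fs)))   # hypothetical 64-channel recording

filtered = sosfiltfilt(sos, eeg, axis=-1)   # zero-phase band-pass filtering
```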
ERP Calculation
The EEG signal was segmented into 20 trials per condition; each trial window ran from the appearance of the colored circle to the trial's end. Further segmentation into epochs was applied for each condition, selecting from 200 ms preceding each auditory probe up to 500 ms after it. All epochs corresponding to the first auditory probe of a trial were discarded, given that participants were less likely to have already engaged in the thought construction process. After epoch rejection due to artifacts, the average number of epochs per condition was 168.4 (SD: 27.69) for passive listening, 163.95 (SD: 36.42) for inner speech and 168.55 (SD: 31.62) for visual imagery.
Epochs were averaged for each participant and condition. The middle-latency auditory response was defined as the local peak (Luck, 2005) occurring in the 70-90 ms time window following the auditory probe (P1). Two time windows were used for the N1 negative deflection: 90-110 ms and 130-160 ms. The Fz, FCz, Cz, CPz and Pz electrodes were used for this computation, given that midline fronto-central electrodes are widely known to be the best locations to capture the auditory N1 effect (Kam et al., 2011).
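A simplified sketch of the window-based amplitude extraction, assuming a subject-averaged waveform; the local-peak criterion of Luck (2005) is approximated here by the window minimum, and the data are placeholders.

```python
import numpy as np

fs = 1024.0
t = np.arange(-0.2, 0.5, 1 / fs)      # epoch time axis in seconds
rng = np.random.default_rng(1)
erp = rng.normal(size=t.size)         # placeholder for an averaged waveform

# N1 taken as the most negative point in the 90-110 ms window; the study used
# a local-peak criterion, of which this is a simplification.
window = (t >= 0.090) & (t <= 0.110)
n1_amplitude = erp[window].min()
n1_latency = t[window][erp[window].argmin()]
print(n1_amplitude, n1_latency)
```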
Time-Frequency Calculation and Baseline Normalization
Epochs time-locked to the auditory stimulus were computed between −500 and 1000 ms. We then applied the short-time fast Fourier transform (FFT) method to extract time-frequency power from each of these epochs. The FFT was applied to sequential, overlapping 250 ms segments of the signal, tapered by a Hanning window in order to minimize the possibility of edge artifacts. The amount of overlap between successive time segments was 25 ms. Thus, spectral power was computed for every frequency bin between 1 and 80 Hz. This procedure was performed for each epoch, electrode and participant, and then averaged across epochs. In order to eliminate evoked power, the average across epochs in the time domain was subtracted from each epoch before the time-frequency transformation, yielding the induced-power time-frequency chart. Finally, the resulting signal was normalized by converting it to a Z-score relative to the baseline time window. With this Z-normalization, power data are scaled in standard-deviation units relative to the power during the baseline period. All these analyses were computed using the Fieldtrip software (Oostenveld et al., 2011).
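A minimal NumPy sketch of this induced-power computation; the study used Fieldtrip, the data here are synthetic, and the 25 ms figure is interpreted as the advance between window onsets, which is an assumption.

```python
import numpy as np

fs = 1024
win = int(0.250 * fs)      # 250 ms Hanning-tapered analysis window
step = int(0.025 * fs)     # assumed 25 ms advance between window onsets
rng = np.random.default_rng(2)
epochs = rng.normal(size=(100, int(1.5 * fs)))  # trials x time, -0.5 to 1 s

epochs = epochs - epochs.mean(axis=0)           # subtract evoked activity

taper = np.hanning(win)
starts = np.arange(0, epochs.shape[1] - win + 1, step)
power = np.stack(
    [np.abs(np.fft.rfft(epochs[:, s:s + win] * taper, axis=1)) ** 2
     for s in starts],
    axis=-1,
).mean(axis=0)                                   # induced power: freqs x times

n_base = int((0.5 * fs - win) // step)           # windows fully pre-stimulus
base = power[:, :n_base]
z = (power - base.mean(1, keepdims=True)) / base.std(1, keepdims=True)
```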
Electrodes Selection for Time-Frequency Statistics
Electrodes were chosen using the data-driven methodology described by Keil et al. (2014) in order to follow an unbiased approach. According to this procedure, we averaged across all conditions, time points, frequencies and participants in order to reveal the topographical distribution of power on the scalp. Based on the power values of the electrodes, a threshold was set to keep the 10% of electrodes with the highest power. Two clusters were identified: a frontal and a parietal one. Specifically, we took a cluster of six electrodes over the frontal region (Fpz, Fp1, Fp2, AFz, AF3 and AF4) and another cluster of three electrodes in the parietal region (CPz, P1 and P2) (Supplementary Figure S3). These two clusters were used to perform the time-frequency statistical analysis.
Permutation Test and Multiple Comparison Correction
Time-frequency charts were averaged across epochs, resulting in a grand-average time-frequency chart for each experimental condition, participant and electrode cluster. For each electrode cluster (frontal and parietal), differences in spectral power were assessed using a non-parametric randomization test, including a correction for multiple comparisons (Nichols and Holmes, 2002; Maris and Oostenveld, 2007). The randomization (or permutation) test is a procedure in which the time-frequency charts belonging to different conditions are shuffled to compute a random distribution, which is then used to evaluate the statistical significance of the results by testing whether the experimental differences between conditions exceed the random (null hypothesis) distribution. Following this rationale, the first step of the permutation test was to shuffle the time-frequency charts across conditions, randomly choose two groups from the complete sample and calculate a t-statistic for each time-frequency bin. This step was repeated 1000 times. To correct for multiple comparisons, the highest t-value of each permutation was included in the permutation distribution (Bosman et al., 2012; Cohen, 2014b). Finally, the threshold corresponding to the upper 5% of this distribution was used as a cut-off against which the original t-values were compared. All values above this threshold were considered statistically significant with p < 0.05.
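A minimal sketch of this max-statistic permutation procedure with unpaired t-statistics and synthetic data; the published analysis was run in Fieldtrip, and the exact t-statistic variant used there is an assumption here.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(3)
cond_a = rng.normal(size=(20, 40, 60))   # participants x freqs x times
cond_b = rng.normal(size=(20, 40, 60))

t_obs = ttest_ind(cond_a, cond_b, axis=0).statistic

# Build the max-statistic null distribution by shuffling condition labels.
pooled = np.concatenate([cond_a, cond_b], axis=0)
max_t = np.empty(1000)
for i in range(1000):
    perm = rng.permutation(pooled.shape[0])
    pa, pb = pooled[perm[:20]], pooled[perm[20:]]
    max_t[i] = np.abs(ttest_ind(pa, pb, axis=0).statistic).max()

threshold = np.percentile(max_t, 95)     # corrected threshold for p < 0.05
significant = np.abs(t_obs) > threshold  # corrected significance mask
```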
RESULTS
The main aim of the present study was to investigate whether the auditory response is attenuated when attention is inwardly oriented, as has been reported for the visual modality. Secondly, we assessed whether differences in mental representation can influence the potential attenuation of the sensory response. For these two purposes, we analyzed ERP components and the time-frequency power spectrum while participants passively listened to auditory stimuli, performed visual imagery, or generated inner speech.
Early Components of Auditory ERP
We analyzed the ERP evoked by the auditory probe. The amplitudes of the different voltage deflections were calculated at five midline electrodes (Fz, FCz, Cz, CPz and Pz) for each participant and condition. A repeated-measures ANOVA showed no differences between conditions for any of the analyzed ERP components.
Time-Frequency Results
In order to investigate whether the different attentional states affect oscillatory brain dynamics, we calculated the time-frequency charts corresponding to the spectral power induced by the auditory stimulus in different frequency bands. This analysis was performed on two clusters of electrodes from the frontal and parietal regions. All differences reported in the following section are significant after the multiple-comparison correction described in Section "Permutation Test and Multiple Comparison Correction" for the non-parametric permutation test (p < 0.05).
Time-Frequency Results in the Frontal Area
Time-frequency plots are shown for each experimental condition in Figure 3. Differences in spectral power can be observed in the gamma and theta frequency bands when passive listening is compared with the internal attention conditions (Figures 4A,B). Specifically, a narrow band (48-53 Hz) within the gamma range showed higher spectral power during the passive listening condition than in the other conditions, between 70 and 200 ms. Another gamma-band effect (46-54 Hz) can be observed later in time, between 420 and 670 ms, showing higher gamma power during passive listening than in the visual imagery condition (Figure 4A). This late effect can also be observed when inner speech is compared with visual imagery (Figure 4C). In this case gamma band power (48-52 Hz) is higher in inner speech than in visual imagery during the 450-620 ms time window.
FIGURE 2 | ERPs to the auditory probe. ERP waveforms at three midline electrodes: Fz, FCz and Cz. There are no differences between conditions for any of the early sensory components of the ERP.
Differences in low frequencies can also be observed, with the passive listening condition showing higher power than inner speech between 3 and 10 Hz (25-200 ms) and later between 4 and 8 Hz (400-550 ms). Passive listening also showed higher power than visual imagery in a very similar frequency band and time window, early between 3 and 10 Hz (25-150 ms) and later between 5 and 14 Hz (300-400 ms). These differences can also be observed in Figures 4A,B.
Finally, the inner speech condition showed higher power than visual imagery between 6 and 10 Hz during 270-400 ms and 570-670 ms (Figure 4C). Time-frequency charts of non-significant comparisons are shown in Supplementary Figure S4.
Time-Frequency Results in the Parietal Area
Time-frequency plots of the parietal region are shown for each experimental condition in Figure 5. Differences in spectral power can be observed in the beta frequency band when visual imagery is compared with the other conditions. Specifically, visual imagery showed higher power than inner speech between 15 and 20 Hz (100-200 ms) and higher power than passive listening between 16 and 19 Hz (100-150 ms) (Figures 6A,B). Interestingly, visual imagery showed lower power in the same temporal range but in a higher range of the beta band (Figures 6C,D). Specifically, visual imagery showed lower power than passive listening between 26 and 30 Hz (70-150 ms) and lower power than inner speech between 26 and 29 Hz (70-120 ms). As can be observed in Figure 5 and Supplementary Figure S5, there is an increase in beta band activity in all conditions after the auditory stimulus, but the increase is restricted to high beta during passive listening and inner speech, while during visual imagery the increase spans the entire beta broadband. Finally, visual imagery showed higher power than passive listening between 41 and 43 Hz (25-100 ms), and passive listening showed higher power than visual imagery between 53-56 Hz (420-470 ms), 56-62 Hz (520-600 ms) and 51-53 Hz (680-720 ms).
DISCUSSION
The present study sought to assess whether the auditory response is attenuated when attention is inwardly oriented toward mental imagery. Secondly, we investigated whether differences associated with the modality of the thought (visual or auditory/verbal) affect brain auditory processing. ERPs and time-frequency power were analyzed while participants passively listened to auditory stimuli, performed visual imagery, or generated inner speech.
The results of the present work showed no differences between conditions for the early components of the auditory ERP. However, time-frequency analyses showed higher frontal gamma and theta power during passive listening than in the internal attention conditions. We also found a different pattern of beta band activity for the visual imagery condition compared with the other two conditions. In the following we discuss these results in detail.
ERPs Showed Maintenance of Auditory Evoked Response During Mental Imagery
Auditory ERPs were measured in order to assess the sensory response to external auditory stimuli. In the present study we found no differences between passive listening and the inward attention conditions, nor when the two inward conditions were compared. This result shows that when participants are instructed to orient their attention to either visual or auditory thoughts, the auditory sensory response is maintained, which suggests this modality operates differently from the visual modality regarding attentional suppression mechanisms, and contradicts some previous findings on this issue.
Sensory attenuation during self-generated thoughts has been reported and widely replicated for the visual modality (Smallwood et al., 2008; Barron et al., 2011; Kam et al., 2011; Baird et al., 2014). However, sensory attenuation in the auditory modality remains unclear, since the sensory modulations reported in previous studies have been inconclusive. For instance, studies have shown reductions in the auditory N100 during mind wandering contrasted with on-task conditions (Kam et al., 2011). Interestingly, the same authors also found that the auditory modality maintains its response to deviant stimuli. They claimed this maintenance is an adaptive mechanism, useful for responding to potentially dangerous environmental stimuli even when participants are engaged in mind wandering. This latter claim suggests the auditory modality has a flexible attentional suppression mechanism, varying depending on the kind of stimuli. On the other hand, Braboszcz and Delorme (2011) showed amplitude attenuation of the auditory ERP over 200 ms post-stimulus during mind wandering contrasted with being on task, but differences in the early N1 were not found. These discrepancies can be attributed to differences in the task and setup of each experiment, as was proposed in a later review on the issue. Following this rationale, the absence of differences in the present study could arise because the auditory system deploys a flexible attentional suppression mechanism depending on contextual variation (e.g., type of task), but it could also be due to psychological variables such as the deliberateness of thought. In the previous works mentioned, SGT was measured as spontaneous mind wandering, whereas in the present study attention was deliberately oriented toward thoughts as part of a task. The deliberateness/spontaneity distinction is starting to be strongly considered in the field of self-generated thoughts (Seli et al., 2016a,b), since this is an important psychological variable that could influence many aspects of brain activity, such as the functional connectivity between the attentional fronto-parietal network and the default mode network (Golchert et al., 2017). Another possibility is that the auditory system is differentially affected depending on the type of stimulation. For example, Bekinschtein et al. (2009) designed an auditory paradigm that assessed brain responses to violations of temporal regularities that were either local in time or global across several seconds. Local violations led to an early response in the auditory cortex, independent of the focus of attention or the presence of a concurrent visual task, whereas global violations led to a late and spatially distributed response that was only present when subjects were attentive and aware of the violations (Bekinschtein et al., 2009). In this sense, the auditory system may be prone to attentional suppression during SGT only for stimulation with some degree of complexity and temporal rules/patterns. This might be due to an adaptation of the human auditory system that allows language processing (Aboitiz, 2012), which requires the processing of temporally complex and global constructions rather than merely single tones without any pattern (as in the case of the present study). Taking all this evidence into account, the auditory modality maintains its reactivity toward external stimuli, as measured with ERPs, when attention is deliberately oriented to internal thought as part of a task, which can be explained by the intentionality of the process or by the nature of the stimulation. From the ERP results of the present study it is not possible to fully understand the mechanisms underlying attenuation in the auditory system, and further ERP research is needed to specify the conditions under which these differences can be observed.
Early Gamma and Theta Oscillations in Frontal Areas Reflect Differences Between External vs. Internal Attention
Given that induced brain oscillations reflect levels of brain processing different from those captured by evoked time-domain techniques (Cohen, 2014a), we assessed whether auditory processing elicits changes in brain oscillations depending on whether attention was externally or internally oriented. Thus, we analyzed the time-frequency spectral power in different frequency bands.
Our results showed higher frontal gamma and theta power during passive listening than in the internal attention conditions. Specifically, early frontal gamma band activity showed higher power during passive listening when contrasted with the other two conditions (Figures 4A,B). This result is in line with previous studies showing that selective attention and top-down attentional processing in the auditory modality can modulate early gamma band activity (Tiitinen et al., 1993; Debener et al., 2003). Tiitinen et al. (1993) showed that gamma power is higher between 25 and 100 ms post-stimulus when attention is focused on the auditory stimuli, contrasted with conditions in which participants did not attend or were engaged in a competing task (reading a book). Gamma band power has also been observed to increase under conscious perception contrasted with unperceived stimuli (Rodriguez et al., 1999; Castelhano et al., 2013). However, in the present study we were not able to confirm this point (for instance, we do not have behavioral measures of perceived vs. unperceived beeps). Despite this, our results suggest that the enhanced gamma activity reflects conscious attention to the auditory stimuli during passive listening, which is absent in the other two conditions. Importantly, it has been shown that, besides the auditory N100, gamma activity can provide further and complementary information about auditory processing, supporting our pattern of results (Cervenka et al., 2011). Regarding the difference in late gamma oscillations between visual imagery and the other two conditions (Figures 4A,C), it can be observed that late gamma increases slightly around 400 ms in a very similar way during passive listening and inner speech, but becomes slightly negative during visual imagery, which causes the significant differences (Figure 3). This pattern might be interpreted as a further attentional suppression mechanism during the visual imagery condition. Alternatively, late gamma activity could reflect cognitive aspects of the ongoing task. However, we do not have enough evidence to provide a complete explanation of this result; for instance, we do not have behavioral measures to verify that participants executed the imagery tasks with acceptable performance. This is certainly an important limitation of this study, since behavioral data and questionnaires are always important to fully understand the brain dynamics that underlie cognitive processes.
A noteworthy observation in the present results is that we did not find differences in alpha power. This frequency band has been widely associated with attentional suppression, mainly in the visual modality (Toscani et al., 2010; Hanslmayr et al., 2011). Although this attentional suppression has been reported in other modalities, such as audition and touch, those results have not been as robust as in the visual modality. For instance, an absence of differences in the alpha band has been reported before for attentional tasks in the auditory modality (Braboszcz and Delorme, 2011; Teng et al., 2017). Besides, alpha band activity has been shown to reflect a mechanism shared between a supra-modal and a modality-specific network, which necessarily makes auditory alpha modulation different from that of the visual modality (Banerjee et al., 2011). Further research is needed to reveal the different generators and mechanisms underlying alpha modulation in the different modalities (Cohen, 2017), but this absence of differences in the alpha band suggests that attentional suppression mechanisms in the auditory modality operate differently from the visual modality.
In the present study we observed higher theta power during passive listening compared with internally oriented attention. Theta oscillations have also been suggested to be important in auditory processing (Teng et al., 2017). A similar pattern of results was observed in a study on "inattentional deafness", where missed stimuli were indexed by reduced stimulus-evoked phase synchrony in low frequencies (6-14 Hz) compared with detected stimuli, from 120 to 230 ms post-stimulus onset (Callan et al., 2018). These results support the idea that auditory stimuli might be missed during imagination processes and that frontal theta oscillations can play an important role in indexing auditory attentional processing.
Visual Imagery Exhibits a Different Pattern of Beta Oscillatory Activity
We also found early beta band differences for the visual imagery condition contrasted with the inner speech and external task conditions in the parietal regions (Figure 6).
Changes in the beta band have generally been associated with top-down control deployed to maintain an internal cognitive state (Engel and Fries, 2010). More specifically, however, early beta band modulations have been related to an integration process between different sensory modalities (mainly auditory and visual information) during tasks involving input from two sensory modalities, such as the McGurk illusion (audiovisual) (Roa Romero et al., 2015), the perception of ambiguous audiovisual stimuli (Hipp et al., 2011) or sensory gating paradigms (auditory-somatosensory) (Kisley and Cornwell, 2006). Importantly, additional evidence for the involvement of beta band activity in multisensory processing comes from a study in which participants were instructed to respond to the appearance of auditory, visual and combined audiovisual stimuli. Only in the crossmodal condition was an enhancement observed for beta oscillations in the interval between 50 and 170 ms (Senkowski et al., 2006). In the present study, the auditory stimuli were presented during the three conditions and there was no simultaneous visual stimulation. However, it can be assumed that during passive hearing the auditory processing was unimodal. In the case of the inner speech condition, taking into account that this kind of thought uses the auditory cortex (Shergill et al., 2001), it can also be considered unimodal processing of information: despite the sources being different (beep/external and inner speech/internal), both are processed in the same sensory cortex. On the contrary, visual imagery has been shown to use the visual cortex to represent visual thoughts (Kosslyn et al., 2001); therefore, if the processing of auditory stimuli goes on while the visual cortex is producing visual imagery, this condition is likely to behave as crossmodal. If that were the case, then the enhanced low beta activity indexes visual and auditory information being processed at the same time (Senkowski et al., 2008). There is a growing body of research providing new evidence about the importance of beta band activity in the processing of different characteristics of auditory stimuli (Cirelli et al., 2014; Alavash et al., 2017; Chang et al., 2018), and the results of the present study complement the findings in this line of research. Specifically, we showed here that beta increases in different ranges are related to the characteristics of the task: during visual imagery the increase spans the full beta range (15-30 Hz), while in the other conditions it is restricted to high beta (24-30 Hz). Therefore, this distinct pattern of beta activity related to auditory stimuli can differentiate visual imagery from the other cognitive states.
Finally, regarding our initial hypotheses, the ERP results were different from our predictions, showing maintenance of auditory processing during the imagination process. However, our predictions about gamma coincided with our results, suggesting a reduction in conscious attention during mental imagery conditions. These results suggest that ERPs and gamma oscillations reflect different aspects of auditory processing, supporting the different mechanisms underlying them, in which the ERP is probably related to sensory brain processing while gamma activity is more associated with differences in conscious attention. Nonetheless, differences in late gamma between the two mental imagery conditions were different from our predictions. Specifically, we expected lower gamma power during inner speech than during visual imagery, based on the theoretical framework that if the modality of thought is the same as the incoming stimulation, there should be a competition for processing resources (Villena-González et al., 2016). However, the inverse effect was observed, which strongly suggests that attentional suppression mechanisms operate differently in the auditory modality compared to the visual modality. Therefore, these results provide evidence against the view that attentional mechanisms are supramodal (Farah et al., 1989; Eimer and Van Velzen, 2002) and also clarify that the "modality resource competition" between visual imagery and visual perception described in Villena-González et al. (2016) does not occur for the auditory modality.
CONCLUSION
In summary, our event-related potential results showed that when attention is inwardly oriented to visual or verbal/auditory imagery, the processing of auditory stimuli is maintained compared with a passive listening condition. Our results in the frequency domain showed early changes in frontal gamma and theta power which reflect differences between external and internal attention. Specifically, the reduced amplitude in the gamma and theta bands during inward attention may reflect reduced conscious attention to the current stimulation. Furthermore, a different pattern of beta activity was observed during visual imagery, which differentiates this condition from the others and could reflect crossmodal integration between the visual and auditory modalities. Finally, our work provides further evidence for the differential electrophysiological dynamics underlying visual and auditory/verbal imagery.
AUTHOR CONTRIBUTIONS
MV-G, IP-G, VL, and ER conceived and designed the experiments. MV-G and IP-G performed the experiments and analyzed the data. MV-G, VL, and ER contributed reagents, materials, and analysis tools. MV-G wrote the paper.
FUNDING
Equipment was bought by a CONICYT/FONDEQUIP Grant No. EQM120027 to ER. FONDECYT REGULAR No. 1150241 to VL, FONDECYT REGULAR No. 1170145 to ER, and FONDECYT POSTDOCTORADO No. 3180295 to MV-G.
"year": 2018,
"sha1": "ff7f0068c1d4cabeeb935ac522010575413d40a3",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnhum.2018.00389/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ff7f0068c1d4cabeeb935ac522010575413d40a3",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
The Bright Side of Psychedelics: Latest Advances and Challenges in Neuropharmacology
The need to identify effective therapies for the treatment of psychiatric disorders is a particularly important issue in modern societies. In addition, difficulties in finding new drugs have led pharmacologists to review and re-evaluate some past molecules, including psychedelics. For several years there has been growing interest among psychotherapists in psilocybin or lysergic acid diethylamide for the treatment of obsessive-compulsive disorder, depression, or post-traumatic stress disorder, although results are not always clear and definitive. In fact, the mechanisms of action of psychedelics are not yet fully understood and some molecular aspects have yet to be well defined. Thus, this review aims to summarize the ethnobotanical uses of the best-known psychedelic plants and the pharmacological mechanisms of the main active ingredients they contain. Furthermore, an up-to-date overview of structural and computational studies performed to evaluate the affinity and binding modes to biologically relevant receptors of ibogaine, mescaline, N,N-dimethyltryptamine, psilocin, and lysergic acid diethylamide is presented. Finally, the most recent clinical studies evaluating the efficacy of psychedelic molecules in some psychiatric disorders are discussed and compared with drugs already used in therapy.
Introduction
In both past and modern traditional societies, psychedelic molecules have been used to alter a shaman's state of consciousness and put them in contact with divinity [1][2][3]. Shamans are well acquainted with nature and particularly with some of the therapeutic effects of plants and mushrooms. Therefore, in addition to performing a religious function, they are also considered healers [1]. The knowledge of the sacredness of nature and the therapeutic virtues of plants and fungi are the basis of the concepts of "entheogen" and "teacher plant" [1,4,5]. Entheogens are those molecules derived from plants and/or fungi that alter the perception of space and time and states of consciousness, allowing the user to communicate with the divinity or the dead [6]. The concept of "teacher plants" is based on the plants' ability to transmit their "therapeutic knowledge" to the shaman who ingests, or smokes, the plant itself [7]. These concepts were developed by the first anthropologists of the last century, who described shamanic practices during their travels among the Amazonian Indians, American Indians, or the people of Central Africa [8][9][10][11]. Since the 1960s, the use of plant derivatives based on psychedelic molecules has also spread to Western countries [12,13]. Based on the chronicles of those years, artists and intellectuals were fascinated by psychedelic molecules that allowed them to "think outside the box" and, therefore, increase their creativity [14][15][16][17][18]. In the following years, governments imposed strict bans on the use of psychedelics, considering them to be narcotics [19]. These restrictions have hampered pharmaceutical research on psychedelics; nevertheless, studies have suggested the potential of ibogaine to combat drug, alcohol and nicotine addiction [25][26][27][28]. Recently, potential applications have been found in the treatment of anxiety, obsessive-compulsive disorders, major depression, autism spectrum disorders, and, finally, in delaying cognitive decline [27,[29][30][31][32][33].
This study was designed to give a rationale for psychedelic-assisted psychotherapy (PAP). If we search the words "psychedelic therapies" on PubMed (Figure 1), the number of papers in this field published in the past 10 years has increased (from 249 papers in 2012 to 582 papers in 2022).
Although research on this topic has flourished increasingly throughout the years, a decrease in the number of papers can be observed in the early 1970s. In that period, research on psychedelic drugs was partially abandoned for several reasons, including tighter regulations connected to Richard Nixon's 'War on Drugs' [34].
This review aims to point out the therapeutic potential of Tabernanthe iboga, Echinocactus williamsii, Psychotria viridis, Psilocybe cubensis, and Claviceps purpurea. The choice of these plant and fungi species is related to the applications of their main active ingredients in psychedelic-assisted psychotherapy.
Emphasis will be placed on the molecular mechanisms that determine the psychedelic response. To this purpose, structural and computational studies investigating the interaction of psychedelic molecules with their macromolecular targets have been considered for the preparation of this article.
In this regard, the 3D structures reported in the current review were produced using UCSF Chimera.
Ethnobotany
Tabernanthe iboga is a shrub from equatorial Africa belonging to the Apocynaceae family (Figure 2A). The root is rich in monomeric polycyclic indole alkaloids (1-3% in the whole root, 5-6% in the bark), the primary one being ibogaine, followed by tabernantin, ibogamine, and ibogalin (Figure 2B-E) [35]. The root was used by African natives as a nervous stimulant (to deal with hunger and fatigue) [8]. In addition, it was also used as an aphrodisiac, due to the transient excitement it produces [36]. These properties are confirmed in the current uses of the plant as a neuro-muscular tonic, appetite stimulant, anti-asthenic, and antidepressant [37]. The psychedelic and hallucinatory action is mainly a result of ibogaine [35]. This alkaloid causes visual hallucinations, often associated with a strong state of anxiety and apprehension [38]. At toxic doses, convulsions, bradycardia, hypotension, paralysis, and respiratory arrest occur [38].
Central Nervous System Pathways
Over the years, numerous pharmacological studies on the effects of ibogaine and its derivatives have been reported. Early research examined the effects of ibogaine on the central nervous system and on the cardiovascular system [39]. Lambarene® was an ibogaine-based drug marketed in France as a "neurostimulant" [39]. However, at the end of the 1960s the drug was withdrawn, and ibogaine was included among the doping substances in sports and was made illegal in almost all countries [40]. Despite the prohibition imposed, the results of numerous pharmacological studies have suggested potential applications in the treatment of psychological trauma, depressive phenomena, and especially in combating drug addiction [28,36,37,39]. Ibogaine is particularly effective when detoxifying from opiates. At the same time, ibogaine exhibits a certain cardiotoxicity [41], which has led pharmaceutical chemists to develop new derivatives that act on the opiate pathway and are safe for the cardiovascular system [40].
The molecular and receptor mechanisms involved in the action of ibogaine on addictions have yet to be well defined (Figure 3). However, some studies have highlighted the main pathways involved.
Reduced alcohol and nicotine intake has been associated with the antagonism of ibogaine derivatives to the α3β4 nicotinic acetylcholine receptors in the medial habenula [45][46][47][48]. Other studies have also reported the receptor affinity of ibogaine towards the κ and µ receptors of opioids involved in the reduction of drug addiction [47][48][49]. At the same time, ibogaine and its derivatives are considered N-methyl-D-aspartate (NMDA) receptor antagonists, similar to memantine, and are capable of reducing the signs of opiate withdrawal in mouse models [50][51][52][53].
As for the circuits involving monoamines, it seems that the agonism of ibogaine towards serotonin (5-HT) signalling is involved in the hallucinogenic and antidepressant effects. On one side, ibogaine acts as an agonist of the 5-HT2A receptors [54,55]; on the other, it shows an inhibitory effect on the SERT transporter, thus causing an increase in serotonergic tone [56][57][58]. The strong agonism of the 5-HT2A receptor is the basis of the hallucinogenic response induced by ibogaine. However, ibogaine and its derivatives do not show specific pharmacological profiles. Indeed, an inhibitory action was also highlighted on the monoamine transporters [59]. The high affinity of ibogaine and its derivatives to monoamine transporters could explain the observed antidepressant effects, as has also been elucidated by structural studies (see the Structural and Computational Studies section).
Pharmacokinetic and pharmacodynamic studies have focused on the effects of ibogaine and its derivatives on withdrawal crises in humans [35,39,60]. In these studies, the subjects were exposed to oral administration at doses between 500 and 800 mg per individual (70 kg) [39,60]. The Cmax for ibogaine and its derivatives ranged from 30 to 1250 ng/mL, with Tmax values of approximately 2 h and 5 h after ingestion, respectively. Furthermore, the accumulation of ibogaine in adipose tissue suggests its slow release over time. This considerable inter-individual variability has complicated the interpretation of data on the efficacy of ibogaine and derivative treatments.
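For orientation, the absolute doses above can be converted into per-kilogram terms with a back-of-the-envelope calculation (a minimal sketch; the 70 kg reference weight comes from the cited studies), which makes them directly comparable with the weight-based 8-12 mg/kg regimens discussed later in this section.

```python
# Convert the absolute ibogaine doses reported above (500-800 mg for a
# 70 kg individual) into mg/kg, for comparison with weight-based regimens.
reference_weight_kg = 70.0
dose_range_mg = (500.0, 800.0)

for dose_mg in dose_range_mg:
    print(f"{dose_mg:.0f} mg for a {reference_weight_kg:.0f} kg subject "
          f"= {dose_mg / reference_weight_kg:.1f} mg/kg")
# 500 mg -> 7.1 mg/kg; 800 mg -> 11.4 mg/kg, i.e. roughly the
# 8-12 mg/kg range used in the open-label case series cited below.
```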
Despite the observed inter-individual variability, observational studies on the subjective effects of ibogaine are very consistent and have divided the experience after its administration into three phases [8,36,38]. In the first phase (up to 8 h after administration), the subject enters a dream state, experiencing alterations in sensory perception and past memories of his/her life. The second phase (up to 20 h after administration) is emotionally neutral and reflective. In the last phase (up to 72 h after administration) the subject shows increased self-awareness and meaning in life, accompanied by greater excitement and disturbed sleep.
Structural and Computational Studies
The pharmacological action of ibogaine and its main circulating active metabolite, noribogaine, is very complex to understand because they interact with different receptors and transporters present in the CNS [44].
Even though the action of ibogaine and noribogaine at various receptors and transporters has long been known in the literature, in-silico investigations for the characterization and study of the interactions present in ibogaine-target complexes have been developed in recent years, in parallel with the evolution of computational techniques.
Given the involvement of ibogaine and its metabolites in mood disorders, the molecular docking approach has focused on the human serotonin transporter, hSERT. hSERT is a member of the neurotransmitter sodium symporter (NSS) family, composed of secondary active transporters with twelve transmembrane domains that utilize sodium (Na+) and chloride (Cl−) gradients to promote the transport of neurotransmitter across the membrane from the extracellular fluid [61,62], while potassium ion (K+) antiport stimulates the transport process in the cytoplasmic environment [63,64]. hSERT is a major target of antidepressants and of ibogaine metabolites. The function of NSSs can be modulated by a set of small molecules, which control the availability of neurotransmitter at synapses. hSERT functioning can be summarized by the serotonin reuptake process [63]. Given the association between hSERT's mechanism of action and different conformational states, Coleman et al. were able to resolve the structure of three of these hSERT-ibogaine arrangements, which were deposited in the Protein Data Bank (www.rcsb.org; accessed on 30 November 2022) and are identified by the following codes: 6DZY (outward-open), 6DZV (occluded), and 6DZZ (inward-open) [57,62,65]. Based on in-silico and in-vitro studies, Coleman and his group showed that ibogaine can be considered a non-competitive inhibitor that interacts at the level of the active site, since it can bind to all three of these conformational rearrangements [66].
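As a practical aside, the three deposited hSERT-ibogaine structures can be retrieved programmatically from the Protein Data Bank. The following minimal Python sketch assumes Biopython is installed; the download directory name is illustrative, not part of the cited study.

```python
# Minimal sketch: fetch and parse the three hSERT-ibogaine conformations
# (outward-open, occluded, inward-open) from the PDB using Biopython.
from Bio.PDB import PDBList, MMCIFParser

codes = {"6DZY": "outward-open", "6DZV": "occluded", "6DZZ": "inward-open"}

pdbl = PDBList()
parser = MMCIFParser(QUIET=True)

for code, state in codes.items():
    # retrieve_pdb_file returns the local path of the downloaded file
    path = pdbl.retrieve_pdb_file(code, pdir="hsert_structures",
                                  file_format="mmCif")
    structure = parser.get_structure(code, path)
    n_residues = sum(1 for _ in structure.get_residues())
    print(f"{code} ({state}): {n_residues} residues parsed")
```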
Firstly, they found that the tricyclic ring system of ibogaine resides between the aromatic groups of Tyr95 and Tyr176 in the outward-open and occluded conformations. Further, when ibogaine docks in the binding pocket, its tertiary amine forms a contact with the Asp98 residue. Besides this, all three models of the SERT-ibogaine complex shared a common interaction pattern: Phe335 undergoes conformational changes from the outward-open and occluded states to the inward-open state, where it moves further into the binding site, ultimately blocking ibogaine release from the extracellular side, while rearrangements of the inward-open conformation disrupt the interactions of Tyr95 and Asp98 with ibogaine. Therefore, when there is a switch from an outward- to an inward-open conformation, ibogaine can change its orientation and switch toward the cytoplasmic permeation state. Further, the methoxy group of ibogaine, which protrudes into a cavity near Asn177, Ile172, and the aromatic ring of Phe341, was preserved in all the assumed conformations (Figure 4A,B). The research then focused on the accessibility of ibogaine with respect to the position of the intracellular and extracellular gates present in the transporter [67].
The authors note that, in accordance with their previous work, when the ibogaine-hSERT complex is in the outward-open state, it assumes a conformation similar to that shown in the structure of paroxetine-bound hSERT [67]. From this evidence, they hypothesized that ibogaine can enter the binding site from the extracellular side because the gating residues Tyr176-Phe335 and Arg104-Glu493 lead to an open gate state, while the closed intracellular gate prevents exposure to the cytoplasm, leading to formation of the occluded conformation and a closure of the extracellular gate. All these events prevent ibogaine's entry into the central binding site. Moreover, these data have been supported by molecular dynamics simulation experiments [57].
Importantly, the authors, using structural studies, were able to confirm the presence of an allosteric site in hSERT that is reported to modulate the association and dissociation processes of substrates in the central binding site and allow the definition of ibogaine as a non-competitive inhibitor [66][67][68][69]. According to this information, the authors understood that, when the gating residues get closer, structural rearrangements result in the collapse of the allosteric site and, consequently, in a reduced solvent accessible surface area (SASA) for ibogaine access (from 1448 to 1247 Å²). Additionally, the conformational transition from the occluded to the inward-open conformation was investigated, and of note was a further reduction in the SASA value of the allosteric binding site (973 Å²) and in the distance between the gating residues Arg104-Glu493 and Tyr176-Phe335. All these events lead to an increased accessibility to the central binding pocket since they drive the opening of the cytoplasmic-permeation pathway.
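To make the magnitude of this collapse explicit, the SASA values quoted above can be expressed as relative reductions (a simple arithmetic sketch using only the numbers reported by the authors):

```python
# Relative shrinkage of the allosteric-site SASA across the three
# hSERT conformations, using the values reported above (in Angstrom^2).
sasa = {"outward-open": 1448.0, "occluded": 1247.0, "inward-open": 973.0}

reference = sasa["outward-open"]
for state, area in sasa.items():
    reduction = 100.0 * (reference - area) / reference
    print(f"{state}: {area:.0f} A^2 ({reduction:.0f}% below outward-open)")
# occluded is ~14% smaller and inward-open ~33% smaller than outward-open,
# consistent with the progressive closure of the allosteric site.
```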
Eventually, the authors speculated that, because it was impossible for ibogaine to access the central binding pocket of the occluded hSERT, it might bind to the inward- or outward-open conformation and remain bound there through the allosteric binding site, so that it could then access the active site in a second step through conformational changes. Another aspect was then considered: the molecular weight of ibogaine (310.4 g/mol) is higher than that of serotonin (176.2 g/mol), which excludes it from being a competitive serotonin substrate candidate because it cannot fit completely into the central binding site. This further supports the hypothesis that considers ibogaine to be a non-competitive ligand for the central binding site, capable of stabilizing the inward- and outward-open conformations of hSERT. The information obtained is crucial for future studies aimed at understanding the complete functioning of small molecules, such as ibogaine, that are capable of selectively binding the pharmacological target hSERT, and for designing or optimizing ligands [57].
Figure 4. The superimposition (RMSD 1.152 Å) was completed using the MatchMaker tool in UCSF Chimera [70]. The artworks were produced using the same software.
Therapeutic Hypotheses
The most studied therapeutic applications of ibogaine have focused on assessing its effects as a treatment for substance use disorders. Over the recent years, ibogaine has been taken into consideration as a therapeutic agent, able to treat or assist the treatments for drug addictions such as heroin [71], cocaine [72], methamphetamine [73], cannabis and crack [74]. The advantage of the use of ibogaine as a therapeutic for drug addiction resides in the fact that a single large dose seems effective in blocking withdrawal and reduces cravings in drug-dependent individuals [72], contrary to standard pharmacological treatments which typically require a prolonged administration. Indeed, ibogaine administration, unlike commonly used treatments such as methadone, shortens the time needed for withdrawal to 2-3 days, effectively speeding up the detoxification process [72].
Among the latest studies, that undertaken by Brown and Alper (2018) showed that a single dose of ibogaine (1540 ± 920 mg) reduced opioid withdrawal symptoms in 30 individuals with opioid use disorder (OUD), with subsequent detoxification effects sustained for 12 months post-treatment [28]. In line with these results, Noller et al. (2018) recently showed that a single ibogaine administration (200 mg capsules of ibogaine hydrochloride via oral administration) in 14 patients not only ameliorated withdrawal symptoms and led to sustained reduced opioid use over time, but also led to a collateral reduction in depression symptoms [75]. Another recent study [76] examined the opioid cravings and withdrawal symptoms in 50 patients with OUD undergoing detoxification treatment for one week with ibogaine (18-20 mg/kg of ibogaine hydrochloride via oral administration). At 48 h after the administration, withdrawal symptoms and cravings were significantly lower compared to the baseline, with the majority of patients exhibiting mild to nonclinical signs of cravings. Wilson et al. (2021) found similar results in case reports of two individuals with OUD [77]. In the first case, the patient was able to completely refrain from opioid use within 5-6 days of starting the ibogaine treatment, maintaining the abstinence for three years. In the second case, the patient took multiple doses of ibogaine over the course of four months, after which they stopped all non-medical opioids, and maintained abstinence for two years.
The largest study to date is an open label case series of patients (n = 191) diagnosed with opioid or cocaine dependence [72]. Individuals received, under medical assistance, gel capsules of ibogaine (8-12 mg/kg ibogaine hydrochloride), a dose range that was shown to be effective in blocking opioid withdrawal symptoms without side effects. The authors report that a single administration of ibogaine is effective for the detoxification process: subjects undergoing the treatment reported significantly decreased drug cravings and opioid withdrawal at 25-36 h after the last opioid dosage compared to baseline. They also reported a substantial decrease in scores of depression and anxiety and an improvement in overall mood one month after treatment.
In recent years, along with the growing use of psychedelics in the treatment of psychiatric disorders, research about the therapeutic effects of ibogaine has begun to take its first steps in this direction. For example, Fernandes-Nascimento et al. (2022) recently reported a single case study of ibogaine microdosing for the treatment of type II bipolar disorder (BPD) [78]. Microdosing, a growing trend in the psychedelics field, is defined as the recurrent administration of small doses of psychedelic drugs, with little to no identifiable acute drug effects, with the intention of improving mental health and general well-being, enhancing cognitive performance, and boosting mood [79,80]. The authors reported a significant improvement of symptoms in a 47-year-old woman with a 20-year history of BPD treated for 60 days with two daily administrations of ibogaine (4 mg per administration, approximately 1% of a full conventional single dose). This finding is consistent with a previous preclinical study in rats [81], which characterized the behavioural effects induced by acute ibogaine and noribogaine (20 and 40 mg/kg, i.p., single injections for each dose), assessing depression-like symptoms using the forced swim test. Both drugs induced a dose- and time-dependent antidepressant effect, without significant changes in locomotor activity.
Taken together, these studies demonstrate that the alkaloid ibogaine displays a potentially wide range of therapeutic effects, especially regarding its anti-addictive properties. Different open-label studies have demonstrated its efficacy in ameliorating the symptoms of opioid withdrawal, cravings, and depression. In contrast to research in the realm of addiction, the antidepressant and anti-anxiety effects of ibogaine have not yet been systematically investigated. Despite the lack of rigorous clinical trials, published research indicates that ibogaine could be an effective therapeutic agent, in particular regarding OUD, with sustained effects in the long run.
Ethnobotany
The genus Echinocactus includes a dozen perennial cacti originating from Central America and South America with a globular or cylindrical structure, with slow growth, which can reach a diameter of 90 cm and develop a fleshy root [82][83][84]. Specifically, Echinocactus williamsii (Figure 5A), also called peyote, is a small cactus that grows in the Rio Grande valley and neighbouring deserts [82][83][84]. The psychoactive alkaloids are concentrated mainly in the epigeal part [85]. In particular, the "vegetable buttons" known as mescal correspond to the tips of the cactus cut and left to dry in the sun and subsequently used by shamans for mystical-religious rites [8,[86][87][88][89]. Furthermore, in some Central American locations it is eaten fresh, dried in paste or as tea [8]. About 20 alkaloids are extracted from Echinocactus williamsii, the most important of which is mescaline [85] (Figure 5B). Mescaline is an alkaloid, biosynthetically derived from amino acids, with a chemical structure similar to monoamine neurotransmitters, and psychomimetic activity [85,90]. It is used by the natives of Central America both for its exciting and intoxicating properties and for the attractive hallucinations it causes [8,86,87]. In particular, it is capable of altering the basal psychic state leading to a condition of psychomotor arousal with euphoria and lively abnormal psychosensory manifestations (olfactory, auditory and above all visual hallucinations), symptoms of depersonalization and alterations in space-time orientation [90,91]. Some hallucinatory phenomena, called synesthetic phenomena, are characteristic, consisting in the fact that a perception, for example auditory, causes lively chromatic sensations. The substance is active at a dose between 100 and 400 mg per person orally [90,92].
Central Nervous System Pathways
Regarding the pharmacological aspects, mescaline is considered an agonist of the 5-HT2A and 5-HT2C receptors of serotonin, and of the α2A adrenergic receptor [90,[93][94][95]. In contrast, mescaline shows low affinity for other serotonergic receptors and for dopamine receptors and monoamine transporters [85,91,[93][94][95]. The hallucinogenic effects appear to be regulated by mescaline agonism towards the 5-HT2A receptor [85,95]. However, it has been observed in animal models that blockade of dopamine receptors with haloperidol can halt the characteristic effects of mescaline, suggesting the additional involvement of other neuronal pathways [91].
The half-life of mescaline varies in the animal models studied: in cats it is 2 h; in monkeys it can reach 18 h after intraperitoneal administration. The LD50 for mescaline also varies between species, being around 30 mg/kg in monkeys, 54 mg/kg in dogs, and 157 mg/kg in mice (all intravenously administered) [91]. Mescaline is rapidly absorbed from the gastrointestinal tract and distributed mainly to the liver and kidney in murine models [91]. About 40% of absorbed mescaline is excreted unchanged in the urine. The remaining mescaline binds to liver proteins which increase its half-life [91].
In rats, mescaline dosages of between 10 and 100 mg/kg have been reported to induce an anxiolytic action, motor hyperactivity, and a propensity for social interaction [91,93,96,97]. These effects reach their peak around 1 h, last for around 2 h and are dose dependent. Regarding hallucinogenic effects, all psychedelic 5-HT2A receptor agonists induce head-twitch behaviour in both mice and rats [90,97]. This behaviour is not observed with selective 5-HT2A receptor agonists. In rats, head-twitch behaviour was observed after the administration of 10, 50, and 100 mg/kg of mescaline, reaching the peak of the effect after 1 h [90,91]. In mice, mescaline induces the head-twitch behaviour at around 10 mg/kg with an inverted "U" shape curve that is dose dependent. This "U" shape response has also been observed in the locomotor behaviour of mice [97].
In humans, mescaline has a mean half-life of approximately 6 h, and its metabolites are excreted in the urine, plasma, and cerebrospinal fluid [91,98]. About 80% is found in human urine 24 h after oral administration of mescaline, and more than 90% within 48 h [91,96,98]. Sensory synaesthesia is among the effects reported by subjects receiving mescaline. Healthy individuals who listened to music after a mescaline administration had visual effects such as intense chromatic perceptions, and kaleidoscopic and geometrizing visions of objects [16,89,99,100]. At the same time, some individuals perceived flavours after seeing certain colours. Along with visual hallucinations, mescaline can alter the perception of time, space and personality. For example, after administration of mescaline (5-10 mg/kg), some subjects perceived time faster and others slower [85,90,92,94,95,99]. Other individuals reported feelings of euphoria, ecstasy, and general well-being.
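These figures can be cross-checked with a simple first-order elimination model (a sketch under the simplifying assumptions of mono-exponential kinetics and predominantly urinary excretion, which the data above only approximately satisfy, given the protein binding noted earlier):

```python
# First-order elimination: fraction remaining after time t given a
# half-life t_half, f(t) = 0.5 ** (t / t_half). With the ~6 h human
# half-life reported above, most of a mescaline dose should be cleared
# within 24-48 h, broadly in line with the urinary-recovery figures quoted.
def fraction_remaining(t_hours: float, t_half_hours: float = 6.0) -> float:
    return 0.5 ** (t_hours / t_half_hours)

for t in (6, 12, 24, 48):
    f = fraction_remaining(t)
    print(f"t = {t:2d} h: {100 * f:5.1f}% remaining, "
          f"{100 * (1 - f):5.1f}% eliminated")
# 24 h -> ~94% eliminated; 48 h -> ~99.6% eliminated.
```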
Structural and Computational Studies
Navarro et al., after having generated and validated a 5-HT2A receptor model by homology modelling, performed in-silico experiments, including docking, to evaluate the interaction of mescaline with this target [101]. Molecular docking studies were carried out in two different ways: first, using a rigid docking protocol (RRA), which grants the ligand conformational freedom while the receptor is kept rigid; second, in flexible mode (FRA), where, in addition to the ligand, degrees of freedom are also granted to residues within 4 Å of the binding site of the receptor.
As expected, RRA and FRA methods displayed two different interaction patterns. Interactions obtained with RRA included one weak π-alkyl interaction between the aromatic ring of the ligand and Val156 (5.18 Å), three strong π-σ interactions between the aromatic ring and Val235 (2.77 Å), a hydrogen bond between one H atom from the NH 3 + group and the backbone of Phe234 (2.26 Å), and another hydrogen bond between the O atom from the 5-OMe substituent and Ser239 (2.61 Å) ( Figure 6A,B).
The FRA method provided different results in terms of interaction distances and involved residues. More specifically, one weak, four intermediate and one strong interaction were identified. The weak π-π interaction was identified between the aromatic ring and Phe339 (5.05 Å). The four intermediate interactions comprise a π-π interaction between the aromatic ring and Val156 (4.83 Å), a H bond between the O atom (as an acceptor) from the 5-OMe substituent and the backbone of Ser239 (3.55 Å), a H bond between the 3-OMe substituent and Asp155 (3.34 Å), and a H bond between the 4-OMe substituent and Ser159 (3.50 Å). The strong interaction detected is a salt bridge between one H atom from the NH3+ group and Asp231 (2.41 Å). Of note, the common residues involved according to both methods were only Val156 and Ser239 (Figure 6C,D).
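The weak/intermediate/strong labels used above track the measured contact distances; a small sketch makes the implied convention explicit (the cut-offs below are inferred from the distances quoted by Navarro et al. and are illustrative rather than authoritative):

```python
# Classify ligand-residue contacts by distance, mirroring the
# weak/intermediate/strong labels used in the text. Cut-offs are
# inferred from the quoted FRA distances and are illustrative only.
def classify_contact(distance_angstrom: float) -> str:
    if distance_angstrom < 3.0:
        return "strong"
    elif distance_angstrom <= 5.0:
        return "intermediate"
    return "weak"

fra_contacts = {           # residue: distance (Angstrom), from the text
    "Phe339 (pi-pi)": 5.05,
    "Val156 (pi-pi)": 4.83,
    "Ser239 (H bond)": 3.55,
    "Asp155 (H bond)": 3.34,
    "Ser159 (H bond)": 3.50,
    "Asp231 (salt bridge)": 2.41,
}
for contact, d in fra_contacts.items():
    print(f"{contact}: {d:.2f} A -> {classify_contact(d)}")
```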
Therapeutic Hypotheses
Mescaline, along with other psychedelics such as LSD and psilocybin, was widely employed in psychiatric and therapeutic contexts before its use was restricted under Schedule I of the UN Convention on Drugs in 1967 [91]. Due to this regulation, clinical research over recent decades has been heavily limited [90]. Preclinical studies using animal models have been employed to investigate the effect of mescaline on behaviour. As mentioned previously, mescaline is able to induce an increase in locomotor activity and exploratory behaviour, although some studies report a dose-dependent reduction with an inverted U-shaped curve (locomotion and exploratory behaviour increase at lower doses and decrease at higher doses) [102][103][104]. Mescaline is also able to induce an increase in animal reactivity [105].
Regarding mescaline's therapeutic potential, recent epidemiological studies support the anecdotal reports and preliminary research conducted in the 1900s. For instance, Agin-Liebes et al. (2021) conducted an epidemiological study with a sample of 452 adults, who completed an anonymous survey regarding the recreational use of mescaline in naturalistic settings [106]. The results indicate that mescaline administration is associated with improvements in general mental health and well-being, addressing issues such as anxiety, substance use disorder, depression, and PTSD symptoms. Most of the subjects with histories of these disorders (68-86%) reported improvement in their condition. Significant subjective improvements were reported in participants with histories of depression (n = 184), PTSD (n = 55), anxiety (n = 167), alcohol use disorder (n = 48) and drug use disorder (n = 58). The same dataset of 452 participants was used by Uthaug et al. (2022) for an epidemiological study, yielding similar results regarding the participants' improvement in their psychiatric conditions [107]. The study also differentiated between mescaline from two different cactus species, Lophophora williamsii (Peyote) and Trichocereus pachanoi (San Pedro), reporting no difference in drug effects.
The presented evidence, although of limited value, points to the notion that mescaline could have therapeutic properties worth investigating further. More research, especially controlled, randomized, double-blind clinical trials, is needed to determine its clinical efficacy and its suitability for therapies aimed at treating various psychiatric conditions.
Ethnobotany
About 1400 shrubby species that grow in tropical areas around the world belong to the genus Psychotria [108][109][110][111][112][113][114][115]. In particular, Psychotria viridis (Figure 7A) is a member of the Rubiaceae family that grows spontaneously in the humid areas of Central and South America [116]. The leaves of Psychotria viridis are rich in N,N-dimethyltryptamine (DMT, Figure 7B), a psychedelic alkaloid [117]. Depending on the different American ethnic groups, the dried leaves can be smoked or added to a drink that also contains Banisteriopsis caapi leaves [118]. This drink is called ayahuasca and is used by shamans to contact deceased ancestors [119]. The hallucinogenic effect of ayahuasca is due to the presence of β-carbolines (harmine and harmaline), which inhibit the action of monoamine oxidase and enhance the action of DMT [9,10,118,119].
Central Nervous System Pathways
Like most psychedelics, DMT is considered a partial agonist of serotonin receptors ( Figure 8). Receptor binding studies have shown a DMT affinity to the 5-HT2A receptor of around 75 nM [117,120]. However, the 5-HT2A receptor appears to be necessary but not sufficient to explain the hallucinogenic phenomena reported in previous studies [121,122]. Indeed, the anxiolytic response observed in mouse models appears to be mediated by DMT binding to 5-HT1D and 5-HT3 receptors [117]. The psychedelic response to DMT can also be mediated by interaction with 5-HT1A and 5-HT2C receptors, although the head twitch response in mice is blocked only by 5-HT2A receptor antagonists [120,121].
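To put the ~75 nM affinity into perspective, simple mass-action receptor occupancy can be estimated (a sketch assuming a single-site binding model and treating the reported affinity as an equilibrium dissociation constant, which the source does not specify):

```python
# Single-site receptor occupancy from the law of mass action:
# occupancy = [L] / ([L] + Kd). Treating the ~75 nM 5-HT2A affinity
# reported above as Kd, occupancy is 50% when [DMT] = 75 nM.
def occupancy(ligand_nM: float, kd_nM: float = 75.0) -> float:
    return ligand_nM / (ligand_nM + kd_nM)

for conc in (7.5, 75.0, 750.0):
    print(f"[DMT] = {conc:6.1f} nM -> {100 * occupancy(conc):4.1f}% occupancy")
# 7.5 nM -> ~9%, 75 nM -> 50%, 750 nM -> ~91%.
```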
The mechanism of action of DMT on 5-HT2 receptors involves the second messenger pathways of phospholipase C and A2. Phospholipases hydrolyse membrane lipids, generating inositol-1,4,5-triphosphate (IP3) and diacylglycerol, which leads to the activation of protein kinases and an increase of intracellular calcium [117,122,123]. This mechanism of action is also common to other psychedelic molecules but requires further study.
DMT has also shown effects on the general serotonergic tone through the increase of serotonin at the synaptic level due to an inhibitory action on the SERT and VMAT2 transporters [124,125].
In the last 10 years, some researchers have correlated the hallucinogenic effects observed after DMT administration with its activity at glutamate receptors: mGluR2/3 and NMDA [125][126][127][128]. Metabotropic mGluR2/3 receptors are target sites for mediating hallucinogenic effects [125]. Agonists of the presynaptic mGluR2/3 receptors block the release of glutamate while, on the contrary, antagonists increase the synaptic glutamatergic tone, generating hallucinatory symptoms [125]. Depending on the dosage, DMT can show an agonist or antagonist profile at mGluR2/3 receptors, producing a heterogeneous behavioural phenomenology. However, heteroreceptor complexes consisting of co-localized mGluR2 and 5-HT2A receptors can induce a second messenger cascade specific to the psychedelic phenomena associated with DMT administration [122,125,127].
DMT can also regulate the activity of ionotropic NMDA receptors directly, by modulating memory and learning processes, or indirectly, by activating the sigma-1 receptor [129][130][131][132]. The sigma-1 receptor is a chaperone localized in the endoplasmic reticulum of cells of cerebral or peripheral tissues [133]. Given the widespread distribution of the sigma-1 receptor, it has been studied in various diseases and neurobiological conditions such as addiction, depression, amnesia, pain, stroke and cancer [133]. DMT binds to the sigma-1 receptor at micromolar concentrations, contributing to the psychedelic response [130,132]. Agonism at the sigma-1 receptor is also involved in neurotrophic and neuroprotective processes [134,135]. Although fully convincing data on the direct involvement of DMT in the neuroprotective activity of the sigma-1 receptor have not been reported, such a role cannot be excluded [130,136]. Little research has been undertaken on the effects of DMT on acetylcholine signalling. The data collected show that DMT reduces the concentration of acetylcholine in the striatum but not in the cortex [125,127].
Around the 2000s, trace amine-associated receptors (TAARs) were discovered; these receptors bind trace amines such as those derived from the amino acid metabolism of phenylalanine, tyrosine and tryptophan (phenylethylamine, tyramine and tryptamine, respectively) or derived from psychostimulants [137]. DMT can also bind to TAARs as an agonist, causing activation of adenylate cyclase and subsequent accumulation of cAMP, contributing to the psychedelic response along with 5-HT2A receptors [117,125].
Regarding the dopaminergic pathways, DMT does not bind to dopamine receptors and does not modulate dopamine release at the synaptic level [125,138]. This point is fundamental above all for explaining the low desire to repeat the psychedelic experience after the first administration of DMT, contrary to what is observed with other psychotropic molecules.
Finally, DMT can promote synaptic plasticity by increasing the expression of the transcription factors c-fos, egr-1 and egr-2 and of the neurotrophic factor BDNF [117,139].
Structural and Computational Studies
Parts of the DMT structure are present in some important biomolecules, such as serotonin, making it a structural analogue that can interact with the same CNS receptors, such as 5-HT1A, 5-HT1B, and 5-HT2A [140].
To understand the mechanisms and molecular interactions that occur between DMT and different macromolecular targets, computational studies have been carried out over the years. Navarro et al. (2015) attempted to understand the interactions between DMT and 5-HT2A receptor by means of RRA and FRA techniques [141].
FRA simulations showed a H bond of the H(N) from the indole moiety with Ser159 (2.3 Å), an attractive charge interaction of the N atom from NHMe2 with Asp231 (5.46 Å), a H bond between the H(N) from NHMe2 and an oxygen atom of Phe234 (2.90 Å), a π-alkyl interaction with Val156 (5.27 Å), a π-π interaction with Phe339 (5.32 Å) and another π-π interaction with Phe339 (4.84 Å). FRA results also show the presence of additional interactions between Asp231 and Phe234 and the NHMe2 group of the DMT molecule (Figure 9C,D).
Other information concerning the interactions of DMT with other targets can be retrieved from the work of Contreras et al., who evaluated the DMT-5-HT1B complex through computational techniques such as docking and molecular dynamics [142].
At a molecular level, the mechanism of the interaction between DMT and 5-HT1B is not yet fully known. In this regard, the aim of the work was to understand how the presence of DMT could influence the stability of the receptor at the atomic level. To overcome the limitations of current treatments, there is an ongoing search for molecules that can contribute to the treatment of disorders such as anxiety and depression [143]. The resolved structure of the receptor (PDB ID: 4IAR) was used for molecular docking studies. The binding energy obtained from the docking was −6.65 ± 0.07 kcal/mol, comparable to that computed for serotonin (−6.50 ± 0.14 kcal/mol), indicating favourable binding. By comparing the behaviour of DMT and serotonin, the authors also confirmed that the two molecules share most interactions with the 5-HT1B receptor (Figure 10A,B).
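If the docking score is read, with due caution, as an approximate binding free energy, it can be converted into an equilibrium dissociation constant via ΔG = RT ln Kd (a sketch; docking scores are rough estimates, so the conversion is only indicative):

```python
import math

# Convert a binding free energy (kcal/mol) into an approximate Kd via
# deltaG = R*T*ln(Kd). Docking scores are crude, so this is indicative.
R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K

def kd_from_dg(dg_kcal_mol: float) -> float:
    return math.exp(dg_kcal_mol / (R * T))

for name, dg in (("DMT", -6.65), ("serotonin", -6.50)):
    print(f"{name}: dG = {dg:.2f} kcal/mol -> Kd ~ {kd_from_dg(dg):.1e} M")
# Both come out in the low-micromolar range (~1e-5 M), consistent with
# the comparable binding energies reported above.
```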
Figure 10. Three-dimensional model depicting the 5-HT1B receptor (PDB ID: 4IAR). Residues coloured in green are those involved in interactions with DMT, as determined by Contreras et al. (2022) [70,142].
Like serotonin, DMT interacts with Asp129 in 5-HT1B through a hydrogen bond, and the Ile130, Thr134 and Thr355 residues are also involved in the interaction. As for non-hydrogen-bond interactions, serotonin and DMT share π-sulphur interactions with Cys133 and aromatic interactions with Phe331. However, DMT also forms π-alkyl bonds with Ala216 and Phe330.
Additionally, molecular dynamics simulations within a timeframe of 100 ns allowed the assessment of the behaviour of the DMT-5-HT1B complex, which confirmed the stability of the assembly. Subsequently, the structure fluctuation was also evaluated via root mean square fluctuation (RMSF), which for DMT is between 4 and 5 Å. SASA was also evaluated and, again, was comparable to that of serotonin (22,000 Å 2 ) throughout the simulation.
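As an illustration of how such a per-residue fluctuation profile is typically computed, the following Python sketch uses the MDAnalysis library; the input file names are placeholders, and this is not the pipeline actually used in the cited study.

import MDAnalysis as mda
from MDAnalysis.analysis import align
from MDAnalysis.analysis.rms import RMSF

# Hypothetical topology and trajectory files for a ligand-receptor complex
u = mda.Universe("complex.pdb", "trajectory.dcd")

# Align the trajectory on C-alpha atoms so that RMSF reflects internal
# fluctuations rather than overall rotation/translation of the receptor
average = align.AverageStructure(u, select="protein and name CA").run()
align.AlignTraj(u, average.results.universe,
                select="protein and name CA", in_memory=True).run()

calphas = u.select_atoms("protein and name CA")
rmsf = RMSF(calphas).run()
for resid, value in zip(calphas.resids, rmsf.results.rmsf):
    print(resid, round(float(value), 2))  # per-residue RMSF in angstroms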
The total number of hydrogen bonds established during the molecular dynamic simulation of the DMT-5-HT1B complex was also considered and the results are similar to those of serotonin [142].
In previous studies concerning the 5-HT1B receptor, residues relevant for the interaction with the macromolecule itself were identified: Asp129, Ile130, Tyr208, Cys133, Thr209, Ser212, Trp327, Ala216, Phe331, Phe330, Asp352, Ser334, Tyr390, and Thr355 [140,144]. In this regard, it is also interesting to see how DMT can form interactions with some of the abovementioned residues, which helps to explain the high degree of stability that DMT has within the receptor. Moreover, the high SASA value agrees with the higher fluctuation (as captured by the RMSF parameter) that the complex shows in some areas around the amino acids important for the interaction.
The combination of these data with the results previously published in the literature allowed the authors to understand that, since the number of hydrogen bonds is quite low, they do not substantially affect the binding mechanism, but they do contribute to conformational changes. Of note, further studies are surely needed to clarify the molecular mechanism of this relevant class of ligands on the 5-HT1B receptor.
Therapeutic Hypotheses
Although DMT is known for its psychedelic properties, recent research has shown its clinical utility, addressing a variety of medical conditions (especially those of the central nervous system). As a matter of fact, in the last decade DMT has been investigated for its neuroprotective effects, which are thought to be mediated by binding to the sigma-1 receptor [145]. In-vitro experiments have displayed DMT's cytoprotective effect against hypoxia: diverse cell types (human cortical neurons derived from iPSC, dendritic cells, monocyte-derived macrophages) cultured in severe hypoxic conditions have been treated with DMT. Results suggest that DMT is able to prevent cellular stress and robustly boost cell survival [146]. These data have been confirmed by in-vivo studies using animal models of cerebral ischemic injury. For instance, Szabó et al. (2021) induced a global ischemic episode in rats under anaesthesia, coupled with the induction of spreading depolarizations (aiming to increase metabolic stress and neurodegeneration), causing transient cerebral hypoxia [136]. Continuous administration of DMT via the intravenous route (1 mg/kg/h) proved effective in reducing the depolarization through activation of the sigma-1 receptor and showed neuroprotective properties, rescuing hippocampal cells from ischemia-induced apoptosis. Similar results were achieved by Nardai et al. (2020), who induced a transient cerebral occlusion in rodents before treating them with a single dose (1 mg/kg, i.p.) followed by a maintenance dose (2 mg/kg/h, via osmotic pump) of DMT [147]. Treated animals displayed a smaller ischemic injury volume compared to controls, with better overall recovery. Moreover, the authors reported lower apoptotic protease activating factor 1 and increased BDNF levels in DMT-treated animals, possibly indicating an interplay between anti-apoptotic and neurotrophic factors. In this regard, recent research has uncovered the potential role of DMT in stimulating neurogenesis, a multi-faceted process that leads to the formation of new neurons. DMT has been shown to promote the in-vitro proliferation of neural stem cells and to stimulate the differentiation of these cells into the three main neural subtypes (neurons, astrocytes and oligodendrocytes) [148]. DMT is also able to activate the subgranular zone of the dentate gyrus in-vivo, a part of the hippocampal formation in which most adult neurogenesis takes place. This neurogenic property seems to have a robust functional effect, since DMT-treated mice (2 mg/kg, i.p., for 21 days) performed better in memory and learning tasks, for which the hippocampus plays an essential role [148]. The increased neurogenesis and subsequent cognitive improvement provided by DMT treatment could have profound implications for neurodegenerative diseases such as Parkinson's (PD) and Alzheimer's (AD) and for medical conditions defined by significant loss of neural cells in affected areas of the CNS, such as stroke. Previous literature [149][150][151][152] indicates that neurogenesis and neuroprotective factors could be implemented as a valid tool in therapeutic strategies for psychiatric and neurological disorders, aiding in the restoration and preservation of residual functionality in the cerebral regions affected by the disease. This concept has recently been assessed using DMT-containing Ayahuasca in an in-vitro model of Parkinson's disease. Katchborian-Neto et al. (2020) tested the neuroprotective effect of ayahuasca and its matrix plants, B. caapi and P. viridis, in a human neuroblastoma cell line (SH-SY5Y) with neurodegeneration induced by 6-hydroxydopamine (6-OHDA), a well-known in-vitro PD model [153].
The lower doses of the compounds were effective in stimulating neuronal proliferation and displayed significant neuroprotective properties, assessed by improved cell viability and protection against 6-OHDA-induced cell damage. In the context of neurodegenerative disorders such as AD, PD and amyotrophic lateral sclerosis, recent research (for reviews, see [134,135]) suggests that acting on the sigma-1 receptor, for which DMT is an agonist, could be an effective therapeutic strategy, given the remarkable versatility of this receptor and its role in mediating neuroprotection and maintaining neuronal homeostasis. In order to address these promising considerations, further work is essential to better uncover the potential role of DMT in the treatment of neurodegenerative disorders.
Along with clinical conditions of the CNS, researchers and clinicians have focused their attention on the effectiveness of this substance for the treatment of psychiatric conditions and mental illnesses. DMT and, in particular, DMT-containing Ayahuasca have gained attention for their mood-stabilizing and relaxing properties, as reported by previous scientific research [154][155][156][157]. Palhano-Fontes et al. (2019) conducted the first randomized, placebo-controlled trial investigating the effects of a single dose of Ayahuasca (1 mL/kg, containing 0.35 mg/kg of DMT) in 29 patients affected by treatment-resistant depression [158]. The authors reported evidence of a robust, fast-acting antidepressant effect of the preparation, sustained throughout the period of observation (seven days) compared with placebo. These results have been confirmed by a subsequent study [159] in which patients with major depressive disorder (n = 7) were treated with an escalating dose of DMT (a first 0.1 mg/kg dose, followed by a 0.3 mg/kg dose, intravenous), distributed over two sessions 48 h apart. The authors reported a significant reduction in depressive symptomatology only one day after the second DMT dose, corroborating the findings of Palhano-Fontes et al. about the rapid mood-stabilizing effects of the compound. In addition to these results, Almeida et al. (2019) tried to uncover the relationship between the depression-mitigating effects of ayahuasca and serum BDNF levels [160], given this neurotrophic factor's potential role as a biomarker in depression: serum BDNF levels in patients with depression are lower than average and increase after treatment with serotonergic antidepressants [161,162]. Forty-eight hours after treatment (1 mL/kg of ayahuasca), BDNF serum levels were higher in both patients and healthy controls compared to placebo, and only the experimental group (not the placebo group) displayed a significant inverse correlation between depressive symptomatology and BDNF levels.
Apart from clinical and preclinical studies, ayahuasca's therapeutic potential seems to be confirmed by several observational studies conducted on participants taking ayahuasca in naturalistic settings [163][164][165][166]. Despite the clear limitations of observational studies, it is important to notice that this field of literature is unanimous in confirming the potential healing properties of ayahuasca and DMT, reporting robust improvements in psychopathological symptoms, boosts in overall mood and improvements in mental well-being that persist over time after a single dose.
All these results seem to support the use of psychedelics as antidepressant drugs, although further research is necessary to corroborate previous findings and, most importantly, to determine whether the observed antidepressant effects are stable over a longer period of time. Another aspect that should not be underestimated is the safety of treatments based on ayahuasca and DMT. Few clinical studies have emphasized the toxicological aspects of the use of ayahuasca and DMT. For this reason, particular attention must be given to treatment with ayahuasca and DMT, and the presence of a psychotherapist or psychiatrist is essential.
Additionally, supplementary investigations are required to establish whether more robust and effective outcomes can be achieved by subchronic administration, compared to the single-dose administration employed in the previous studies. In this regard, preclinical studies on the chronic administration of DMT seem promising in terms of ameliorating symptomatic conditions such as depression and anxiety. For instance, Cameron et al. (2019) tested microdosing of DMT in adult rats, intraperitoneally injecting low doses of the substance (1 mg/kg) periodically (one dose every three days) for an extended period of time (two months) [167]. They found that chronic use of a subhallucinogenic dose is efficacious in eliciting an antidepressant effect, without impacting working memory or social interactions.
Apart from mood disorders, there has been speculation about a possible link between DMT and other medical conditions such as autism spectrum disorders (ASD). For example, Shomrat and Nesher (2019) propose a new view in this field, suggesting that DMT metabolism could be a factor in the structural alteration observed in autistic individuals [168]. Given that abnormalities in dendritic spine formation and cortical overgrowth are hallmarks of ASD [169] and that research evidence suggests that DMT has neuritogenic properties [170], the authors suggest that dysfunction of the pineal gland, the source of endogenous DMT, could potentially be involved in the etiopathogenesis of ASD.
Although promising, the latest research concerning the use of DMT as a therapeutic is still limited by the lack of placebo-controlled trials and research targeting various psychiatric conditions. Further investigations are required to unlock the full potential of this psychedelic in the clinical field.
Ethnobotany
Psilocybe mushrooms are Basidiomycota members of the family Strophariaceae [171]. The species of this genus are cosmopolitan; the best known are P. cubensis (Figure 11A) and P. mexicana, which grow in Central America and have historically been used for shamanic rites [172]. Mushrooms are prepared in various ways depending on the shaman's preferences [86]. They can be eaten fresh, dried, or infused and trigger hallucinogenic responses in users. The ritualistic use of Psilocybe mushrooms in Mesoamerica is documented by the 14th century Codex "Yuta Tnoho" or "Vindobonensis Mexicanus I", which depicts a sacred ceremony where deities consume sacred mushrooms prior to the first dawn [173]. However, the discoveries of the Tassili mural in Algeria, reporting fungi associated with P. mairei, and the Selva Pascuala mural in Spain, a rock painting representing fungoid figures that have been associated with P. hispanica, date the use of these natural products back 7000-9000 years [174].
Figure 11. Psilocybe cubensis (A); chemical structure of psilocybin (B); psilocin (C).
The ingestion of Psilocybe mushrooms induces hallucinations and synaesthesia resulting in a trance-like experience that is thought to allow dissociation of the soul from the body [173]. Traditionally, shamans have used these natural products as sacraments to enhance the healer's divinatory capacities for different purposes, such as bodily ailment diagnosis and healing [175,176]. Apart from the ritualistic ones, other uses of Psilocybe mushrooms in the traditional medicine of Mesoamerica comprise the treatment of rheumatism, toothache and stomach pain, for example [173].
The psychotropic effects of Psilocybe mushrooms are attributable to two indole alkaloids known as psilocybin and psilocin (Figure 11B,C) [177]. Psilocybin is the phosphoric ester of psilocin, the latter being present only in trace amounts. These compounds were first identified by Hofmann and colleagues working on a sample of P. mexicana collected by Heim [178]. Together with the genus Psilocybe, mushrooms belonging to the genera Conocybe and Stropharia also show marked hallucinogenic actions [171]. Conocybe is known to contain psilocybin [179] and its analogue baeocystin [180]. On the other hand, thanks to phylogenetic analyses based upon DNA sequence comparison, hallucinogenic Stropharia species have been reclassified into the genus Psilocybe [181].
Central Nervous System Pathways
After oral administration, psilocybin loses its phosphate group and is totally converted to psilocin in the acidic environment of the stomach or by alkaline phosphatase in the intestine and kidneys [182][183][184]. Therefore, evaluations of the pharmacological profile of psilocybin have been performed with its main derivative [182]. Psilocin is identified in plasma within half an hour of administration and reaches peak concentrations within three hours [184]. Plasma AUC increases in proportion to dose, indicating linear pharmacokinetics for doses between 0.3 and 0.6 mg/kg of psilocybin [183,184]. The average bioavailability of psilocin is around 50% and its average half-life is around three hours. More than 80% of psilocin is metabolized by glucuronidation and released in the urine as psilocin-O-glucuronide [183,184].
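Given the roughly three-hour half-life and linear kinetics reported above, the post-peak decay of plasma psilocin can be sketched with a simple first-order model. In the Python snippet below, the peak concentration is a hypothetical placeholder; only the half-life is taken from the text.

import math

t_half_h = 3.0                     # approximate psilocin half-life (hours)
k_el = math.log(2) / t_half_h      # first-order elimination rate constant (1/h)
c_max_ng_ml = 15.0                 # hypothetical peak plasma concentration

# Concentration at time t after the peak, assuming first-order elimination
for t in (0, 3, 6, 9, 12):
    c = c_max_ng_ml * math.exp(-k_el * t)
    print(f"t = {t:2d} h after peak: ~{c:.1f} ng/mL")

By construction the concentration halves every three hours, so roughly one sixteenth of the peak remains twelve hours after Cmax.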
Figure 12. After oral administration, psilocybin loses its phosphate group and is totally converted to psilocin, which consequently represents the main derivative responsible for its pharmacological activity. (A) Psilocin has a good affinity for the 5-HT2A receptor, and this binding is responsible for the "mystical" hallucinatory effects induced by psilocin. In increasing order of affinity, psilocin can also bind to 5-HT2B, 5-HT1D, dopamine D1, 5-HT1E, 5-HT1A, 5-HT5A, 5-HT7, 5-HT6, D3, 5-HT2C and 5-HT1B receptors. (B) Activation of the 5-HT2A receptor in the prefrontal cortex by psilocin results in increased glutamatergic activity, with glutamate release acting on AMPA and NMDA receptors on cortical pyramidal neurons. (C) Psilocin has been observed to exert its pharmacological action by enhancing neuroplasticity and neuritogenesis through BDNF and mTOR pathways. This figure was partially generated using Servier Medical Art, provided by Servier and licensed under a Creative Commons Attribution 3.0 unported license.
The nonspecific binding of psilocin to different receptors can modulate the interactions between multiple neuronal pathways by altering the functional connectivity between brain areas [32,99,177,186,187,189,190]. These psilocin-induced alterations could also underlie its hallucinatory response. Some authors have observed an increase in glucose consumption in the prefrontal cortex, anterior cingulate, temporal cortex, and putamen after psilocybin administration [8,86,186,189,190]. The increase in glucose metabolic rate in these brain areas correlates positively with the "ego dissolution" due to the psychedelic response [189,190].
Simultaneously with the psychedelic effects, psilocin has shown antidepressant and anxiolytic effects at the basis of its therapeutic use. Indeed, psilocin, acting on serotonergic receptors, deactivates or normalizes the medial prefrontal cortex, which is typically hyperactive during depressive phenomena [32,189]. This antidepressant action also seems to involve limbic areas, including the amygdala, which is considered the centre of perception and processing of emotions. Psilocin causes an indirect variation of the dopaminergic and serotonergic tone in a differentiated way in some mesolimbic areas of the brain [191]. Indeed, after psilocybin administration, serotonin increases in the medial prefrontal cortex but not in the nucleus accumbens, while dopamine increases in the accumbens but not in the cortex [191,192]. This different response is related to the activation of 5-HT2A and 5-HT1A receptors in the different mesolimbic areas [185]. The increase in dopamine and serotonin is responsible for the increase in mood and for psychostimulation. Furthermore, activation of the 5-HT2A receptor in the prefrontal cortex by psilocin results in increased glutamatergic activity, with glutamate release acting on AMPA and NMDA receptors on cortical pyramidal neurons [192].
Psilocin has been observed to exert its pharmacological action by enhancing neuroplasticity and neuritogenesis, acting through the tropomyosin receptor kinase B (TrkB) receptor and the mammalian target of rapamycin (mTOR) [193]. Furthermore, psilocin can also increase the expression of neurotrophic factors such as BDNF, resulting in increased hippocampal neurogenesis and, at the behavioural level, the extinction of behaviours related to conditioned fear [193]. These effects on neurogenesis and neuroplasticity may underlie the potential therapeutic effects of psilocin in depressive and anxious states.
Structural and Computational Studies
Many computational studies have been performed to evaluate the behaviour of psilocin on serotonin receptors and to pave the way for new therapeutic approaches.
Concerning the 5-HT2A receptor, Cao et al. (2022) resolved and deposited the structure of the psilocin-5-HT2A complex (PDB ID: 7WC5) [194,195]. The work performed by Cao et al. showed that, in addition to the central binding pocket, psilocin is also able to interact with a second binding site where the indole group fits into a pocket, described as an extended binding site, mainly containing hydrophobic residues. A salt bridge is formed between the basic nitrogen and the Asp155 residue, and a hydrogen bond between Asn352 and the OH group on the indole of psilocin was detected. Subsequently, during the analysis of the interactions between the 5-HT2A receptor and psilocin, additional residues involved in the formation of the complex, such as Val156, Phe339, Asn363, Leu228, and Trp151, were identified (Figure 13A,B) [195].
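Because the complex is publicly deposited, contact residues like those listed above can be re-derived directly from the structure. The Python sketch below uses Biopython to download entry 7WC5 and list receptor residues near any hetero group; the 4.0 Å cutoff is an arbitrary illustrative choice, and the hetero selection may also pick up non-ligand molecules such as lipids or ions.

from Bio.PDB import PDBList, MMCIFParser, NeighborSearch

pdb_id = "7wc5"
path = PDBList().retrieve_pdb_file(pdb_id, pdir=".", file_format="mmCif")
structure = MMCIFParser(QUIET=True).get_structure(pdb_id, path)
model = structure[0]

# Standard residues carry a blank hetero flag; ligands are flagged "H_..."
protein_atoms = [a for a in model.get_atoms() if a.get_parent().id[0] == " "]
ligand_atoms = [a for a in model.get_atoms()
                if a.get_parent().id[0].startswith("H_")]

ns = NeighborSearch(protein_atoms)
contacts = set()
for atom in ligand_atoms:
    for near in ns.search(atom.coord, 4.0):   # protein atoms within 4 A
        res = near.get_parent()
        contacts.add((res.id[1], res.get_resname()))

print(sorted(contacts))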
The study of the role of psilocybin and of its active metabolite then continued, though focused on another receptor, 5-HT2C. In this case, molecular docking was employed, and Gumpper et al. (2022) showed that psilocin is also able to bind within the binding pocket of this receptor [196]. Here the compound establishes a salt bridge with Asp134 through its amine group, a π-stacking interaction between the tryptamine ring and Phe328, and another bond between the indole -OH and Asn331. Further interactions with Trp130 and Phe327 were identified in the assembly (Figure 14A,B).
In order to validate the interactions that were found, the authors performed a molecular docking experiment in which they calculated a binding energy of −5.76 kcal/mol (Glide Docking Score) and an RMSD of 1.05 Å [196].
Also in this case, Gumpper et al. deposited the structure consisting of 5-HT2C and psilocin in the Protein Data Bank (PDB ID: 8DPG).
Therapeutic Hypotheses
Psilocybin is a medium-lasting, well-tolerated classic psychedelic that is potentially safe and effective [189,197,198]. Among psychedelics, psilocybin represents the path-opening drug in modern supervised psychedelic therapy. In 2021, the FDA twice granted psilocybin the designation of "breakthrough therapy" to accelerate its drug development and review process. At first, the FDA supported psilocybin for the treatment of depression and severe treatment-resistant depression. Subsequently, the FDA supported the Compass Pathways company in testing psilocybin as a promising therapeutic option for major depressive disorder (MDD). After the conclusion in 2022 of this clinical trial, in which 233 participants were enrolled, the authors were able to demonstrate that a single administration, though of a high dosage of psilocybin (25 mg), was able to reduce the Montgomery-Åsberg Depression Rating Scale (MADRS), a clinical parameter used to establish depression severity, by 12 points [199]. Another important finding, which emerged from this study but needs further research, is the high incidence rate (77%) of adverse events, including headache, nausea, dizziness, suicidal ideation, and self-injury, which occurred in all dose groups. Unfortunately, studies on the safety of psilocybin treatments have only been performed in healthy individuals. At the same time, clinical studies on the efficacy of psilocybin have not yet evaluated safety parameters.
Although the safety and efficacy of psilocybin as an antidepressant are the focus of most recent clinical trials (more than 100 clinical trials have been registered), there are still no definitive data that can establish whether psilocybin can offer substantial clinical improvement over existing antidepressant therapies [200,201]. It has recently been reported that the antidepressant response of psilocybin is not statistically stronger than the conventional antidepressant escitalopram [202].
Across two different clinical trials, post-treatment fMRI data confirmed that psilocybin can modify and increase brain network organization in patients treated with two 25 mg doses of oral psilocybin given three weeks apart, alongside six weeks of daily placebo.
To further understand whether psilocybin exerts its antidepressant effect by increasing synaptogenesis, in an upcoming, open-label study (ClinicalTrials.gov Identifier: NCT05601648 November 2022) participants will undergo positron emission tomography (PET) imaging before and one week after 25 mg psilocybin treatment by using 11C-UCB-J, a radiotracer that binds to SV2A, which is itself a marker of synaptic density and synaptogenesis. This study will allow researchers of the Washington University School of Medicine to assess the relationship between neurotrophic and antidepressant effects produced by psilocybin.
In addition, the promising neurotrophic and anti-inflammatory properties of psilocybin are generating interest as a therapeutic hypothesis for neurodegenerative diseases: although preclinical results are consistent, clinical trials are currently limited and mostly relate to the treatment of depression associated with neurodegenerative diseases such as Parkinson's and Alzheimer's [31,203]. Clinical evidence has supported additional therapeutic opportunities for this "magic mushroom" drug [204,205].
Again, the FDA supported an investigational new drug (IND) clinical trial to explore how psilocybin-assisted therapy impacts the treatment of anorexia nervosa (ClinicalTrials.gov Identifier: NCT04505189). For this study, a small group of patients with a primary diagnosis of anorexia nervosa as defined by DSM-V criteria will take part in eight study visits, including three psilocybin dosing sessions with varying doses up to the maximum of 25 mg per single session.
Studies are underway that aim to assess whether psilocybin is more feasible, tolerable, and efficacious for the treatment of post-traumatic stress disorders when administered alone or in combination with assisted therapy.
Concerning alcohol, tobacco and substance abuse, as well as anxiolytic effects, the effects of psilocybin are in line with the results demonstrated by other psychedelic drugs, with long-lasting improvements detectable up to six months after psilocybin administration [206][207][208].
Although several challenges remain in effectively transposing psilocybin into the clinic, great efforts in clinical trials have been made to define what the optimal psilocybin formulation might be. This is precisely the goal of a small interventional clinical trial planned in 2022, in which the safety, adverse effects, and physiological and psychological effects of PEX20 (oral psilocin), PEX30 (sublingual psilocin), and PEX10 (oral psilocybin) will be compared (ClinicalTrials.gov Identifier: NCT05317689).
Therefore, despite the lack of double-blind randomized studies, several recently completed or ongoing clinical trials share the common goal of increasing knowledge of this highly attractive molecule, which has high therapeutic potential and whose "decriminalization" several states are considering.
Ethnobotany
Claviceps purpurea is a fungus belonging to the Ascomycete family Clavicipitaceae, which infests cereal crops and in particular rye [209] (Figure 15A). The sclerotium, Secale cornutum, is the part richest in alkaloids (Figure 15B). These include ergoline alkaloids derived from lysergic acid such as ergotamine, ergometrine, ergocristine, ergocriptine, and ergocornine [210]. These alkaloids have found multiple applications in the cardiovascular and gynaecological fields [211]. However, the greatest impact on the CNS came with the semi-synthesis of D-lysergic acid diethylamide (LSD, Figure 15C) from lysergic acid. Indeed, in 1938 the chemist Albert Hofmann synthesized LSD for the first time in the Swiss laboratories of the Sandoz AG Pharmaceutical Company and later accidentally discovered its hallucinatory and psychedelic effects [212,213]. LSD has had a major social impact since the 1960s by profoundly influencing Western culture [212,213]. The cerebral effects of LSD concern the emotional-ideational aspects and above all sensory perception (colours are perceived more vividly) [214][215][216][217]. These hallucinogenic effects affect sight, hearing, touch, and the perception of one's body. Furthermore, subjects who used LSD experienced introspective trips that enabled them to perceive inner problems and reality from other points of view, beyond the usual schemes [214][215][216][217].
Central Nervous System Pathways
LSD is totally absorbed in the intestine after oral administration. Absorption can be affected by the pH of the stomach and duodenum [218,219]. Indeed, the administration of LSD with food produces plasma concentrations that are halved compared to administration on an empty stomach [220]. The different routes of administration do not determine qualitative differences in the hallucinatory effects of LSD, but only in their intensity and speed of onset [220][221][222]. LSD can cross the blood-brain barrier, as previously observed in mice, rats, cats and monkeys [220,221,223]. In humans, after an intravenous administration of 2 µg/kg, LSD levels were approximately 7 ng/mL after 30 min and disappeared after 10 h [220]. Some variation was observed between species in the half-life of LSD at the same dosage and route of administration: 7 min in mice, 130 min in cats, 100 min in monkeys, and 175 min in humans [223]. LSD is almost completely metabolized to 13- and 14-hydroxy-LSD and their glucuronic acid conjugates, 2-oxo-LSD, and nor-LSD, and only a small part of the unchanged drug is excreted [224]. LSD can be metabolized in humans to 2-oxo-LSD and 2-oxo-3-hydroxy-LSD by certain NADH-dependent liver microsomal enzymes [220,221,223]. In addition, lysergic acid ethylamide (originating from dealkylation of the diethylamide radical at position 8 of the side chain), nor-LSD and di-hydroxy-LSD have also been identified in human blood and urine [218,219]. These metabolites, and LSD itself, can be identified in urine for up to four days after ingestion.
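Assuming simple first-order elimination (our assumption, not a claim of the cited studies), the species half-lives quoted above translate into elimination rate constants that roughly reconcile the human numbers: with a 175 min half-life, less than 10% of the 30 min plasma level remains after 10 h, approaching detection limits. A minimal Python illustration:

import math

half_lives_min = {"mouse": 7, "cat": 130, "monkey": 100, "human": 175}

for species, t_half in half_lives_min.items():
    k = math.log(2) / t_half            # elimination rate constant (1/min)
    left_10h = math.exp(-k * 600)       # fraction remaining after 10 hours
    print(f"{species}: k = {k:.4f}/min, ~{left_10h * 100:.1f}% left after 10 h")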
In general, LSD users have reported that the ingestion of about 75-150 µg of LSD profoundly alters their state of consciousness, leading to euphoria, affection towards other people, and greater introspective capacity [213,214]. Users show altered sensory perception of their bodies, with hallucinations and synaesthesia lasting up to 10 h [92,215]. Some authors have reported traumatic experiences ("bad trips") after the administration of LSD, but in those cases the treatment was not performed under controlled conditions or assisted by medical personnel [23,[212][213][214][215]. For this reason, the context in which the administration of psychedelic substances takes place is essential for achieving the desired therapeutic goals.
Serotonin signalling is also involved in the psychedelic response to LSD [95]. Serotonin is produced in the midbrain neurons of the raphe nuclei and is released from neuronal projections in the locus coeruleus, brainstem, and cortex. Neurons in the locus coeruleus control the release of norepinephrine, regulate the sympathetic nervous system, and extend into the cerebellum, thalamus, hypothalamus, cortex, and hippocampus [56,225]. LSD is a 5-HT1A receptor agonist in the locus coeruleus and raphe nuclei, and its agonism in these areas interferes with serotonin signalling [226][227][228]. Furthermore, LSD is considered a partial agonist of 5-HT2A receptors, especially those expressed on neocortical pyramidal cells [227] (Figure 16). Through the thalamic afferents, LSD can activate the 5-HT2A receptor and induce an increase in cortical glutamate levels [229,230]. Glutamate release could be responsible for LSD-induced alteration of corticocortical and corticosubcortical transmission [231]. It has been reported that the difference between 5-HT2A receptor agonists with hallucinogenic activity and those without is due to the different activation of the heterotrimeric G proteins Gi/o and Gq/11 [232,233]. Furthermore, the activation of 5-HT2A receptors only in the cortex is sufficient to trigger the psychedelic response in genetically modified mice that express 5-HT2A receptors only at the cortical level [227]. This implies that cortical pathways are the main site of the hallucinatory response following LSD administration. However, LSD is not a molecule that selectively binds to specific receptors, and therefore the understanding of its mechanisms of action is still not entirely clear. Indeed, LSD also has a high affinity for other serotonergic receptors such as 5-HT1B, 5-HT1D, 5-HT1E, 5-HT2C, 5-HT5A, 5-HT6 and 5-HT7 [94,213,217,222,229,230,233].
Structural and Computational Studies
To assess the action of LSD at the level of the 5-HT2B receptor, Wacker et al. (2017) characterized the complex through computational studies and structural resolution techniques [234]. Their analysis revealed that LSD binds to the receptor within the orthosteric binding site, which has a volume of 2898.7 Å3 [235]. The ligand inserts itself into the orthosteric binding pocket through a salt bridge between Asp135 and the basic nitrogen present in the structure. Subsequently, the authors identified the way in which the ergoline moiety of LSD establishes aromatic contacts with Phe340 and Phe341, and how the nitrogen in the indole group forms a hydrogen bond with Gly221. Another component capable of forming interactions is the diethylamide moiety: one ethyl group forms non-polar contacts with Leu132 and Trp131, while the other interfaces with Leu362. Further interactions detected by the authors involve Val136, Ser139, Leu209, Phe217, Ser222, Ala225, Trp337, and Asn344 (Figure 17A,B).
As a result of this work, a 3D structure has been deposited in the PDB (5TVN) [234]. Other structures of the 5-HT2B-LSD complex are also present in the database, such as PDB ID: 7SRQ [194].
Figure 16. LSD can agonistically bind the serotonin 5-HT1A receptors in the locus coeruleus, raphe nuclei, and cortex, causing the inhibition of serotonin's activation and release. Simultaneously, through the thalamic afferents, LSD can activate the 5-HT2A receptor, inducing an increase in cortical glutamate levels. Furthermore, it has been observed that the activation of 5-HT2A receptors in the cortex triggers the psychedelic response in genetically modified mice expressing 5-HT2A receptors only at the cortical level. Moreover, LSD also has a high affinity for other serotonergic receptors such as 5-HT1B, 5-HT1D, 5-HT1E, 5-HT2C, 5-HT5A, 5-HT6 and 5-HT7. This figure was partially generated using Servier Medical Art, provided by Servier and licensed under a Creative Commons Attribution 3.0 unported license.
Given the high rate of homology between the 5-HT2A and 5-HT2B receptors, the research group of Kim et al. (2020) performed an X-ray resolution of the 5-HT2A-LSD complex and noted similarities with the homologous receptor (5-HT2B) already reported in the literature [234,236]. In this structure too, the ligand inserted itself within the same orthosteric binding pocket. At the same time, the presence of the ligand allows the transducers to engage and proceed with the signalling cascade, a process typical of G-protein coupling [237]. In this case, the identified interactions involve a salt bridge between Asp155 and the nitrogen present in the molecule. In addition, hydrophobic interactions were identified with Ile152, Asn343, Leu229, Val156, Trp336, Trp151, Val235, Phe340 and Phe339. Moreover, a hydrogen bond between the nitrogen present in the indole group and Ser242 was identified (Figure 18A,B) [236]. The resulting resolved structures from this work were deposited in the PDB (6WGT) in 2020, and another group also released a further structure in 2022 (7WC6) [194].
Therapeutic Hypotheses
The recent interest in the therapeutic effects of LSD is supported by several preclinical and clinical studies. Concerning preclinical studies, LSD has been tested to treat stress-induced anxiety-like behaviour in mice. In particular, male mice exposed to chronic restraint stress were treated daily with 30 µg/kg LSD for one week, and the treatment was able to prevent the stress-induced anxiety-like behaviour, the decrease of cortical spine density and the decline in serotonergic transmission [33]. Interestingly, LSD treatment did not cause any anxiolytic or antidepressant effect in non-stressed mice. LSD has also been demonstrated to exert antidepressant effects in rodents and to promote neuroplasticity in rat cortical neurons; a single administration of LSD 0.15 mg/kg i.p. was able to significantly reduce depressive-like behaviours in rats five weeks after treatment. Indeed, by activating the TrkB receptor and the mTOR and 5-HT2A signalling pathways in the prefrontal cortex, it increased dendritic arbor complexity, promoted dendritic spine growth, and stimulated synapse formation, thus revealing a fast-acting, robust and persistent antidepressant effect [170,238]. In addition, repeated LSD administration (0.13 mg/kg) for 11 days was able to reverse deficits in active avoidance learning in bulbectomised rats, a model of depression [214]. Furthermore, cannabidiol (CBD) seems to exert a synergistic effect in combination with LSD on the antidepressant effect in mice, probably due to an allosteric modulation of the 5-HT2A receptor by CBD, which causes a powerful and rapid inhibition of serotonergic and glutamatergic transmission [239]. An additional potential therapeutic application of LSD concerns its positive effect on social behaviour, which can be applied in the context of mental illnesses characterized by dysfunction in social behaviour, such as autism spectrum disorders (ASD) and social anxiety disorder. It has recently been demonstrated that repeated LSD administration in adult male mice exerts a pro-social effect, measured as increased social preference and novelty in the direct social interaction and three-chamber tests [240]. Furthermore, this LSD-dependent pro-social effect is flanked by alteration of the cannabinoid system in the brain, in particular decreased hippocampal levels of N-acylethanolamines (N-linoleoylethanolamine, N-arachidonoylethanolamine (anandamide) and N-docosahexaenoylethanolamine), the monoacylglycerol 2-docosahexaenoylglycerol, the prostaglandins D2 (PGD2) and F2α (PGF2α), thromboxane 2, and kynurenine (the pathway that catabolizes 5-HT), as well as by changes in the gut microbiome [241].
In healthy human subjects, neuroimaging studies have demonstrated that LSD induces increases in functional brain connectivity between the thalamus and sensory-somatomotor cortical regions, and from the thalamus to the posterior cingulate cortex [216]; on the other hand, it decreased connectivity to the temporal cortex [216]. These connectivity changes could explain the LSD-induced effect in facilitating a novel experience of the self and its environment and in reducing the rigid or ruminative thinking patterns typical of psychiatric disorders [33].
In the past, several clinical trials were carried out and from these it emerged that LSD may be a useful pharmacological compound for the treatment of drug dependence, anxiety and mood disorders, especially in treatment-resistant patients [222,239,242,243]. Furthermore, many clinical studies carried out in the 60s and 70s investigated the possible use of LSD for the treatment of people with ASD and positive evidence emerged. However, this aspect needs to be investigated further with additional studies and clinical trials [29]. Different clinical trials also reported a positive effect of LSD in the context of alcohol-related disorders; indeed, a meta-analysis of randomized controlled trials concluded that a single dose of LSD is associated with a significant decrease in alcohol misuse [244].
One recent clinical study has investigated the potential therapeutic use of LSD in this context. The effects of LSD were tested in a double-blind, randomized, active placebo-controlled pilot study in 12 patients suffering from anxiety associated with life-threatening diseases. LSD was administered at doses of 20 or 200 µg during psychotherapy sessions, and anxiety was measured via the State-Trait Anxiety Inventory (STAI). LSD treatment induced a significant, long-lasting (up to one year) reduction in both trait and state anxiety [222,245,246]. LSD has been reported to help patients with serious illnesses manage their emotions related to their life-threatening state of health, which is reflected in decreased anxiety and depression, and increased acceptance of their potential death [246].
Safety and tolerability of repeated orally administered low doses of LSD have been tested in a clinical trial. In particular, 5 µg, 10 µg, and 20 µg LSD were administered every four days over a 21-day period to older healthy volunteers and LSD was shown to be well tolerated with no adverse events occurring [247].
On the NIH portal, 33 clinical trials are currently ongoing studying the potential therapeutic effects of LSD in the following neurological/psychiatric disorders: migraine, anxiety, major depressive disorders, ADHD and attention deficit disorder, addiction, alcohol use disorder (AUD), bipolar disorder, methamphetamine dependence, drug abuse, Alzheimer's disease, Huntington's disease, and both acute and chronic pain (www.clinicaltrials.gov, accessed on 15 December 2022).
Considering the strong evidence that has emerged in the last decades, the safe and well tolerated profile, and the current renewed interest in psychedelics for potential therapeutic use in psychiatric disorders, new research and new clinical studies on LSD as a pharmacological tool are needed.
Conclusions
A relevant part of the drug discovery and development approach is traditionally based on the identification of compounds from plant sources, as nature can provide an unmatched variety of complex molecular structures endowed with biological activities [248][249][250][251][252][253][254][255]. However, fully understanding the underlying molecular mechanisms through which such compounds exert their pharmacological role is not trivial.
In this review, we focused our attention on active constituents and derivatives from psychedelic plants, consisting of ibogaine, mescaline, N,N-dimethyltryptamine, psilocin and lysergic acid diethylamide. These compounds represent the main families of psychedelics, and, despite being very diverse from a chemical point of view, it has been shown that they have many aspects in common. Their use during religious and shamanic rites represents an example, and previous observational studies have shown that guided or voluptuary administration can show different effects. This translates into an aspect of primary relevance when considering such compounds as therapeutic options: the differing responses to psychedelics that have been responsible for the observed contradictory results and that are closely related to the environment in which the assisted administration occurs.
Another aspect shared by all the psychedelics discussed in this review, and that may represent a major pitfall for compounds to be developed as drug candidates, is the absence of a known specific target. However, it must be considered that in the context of plant extracts the net behavioural and psychiatric response appears to be due to the phytocomplex and potentiated by the context in which the compound is given. Again, this limitation should be considered for clinical translation. On the other hand, when the compounds are considered singularly, modern drug discovery tools, such as molecular pharmacology, computational studies and structural studies, can assist the medicinal chemist in characterizing and understanding the interaction of the bioactive compound with the macromolecular target. Of course, such preliminary data must be validated by adopting suitable in vitro and in vivo models.
Additionally, a crucial point which is related to both cultural and legislative aspects must not be ruled out. The inclusion of psychedelics among the substances of abuse has made, and still makes, pharmacological research on these molecules difficult. Nevertheless, the scientific community is currently looking with hope to the many clinical trials of psychedelics that have been developed for the treatment of depression, obsessive compulsive disorder, autism, neurodegenerative disorders, etc. In fact, compound repurposing is a constantly growing trend in the rediscovery of known compounds. Furthermore, given that psychopharmacology lacks new compounds, psychedelics and their derivatives may represent an alternative avenue for the development of new therapeutic options.
However, it must be pointed out that many issues need to be addressed before considering the development of the cited compounds in this context. First, purity, bioavailability and formulation aspects must be considered: as previously mentioned, variability is one of the main pitfalls observed during administration of psychedelics. Thus, standardization of the active constituents must be performed. A second aspect concerns the protection of wild plant species. For example, mescaline-containing cacti (Cactaceae) grow only in the Rio Grande Valley and neighbouring deserts, and the germinated seed needs 20 years to develop into a mature plant. The role of efficient extractive and synthetic procedures is thus of primary relevance. Finally, the ideal dosage, route of administration and treatment regimen of every compound, together with pharmacokinetic aspects, must be fully outlined.
Moreover, the in vivo safety of all these treatments remains an open issue. Randomized double-blind clinical trials conducted on substantial numbers of subjects are very limited, which explains the currently partial understanding of the therapeutic possibilities of these drugs. Note that low dosing, down to micro-dosing, of psychedelics and medically assisted care during pharmacological treatment are inseparable requirements for appreciating the therapeutic potential of these highly discussed molecules. The scientific community must play a crucial role in the coming years to better define their pharmacological action and to communicate to society the therapeutic potential, limits, safety and risks associated with psychedelic therapy. Strong cooperation among psychiatrists, clinicians, psychologists, pharmacologists and chemists will be fundamental to map the future of psychedelics in medicine.
Thus, given the great interest that all these molecules are gathering, as testified by the constantly increasing number of scientific publications, it is necessary to investigate their efficacy and toxicological profiles in depth, combining in silico approaches with preclinical and clinical evidence. | 2023-01-17T19:23:08.811Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "7cfc95ea7dc15a45602ffafb0bfdea1857e29b1a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/2/1329/pdf?version=1673345432",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f354b9faad90b7f7f01d8430335b0fd461eecca8",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250693072 | pes2o/s2orc | v3-fos-license | Trigger selection software for beauty physics in ATLAS
The unprecedented rate of beauty production at the LHC will yield high statistics for measurements such as CP violation and Bs oscillations and will provide the opportunity to search for and study very rare decays, such as B → μμ. The trigger is a vital component for this work and must select events containing the channels of interest from a huge background in order to reduce the 40 MHz bunch crossing rate down to 100-200 Hz for recording, of which only a part will be assigned to B-physics. Requiring a single- or di-muon trigger provides the first stage of the B-trigger selection. Track reconstruction is then performed in the Inner Detector, either using the full detector, at initial luminosity, or within Regions of Interest identified by the first level trigger at higher luminosities. Based on invariant mass, combinations of tracks are selected as likely decay products of the channel of interest and secondary vertex fits are performed. Events are selected based on properties such as fit quality and invariant mass. We present fast vertex reconstruction algorithms suitable for use in the second level trigger and the event filter (level three). We discuss the selection software and the flexible trigger strategies that will enable ATLAS to pursue a B-physics programme from the first running at a luminosity of about 10 31 cm −2 s −1 through to design-luminosity running at 10 34 cm −2 s −1 .
Introduction
ATLAS is one of two general-purpose experiments currently being built at the Large Hadron Collider (LHC) in CERN [1]. The LHC is a proton-proton machine with a centre-of-mass energy √ s = 14 TeV scheduled to start up in 2008. From 2008 an initial "low-luminosity" running period is foreseen with a luminosity starting at about 10 31 cm −2 s −1 and rising to 2·10 33 cm −2 s −1 . After that the LHC will run at the design luminosity of 10 34 cm −2 s −1 .
ATLAS has a three level trigger system [2] which reduces the initial 40 MHz rate to about 200 Hz of events to be recorded. The first level trigger (LVL1) is a hardware-based trigger which makes a fast decision (with latency 2.5 µs) as to which events are of interest for further processing. The LVL1 reduces the trigger rate to below 75 kHz and identifies regions of the detector ("Regions of Interest", RoIs) which contain interesting signals (e.g. electrons, muons or jets). The LVL1 RoIs are used in the subsequent trigger levels to guide the reconstruction.
The High Level Trigger (HLT) is software-based and consists of two levels. At Level 2 (LVL2) the full granularity of the detector is used to confirm the LVL1 results and then to combine information from various sub-detectors within the LVL1 RoIs. This stage of event selection employs fast reconstruction algorithms and has a time budget of about 40 ms. The LVL2 output rate is about 1-2 kHz. Finally at the Event Filter (EF), "offline-like" algorithms are used along with the full alignment and calibration information to produce a final decision about whether or not an event is accepted. With an execution time of about 4 s, the rate is reduced to 200 Hz.
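To make the rejection at each level concrete, the quoted rates imply the reduction factors computed by the short Python sketch below; it simply divides successive rates, taking 1.5 kHz as an illustrative midpoint of the quoted 1-2 kHz LVL2 output rather than an official figure.

# Back-of-the-envelope rejection factors implied by the quoted trigger rates.
# 1.5 kHz is an assumed midpoint of the 1-2 kHz LVL2 output quoted above.
rates_hz = [("bunch crossings", 40e6), ("LVL1", 75e3),
            ("LVL2", 1.5e3), ("EF", 200.0)]
for (prev, r_prev), (stage, r) in zip(rates_hz, rates_hz[1:]):
    print(f"{stage}: factor {r_prev / r:,.0f} reduction relative to {prev}")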
ATLAS has a well-defined B-physics programme which includes CP violation (in B 0 d → J/ψK 0 S and B 0 s → J/ψφ), B 0 s oscillations (using the B 0 s → D s π/a 1 channels), and a search for rare decays (e.g. B 0 s,d → µ + µ − (X) and the radiative decays B 0 s → φγ, B 0 d → K * γ). Since at the LHC design energy the cross-section for bb is relatively high (1% of pp collisions will contain a bb pair), the challenge for the B-physics trigger is to select the events of interest from a background of non-bb and other bb events. ATLAS is a general-purpose experiment with an emphasis on high-p T physics and as such has only a limited bandwidth (about 5-10%) for B-physics events. Thus accommodating the B-physics programme requires a highly selective trigger with exclusive or semi-inclusive reconstruction of decays.
This paper discusses flexible luminosity-dependent strategies for the ATLAS B-physics trigger and focuses on software tools for B-physics event selection, in particular, a fast vertex fitting tool specially developed for the LVL2 B-physics trigger.
ATLAS B-physics trigger strategy
Triggering on B-physics events in ATLAS is initiated by the LVL1 muon trigger. Although the branching ratio for b → µ is only 10%, this channel provides a clean signature already at LVL1 and allows for flavour tagging. The main non-prompt background is due to real muons from π and K in-flight decays. This background is reduced at the LVL2 trigger by matching track parameters of muon candidates with those of tracks reconstructed in the Inner Detector.
At LVL2 the muon candidates identified at LVL1 will be confirmed firstly in the muon detector (using tracking based on the precision muon chambers) and then by matching muon tracks with tracks reconstructed in the Inner Detector (ID). The track reconstruction in the ID and Muon detector uses special-purpose fast algorithms customized for the use at LVL2 [3], [4], [5]. After track reconstruction at the LVL2, event selection is accomplished by a combinatorial search for suitable combinations of tracks, e.g. opposite charge-sign track pairs for J/ψ → µ + µ − or J/ψ → e + e − or track triplets for D s → φ(KK)π. At this stage the vertex fitting algorithms are employed and cuts on invariant mass and vertex fit quality are applied. The event selection is then refined at the Event Filter where the lower event rates allow the use of more sophisticated offline algorithms to reconstruct the tracks (within RoIs or using the "Fullscan" approach) and find B-decay vertices [6].
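As an illustration of the combinatorial step, the following Python sketch selects opposite-charge track pairs and keeps those whose di-muon invariant mass falls in a J/ψ window; the track representation and the window boundaries are illustrative assumptions, not the actual LVL2 data model or cut values.

import math
from itertools import combinations

MUON_MASS = 0.10566  # GeV

def inv_mass(p1, p2, m=MUON_MASS):
    # Two-body invariant mass from 3-momenta (px, py, pz) in GeV,
    # assigning mass hypothesis m to both tracks.
    e1 = math.sqrt(m * m + sum(c * c for c in p1))
    e2 = math.sqrt(m * m + sum(c * c for c in p2))
    px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max((e1 + e2) ** 2 - px * px - py * py - pz * pz, 0.0))

def jpsi_candidates(tracks, window=(2.8, 3.4)):
    # tracks: list of (charge, (px, py, pz)); keep opposite-sign pairs whose
    # di-muon invariant mass lies in an illustrative J/psi window (GeV).
    cands = []
    for (q1, p1), (q2, p2) in combinations(tracks, 2):
        if q1 * q2 < 0:
            m = inv_mass(p1, p2)
            if window[0] <= m <= window[1]:
                cands.append((m, p1, p2))
    return cands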
For initial running at about 10 31 cm −2 s −1 events will be triggered by a single LVL1 muon candidate with a p T threshold of about 4 GeV rising to 6-8 GeV at a luminosity of 2·10 33 cm −2 s −1 . In order to increase the trigger efficiency for hadronic and electromagnetic B-decay channels during initial luminosity running an approach based on LVL2 track reconstruction within the entire volume of the Inner Detector ("Fullscan") will be used for events with a single muon trigger [7].
At lower luminosities (< 2 · 10 33 cm −2 s −1 ) a single muon trigger will be used at LVL1 with a search for additional features at the HLT. The trigger efficiency for di-muon events can be improved by searching for a second muon below the LVL1 threshold in an enlarged RoI around the confirmed LVL1 muon. This search starts in the ID followed by an extrapolation of the reconstructed tracks into the muon system. This approach has been successfully demonstrated for the J/ψ → µ + µ − trigger [5]. At lower luminosities additional LVL1 RoIs will be used to facilitate the HLT reconstruction of hadronic (using Jet RoI) or radiative (using electromagnetic (EM) RoI) decays of the other b quark. For example, additional LVL1 RoIs such as jet or EM will be used by the HLT to select channels containing a J/ψ → e + e − or exclusive B-hadron decays. In the case of J/ψ → e + e − an EM RoI with E T threshold of 2 GeV will be used in addition to the muon RoI that triggered the event. As for the di-muon final state, the efficiency for J/ψ can be increased by searching for a second electron, below the LVL1 threshold, in an enlarged RoI. Electron candidates are identified using information from the Transition Radiation Tracker (TRT) and Calorimeter.
At design luminosity only a di-muon LVL1 trigger will be used. This strategy keeps the trigger rate within the allocated budget and provides sufficient bandwidth for rare B decays with two muons in the final state (B 0 s,d → µ + µ − (X)).
3. Example: D s → φ(KK)π selection at low luminosity in the B 0 s → D s π/a 1 channels
This event selection is based on tracks reconstructed within a Jet RoI. Pairs of opposite-sign tracks passing a φ(K + K − ) mass cut are selected. This selection is followed by a search for a third track to form a K + K − π triplet. A 3-σ cut on the invariant mass of track combinations is applied for all vertex candidates. Note that, for the moment, vertexing is employed only at the LVL2 stage.
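A minimal Python sketch of this two-stage cascade selection follows; the mass-window widths stand in for the 3-σ cuts and, like the track representation, are illustrative assumptions rather than the tuned ATLAS values.

import math
from itertools import combinations

K_MASS, PI_MASS = 0.4937, 0.1396      # GeV
PHI_MASS, DS_MASS = 1.0195, 1.9685    # GeV

def mass(parts):
    # Invariant mass of a list of (mass hypothesis, (px, py, pz)) in GeV.
    e = sum(math.sqrt(m * m + sum(c * c for c in p)) for m, p in parts)
    px = sum(p[0] for _, p in parts)
    py = sum(p[1] for _, p in parts)
    pz = sum(p[2] for _, p in parts)
    return math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))

def ds_candidates(tracks, phi_window=0.02, ds_window=0.06):
    # tracks: list of (charge, (px, py, pz)). Stage 1: opposite-sign pairs
    # under a kaon hypothesis inside a phi(1020) window; stage 2: add a third
    # track as a pion and cut on the K+K-pi mass around the D_s mass.
    cands = []
    for (i, (qa, pa)), (j, (qb, pb)) in combinations(enumerate(tracks), 2):
        if qa * qb >= 0:
            continue
        if abs(mass([(K_MASS, pa), (K_MASS, pb)]) - PHI_MASS) > phi_window:
            continue
        for k, (qc, pc) in enumerate(tracks):
            if k in (i, j):
                continue
            m = mass([(K_MASS, pa), (K_MASS, pb), (PI_MASS, pc)])
            if abs(m - DS_MASS) <= ds_window:
                cands.append((i, j, k, m))
    return cands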
The trigger rates after various steps of the trigger selection are shown in Table 1 for the low-luminosity regime with L = 10 33 cm −2 s −1 .
Table 1. Trigger rates for D s → φ(KK)π selection at L = 10 33 cm −2 s −1 . The successive selection steps reduce the rate from 20 kHz to 5 kHz and finally 0.2 kHz.
LVL2 algorithms for B-physics event selection
The selection of B-physics events benefits from the fast and efficient track reconstruction algorithms developed for the LVL2 trigger. They include fast pattern recognition algorithms for the silicon detectors (Pixel and SCT) with a Kalman filter-based track fit [3]. Track reconstruction and particle identification in the TRT are accomplished by a fast track-following algorithm [4] which uses tracks found in the silicon detectors as input and provides a full (i.e. silicon+TRT) track refit, thus significantly improving the accuracy of the track momentum estimation. In addition, there is a stand-alone TRT pattern recognition algorithm which can be used during initial running. The LVL2 muon tracking features a stand-alone muon reconstruction algorithm, which confirms the muons found at LVL1 using the precision chambers based on monitored drift tubes (MDT) [8]. The better momentum resolution gives a sharper p T threshold, which significantly reduces the trigger rate. Further rate reduction is achieved by a combined Muon-ID algorithm [9], which matches tracks found by the stand-alone muon reconstruction with tracks reconstructed in the ID. Further details regarding the performance of the LVL2 muon algorithms can be found in [10].
The LVL2 ID algorithms have proved to be highly efficient for the low-p T (about 1 GeV) tracks which are of interest for B-physics. As an example, Table 2 shows the track finding efficiency for tracks with p T ≥ 1.5 GeV from the D s → φ(KK)π decay.
A fast vertex fitting algorithm
Vertex finding and fitting is an important part of the B-physics trigger event selection. The LVL2 track reconstruction imposes additional constraints, providing input track parameters estimated at perigee points (the points of closest approach to the z-axis, which is parallel to the magnetic field) and track parameter errors in the form of covariance (rather than weight) matrices. In contrast, the vertex fitting algorithms described in the literature [11], [12] assume the uncertainties of the input track parameters to be described by weight (inverse covariance) matrices. However, if only track covariance matrices are available, these algorithms require them to be inverted beforehand, giving a substantial computing-time overhead.
To alleviate this drawback, a fast vertex fitting algorithm capable of using track covariance matrices directly has been developed. The basic idea of the algorithm described below is to decouple position- and momentum-related track parameters in the vertex fit input by selecting "at-perigee" rather than "at-vertex" track momenta as the fit parameters. This choice makes it possible to transform the input track parameters into two uncorrelated vectors: the measured momentum, and its linear combination with the measured track position at the perigee. This linear combination comprises a new 2D measurement model, while the measured momenta and the corresponding blocks of the input track covariance matrices are used to initialize the vertex fit parameters and covariance matrix. This approach provides a mathematically correct and numerically stable initialization of the vertex fit. The reduced-size measurement model makes the proposed Kalman filter very fast and suitable for application in the trigger. A detailed description of the algorithm can be found in the Appendix.
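For orientation, the following Python/NumPy sketch shows a generic gain-matrix Kalman update of the kind this algorithm relies on, in which only the 2x2 residual covariance is inverted; the specific decoupling of position- and momentum-related parameters used by the actual algorithm (see the Appendix) is not reproduced here.

import numpy as np

def kalman_vertex_update(x, gamma, d, H, noise):
    # One gain-matrix Kalman update: x is the current fit vector (vertex
    # position plus perigee momenta), gamma its covariance, d the 2D residual,
    # H the 2 x len(x) linearized measurement matrix, noise the 2x2
    # measurement covariance. Only the 2x2 matrix S is inverted.
    S = H @ gamma @ H.T + noise         # residual covariance
    S_inv = np.linalg.inv(S)
    K = gamma @ H.T @ S_inv             # gain, K = M S^-1 with M = gamma H^T
    x_new = x + K @ d                   # updated state
    gamma_new = gamma - K @ S @ K.T     # updated covariance
    chi2 = float(d @ S_inv @ d)         # track's chi^2 contribution
    return x_new, gamma_new, chi2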
The algorithm has been validated on B s → J/ψ(µ + µ − )+Φ(K + K − ) data produced using a full ATLAS Monte Carlo (MC) simulation. Tracks were reconstructed by the LVL2 algorithms [3], [4]. In order to test the performance of the algorithm for the reconstructed tracks corresponding to the products of the B s decay, the Monte Carlo truth information was used to identify these tracks which were combined into J/ψ(µ + µ − ) and B s (µ + µ − K + K − ) vertices and fitted using the algorithm described above.
The normalized residuals (pulls) of the fitted vertex coordinates and χ 2 -probability distribution for the J/ψ(µ − µ + ) vertices are shown in Fig.1.
The pull distributions in Fig. 1 are nearly perfectly Gaussian with an r.m.s. close to 1. This indicates that the covariance matrices produced by the vertex fit correctly reflect the actual estimation errors of the vertex parameters. The χ 2 -probability distribution is flat, meaning that the χ 2 values produced by the vertex fit for the signal vertices are distributed in accordance with the χ 2 law. There is a small excess of vertices with a χ 2 -probability below 0.01. Typically, these vertices contain poorly reconstructed tracks, e.g. with pixel hit outliers: hits located near the true trajectory and erroneously included in the track fit.
The computing time of the vertex fit has been measured on a Xeon 2.4 GHz processor as a function of track multiplicity n. The number of iterations in the vertex fit has been fixed at 5. Average computing time is 0.2 ms and 0.36 ms for J/ψ(µ + µ − ) (n = 2) and B s (µ + µ − K + K − ) (n = 4) vertices respectively. The vertex fitting algorithm is very fast and its computing time is negligible with respect to the available LVL2 time budget.
Conclusion
We have outlined flexible strategies for the triggering of B-physics events at ATLAS from initial running through to the final design luminosity. The trigger software for B-physics event selection, based on efficient innovative algorithms, has been presented. This software has been successfully validated on simulated data and is ready for testing on the first LHC data.
Appendix A. A fast Kalman filter for vertex fitting
A vertex is reconstructed from n tracks with track parameters m_k measured at the perigee points and covariance matrices V_k, k = 1, ..., n. A Kalman filter performs vertex fitting progressively, track-by-track, so that after adding the k-th track the fit parameter vector X_k is

X_k = (R, q_1, ..., q_k),   (A.1)

where R is the vertex position and q_i, i = 1, ..., k, are the track momenta at the perigee points. This vector is related to the measured track parameters by the following measurement equation:

m_i = (m^r_i, m^q_i) = (h(R, q_i), q_i) + ε_i,

where m^r_i, m^q_i are the measured track position and momentum, respectively, h(·,·) is a 2D non-linear function, and cov(ε_i) = V_i.
If X̂_k is an estimate of the vector X_k and Γ_k is the covariance matrix of this estimate, then, in block-matrix notation,

Γ_k = ( C_k, E_k^T ; E_k, D_k ),

where C_k is the 3x3 vertex covariance, D_k is the 3k x 3k joint covariance matrix of the track momenta, and E_k is the 3k x 3 matrix of mutual correlations between the vertex and track parameters. If the (k+1)-th track is added to the vertex, the vector X_k is augmented and its prior estimate (prediction) X̃_{k+1} and predicted covariance Γ̃_{k+1} are

X̃_{k+1} = (X̂_k, m^q_{k+1}),   Γ̃_{k+1} = ( Γ_k, 0 ; 0, V^qq_{k+1} ),   (A.2)

where V^qq_{k+1} is the qq-block of the covariance V_{k+1}:

V_{k+1} = ( V^rr_{k+1}, V^rq_{k+1} ; V^qr_{k+1}, V^qq_{k+1} ).

The Kalman filter updates the prediction as follows:

X̂_{k+1} = X̃_{k+1} + K_{k+1} d_{k+1},   (A.3)

where K_{k+1} is the Kalman filter gain and d_{k+1} is a 2D residual,

d_{k+1} = m^r_{k+1} − h(R̃_{k+1}, q̃_{k+1}),   (A.4)

with R̃_{k+1} and q̃_{k+1} the vertex-position and (k+1)-th momentum components of X̃_{k+1}. The 2x2 covariance matrix S_{k+1} of the residual (A.4) is given by

S_{k+1} = H_{k+1} Γ̃_{k+1} H^T_{k+1} + V^rr_{k+1},   H_{k+1} = (A_{k+1}, 0, ..., 0, B_{k+1}),   (A.5)

where A_{k+1}, B_{k+1} are the matrices obtained by linearizing the measurement function h(·,·) in the vicinity of the prediction X̃_{k+1}:

A_{k+1} = ∂h/∂R |_{X̃_{k+1}},   B_{k+1} = ∂h/∂q_{k+1} |_{X̃_{k+1}}.   (A.6)

In turn, the Kalman filter gain is given by

K_{k+1} = M_{k+1} S^{-1}_{k+1},   (A.7)

where the (3k+6) x 2 matrix M_{k+1} is defined by the matrix expression M_{k+1} = Γ̃_{k+1} H^T_{k+1}. The updated covariance matrix of the estimate X̂_{k+1} is

Γ_{k+1} = Γ̃_{k+1} − K_{k+1} S_{k+1} K^T_{k+1}.   (A.8)

Finally, the χ² contribution of the (k+1)-th track to the total χ² of the vertex fit is given by

χ²_{k+1} = d^T_{k+1} S^{-1}_{k+1} d_{k+1}.   (A.9)

The system of equations (A.2)-(A.9) provides the basis for a fast vertex fitting algorithm. This algorithm has a number of computational advantages. Firstly, since it uses the so-called "gain matrix" formalism [12], the track covariance matrices V_k, k = 1, ..., n, are utilized directly, and as soon as the last track is processed the estimate of the full fit parameter vector X̂_n and its covariance Γ_n are available immediately. Another advantage is that the only matrices to invert are the 2x2 symmetric matrices S_k, k = 1, ..., n. This feature is especially important since vertex fitting is an iterative procedure: after processing all tracks, the linearization (A.6) is re-computed using the estimated vertex position and the Kalman filter cycle (A.2)-(A.9) is repeated for each track. Typically, a few iterations are required for convergence, so that the overall computational cost of the vertex fit can be significantly reduced by using the fast Kalman filter (A.2)-(A.9) in the iteration loop. | 2022-06-28T00:04:44.224Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "b8e2ab5169f69e2c26ae2182c01230ba06672028",
"oa_license": null,
"oa_url": "https://iopscience.iop.org/article/10.1088/1742-6596/119/2/022020/pdf",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "b8e2ab5169f69e2c26ae2182c01230ba06672028",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
259555590 | pes2o/s2orc | v3-fos-license | Genetic Investigation of Consanguineous Pakistani Families Segregating Rare Spinocerebellar Disorders
Spinocerebellar disorders are a vast group of rare neurogenetic conditions, generally characterized by overlapping clinical symptoms including progressive cerebellar ataxia, spastic paraparesis, cognitive deficiencies, skeletal/muscular and ocular abnormalities. The objective of the present study is to identify the underlying genetic causes of the rare spinocerebellar disorders in the Pakistani population. Herein, nine consanguineous families presenting different spinocerebellar phenotypes have been investigated using whole exome sequencing. Sanger sequencing was performed for segregation analysis in all the available individuals of each family. The molecular analysis of these families identified six novel pathogenic/likely pathogenic variants; ZFYVE26: c.1093del, SACS: c.1201C>T, BICD2: c.2156A>T, ALS2: c.2171-3T>G, ALS2: c.3145T>A, and B4GALNT1: c.334_335dup, and three already reported pathogenic variants; FA2H: c.159_176del, APTX: c.689T>G, and SETX: c.5308_5311del. The clinical features of all patients in each family are concurrent with the already reported cases. Hence, the current study expands the mutation spectrum of rare spinocerebellar disorders and implies the usefulness of next-generation sequencing in combination with clinical investigation for better diagnosis of these overlapping phenotypes.
Introduction
Spinocerebellar disorders encompass a diverse clinico-genetically heterogeneous continuum of neurodegenerative phenotypes including cerebellar ataxia and hereditary spastic paraplegias. Cerebellar ataxias are a group of movement disorders characterized by progressive degeneration of the cerebellum, causing balance and gait abnormalities [1,2]. Hereditary spastic paraplegias are caused by the degeneration of the corticospinal tract and dorsal column, causing lower-extremity spasticity, hyperreflexia, and extensor plantar responses [3]. Both disease entities may manifest together or with a wide range of remarkable overlapping neurological and non-neurological clinical features that include spastic paraparesis, cognitive impairment, spasticity, dystonia, dysarthria, dysphagia, and ocular abnormalities [4,5]. Globally, the prevalence of spinocerebellar disorders is estimated to be 1 per 10,000 individuals and until now, >200 genetic loci have been associated with these disorders following all modes of monogenic inheritances: autosomal dominant, autosomal recessive, X-linked, and mitochondrial [5][6][7][8][9][10][11].
The classical clinico-genetic strategies for identification of the underlying cause of spinocerebellar disorders are usually constrained due to their highly overlapping nature, however, next-generation sequencing (NGS) technologies provide an unbiased and hypothesis-free approach to determining the underlying disease-causing genetic factors. Since the advent of NGS, the spinocerebellar disorders' gene discoveries have significantly increased and about 47.3% of these genes have been identified using different NGS platforms [5]. NGS coupled with precise clinical phenotyping is a paradigmatic approach for the identification of genetic variants causing these rare heterogeneous disorders and can help us to understand the underlying patho-mechanisms which can ultimately be used to devise better diagnosis and prognosis strategies as well as enable more specialized and individualized therapies for spinocerebellar disorders.
Herein, whole-exome sequencing was performed to investigate nine unrelated consanguineous Pakistani families presenting with different spinocerebellar phenotypes. The study identified six novel and three previously reported genetic variants in seven genes, thus expanding and strengthening the genotypic-phenotypic spectra of the rare spinocerebellar disorders.
Ethical Approval and Sample Collection
This study was reviewed and approved by the ethics committee of the National Institute for Biotechnology and Genetic Engineering (NIBGE) in Faisalabad, Pakistan, and was conducted in accordance with the principles of the Declaration of Helsinki. Families presenting with rare neurological conditions were recruited from different areas of Pakistan, including Matta Swat, Banu, Faisalabad, Vehari, and Bahawalpur. After written informed consent from parent(s)/guardians, medical history and blood samples were collected from all the available affected and healthy individuals of the families and DNA extraction was performed using standard protocols.
Genetic Analysis
2.2.1. Whole-Exome Sequencing (WES)
To investigate the underlying genetic cause of the disease, WES of the probands from each family was performed. WES of Family A (IV:2, V:1), Family B (V:1), Family G (V:1), Family H (V:1), and Family I (V:1, V:8) was carried out at Novogene Co., Ltd. (Cambridge, UK); whereas WES of Family C (V:1), Family D (V:1), and Family F (IV:5) was performed at Macrogen, Korea using the Agilent SureSelect Human All Exome V6 Kit (Agilent Technologies, Santa Clara, CA, USA) as described elsewhere [12,13]. The Illumina platform NovaSeq 6000 (Illumina, Santa Clara, CA, USA) was used for paired-end (PE150) sequencing. The sequencing reads were aligned against the human reference genome using Burrows-Wheeler Aligner v0.7.17 (BWA). The hg19 reference assembly was used for Family A (IV:2, V:1), Family B (V:1), Family G (V:1), Family H (V:1), and Family I (V:1, V:8); whereas the sequencing reads alignment of Family C (V:1), Family D (V:1), and Family F (IV:5) were performed against hg38 genome assembly. The BAM files generated were sorted and the duplicate reads were marked using SAMtools v1.8 and Picard v2.18.9 (http://sourceforge.net/projects/picard/), respectively. Genotyping was performed with the Genome Analysis Toolkit v4.0 (GATK). Functional annotation and variant filtering were performed with Annotate Variation (ANNOVAR) and FILTUS, respectively. After annotation, the output file was retrieved in the form of a CSV file that was filtered for the identification of potentially pathogenic variants. WES of Family E (V:4, VI:1) was performed using an Ion PI Sequencing 200 Kit (200 bp read length, Life Technologies, Carlsbad, CA, USA) as described elsewhere [14]. The sequencing reads were aligned against hg19 assembly and variant detection was performed using v2.1 of the LifeScope Software (Life Technologies, Carlsbad, CA, USA). Custom R scripts were used to identify potentially damaging variants.
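A minimal Python sketch of this alignment-to-genotyping chain is given below, wrapping the command-line tools named above; all file names (reference, reads, picard.jar) are placeholders, and the exact options used in the study may differ.

import subprocess

def run(cmd, stdout=None):
    # Run one pipeline stage, aborting on a non-zero exit status.
    print(">>", " ".join(cmd))
    subprocess.run(cmd, stdout=stdout, check=True)

ref, r1, r2, s = "hg19.fa", "reads_R1.fq.gz", "reads_R2.fq.gz", "proband"

with open(f"{s}.sam", "w") as sam:                    # alignment (BWA-MEM)
    run(["bwa", "mem", "-t", "8", ref, r1, r2], stdout=sam)
run(["samtools", "sort", "-o", f"{s}.sorted.bam", f"{s}.sam"])
run(["java", "-jar", "picard.jar", "MarkDuplicates",  # duplicate marking
     f"I={s}.sorted.bam", f"O={s}.dedup.bam", f"M={s}.metrics.txt"])
run(["samtools", "index", f"{s}.dedup.bam"])
run(["gatk", "HaplotypeCaller", "-R", ref,            # genotyping (GATK4)
     "-I", f"{s}.dedup.bam", "-O", f"{s}.vcf.gz"])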
Variant filtration was performed in accordance with the pedigree's phenotype and pattern of inheritance. Considering that the pathogenic variants are rare in the population, the data variants were filtered against publicly available human polymorphism databases including 1000 Genomes project (https://www.internationalgenome.org/), Genome Aggregation Database (gnomAD, (https://gnomad.broadinstitute.org/), Exome Sequencing Project (ESP, https://evs.gs.washington.edu/EVS/), dbSNP, NHLBI Exome Variant Server, Complete Genomics 69, and priority was given to exonic and splice site variants with a minor allele frequency (MAF) of <1% in these databases. The variants were then prioritized based on their predicted pathogenic impact, i.e., higher weightage was given to frameshift, non-sense, splice site, and missense variants. The deleterious effect of the variants was then assessed using various in-silico prediction tools including Mutation taster, Polyphen-2, SIFT, and CADD scores above 10. Computational assessment of splicing effects used SpliceSiteFinder-like, MaxEntScan, NNSplice, GeneSplicer, ESEfinder, and RESCUE-ESE embedded in Alamut Visual Plus v1.6.1 (Sophia Genetics, Bidart, France) as well as SpliceAI Visual [15]. Further, pathogenic and likely pathogenic variants were prioritized based on the American College of Medical Genetics and Genomics (ACMG) variant classification system. The potential candidate variants were then visually inspected using the Integrative Genomics Viewer (IGV, https://software.broadinstitute.org/software/igv/) to remove any artefacts and false positives.
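The filtering logic can be sketched as follows with pandas; the column names and consequence labels are illustrative stand-ins for the actual fields of an ANNOVAR export.

import pandas as pd

# Illustrative column names for an ANNOVAR-style CSV export.
df = pd.read_csv("annovar_output.csv")

rare = df[(df["gnomAD_AF"].fillna(0) < 0.01) &        # MAF < 1% in gnomAD
          (df["1000g_AF"].fillna(0) < 0.01)]          # and in 1000 Genomes
damaging = rare[
    rare["ExonicFunc"].fillna("").str.contains(
        "frameshift|stopgain|nonsynonymous") |        # high-impact classes
    (rare["Func"] == "splicing")]
prioritized = damaging[damaging["CADD_phred"].fillna(0) > 10]
prioritized.to_csv("candidate_variants.csv", index=False)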
Homozygosity Mapping
AutoMap (Autozygosity Mapper, Basel, Switzerland) with default parameters was used for the identification of the runs of homozygosity in the whole-exome sequenced individuals, except for Family C and Family E [16]. AutoMap offers reliable homozygosity mapping results directly from standard WES outputs (i.e., VCF files). Family C presented an autosomal dominant inheritance pattern, and homozygosity mapping of Family E was performed manually.
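As a simplified illustration of what a homozygosity mapper derives from a VCF, the toy Python function below scans per-variant genotype calls for long stretches with few heterozygous calls; the thresholds are arbitrary, and AutoMap's actual algorithm is considerably more elaborate.

def runs_of_homozygosity(calls, min_len=25, max_het=1):
    # calls: per-chromosome list of (position, is_het) in genomic order.
    # A run ends once more than max_het heterozygous calls accumulate; runs
    # spanning at least min_len consecutive variants are reported as
    # (start_position, end_position). Both thresholds are arbitrary.
    runs, start, hets = [], 0, 0
    for i, (_, is_het) in enumerate(calls):
        if is_het:
            hets += 1
        if hets > max_het:
            if i - start >= min_len:
                runs.append((calls[start][0], calls[i - 1][0]))
            start, hets = i + 1, 0
    if len(calls) - start >= min_len:
        runs.append((calls[start][0], calls[-1][0]))
    return runs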
Sanger Sequencing
Sanger sequencing was performed using standard protocols for co-segregation analysis of the candidate pathogenic and likely pathogenic variants. Primers were designed using the Primer blast tool (www.ncbi.nlm.nih.gov/tools/primer-blast/) and the targeted region was amplified. Sequencing analysis was performed using an ABI-3730 DNA analyzer (Applied Biosystems, Waltham, MA, USA) and data obtained were visualized by Sequencher 5.0 software.
Clinical Findings
In this study, we investigated nine consanguineous Pakistani families presenting distinct neurological features. The clinical findings for all the patients are summarized in Table 1 and their representative images are given in Figure 1.
Patient V:4 is a six-year-old male, presenting with a more severe phenotype compared to the other affected individuals in the family. The disease onset was at three years of age. He never achieved the walking milestone and presented foot deformities. The mother also presented similar clinical features, but with severe hip dysplasia. The course of the disease is non-progressive in all the affected individuals of the family.
Family G
Family G comprises three affected individuals, two females (V:1, V:2) and one male (V:3), and one healthy sibling, born to first cousins (Figure 2). The patients presented with the typical clinical features of the disease, including ataxic gait, lower limb muscle atrophy, and brisk reflexes. The course of the disease was progressive and onset was around 3 years of age in all affected individuals. Dysarthria and a plantar reflex were observed in patients V:2 and V:3, while the proband (V:1) showed no plantar reflex and normal speech. Moderate cognitive impairment was also observed in all the affected individuals. No seizures were manifested by any of the patients.
Family H
Family H has four affected individuals, three males (V:1, V:2, V:4) and one female (V:3) who were born to consanguineous parents ( Figure 2). The disease onset was around 16 years of age. All the patients were manifesting a similar clinical phenotype. Initially, gait disturbance was observed followed by wheelchair dependence. No ocular abnormality was present in the patients. Mild to moderate intellectual disability was observed in all affected individuals. The course of the disease is progressive.
Family I
Family I comprises four affected individuals (Figure 2). Patient V:1 was a 32-year-old male, manifesting progressive ataxic gait, action tremor, dysarthria, slight dysmetria, and nystagmus. He achieved normal developmental milestones and the disease onset was at 12 years of age. The patient had a history of head trauma with cranial surgery at the age of 17 years and experienced febrile seizures in childhood until the age of 7-8 years. Patient V:4 was a 27-year-old female with disease onset at 12 years, presenting progressive ataxic gait with supported walking, action tremor, dysarthria, and dysmetria. Patient V:8 was a 37-year-old male, who achieved normal developmental milestones. The disease onset was at 15 years. High-steppage gait with support, choreic movements, nystagmus, and clawed fingers/toes were observed in the patient. Patient V:11 is a 23-year-old female with disease onset at 16 years of age, presenting with broad-based ataxic gait, choreic movements, weak eyesight, and fainting episodes.
Data analysis of Families G, H, and I revealed the previously reported variants FA2H: c.159_176del (p.53_58del), APTX: c.689T>G (p.Val230Gly), and SETX: c.5308_5311del (p.Glu1770fs), respectively [17][18][19][20] (Figure 2 and Figure S3). According to ACMG guidelines, all variants are classified as either pathogenic or likely pathogenic, except the ALS2 variants, which are classified as variants of uncertain significance (VUS). The list of filtered variants in all the families is available in Table S1. Importantly, the c.2171-3T>G variant is predicted to cause cryptic acceptor site activation that would likely result in a frameshift through the inclusion of two nucleotides and a slight shifting of exonic splice enhancer (ESE) hexamers (Figure 3). The in silico prediction tools predicted all of the identified variants to be deleterious (Table 2). Retrospectively, homozygosity mapping, performed manually for Family E (individuals V:4, VI:1) (Table S2) and from the WES VCF files of all the other affected individuals (except Family C), identified homozygous regions encompassing the rare homozygous variants reported in this study (Figure S1).
Discussion
The human brain is a complex entity, the development and functionality of which are regulated by the genetic code. Variation in this genetic code can give rise to various disease conditions, including neurological diseases with potentially lifelong consequences. These disorders were estimated to account for 10.2% of the global disease burden and, furthermore, for a very high share of deaths, approximately 16.8% [21]. These disorders manifest with immense clinical variability, often with overlapping phenotypic features and genetic heterogeneity, hence posing a great challenge to identifying the underlying genetic cause of the disease and understanding the involved patho-mechanisms. Consanguineous unions serve as the most likely framework for the study of genetic disorders with a recessive mode of inheritance, as consanguinity increases the chances of inheriting pathogenic variants in the homoallelic state from parents to offspring. Pakistan is the fifth most populous country and has a very high rate of consanguinity, with more than 60% consanguineous unions [22]. Thus, the Pakistani population presents a unique opportunity to discover the recessive genetic causes of rare inherited disorders. Currently, NGS is the most pertinent approach for identifying the underlying genetic cause of rare disorders in both research and clinical settings, leading to the resolution of the diagnostic odyssey. However, in Pakistan, limited resources restrain the use of NGS for diagnostic purposes.
Herein, nine unrelated consanguineous families were studied. The families were manifesting a range of spinocerebellar phenotypes including spastic paraplegia 15 (Family A), spastic ataxia (Family B), spinal muscular atrophy with lower extremity predominance type 2 (Family C), infantile ascending hereditary spastic paraplegia (Family D and E), spastic paraplegia 26 (Family F), spastic paraplegia 35 (Family G), ataxia with oculomotor apraxia type 1 (Family H), and ataxia with oculomotor apraxia type 2/spinocerebellar ataxia with axonal neuropathy 2 (Family I). They were investigated for the identification of the genetic basis of the disease by employing whole-exome sequencing followed by Sanger sequencing.
In our cohort, there are a total of 31 affected individuals, manifesting overlapping neurological features similar to the already reported cases. The key clinical symptoms observed are gait abnormalities or no ambulation in 31/31 patients, cognitive deficit in 11/31 patients, and muscular atrophy in 12/31 patients. The additional clinical features of all patients are summarized in Table 1, and are in line with the established clinical presentation of the respective diseases.
In our study, we identified four novel pathogenic variants in ZFYVE26, SACS, BICD2, and B4GALNT1 and two ALS2 variants of uncertain significance (VUS) in six unrelated consanguineous families (Families A-F), and three previously reported variants in FA2H, APTX, and SETX in Families G-I [17][18][19][20]. The ALS2 VUS are considered as the underlying cause of the disease because the phenotypic features observed in our patients are consistent with ALS2-related disease, but further functional analysis is required to prove the possible pathogenicity. Family C follows an autosomal dominant inheritance pattern, while all of the other families present an autosomal recessive mode of inheritance. In Family C, three likely pathogenic variants were identified, in BICD2, PTPN11, and LAMA5. BICD2 and PTPN11 are known to cause autosomal dominant disorders (OMIM ID 609797 and OMIM ID 176876, respectively), whereas LAMA5 is involved in an autosomal recessive phenotype (OMIM ID 601033). On the basis of the mode of inheritance and the phenotypic features manifested by the affected individuals in our families, we considered the variant affecting BICD2 as the plausible disease-causing variant.
We identified frameshift biallelic variants in ZFYVE26: c.1093del, B4GALNT1: c.334_335dup, and SETX: c.5308_5311del in Families A, F, and I, respectively, and a novel non-sense variant in SACS: c.1201C>T in Family B, potentially resulting in truncated proteins lacking the crucial functional domains and hence playing a role in disease pathology ( Figure S2). ZFYVE26 encodes a zinc-finger protein spastizin that is highly expressed in the brain and is a component of the adaptor-related protein complex 5 (AP5) that plays a crucial role in autophagic lysosomal reformation. Its dysfunctioning can result in the loss of neuronal cells [23,24]. Our study reports the second identified case of ZFYVE26 related spinocerebellar degeneration in the Pakistani population [25]. B4GALNT1 translates the β1, 4-N-acetylgalactosaminyl transferase-1 (GalNAcc-T) enzyme involved in the synthesis of sialic acid-containing complex gangliosides (GM2 and GD2) which are present in the plasma membrane of cells, predominantly in neurons. They are involved in signal transduction, synaptic plasticity, and endocytosis; hence crucial for the nervous system [26]. The alteration in the enzyme results in disruption of the ganglioside metabolic pathway and subsequently leads to neurodegenerative lysosomal storage disorders [27][28][29]. SETX encodes for the Senataxin protein, which has DNA/RNA helicase activity and is considered to play a vital role in DNA double-strand repair and RNA splicing mechanisms [30].
Family C is segregating the novel heterozygous missense variant in BICD2: c.2156A>T in an evolutionarily conserved region and thus possibly affecting the protein's structure. The BICD2 (bicaudal D homolog 2) protein is implicated in axonal transport along microtubules and maintains the functional integrity of lower motor neurons. The protein regulates the trafficking of crucial cellular cargoes such as Golgi, secretary vesicles, and mRNA by interacting with the dynein-dynactin complex and small GTPase RAB6. The mutation in BICD2 results in impaired axonal transport, leading to motor neurodegeneration and causing the disease [31,32].
In Family D, a novel homozygous splice site variant in ALS2: c.2171-3T>G, located in the splice acceptor site of intron 10, was identified. The c.2171-3T>G variant is predicted to reside in an AG-exclusion zone, slightly shift ESE hexamers, and abolish the native splice acceptor site, likely resulting in a frameshift through the inclusion of two nucleotides. Whether this results in all transcripts showing the predicted frameshift, or in leaky splicing that includes evidence of other effects such as exon skipping or partial wild-type splicing, remains to be determined. The NetGene2-2.42 server (https://services.healthtech.dtu.dk/services/NetGene2-2.42/) predicted that this variant results in the skipping of the wild-type acceptor splice site of intron 10 and thereby activation of nearby dormant cryptic splice sites, presumably leading to nonsense-mediated decay (NMD) of the mutant transcript or aberrant protein production [33,34]. ALS2 encodes the alsin rho guanine nucleotide exchange factor protein, which is highly expressed in the central nervous system, particularly in the cerebellum, and is involved in the activation of the small GTPase RAB5, thereby regulating endosome as well as mitochondrial trafficking and fusion in neurons [35]. The homozygous missense variants ALS2: c.3145T>A and APTX: c.689T>G were identified in Families E and H, respectively. APTX encodes the aprataxin nuclear protein, which is involved in DNA single-strand break repair and is highly expressed in the cerebellum, cerebellar cortex, spinal cord, basal ganglia, and other nervous system tissues [36,37].
The previously reported homozygous 18-bp deletion FA2H: c.159_176del, identified in Family G, results in the loss of highly conserved amino acid residues, thus affecting protein function and causing the disease. FA2H encodes the lipid biosynthetic enzyme fatty acid 2-hydroxylase, which is involved in the formation of the myelin sheath of brain cells, which protects the neuronal axons from damage and enhances the nerve conduction rate [38]. The same variant (FA2H: c.159_176del) was recently reported in three Pakistani families, classifying it as a founder mutation in the Pakistani population [18].
The present study expands the mutation spectrum of spinocerebellar disorders and further strengthens the usefulness of WES as an efficient and convenient approach for identifying the underlying cause of these rare genetic diseases owing to their highly clinically overlapping presentation, particularly in inbred populations such as Pakistan's. This will also help to improve the diagnosis and prognosis of rare spinocerebellar disorders, thus leading to the establishment of better genetic counselling and carrier screening opportunities and thereby reducing the disease burden.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/genes14071404/s1. Figure S1: Homozygosity mapping performed using VCF files for all the affected individuals that were subjected to WES; Table S1: List of filtered variants; Table S2: Homozygosity mapping regions, Family E (V:4, VI:1).
Informed Consent Statement: Written informed consent was obtained from all the participating individuals or their guardians before the commencement of this study.
Data Availability Statement:
The data from this study can be made available upon request. | 2023-07-11T15:29:09.771Z | 2023-07-01T00:00:00.000 | {
"year": 2023,
"sha1": "1f814001a02eca0092823ce5883198edd3357cc0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4425/14/7/1404/pdf?version=1688622677",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "49ed08892dbb50b213e01548d62ce8ab4c79406c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
} |
85004976 | pes2o/s2orc | v3-fos-license | Spatial variability of pores in oxidic Latosol under a conservation management system with different gypsum doses.
Soil structure is altered when subjected to agricultural processes, i.e., a new spatial organization of the pore system is formed, with consequences for the soil's physical quality. Thus, the aim of this work was to visualize and quantify, through X-ray computed tomography, the variability in pore diameter distribution in an oxidic Latosol under a conservation management system that uses different gypsum doses. Three trenches were dug at random, lengthwise along the plant row, in a very clayey gibbsitic dystrophic Red Latosol subjected to the following gypsum doses: G0, absence of gypsum; G7, 7 Mg ha-1; and G28, 28 Mg ha-1 of additional gypsum, applied on the soil surface along the plant row. Samples with preserved structure were collected in plexiglass cylinders at depths of 0.20-0.34, 0.80-0.94 and 1.50-1.64 m, after six years of coffee cultivation, for quantification of the 3D pores detected by X-ray computed tomography. The spatial variability of the soil structure was evaluated through semivariograms generated from the 3D grayscale images. The diameter distribution of the detectable pores was obtained through data mining. For the statistical inference, the 'geoR' package was used for the semivariograms and 'randomForest' for the data mining, in R language. The greatest spatial continuity of pores occurred in treatment G7 at all three depths. The combined effects of the management system promoted the greatest spatial variability of soil structure in treatment G28. Based on the semivariograms, it can be inferred that adoption of the system under study promoted modifications of the pore network in all directions (X, Y and Z), but with better pore continuity in the vertical direction (Z).
INTRODUCTION
The management of agricultural systems affects the soil physical attributes and thus the pore spatial organization, which justifies studies that seek to evaluate soil structural quality. Soil structure is formed by aggregates and a wide network of pores, especially inter-aggregate pores, ensuring good functioning of the hydric, physico-chemical and biological processes and plant performance (Luo;Lin, Li, 2010;Martin et al. 2012).
In Latosols under agricultural management, the inter-aggregate pores are seriously affected by the first few passes of machinery, given the high susceptibility of these soils to compaction (Severiano et al., 2013). Therefore, any alteration caused by anthropic factors should be evaluated, among other purposes, to detail the possible causes of spatial pore variability (Munkholm, Heck and Deen, 2012; Luo, Lin and Li, 2010).
In this context, the use of a management system that employs conservation practices in coffee crops in the Cerrado region is gaining popularity for combining chemical and physical improvements in soil-water relationships, aiming to raise productivity while pursuing environmental sustainability. Within the principles of conservation agriculture (Raij, 2008), this system provides conditions that enhance the performance of the coffee root system, so that there are improvements in the utilization of the water available at depth and efficient use of the nutrients distributed in the soil profile (Serafim et al., 2011).
For the evaluation of changes in soil porosity promoted by management, X-ray CT scan is effective for enabling three-dimensional mapping of the soil structural components, allowing a qualitative and quantitative evaluation (Borges;Pires, 2012).
So, the aim of this work was to visualize and quantify, through X-ray CT scan, the pore distribution in a oxidic Latosol submitted to a conservation management system with different gypsum doses.
Study area
The study was conducted in a coffee plantation in the municipality of São Roque de Minas, in the physiographic region of the Upper São Francisco, MG, Brazil, at coordinates 20º15'45" S and 46º18'17" W and 850 m altitude. The experimental cultivation was established in November 2008 and has been conducted according to the practices of a soil conservation management system. The regional climate is Cwa, according to the Köppen classification, with annual precipitation of 1,344 mm and a well-defined dry season from May to September (Menegasse, Gonçalves and Fantinel, 2002).
According to the premises of the conservation management system of coffee (Coffea arabica L.) crop, the cultivar yellow Catucaí was planted in a narrow row spacing of 2.50x0.65m (between plants and between rows, respectively). Tillage was conducted using one plowing and two harrowings with application of amendments throughout the total area (4 Mg ha -1 dolomitic limestone + 1.92 Mg ha -1 agricultural gypsum).
A subsoiler, followed by fertilizer application, was used to open the 0.60 m deep and 0.50 m wide furrow and to mix the soil to a depth of 0.40 m (Ticianel, 2013), allowing the incorporation of the basic fertilizer (formula 08-44-00, enriched with 1.5% Zn and 0.5% B). This furrow was corrected at greater depth with 8 Mg ha -1 of dolomitic limestone (2 kg m -1 ) in all treatments. The coffee seedlings were planted between the second half of October and the first half of November 2008.
After planting, the treatments received different doses of additional gypsum, this amendment being covered with soil material and mixed with the inter-row plant material (mixture piled-up at the base of the coffee plant stem).
Together with the installation of the crop, Brachiaria decumbens (Syn. Urochloa) was planted between the rows; it was periodically cut with a brush cutter, which minimizes competition with the main crop and allows the plant residue produced to be distributed along the row as well as between rows. This practice is used to improve soil structure and serves as protection against erosive agents (Lima et al., 2012). The related crop operations were carried out with animal-traction equipment; only the harvesting was done mechanically. The nutritional monitoring and fertilization management of the coffee crop were conducted based on leaf analysis (Serafim et al., 2011; Carducci et al., 2013).
The parcels contain 10 rows with 36 plants each, totaling 360 plants per plot with an area of 585 m 2 . The boundary corresponds to 3 plants at the beginning of the plot and two rows on the sides, totaling 360 m 2 .
The following treatments were included in this study: G0, absence of additional gypsum; G7, 7 Mg ha -1 ; and G28, 28 Mg ha -1 of additional gypsum, both applied on the surface of the plant row. The selection of these treatments was based on the hypothesis of possible structural alterations promoted by the application of gypsum; to test it, we evaluated the dosage recommended in the literature (G7), the reference dose for the system itself (G28), and the treatment without additional gypsum (G0).
Soil sampling and physical, chemical and mineralogical characterization
For the sampling and characterization of the soil, three random trenches were dug lengthwise along the plant row, with dimensions of 0.70 m (width) x 1.50 m (length) x 1.50 m (depth), and samples were then collected for physical, chemical and CT scan analyses. It is worth mentioning that at the time of sampling (September 2011) the crop had been cultivated for three years.
The intact soil cores were sampled by hand, using plexiglass cylinders (0.065 m diameter and 0.14 m height) equipped with a specially designed aluminum sampling ring for the CT scan analysis, plus volumetric rings (0.065 m diameter and 0.025 m height) for the remaining analyses. The sampling occurred between plants (0.65 m) and just below the gypsum layer, at depths of 0.20-0.34, 0.80-0.94 and 1.50-1.64 m, with three replicates for each treatment (G0, G7 and G28), totaling 27 samples.
In the laboratory, the disturbed soil samples, collected right after the undisturbed samples, were air-dried and passed through a two-millimeter mesh sieve for further analyses.
The SiO 2 , Al 2 O 3 and Fe 2 O 3 contents were determined by sulfuric acid digestion and used in the calculation of the Ki (SiO 2 /Al 2 O 3 ) and Kr [SiO 2 /(Al 2 O 3 + Fe 2 O 3 )] molecular ratios (Embrapa, 2013). Values of Ki and Kr below 0.75 characterize this soil as very weathered, which is evidence of its sesquioxidic, gibbsitic mineralogy. The kaolinite and gibbsite contents were derived from XRD measurements and stoichiometric ratios derived from their ideal chemical formulas, according to Resende, Bahia Filho and Braga (1987) (Table 1).
The particle size analysis was performed by employing slow agitation of the soil suspension, using NaOH 1 mol L -1 for 16 hours. Mean values in g kg -1 were, respectively for clay, silt and sand: 819, 157 and 24 at a depth of 0.20-0.34 m; 848, 127 and 25 at a depth of 0.80-0.94 m; and 886, 89 and 25 at a depth of 1.50-1.64 m. Subsequently, we conducted fertility analyses, in which aliquots of these samples were taken to determine the pH, sorptive complex and organic matter contents of the soil (Embrapa, 2011) (Table 2).
Acquisition, reconstruction and binarization (Thresholding) of the 3D images
For the CT scan, the soil cores in plexiglass cylinders were dehydrated in an oven at 40 °C until constant weight, to minimize possible interference by water films on the X-ray attenuation, and then scanned in a microCT scanner (EVS/GE MS8X-130), a third-generation preclinical cone-beam system equipped with a tungsten X-ray tube, at 120 kV and 170 mA with an integration time of 3500 ms, generating 2D axial projections of X-ray attenuation imagery. Excitation energy of 100 kV and 130 mA was employed for all samples, in the Soil Image Laboratory at the University of Guelph, Canada.
In order to obtain greater accuracy in the analysis and elimination of possible structural alterations generated during the sampling, a 0.033 m slice from the middle of the core was selected for scanning. As the X-ray source emits polychromatic X-rays (Clausnitzer;Hopmans, 2000), we employed prefiltering with a high-pass copper foil (0.5 mm) in order to reduce beam-hardening artifacts and to maximize the contrast between the different phases of the soil core (solid and air).
The 2D axial projections were acquired and reconstructed with 20 µm spatial resolution (pixel size) and saved in 16-bit radiometric resolution. Then 3D subvolumes of interest were selected at the exact center of each original image and the reconstruction was done using a proprietary filtered back-projection software called "eXplore Reconstruction Utility" (GE Healthcare, 2006). The final isometric volume (666 voxels x 666 voxels x 550 slices) was reconstructed at 60 µm voxel size, aiming to maximize both the region of interest and the spatial resolution, within a manageable 500 MB file size (Figure 1). For the purpose of comparing the X-ray attenuation in the CT imagery, we used values on the Hounsfield scale (defined relative to air [-1000 HU] and water [0 HU]) and a calibration procedure based on two capillary tubes (one filled with water and the other with air) inserted between the inner wall of the plexiglass tube and the sample at the time of scanning. Subsequently, the attenuation coefficients of water and air were calculated, and a Gaussian smoothing filter (radius of 1 voxel) was applied to reduce image noise and artifacts with the aid of MicroView (GE Healthcare, 2006), prior to subsequent analyses in NIH ImageJ (Rasband, 2012).
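The two-point Hounsfield calibration can be sketched in Python as follows; the region-of-interest coordinates for the water and air capillaries are placeholders.

import numpy as np

def to_hounsfield(mu, mu_water, mu_air):
    # Two-point rescaling of reconstructed attenuation values so that
    # water maps to 0 HU and air to -1000 HU.
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

volume = np.random.rand(64, 64, 64)        # stand-in for a CT subvolume
mu_w = volume[10:20, 10:20, 10:20].mean()  # placeholder water-capillary ROI
mu_a = volume[40:50, 40:50, 40:50].mean()  # placeholder air-capillary ROI
hu = to_hounsfield(volume, mu_w, mu_a)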
3D image processing and analyses
The 3D image processing followed the protocol of the Soil Image Laboratory, University of Guelph, Canada. Its first step is thresholding, which involves converting each voxel of the grayscale image (whose values proportionally express the local X-ray attenuation coefficients) into a binary image, distinguishing void from non-void in the selected subvolume images.
The thresholding was done in NIH ImageJ (Rasband, 2012), by a method based on the work of Schlüter, Weller and Vogel (2010), employing both Laplacian edge detection and seeded region growing to assign the voxels associated with the zero-crossings, but first enforcing clamping of the grayscale image (considering the histogram peak positions for air and solid) to minimize unnecessary edges.
With the public-domain image analysis software NIH ImageJ, all analyses were done in full 3D mode, which allowed the differentiation of data categories according to the desired micromorphological size and shape classes.
To obtain the spatial data, the "Semivariance 3D" plugin of NIH ImageJ was used, with the X-ray attenuation values of the grayscale image. We obtained semivariances for the 3D grayscale image along the orthogonal directions (X, Y, Z), with the semivariance standardized to [0-1]. The construction of the experimental semivariogram identified the spatial dependence amplitude of the variable under study and also defined the spatial variability structure (Goovaerts et al., 1999).
The "Analyze Particle" function of NHI ImageJ was used to calculate the pore dimensions (volume and area). This function detects and measures the objects in the binary image to obtain data relative to pores present in the image volume under study (n = 550). From this result the equivalent diameter to a sphere (3D images, spatial geometry simulation) and as the shape factor, the sphericity of the pore (0 = less rounded, angular, 1 = more rounded and smooth) were calculated by equation 1: where: π ≈ 3.141592…8, V = volume (mm³) and A = area (mm²).
Statistical analyses
Spatial analyses consisted of the construction of the experimental semivariogram from 3D images in grayscale.
The semivariance is a function of the distance h (Equation 2), estimated over a discrete set of lag distances and expressed as a scatterplot that allows the variographic analysis of the spatial dependence amplitude of the variables studied (Faraco et al., 2008), in this case the X and Y orthogonal directions (horizontal) and Z (vertical), thus defining the parameters required for the estimation of the characteristics resulting from the spatial variability structure (Ávila, Mello and Silva, 2010). It is noteworthy that, although we were working with three axis directions (X, Y, Z), the semivariance was calculated over the X, Y and Z planes. Thus, the standardized semivariograms were estimated by the classical method, through the estimator

γ̂(h) = 1/(2N(h)) Σ_{i=1}^{N(h)} [Z(x_i) − Z(x_i + h)]²,    (2)

in which γ̂(h) is the semivariance estimator, N(h) is the number of pairs of measured values Z(x_i), Z(x_i + h) separated by a vector distance h, and Z(x_i) are realizations of the random variable (Journel and Huijbregts, 1978; Isaaks and Srivastava, 1989).
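A direct implementation of the estimator of Equation 2 along one orthogonal axis of a 3D grayscale array might look as follows in Python/NumPy; standardizing by the maximum semivariance is an assumption about how the plugin rescales the output to [0-1].

import numpy as np

def directional_semivariance(img, axis, max_lag):
    # Classical estimator of Equation 2 along one orthogonal axis
    # (0 = Z, 1 = Y, 2 = X) of a 3D grayscale array: for each lag h, average
    # half the squared differences of all voxel pairs separated by h.
    gamma = np.empty(max_lag)
    for h in range(1, max_lag + 1):
        a = np.take(img, range(img.shape[axis] - h), axis=axis)
        b = np.take(img, range(h, img.shape[axis]), axis=axis)
        gamma[h - 1] = 0.5 * np.mean((a - b) ** 2)
    return gamma / gamma.max()   # standardized to [0, 1]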
We then proceeded to estimate the experimental semivariance for each replication and, using the average semivariance, fitted the exponential model (Equation 3), which presented the best fit to the data; the model parameters were then determined: nugget effect (C0), sill (C0 + C1) and theoretical and practical range (a), in the R language (R Development Core Team, 2012), more specifically with the 'geoR' package (Ribeiro Junior and Diggle, 2001), both freely accessible under the GPL (General Public License):

$\gamma(h) = C_0 + C_1\left[1 - \exp\left(-\dfrac{h}{a}\right)\right]$  (3)

Subsequently, prediction intervals (PI) were constructed for the set of replications in order to compare the spatial variability; with the PI, it can be stated that 95% of the samples were predicted.
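As an illustration of this fitting step, a minimal sketch with the 'geoR' package named above follows; a simulated random field replaces the real 3D grayscale data, so all parameter values are illustrative assumptions.

```r
# Minimal sketch with 'geoR': empirical semivariogram and exponential-model fit.
library(geoR)
set.seed(42)
sim <- grf(400, cov.pars = c(1, 0.25), cov.model = "exponential")  # toy field
v   <- variog(sim, max.dist = 1)                 # empirical semivariogram
fit <- variofit(v, ini.cov.pars = c(1, 0.25),
                cov.model = "exponential", nugget = 0)
summary(fit)     # reports nugget (C0), partial sill (C1) and range parameter
plot(v); lines(fit)
# For the exponential model, the practical range is ~3x the fitted range
# parameter (the distance at which the semivariance reaches ~95% of the sill).
```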
The detectable pores were classified based on data mining with the 'randomForest' package (Liaw and Wiener, 2012) and other functions developed in the R language (R Development Core Team, 2012), which made it possible to generate various pore diameter classes and to detect the most contrasting ones. This generated 26 contrasting pore diameter classes distributed over a range of 0.2 to 1 mm in diameter, which corresponds to the limit between fine macropores and large mesopores (Bullock et al. 1985).
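A minimal sketch of such a random-forest classification, using the 'randomForest' package named above; the toy data frame stands in for the measured pore descriptors, so classes and values are invented for illustration.

```r
# Minimal sketch with 'randomForest'; synthetic pore descriptors only.
library(randomForest)
set.seed(7)
pores <- data.frame(
  diameter   = runif(300, 0.2, 1.0),  # mm, the range cited in the text
  sphericity = runif(300, 0, 1),
  treatment  = factor(sample(c("G0", "G7", "G28"), 300, replace = TRUE))
)
rf <- randomForest(treatment ~ diameter + sphericity, data = pores,
                   ntree = 500, importance = TRUE)
print(rf)        # out-of-bag error and confusion matrix
importance(rf)   # which descriptors best separate the treatments
```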
RESULTS AND DISCUSSION
The empirical semivariograms indicated second-order stationarity for the variables evaluated, reflected by a clear and well-defined sill (asymptote), which permitted identification of the spatial dependence range in the orthogonal directions (X, Y and Z) of the 3D grayscale images. Thus, the spatial variability structure was defined (Goovaerts et al. 1999), as well as the parameters necessary to estimate its characteristics (Figures 2-10).
This assessment was needed to verify the spatial dependence of the structural components (mineral portion and pore space) in each level combination (3 treatments x 3 depths x 3 directions) obtained through the axes (X, Y and Z) of the 3D images. This allowed the identification of alterations promoted by the management system in question.
It was not possible to verify a distinct anisotropic structure by visual observation of the semivariogram alone; thus, by means of the prediction intervals (PI), the distinctions between events were revealed: at the 0.20-0.34 m depth, G0 and G7 showed the lowest range (a) (Figures 2 and 5), unlike G28, where a more pronounced narrowing occurred at the 0.80-0.94 m depth (Figure 9).
From the models obtained for the practical range (a) of the spatial dependence along the X, Y and Z axes, the greater (a) values were found in G7, at all depths, especially in the vertical direction (Z), followed by G0 and G28, respectively, at the 0.20-0.34 and 1.50-1.64 m depth ranges and in the horizontal directions (X and Y) at the 0.80-0.94 m depth range.
The (a) indicates the magnitude of the spatial dependency, so the higher range (a) in G7 relative to the other treatments suggests a higher spatial continuity of the soil structure in the X, Y and Z directions (Ávila, Mello and Silva, 2010). This is probably due to a more homogeneous pore diameter distribution (Figure 11), with a possible decrease in macroporosity, which agrees with Schaffrath et al. (2008) in that a possible homogenization of pores may be related to a more uniform distribution of meso-aggregates (Costa Júnior et al. 2012), which was also observed by Silva et al. (2013) at a depth of 0.15 m in G7, working in the same area as this experimental research.
Table 3 - Parameters of the exponential model fit for the orthogonal directions (X, Y, Z), by treatment (G0: absence of additional gypsum; G7: 7 Mg ha⁻¹; G28: 28 Mg ha⁻¹ additional gypsum applied on the soil surface of the plant row) and depth. C0 = nugget effect (null); C0 + C1 = sill; aT = theoretical range (mm); aP = practical range (mm).
The formation and stability of macro- and micro-aggregates, which influence the distribution of void space, is strongly related primarily to the oxidic mineralogy and clay texture. This is similar to what occurs with Latosols of the chapadas of the Cerrado, owing to electrostatic interactions between the oxides (mainly Al and Fe) and external agents, such as those related to the adopted management system, since biological agents have only a secondary importance (Costa Júnior et al. 2012; Vollant-Tuduri et al. 2005).
The spatial analysis performed revealed the effect of external agents, related to the gypsum amendment associated with other agricultural practices (Table 3).
In all experimental units (27 cases), the nugget effect was null (Table 3), owing to the adequate spatial resolution (60 µm), which generated a greater level of detail of the soil structure organization; i.e., within the limits of (a) and the sill, most of the variability can be explained by the spatial component. The sill had very similar values across treatments, with a slight decrease in G7.
The highest PI, as well as the lowest (a) values at the 0.20-0.34 m depth, occurred in G28 (Figure 8 and Table 3), explained by the presence of a greater volume of inter-aggregate pores with diameters ranging from 0.3 to 0.6 mm (Figure 11).
The considerable occurrence of these pore diameters was most likely due to the combined effects of the management practices, primarily the tillage carried out in preparing the plant row, which tends to homogenize the aggregates through the destruction of larger aggregates (Schaffrath et al. 2008; Cremon et al. 2009; Costa Júnior et al. 2012). These new aggregates were possibly formed by the action of biological factors (Silva et al. 2013; Martin et al. 2012), via the constant supply of organic material throughout the soil profile, from both the mowing of the inter-row Brachiaria sp. and the renewal of the coffee roots, which increase soil microorganism activity (mycorrhizal fungi, preferentially). These agents, once associated, act to organize the soil structure and diversify aggregates into different sizes (Costa Júnior et al. 2012; Martin et al. 2012; Cremon et al. 2009; Salton et al. 2008), which can promote the heterogeneity found in the pore diameter.
Accordingly, soil management systems are considered a major source of spatial variability of soil physical properties (Schaffrath et al. 2008) and thus, particularly at the 0.20-0.34 m depth, the whole effect of the management practices for this system is being expressed by the pore diameter distribution.
The distinction between treatments was evident throughout the range generated, especially in the classes of 0.3 to 0.6 mm in diameter, with a greater number and volume of visible pores, i.e., they occupied a larger part of the soil matrix, in the following order: G28 > G0 > G7, especially at the 0.20-0.34 m depth.
The increase in the volume of pores with a diameter of less than 0.5 mm (Figure 11), especially at the 0.20-0.34 m depth and primarily for G0 and G28, may be related to the contribution of plant residue on the soil surface. This is associated with the development of specific mycorrhizal fungi, which favor the formation of a large number of smaller pores originating from the grouping of solid particles along the growth of hyphae with diameters smaller than those of the pores, thus forming a network of these pores together with the fine roots of the plants, which favors the emergence of smaller aggregates (Martin et al. 2012).
On the other hand, the amount of additional gypsum applied probably has secondary effects on aggregate formation, given the similarities in the pore distribution of G0 and G28 at the 0.20-0.34 m depth.
It is noteworthy that the gypsum is probably solubilized gradually since, besides being applied on the surface, it is covered with a thick layer of soil (≈ 50 cm) together with an accumulation of biomass from the inter-row cut grass; over time, accommodation of the particles occurs over it, thus creating a gypsum reservoir. Furthermore, as the region has an uneven rainfall distribution, a gradual release of the gypsum chemical components is favored (Serafim et al. 2011).
The adverse behavior of G7 at the same depth, regarding the number and volume of pores in the detectable diameter interval, deserves further study.
It was found that in G7, exchangeable calcium (Ca²⁺) is the predominant ionic species in the soil solution, with values of 1.6 and 13 mmolc dm⁻³ in G0 and G7, respectively, at the 0.15-0.25 m depth after 16 months of crop implantation. After 2.5 years, the values were 27 and 50 mmolc dm⁻³ in G0 and G7, respectively, and 48 mmolc dm⁻³ in G28 (Silva et al. 2013; Ramos et al. 2013).
In G7, these quantities are insufficient to act as an aggregate-dispersing agent (Spera et al. 2008; Cremon et al. 2009). Even so, it is important to point out that the high spatial continuity of pores found in the range evaluated could be positive if there were a proportional increase in the number of smaller pores responsible for the higher retention of plant-available water.
The spatial distribution of larger-diameter pores throughout the soil profile influences the equilibrium of soil physical-hydric processes. Knowledge of the pores' spatial geometry, which controls the increment in the number and area of pores as well as their tortuosity, contributes to understanding the alterations that may occur in soil process dynamics influenced by agricultural practices (Luo, Lin and Li, 2010).
With respect to sphericity, which indicates the surface roughness of the pores (0 = rough, 1 = rounded), it increased with decreasing pore diameter, as observed by Tippkötter et al. (2009) (Figure 11). Sphericity was similar throughout the soil profile, which is attributed to the micro-granular type structure favored mainly by the high gibbsite content (Table 1), which acts in the formation of very small micro-aggregates and more rounded micropeds (Vidal-Torrado et al. 1999; Ferreira, Fernandes and Curi, 1999).
The sphericity of the pores was remarkable at the 0.20-0.34 m depth, and this fact may be related to both the oxidic-gibbsitic mineralogy and the effects of soil tillage in the plant row, which tends to form smaller and more rounded aggregates via the breakdown of larger ones, as macro-morphologically observed by Cremon et al. (2009).
CONCLUSIONS
A greater spatial continuity of pores was detected in G7 at the depths evaluated.
A highly homogeneous distribution of the visible pore volume in each class occurred in G7, especially at the 0.20-0.34 m depth.
The largest pore number and volume were detected in G28 at the 0.20-0.34 m depth, as well as the greatest spatial variability of soil structure, promoted by the effect of the combined practices of the management system.
Based on the geostatistical analyses, it can be inferred that the adoption of the management system under study promoted changes in the pore network in all directions (X, Y and Z), but with better pore continuity in the vertical direction (Z).
ACKNOWLEDGMENTS
To FAPEMIG for research funding; to Consórcio Embrapa Café for the vehicle loan; CNPq for granting the scholarship; UFLA for the institutional support; to Empresa AP for technical and logistical support. To the DCS-UFLA D.Sc. student Teotônio Soares for the R language support. | 2019-03-22T16:15:25.062Z | 2014-10-24T00:00:00.000 | {
"year": 2014,
"sha1": "85e69ad8a6fd8ed9837b9100dfd82f9a28157109",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.br/pdf/cagro/v38n5/a04v38n5.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "dd5f332d6e746b4a83b257ca26e7f17bc2ed2a91",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Physics"
]
} |
247814172 | pes2o/s2orc | v3-fos-license | Properties and Effect of Fresh Concentrated Extract of Garlic on Different Bacteria and Fungi
Aims: Study some characteristics of fresh concentrated extract of garlic (FCEG) and analyze its effect on the growth of different bacteria and fungi. Place and Duration of Study: Faculty of Chemical Sciences, Autonomous University of San Luis Potosí, S.L.P., between March and November 2020. Methodology: To obtain the FCEG, 15 heads of raw garlic, previously peeled, were ground in a mortar; the suspension obtained was filtered through gauze, pressing to obtain a greater amount of filtrate, and kept covered at 4°C. For the study of some of its characteristics, the yeast Candida albicans was used. Petri dishes containing Sabouraud dextrose agar were inoculated with 1 × 10⁶ yeasts/mL and 50 µL of FCEG and incubated at 28°C for 5 days, comparing growth with respect to a control without FCEG (all inhibition experiments described below were performed with the same protocol), while for the study of the antifungal properties, the effect of the extract on different strains of bacteria, yeasts, and fungi was analyzed.
Results: Some of the factors that modify the antifungal activity of FCEG are dilution, incubation temperature, protein concentration, half-life, capping and uncapping of tubes, and treatment with activated carbon. Regarding the growth inhibition analyses, it was found that all the species analyzed were susceptible to FCEG, among which the following stand out: the dimorphic fungus Histoplasma capsulatum, which causes systemic mycoses; the yeasts Cryptococcus neoformans and C. albicans; as well as different species of dermatophytes, Aspergillus, and bacteria. Conclusion: FCEG shows a good antimicrobial effect against a wide variety of bacterial, yeast, and fungal species, which favors its application in medical therapy and agriculture; it is also cheap, easy to obtain, and does not cause side effects, although more studies are required for its therapeutic application.
INTRODUCTION
Antimicrobial resistance (AMR) occurs in viruses, bacteria, fungi, and parasites as an inevitable manifestation of their capacity to evolve. The decreased effectiveness of existing antimicrobials is a consequence of the very complex interaction between natural selection, the environment, and patterns of use of these drugs. As a result, AMR has become a global public health problem: the variety of existing pathogens that affect humans and animals have developed various mechanisms to defend themselves against the available antimicrobials [1]. AMR is acquired through genes that code for resistance to antibiotics, or through the mutation or alteration of some gene. These mutations can be intrinsic, or can spread through a microbial community vertically (by inheritance) or horizontally (via mobile genetic elements and extrachromosomal plasmids). This genetic diversity among microbial populations, combined with very short generation times, particularly among bacteria, provides microorganisms with an extraordinary adaptive response to the selective pressure of antimicrobials [2]. Therefore, multidisciplinary and multisectoral approaches are required, like One Health or related concepts such as Planetary Health or EcoHealth, that involve the disciplines of human, animal, and environmental health in a context of synergy of capabilities and experience. A driving thread is also required that can amalgamate the efforts of the different disciplines with an intersectoral and interdisciplinary approach, in order to optimize the resources allocated to AMR control, improve epidemiological surveillance, implement measures to mitigate its effects, and protect the population at higher risk of acquiring multidrug-resistant infections [1,3].
Other lines of research into antimicrobials have focused on the use of natural or modified plant extracts and other natural compounds, with highly satisfactory results, such as: the effect of essential oils of oregano (Origanum compactum), thyme (Thymus vulgaris), and rosemary (Rosmarinus officinalis) against Aspergillus niger [4]; the study of the fungitoxic effects of Allium sativum (L.) and Ocimum gratissimum (L.), using aqueous extraction methods, on six fungal pathogens [5]; a bioactive compound (trichodermin) isolated from culture extracts of the fungus Trichoderma brevicompactum, which has a marked inhibitory activity on Rhizoctonia solani and Botrytis cinerea [6]; and the antileishmanial activity of a mixture of Tridax procumbens and A. sativum in mice [7], among others. Another line of research in antimicrobials has focused on the use of garlic, to which a wide variety of properties is commonly attributed, among them: antifungal and antileishmanial activity [4,7], the treatment of infections that do not affect the central nervous system [12], an in vitro anticoagulant effect of garlic extract on the blood coagulation cascade [13], a relative antidiabetic potential of aqueous garlic [14], and other medicinal uses for thorax, asthma, and skin diseases, blood disorders, hypertension, and heart attack, as well as antioxidant activity [15]. Regarding the antimicrobial and antifungal properties of garlic, there is a wide variety of studies, among which are: the antifungal activity of some essential oils and their major phenolic components against A. niger [4]; the in vitro effects of garlic (A. sativum L.) on fungal pathogens isolated from rotted cassava roots [5]; its effect on S. mutans [29]; on molds, yeasts, coliforms, and E. coli [30]; and on the Gram-positive bacteria Bacillus subtilis and Staphylococcus epidermidis and the Gram-negative bacteria E. coli and Shigella dysinteriae [31]. The information consulted supports the health benefits associated with garlic consumption. The crushing of garlic bulbs allows alliin to be obtained, which turns into allicin through enzymatic oxidation. This compound has a fundamental role in garlic's medicinal properties, and the garlic plant's activity depends on its ability to produce allicin. Preclinical studies have shown improvement of the immune system, and specific proteins associated with its immunostimulant effect have been identified. In clinical studies, supplements with garlic derivatives have been administered to patients, and a decrease in the incidence of influenza and acute respiratory diseases has been observed [32]. However, there are few reports of its effect against environmental contaminating fungi, or of some of its properties, such as its half-life, storage temperature, the effect of different dilutions, and the minimum inhibitory concentration of the FCEG. Therefore, the objective of this work was to analyze some characteristics of FCEG and their effect on the growth of different bacteria and fungi, in order to obtain, in the future, a product that is competitive with the common antimycotics used in the market.
Obtaining the Fresh Concentrated Extract of Garlic (FCEG)
To obtain the FCEG, 15 heads of raw garlic (from the Republic market in the city of San Luis Potosí, México), previously peeled, were placed in an extractor and ground to obtain a garlic suspension, which was filtered through gauze, pressing the extract for better yield. Subsequently, it was transferred to an amber bottle and stored at 4°C.
Analysis of the Properties of FCEG
For the study of the properties of the FCEG, the dimorphic, yeast-like fungus C. albicans was used, owing to the ease of quantifying the number of yeasts and its rapid growth in the medium used: Sabouraud Dextrose Agar (SDA).
Effect of different FCEG dilutions on the growth of C. albicans
100 µL of the FCEG were taken and diluted with different volumes of sterile saline solution (SS) at 0.85% (w/v), according to Table 1.
Subsequently, 50 µL aliquots were taken from each of the dilutions to be analyzed, and the antifungal effect was tested against 1 × 10⁶ yeast/mL of C. albicans, incubating at 28°C for 5 days in SDA and comparing growth with respect to a control without FCEG.
Effect of different FCEG concentrations on the growth of C. albicans
Different FCEG amounts were taken (0-50 µL = 0-1.0 mg/mL of protein), diluted with different volumes of sterile saline solution (SS) at 0.85% (w/v), and added to 1 × 10⁶ yeast/mL of C. albicans, incubating at 28°C for 5 days in SDA and comparing growth with respect to a control without FCEG. Likewise, suspensions of C. albicans at different concentrations (up to 10 × 10⁶ yeast/mL) spread in Petri dishes were added with 50 µL of FCEG, incubating at 28°C for 5 days in SDA and comparing growth with respect to a control without FCEG.
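As a small illustration of the stated 0-50 µL ≙ 0-1.0 mg/mL protein scale, a minimal sketch follows, assuming a simple linear relation; the function name is ours.

```r
# Minimal sketch: map the FCEG volume added (0-50 uL) to the plate protein
# concentration (0-1.0 mg/mL), assuming the linear scale stated above.
fceg_protein <- function(vol_uL) vol_uL / 50 * 1.0   # mg/mL
fceg_protein(c(0, 10, 25, 50))
```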
Half-life of the antifungal activity of FCEG at 4°C
The FCEG in covered containers was incubated at 4°C for 60 days to determine the time at which it loses its antifungal properties; this was analyzed by seeding, every 5 days, 1 × 10⁶ yeast/mL of C. albicans in SDA in the presence of 50 µL of FCEG, incubating at 28°C for 5 days and comparing growth with respect to a control without FCEG.
Effect of tube capping on the antifungal activity of FCEG at 4°C and 28°C
The FCEG was aliquoted in duplicate in volumes of 5 mL in 8 mL test tubes and incubated uncovered at 4°C and 28°C for 24 hours, taking 50 µL aliquots from each tube every 4 hours and adding them to Petri dishes containing 1 × 10⁶ yeast/mL of C. albicans to determine the antifungal effect, seeded in SDA at 28°C for 5 days.
Effect of the temperature on the antifungal activity of FCEG
The FCEG was aliquoted in duplicate in volumes of 5 mL in 8 mL test tubes and incubated at 60°C for 60 minutes, taking 50 µL aliquots from each tube every 20 minutes and adding them to Petri dishes containing 1 × 10⁶ yeast/mL of C. albicans to determine the antifungal effect, seeded in SDA at 28°C for 5 days.
Effect of adsorption on activated carbon on the antifungal activity of FCEG
5 g of activated carbon (CAGR, ground and grain) were placed in 125 mL Erlenmeyer flasks, and 20 mL of FCEG were added, incubating at 28°C for 6 days under constant agitation (100 rpm). Subsequently, 50 µL aliquots were taken every 24 hours and added to Petri dishes containing 1 × 10⁶ yeast/mL of C. albicans to determine the antifungal effect, seeded in SDA at 28°C for 5 days.
Effect of FCEG on Lee Medium Filamentation for C. albicans
Resuspend a pellet of a young colony suspected to be C. albicans in 0.
Determination of the Antifungal Effect of FCEG on the Growth of Different Fungi
Samples of the different species of fungi (obtained from the Laboratory of Experimental Mycology/FCQ/UASLP) to be analyzed were taken and resuspended in 1 mL of SS. Subsequently, 50 µL aliquots of each suspension of the different fungi were taken and inoculated onto Petri dishes containing SDA; 50 µL of the FCEG were added, spreading with a glass rod in the form of a triangle, and the plates were incubated at 28°C for 7 days, comparing growth with controls of the different fungi seeded without FCEG.
Determination of the Antimicrobial Effect of FCEG on the Growth of Different Bacteria
Samples of the different species of bacteria (obtained from the Laboratory of Microbiology/FCQ/UASLP) to be analyzed were taken and resuspended in 10 mL of McFarland solution. Subsequently, 50 µL aliquots of each suspension of the different bacteria were taken and inoculated onto Petri dishes containing SDA; 50 µL of the FCEG were added, spreading with a glass rod in the form of a triangle, and the plates were incubated at 35°C for 24-48 hours. Growth, assessed via the optical density obtained from the McFarland turbidity standard, was compared with controls of the different bacteria seeded without FCEG [34].
Determination of Protein
Protein was determined by the method of Lowry et al. (1951), using bovine serum albumin as the standard [35].
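As an illustration, a minimal sketch of a Lowry-style BSA standard curve fitted by least squares follows; the absorbance values are invented for illustration, not measured data.

```r
# Minimal sketch: BSA standard curve for the Lowry method (hypothetical data).
bsa  <- c(0, 0.2, 0.4, 0.6, 0.8, 1.0)          # mg/mL standards
a750 <- c(0.02, 0.15, 0.29, 0.42, 0.55, 0.68)  # hypothetical absorbance at 750 nm
std  <- lm(a750 ~ bsa)                         # linear standard curve
unknown_abs <- 0.33
(unknown_mg_mL <- (unknown_abs - coef(std)[1]) / coef(std)[2])  # back-calculate
```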
Analysis of Some Properties of FCEG
In this work, some properties of the antifungal activity of FCEG were analyzed, such as: half-life, temperature, the effect of different dilutions, the minimum inhibitory concentration of FCEG, etc., using the yeast-like fungus C. albicans as the standard, obtaining the results summarized in Table 2.
In relation to these parameters, there are few related reports. For example, in an evaluation of the anti-mould activity of oregano, thyme, rosemary, and clove essential oils and some of their main constituents (eugenol, carvacrol and thymol) against A. niger, none of the investigated agents showed an inhibitory effect on this fungus, which grew at concentrations lower than 10% (v/v) [4]. For a lyophilized extract of garlic diluted to 10%, the minimum inhibitory concentration was 500 µg/mL for T. mentagrophytes, T. rubrum and Microsporum canis [19]. The antifungal activities of fresh garlic (A. sativum) and ginger (Z. officinale) on the growth of three known pathogenic fungi were also investigated; the test organisms were Aspergillus spp., Penicillium spp., and C. albicans. Under the conditions analyzed, the more concentrated extracts (100 g/50 mL) inhibited fungal growth more than the less concentrated extracts (100 g/100 mL), and of the two extracts used, the best in terms of inhibition was garlic, followed by ginger [20]. In another study, the interaction of a garlic crude extract with fungi was observed using visible light microscopy; the extract induced inhibition zones of 12 mm in A. parasiticus and 15.5 mm in A. niger, and inhibited growth by 13 and 46.8%, respectively. The MIC for A. parasiticus was found to be the 1:2 dilution (50 µL of crude extract) and, for A. niger, the 1:32 dilution (3.12 µL of crude extract); the treatment also inhibited mycelium production and sporulation of the two fungal species [23]. On the other hand, the in vitro sensitivity of one isolate of T. rubrum and one of T. mentagrophytes to ajoene was analyzed, finding that this compound was able to inhibit the growth of both isolates, showing a minimum inhibitory concentration (MIC) of 60 µg/mL and a minimum fungicidal concentration (MFC) of 75 µg/mL [36]. The extract concentration also affects the antimicrobial activity: concentrated Allium extract (100%) had the most lethal effect, followed by 75, 50, and 25% (the least effective), assessed with the McFarland turbidity methodology, and Pseudomonas aeruginosa showed resistance against extracts of both A. sativum and Allium tuberosum, with no zone of inhibition. The highest antimicrobial activity of A. tuberosum was noticed against S. aureus and B. subtilis, with 43.9 and 40.7 mm zones of inhibition using the 100% extract, respectively, followed by 75, 50, and 25%; the least antimicrobial activity was noticed for Enterococcus faecalis, with a zone of inhibition of 20.07 mm at the 100% extract, whereas P. aeruginosa showed resistance against all concentrations evaluated [37]. Bernaldez and Vicencio (2021) determined the antibacterial activity of soap made from garlic extract using the paper-disc method and the Kirby-Bauer antibacterial sensitivity test against S. aureus and E. coli, and determined the physical properties of garlic soap and the presence of saponin through phytochemical screening; garlic soap showed antibacterial activity against E. coli and S. aureus, with a mean zone of inhibition numerically higher for the garlic soap (14.70 mm-18 mm) compared to commercial soap [38]. Finally, the antibacterial activity of ethanolic extracts of A. sativum "Garlic" on S. aureus ATCC 25923 cultures was studied.
Ethyl alcohol of 96° was used for the extraction of the plant metabolites, and concentrations of 100%, 75% and 50% were prepared; the antibacterial activity was demonstrated by the agar disc-diffusion (Kirby-Bauer) method, and the ethanolic extracts of the A. sativum "Garlic" leaf presented inhibition halos of 20.433 mm, 17.126 mm and 10.659 mm for the concentrations of 100%, 75% and 50%, respectively [39]. With respect to stability, we found that the activity of FCEG is completely lost between 55 and 60 days after it is obtained, at 4°C in covered containers. In a study evaluating the suitability of five different garlic cultivars for the processing of unsalted garlic paste, chopped fried garlic, and fried sliced garlic, the concentration of allicin in the products was evaluated immediately after processing and at 45-day intervals during 180 days of storage; the amount of allicin lost during the process of obtaining paste for the different varieties was less than 9.5%, reaching a maximum loss of 22% for the commercial varieties during storage (180 days) [40].
C. albicans is a dimorphic fungus, occurring as a yeast in the saprophytic state, while in the parasitic state it forms filaments (hyphae and pseudohyphae) of variable length. This capacity is related to the pathogenicity of this yeast-like fungus; in addition, various metabolic products, as well as some components of the cell wall, are involved in the pathogenicity mechanisms of this species [41]. The transition from yeast to hyphae is one of the virulence attributes that enables C. albicans to invade tissues. It has been found that growth in the filamentous form has advantages over yeast in penetrating cells or tissue, and the hypha may be well suited to opening gaps between tissue barriers because its tip is the site of secretion of enzymes capable of degrading proteins, lipids, and other cellular components, facilitating infiltration into solid substrates and tissues [42]. Three findings support the hypothesis that filamentation is required for virulence in this fungus: 1. Filament formation is stimulated at 37°C in the presence of serum, at neutral pH. 2. The newly formed filaments (called germ tubes) are more adherent to mammalian cells than yeasts, and adherence is a requirement for tissue penetration. 3. Yeasts captured by macrophages produce filaments and can lyse the macrophages; therefore, the formation of filaments is a way to evade host defense mechanisms [43,44]. Finally, in this work the FCEG significantly inhibited germ tube induction in C. albicans, which could be very important and effective for the treatment of diseases related to C. albicans; however, more studies are required to determine the role of germination inhibition, and we did not find related reports in the literature.
Analysis of the Antifungal and Antibacterial Effect of FCEG on the Growth of Different Fungi and Bacteria
On the other hand, when we analyzed the antifungal and antibacterial effect of FCEG on the growth of different species of fungi and bacteria, it was found that it completely inhibits the growth of all the fungi and bacteria analyzed (Figs. 9-16), in agreement with previous reports [26,27,28]: on S. mutans [29]; on molds, yeasts, coliforms, and E. coli [30]; and on the Gram-positive bacteria B. subtilis and S. epidermidis and the Gram-negative bacteria E. coli and S. dysinteriae [31].
CONCLUSION
Although the pharmaceutical industry has largely focused on high-throughput biochemical screening programs for the discovery and development of new drugs, the use of natural products for medicinal and antimicrobial purposes is an ancient practice [15,45], including the use of garlic, to which a wide variety of properties is commonly attributed, among them antifungal and antileishmanial activity [4,7]. It was not until 1944, when Cavallito and Bailey [46] isolated and described the properties of allicin, the compound responsible for garlic's characteristic pungent odor, that researchers gained a clearer insight into the chemical wonder carefully packaged by nature in the composite bulbs of this edible Allium, and thus began decades of extensive research on allicin, "the heart of garlic" [47]. Ajoene [(E,Z)-4,5,9-trithiadodeca-1,6,11-triene-9-oxide] is an organosulfur compound derived from garlic whose potential inhibitory activity, in vitro and in vivo, has been systematically demonstrated on most of the fungi that cause human mycoses; this compound acts selectively on the plasma membrane by inhibiting the synthesis of phosphatidylcholine, with accumulation of phosphatidylethanolamine and, consequently, cell death [15,48,49]. The effects of garlic oil were also studied with Penicillium funiculosum as a model strain; the results showed that the minimum fungicidal concentrations (MFCs, v/v) were 0.125 and 0.0313% in agar medium and broth medium, respectively, suggesting that garlic oil has a strong antifungal activity. The main ingredients of garlic oil were identified as sulfides, mainly disulfides (36%), trisulfides (32%) and monosulfides (29%), by gas chromatography-mass spectrometry (GC/MS), and these were estimated to be the dominant antifungal factors [50]. Garlic powder has also been shown to have a good adsorption capacity for mercury [51]. In this work, we analyzed some characteristics of FCEG and their effect on the growth of different bacteria and fungi, with the following conclusions:
DISCLAIMER
The products used for this research are commonly and predominantly used products in our area of research and country. There is absolutely no conflict of interest between the authors and the producers of the products, because we do not intend to use these products as an avenue for any litigation, but for the advancement of knowledge. Also, the research was not funded by the producing company; rather, it was funded by the personal efforts of the authors. | 2022-03-31T15:39:40.531Z | 2022-03-24T00:00:00.000 | {
"year": 2022,
"sha1": "038f6d24ad6e93a41c837088f99deec303fa4e1e",
"oa_license": null,
"oa_url": "https://journalajrb.com/index.php/AJRB/article/download/30219/56698",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0f881070bfe2810399394acffa0618700131cd4a",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
} |
204756566 | pes2o/s2orc | v3-fos-license | Financial inclusion and intimate partner violence: What does the evidence suggest?
Financial inclusion is an area of growing global interest in women’s empowerment policy and programming. While increased economic autonomy may be expected to reduce the prevalence of intimate partner violence, the mechanisms and contexts through which this relationship manifests are not well understood. This analysis aims to assess the relationship between women’s financial inclusion and recent intimate partner violence using nationally-representative data from 112 countries worldwide. Levels of both financial inclusion and recent intimate partner violence varied substantially across countries (ranging from 2–100%, and 1–46%, respectively), and across regions. In multivariate global analyses, increased levels of women’s financial inclusion were associated with lower levels of recent intimate partner violence after accounting for asset-based enablers of economic autonomy and gender norms; this relationship was lost upon the inclusion of measures of national context (i.e., development and fragility). These results underscore that the relationship between financial inclusion and recent intimate partner violence is complex, follows many pathways, and is affected by context. In low and middle income countries, asset-based enablers of economic autonomy, gender norms and national context explained much of the relationship between financial inclusion and recent intimate partner violence. In those low and middle income countries with high levels of controlling behavior by male spouses, financial inclusion was associated with higher levels of recent intimate partner violence. These findings further suggest that initiatives that aim to prevent intimate partner violence by way of increased economic autonomy may be ineffective in the absence of broader social change and support, and indeed, as seen in countries with higher levels of men’s controlling behavior, backlash may increase the risk of violence. Efforts to improve women’s financial inclusion need to recognize that its relationship with intimate partner violence is complex, and that it requires an enabling environment supportive of women’s rights and autonomy.
Introduction
Nearly one in five women globally (19%) have experienced sexual or physical violence from an intimate partner in the past year, with wide variations in prevalence across countries [1]. Beyond the violation of human rights, intimate partner violence (IPV) has substantial health implications for women and their children, including increased risk of abortion, premature birth, low birth weight, sexually transmitted infections, depression and substance abuse, as well as death [2]. There has thus been an increasing focus on modalities of prevention, including ways in which women's economic autonomy may reduce the risk of IPV.
Both preventing IPV and expanding women's economic autonomy have become prominent features in the global goals and policy agenda [3,4]. While these two phenomena address distinct areas of women's lives, there is increasing evidence that they may be linked. Possible mechanisms through which increased economic autonomy may reduce IPV include lessening financial stress on the household, reducing women's financial dependence on men and enabling women to leave relationships if they so choose [5][6][7][8][9]. The level and duration of violence reduction seen in interventions designed to boost women's economic autonomy appears to be context-and population-specific, however, and in some cases, has been associated with increased rather than decreased IPV risk [5,10,11].
There is evidence suggesting that the ability to exit an abusive relationship, which may be facilitated by augmented economic autonomy, can deter further IPV. For example, state level family law reforms facilitating divorce in the United States reduced the risk of marital violence [12]. In contrast, studies from Ghana and Bangladesh suggest that microfinance loans and cash transfers may increase IPV due to disagreement over use and control of the additional income [11,13]. Contexts where gender roles are in the process of shifting, and thus where gender power dynamics are in flux, may be more risky for women. This variability is echoed in research on a broader array of economic empowerment indicators, including women's employment and control over resources or assets [5,14]. More recent multi-country studies have explored the relationship between economic measures of women's status and IPV, suggesting that restrictions on legal rights and employment are associated with higher levels of IPV [15,16].
Financial inclusion, encompassing both access to and use of appropriate financial services, is emerging as a focal area of efforts to increase women's economic autonomy [3]. Financial inclusion has been identified as a key enabler for many of the Sustainable Development Goals, including the goal of achieving gender equality and enhancing women's empowerment [17]. Levels of women's financial inclusion are highly varied across countries, from a low of 13% financial account ownership among women in South Sudan, to effectively universal in many high income economies [3]. Financial inclusion of women has shown promise as a way to increase women' economic empowerment, including indication of increased savings and financial resilience, as well as diversification of food purchases, in diverse settings including Kenya, Nepal and Niger [18][19][20][21].
To date, there is limited research exploring the relationship between financial inclusion and IPV, though evidence from India suggests that bank account ownership is associated with reduced risk of IPV [22]. Women's access to more resources may be expected to increase their autonomy, yet might also lead to backlash and a heightened risk of violence if men seek to maintain power differentials. A summary of factors influencing potential pathways connecting financial inclusion and IPV is outlined in Fig 1. The majority of theoretical models examining factors affecting IPV have used the ecological model to look broadly at risks from an individual up to a societal lens [23][24][25]. This analysis differs from these approaches in that nationally-representative measures are used to assess cross-national associations at the macro level. Our conceptual model therefore focuses on explicating factors important to the relationship between women's financial inclusion and recent IPV at the societal (national context) and community (gender norms, enablers of economic autonomy) levels.
This paper thus aims to address the gap in knowledge connecting the role of financial inclusion and risk of IPV. Specifically, we assess, for a large and diverse sample of countries, whether women's financial inclusion-defined as having an account (alone or jointly) at a bank or another type of financial institution or personally using a mobile money service in the past 12 months-is associated with lower levels of recent IPV, accounting for key contextual, normative and enabling factors.
Study design and sample
This study is an ecological analysis of the relationship between financial inclusion and IPV using publicly available, cross-sectional, country-level data from multiple sources (see Table 1). We drew data on recent IPV from the UN Women Global Database on Violence against Women where available; countries missing IPV data in this database were added where feasible using nationally-representative individual sources outlined in S1 Table [26].
The UN Women Global Database on Violence against Women includes IPV prevalence data from multiple sources, including population surveys, violence-focused surveys and national statistics. Employment data were drawn from ILO modelled estimates, in which the ILO models employment-to-population ratios using data from labor force surveys, household surveys and population censuses; all estimates used are from 2016.
Employment norms and cell phone use were taken from the 2015 Gallup World Poll, via reports from Gallup/ILO and the Women, Peace and Security Index [28,29]. Cash earnings, decision-making over own earnings, controlling behavior and justification for wife-beating were taken from DHS StatCompiler for the year most closely matched to the year of recent IPV data (2005-2018) [30].
Human Development Index (HDI) values for the year most closely matched to the year of recent IPV data (2005-2017) were taken from the United Nations Development Programme [31]. The HDI is comprised of indicators on life expectancy, actual and expected education, and GNI per capita [32]. Fragile states were identified via World Bank categorizations (2018) [33,34]. Regions were defined according to the United Nations' M49 groupings, with the exception of high income countries (as defined by the World Bank's income groupings), which were categorized separately [35,36]. Female education (mean years of schooling) was sourced from Barro-Lee estimates (2005 and 2010) for the year most closely matched with the recent IPV data [37].
Data were extracted in March 2019. The sample was limited to the 112 countries with data on recent (past 12-month) IPV, as well as financial inclusion, of which 33 are defined by the World Bank as high income (see S1 Table) [38].
Measures
Measure definitions and sources are shown in Table 1. We defined the primary dependent variable, recent IPV, as the percent of ever-married women reporting any physical and/or sexual violence in the preceding 12 months. Physical violence includes being pushed or shaken, having something thrown at her, having an arm twisted or hair pulled, being slapped, being punched with a fist or something else that could hurt her, being kicked, dragged or beaten up, being intentionally choked or burned, or being threatened or attacked with a knife, gun or other weapon. Sexual violence includes being physically forced to have sex or perform other unwanted sexual acts, or being forced with threats or in other ways to perform unwanted sexual acts. These definitions are in accordance with globally accepted measures of gender-based violence [2].
Financial inclusion is defined as the percent of women aged 15 or older who reported having an independent or joint account at a bank or another type of financial institution, or personally using a mobile money service in the previous 12 months.
We grouped the covariates into three domains: (1) asset-based enablers of economic autonomy, (2) gender norms related to women's lower status and control and (3) national context. Variables in (1) included paid employment, cash earnings, cell phone use and education. Variables in (2) included inequitable employment norms, decision-making over own earnings, controlling behavior and justification for wife-beating. Variables in (3) included HDI, fragile state status and region.
All measures except HDI, fragile state status and region are derived from individual-level data and expressed as national prevalence estimates.
Analysis
Descriptive analyses included frequency statistics for all variables, overall and by region.
We first assessed the relationship between independent variables and recent IPV using bivariate and multivariate fractional logit generalized linear models, to account for the bounded nature of the outcome (recent IPV). Full-sample regressions were modelled using sequential variable inclusion by measure grouping (asset-based enablers of economic autonomy, gender norms, and national context).
We also modelled limited-sample regressions excluding high-income countries in order to include variables with limited geographic availability (cash earnings, decision-making control over own earnings, controlling behavior, and justification for wife-beating). All regressions included robust standard errors clustered at the regional level, and multivariate models adjusted for IPV year fixed effects.
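As an illustration of this modelling strategy, a minimal sketch of a fractional logit (quasi-binomial GLM with a logit link) with regional cluster-robust standard errors follows; the synthetic data frame and variable names are placeholders, not the study data.

```r
# Minimal sketch: fractional logit for a bounded outcome with clustered SEs.
library(sandwich)
library(lmtest)
set.seed(1)
df <- data.frame(
  ipv      = runif(112, 0.01, 0.46),   # recent IPV, as a proportion
  fin_incl = runif(112, 0.02, 1.00),   # women's financial inclusion
  hdi      = runif(112, 0.40, 0.95),
  region   = factor(sample(LETTERS[1:8], 112, replace = TRUE))
)
m <- glm(ipv ~ fin_incl + hdi, data = df,
         family = quasibinomial(link = "logit"))
coeftest(m, vcov = vcovCL(m, cluster = df$region))  # robust SEs by region
```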
All datasets used contained only country-level, de-identified data.
Results
Globally, the median prevalence of recent physical and/or sexual violence in assessed countries was 9% (Table 2), ranging from less than 1% in Singapore to 46% in Afghanistan. Countries in Sub-Saharan Africa, and Central and Southern Asia tended to have higher prevalence of recent IPV than Northern America and Europe (Fig 2). The median prevalence of financial inclusion was 39%, with near saturation in high-income countries, and only 21% median prevalence in Western Asia and Northern Africa. Across the global sample, the prevalence of women's employment was 54%, ranging from a regional high of 70% in Sub-Saharan Africa to 23% in Western Asia and Northern Africa. Cell phone use was high in the total sample (82%) and women had a median of nearly nine years of education.
Among indicators of gender norms, inequitable employment norms were highest in Central and Southern Asia (28%) and Western Asia and Northern Africa (26%). Most countries had high levels of female participation in decision-making regarding their own earnings (global median 91%). However, more than one-half of women (65%) across assessed countries reported controlling behavior by partners (a measure indicative of spousal power imbalances). The median level of belief that wife-beating is justifiable was 37% across the sampled countries.
HDI was highest among high income countries (median of 0.89) ( Table 2). Sub-Saharan Africa had both the lowest HDI of any region in the sample (median of 0.50), as well as the vast majority of states classified as fragile (12 of 16).
In bivariate analyses, women's financial inclusion was negatively associated with recent IPV; for every 10% increase in financial inclusion, there was a 2% decrease in recent IPV (Fig 3). Inequitable employment norms, justification of wife-beating and fragile state status were all positively associated with IPV; whereas cash earnings, cell phone use, female education, women's decision-making over their own earnings and HDI were negatively associated with recent IPV.
In multivariate analyses, there was a significant, negative association between financial inclusion and recent IPV in models adjusted for measures of economic autonomy and gender norms (Table 3). Model 3 (adjusted for both asset-based enablers of economic autonomy and gender norms) had the best overall fit, and also demonstrated an association between increased female education and lower levels of IPV. However, upon inclusion of measures of national context, the statistical association between financial inclusion and recent IPV was lost. Model 4 necessarily excluded female education due to its high correlation with HDI (r = 0.90; p-value<0.01; mean years of education for the general population is a component of the HDI [32]). While fragile states tended to have higher levels of IPV in bivariate analyses, there was no association in multivariate analyses. Women's employment and inequitable employment norms were not significantly associated with recent IPV in any model.
Recognizing that high-income countries are distinct from others in our sample, characterized by lower levels of IPV and close to universal financial inclusion (see Table 2), we performed an exploratory set of analyses restricted to low and middle income countries. In these reduced models excluding high-income nations and adjusting for asset-based enablers of economic autonomy and gender norms, the relationship between women's financial inclusion and recent IPV was not statistically significant (Table 4). However, when we additionally controlled for cash earnings, decision-making over own earnings, controlling behavior and justification for wife beating (measures available only for low and middle income countries in this sample), higher levels of women's financial inclusion were associated with higher levels of recent IPV, even controlling for national context. Exploratory analysis of the full model (Table 4, Model 5) revealed that controlling behavior was the most important measure in establishing this significance (results not shown). The relationship between women's financial inclusion and recent IPV was then plotted by tertile of controlling behavior (Fig 4).
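A minimal sketch of this stratified re-examination follows; the data are synthetic placeholders, and the tertile cut and per-stratum slopes are ours, shown only to illustrate the approach behind Fig 4.

```r
# Minimal sketch: split countries into tertiles of controlling behavior and
# estimate the financial inclusion / recent IPV slope within each stratum.
set.seed(2)
d <- data.frame(ipv     = runif(79, 0.01, 0.46),
                fin     = runif(79, 0.02, 0.90),
                control = runif(79, 0.20, 0.95))
d$tertile <- cut(d$control,
                 quantile(d$control, probs = seq(0, 1, length.out = 4)),
                 include.lowest = TRUE, labels = c("low", "mid", "high"))
by(d, d$tertile, function(s) coef(lm(ipv ~ fin, data = s))["fin"])
```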
Discussion
While the prevalence of both financial inclusion and recent IPV vary widely across countries, in general, higher levels of women's financial inclusion were associated with lower levels of recent IPV, even after accounting for asset-based enablers of economic autonomy and gender norms. In the context of rising women's financial inclusion (from 47% worldwide in 2011 to 65% in 2017 [3]), this is an encouraging finding. This result may be at least partially explained by other research suggesting that increased financial inclusion may enable greater autonomy and exit options [5,12]. It is also in line with longitudinal data from rural India in which both bank account ownership and joint control over husbands' income reduced IPV risk [22]. The relationship between women's financial inclusion and recent IPV lost its statistical association, however, when controls for national context (HDI and fragile state status) were introduced. This suggests that overall levels of development, as proxied by HDI, play a key role in explaining levels of recent IPV when looking at a countries across all income levels.
In models restricted to low and middle income countries, the relationship between levels of women's financial inclusion and recent IPV, adjusting for the gender gap in financial inclusion, was similar to that seen in the global sample. However, this association became non-significant when additional controls were introduced for asset-based enablers of economic autonomy, gender norms and national context, suggesting that at present these contextual factors explain much of the relationship originally seen between financial inclusion and recent IPV in low and middle income countries. The association between HDI and recent IPV was lost upon accounting for cash earnings and gender norms such as controlling behavior and justification of wife-beating, which is in line with previous research and emphasizes differences in the correlates of IPV across contexts [15]. One interesting and cautionary result is the reversal of the relationship between women's financial inclusion and recent IPV within low and middle income countries in models that included additional measures of asset-based enablers of economic autonomy and gender norms. Upon adjusting for cash earnings, decision-making over earnings, controlling behavior, and the justification of wife-beating, low and middle income countries with higher levels of financial inclusion tended to have higher levels of IPV. The significance of this result, however, was conditioned on levels of controlling behavior; the positive association between increased women's financial inclusion and increased recent IPV was only significant in countries where more than 70% of women reported that their spouses exhibited at least one controlling behavior. These countries with higher levels of controlling behavior also tended to be contexts with higher levels of justification of wife-beating and higher levels of women's employment, but also lower levels of cash earnings and more inequitable employment norms (S2 Table). These are circumstances in which women have increased financial inclusion in a context of high spousal control and generally compromised gender norms, which may aggravate gendered power struggles, a situation which has been known to result in increased IPV [14]. This finding may also be in part explained by other studies indicating that expanding economic opportunities for women are most likely to reduce IPV when women's social status is also augmented, and that social status exists within a confluence of other factors perpetuating gender norms [39,40]. Indeed, recent research from Ethiopia has found that the different dimensions of women's empowerment, particularly economic empowerment, are largely distinct from one another [41]. This suggests that unidimensional empowerment initiatives may not necessarily have spillover effects into other dimensions and that multipronged efforts are needed to effect more broad-reaching change [23,[42][43][44].
The issue of male control is also of relevance when considering the measurement of women's financial inclusion. The measure used in this study assesses account ownership (bank or other financial institution) or mobile money service use within the past year among women and girls aged 15 or older; this is a gender-specific version of the indicator used to track progress on Sustainable Development Goal 8.10.2 [45]. The phrasing of the Global Findex Questionnaire does not allow for disaggregation of sole vs. joint accounts: "Do you, either by yourself or together with someone else, currently have an account at a bank or another type of formal financial institution?" [46]. When examining financial inclusion in the context of autonomy, empowerment and risk, this is a key distinction. Women with joint accounts may have more explicit and/or implicit restrictions and limitations on their use of that account than women with sole accounts. Conversely, it is plausible that in some contexts, joint ownership may indicate higher levels of trust and lower levels of control within a relationship [47,48]. These dynamics are poorly understood and understudied, and merit additional research to better understand whether this conflation of 'joint' and 'sole' accounts is masking important differences in the measurement of women's financial inclusion.
While fragile state status was associated with increased risk of recent IPV in bivariate analyses, it did not emerge as significant in multivariate analyses. There are several possible explanations for this. The prevalence of recent IPV in fragile states had much wider variability than was seen in non-fragile states (mean standard errors of 2.8 and 0.9, respectively). There was only modest variability in financial inclusion in fragile states (ranging only from 3%-33%). This contrast is clear in Afghanistan, which has the highest level of recent IPV (46%) seen across assessed countries, and one of the lowest levels of women's financial inclusion (4%), vs. Chad, which has 18% recent IPV and 8% women's financial inclusion. This relationship may have been further compromised in multivariate models by the fact that only 16 of 112 countries were considered fragile states, and sample size was thus limited for this measure. Finally, while previous research has indicated that the normalization of violence in society more broadly may be associated with higher levels of violence against women in the home [49], fragile state status, as noted in Methods, does not necessarily indicate a state in conflict-countries with low policy and institutional capacity are also included in this group [34]. Comoros, for example, is considered a fragile state because of its low Country Policy and Institutional Assessment country rating, and has only 5% prevalence of recent IPV. Heterogeneity in this group of fragile states, from those affected by recent conflict vs. protracted conflict vs. impaired state governance may impact women's economic opportunities, as well as their differential exposure to violence [50], suggesting that this measure may need further review in future research.
These findings help dissect the complex relationship between financial inclusion and recent IPV, though further research is needed for a full understanding of mechanisms, particularly given the contextual and normative factors on which the relationship is conditioned [51,52]. For example, the ability to use a conventional bank account generally necessitates some mobility, including the ability to physically go to a bank, something that is not possible for one in three women in low and middle income countries [16]. Given that IPV tends to be most prevalent in settings with restrictive gender norms, this presents an ongoing challenge to women's economic engagement [53,54]. While digital financial services such as mobile money accounts may mitigate some of these barriers, these digital accounts still require access to cell phones as well as financial literacy. This is an area for future growth, as currently, mobile accounts are only used by 3% of women globally (4% in low and middle income countries) [3].
Of course these findings must be interpreted in light of limitations inherent in any cross-sectional, ecological analysis, namely that causality cannot be inferred, and that results do not necessarily translate to the community or individual level. Data were collected in different years, though the year of IPV data collection was accounted for in current analyses. We were limited to measures that are publicly available, which do not fully assess all aspects of asset-based enablers of economic autonomy, gender norms or national context, and exclude, for example, measures of independent mobility. Nevertheless, these findings are suggestive and lend important insights that help to unpack the complex relationship between women's financial inclusion and IPV.
Both preventing IPV and expanding financial inclusion have attained major prominence on the global development agenda. Encouragingly, our ecological results suggest that financial inclusion may be an important lever in reducing women's risk of IPV, but both IPV and financial inclusion exist in the context of social, cultural and normative barriers and enablers. These barriers and enablers influence underlying mechanisms, as well as the country-level manifestations of these relationships. This means that we should not assume that financial inclusion is a universally positive component of development. Our findings underline the importance of shifting underlying norms and controlling behaviors which heighten the risk of violence, and suggest that efforts to promote women's empowerment may be undermined in the absence of those changes. We add to the body of research suggesting that financial inclusion merits focus within the context of broader efforts to improve the status of women and reduce gender inequitable norms, and that this may offer opportunities to reduce women's risk of violence at the hands of their partners.
Supporting information S1 Table. Year of IPV and financial inclusion data collection for assessed countries. | 2019-10-17T08:57:12.703Z | 2019-10-16T00:00:00.000 | {
"year": 2019,
"sha1": "c5f4a820367c831d3eeda22bf3fc4b8e32108253",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0223721&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1cedbb18a57e1b874a48d87e9f45fa436412f86a",
"s2fieldsofstudy": [
"Sociology"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
271301169 | pes2o/s2orc | v3-fos-license | Implementation of Glucose-6-Phosphate Dehydrogenase (G6PD) testing for Plasmodium vivax case management, a mixed method study from Cambodia
Plasmodium vivax remains a challenge for malaria elimination since it forms dormant liver stages (hypnozoites) that can reactivate after initial infection. 8-aminoquinoline drugs kill hypnozoites but can cause severe hemolysis in individuals with Glucose-6-Phosphate Dehydrogenase (G6PD) deficiency. The STANDARD G6PD test (Biosensor) is a novel point-of-care diagnostic capable of identifying G6PD deficiency prior to treatment. In 2021, Cambodia implemented the Biosensor to facilitate radical cure treatment for vivax malaria. To assess the Biosensor's implementation after its national rollout, a mixed-methods study was conducted in eight districts across three provinces in Cambodia. Interviews, focus group discussions, and observations explored stakeholders' experiences with G6PD testing and factors influencing its implementation. Quantitative data illustrative of test implementation were gathered from routine surveillance forms, and key proportions were derived. Qualitative data were analyzed thematically. The main challenge to implementing G6PD testing was that only 49.2% (437/888) of eligible patients reached health centers for G6PD testing following malaria diagnosis by community health workers. Factors influencing this included road conditions and long distances to the health center, compounded by the cost of seeking further care and patients' perceptions of vivax malaria and its treatment. 93.9% (790/841) of eligible vivax malaria patients who successfully completed referral (429/434) or presented directly to the health center (360/407) were G6PD tested. Key enabling factors included the test's acceptability among health workers and their understanding of the rationale for testing. Only 36.5% (443/1213) of eligible vivax episodes appropriately received primaquine. 70.5% (165/234) of female patients and all children under 20 kilograms never received primaquine. Our findings suggest that access to radical cure requires robust infrastructure and income security, which would likely improve referral rates to health centers, enabling access. Bringing treatment closer to patients, through community health workers and nuanced community engagement, would improve access to curative treatment of vivax malaria.
Reviewer 1 also suggested that the disaggregation by initial point of care for G6PD testing be included in Figure 3 of the manuscript. Figure 3 presents aggregated G6PD testing and treatment data (regardless of initial point of care) once at the health center for clarity and simplification of the figure. The figure indicates that 868 patients were screened for G6PD testing at the health center, 437 of whom were successfully referred to the health center from the community and 430 of whom presented directly to the health center. Therefore, for clarity of the figure, this disaggregation was not added.
Comment 3: The perceptions are important of the utility or cost benefit for taking the 14 days of meds. The low use in females and those under 20kg is important enough to move to abstract.
Reply:
We added treatment-related findings to the abstract on lines 62-64 (page 2): "Of the eligible 1,213 vivax episodes, only 443 (36.5%) were appropriately treated with primaquine. 70.5% (165/234) of female patients and all children under 20 kilograms never received primaquine." As a result of the two suggested additions to the abstract, the word count is now slightly over the 300-word limit. We leave it to the editor to advise if this is acceptable.
Comment 4: The authors should comment on a possible monthly plan to carry G6PD tests to villages to test and treat with primaquine those unable to make the journey to the health center. I know there are refrigeration issues, but these might be overcome. The timing of primaquine is not important after treatment with the blood stage treatment also providing post treatment prophylaxis from new blood stages for a few weeks.
Reply: Thank you for your suggestion and thoughts on alternative ways to bring G6PD testing and primaquine treatment closer to patients. The focus of this study was to assess the currently implemented test and treat strategy for patients with acute vivax malaria (as opposed to a strategy of mass test and treat). In this context, we discuss G6PD testing by community health workers as an alternative to be able to test and treat febrile patients in the community. Discussing potential implications for mass test and treat strategies is outside the scope of this work.
Comment 5: The results section is full of good data. The first paragraph of the discussion will benefit from a 500-word concise summary of important results from both quantitative and qualitative results.
Reply:
We have added more details to the first paragraph of the discussion that summarizes the key findings from the study that will then be further discussed in the remainder of the discussion section. The addition now reads as follows (lines 677-678, page 30): "A lower percentage of females received radical cure compared to males, and children under 20 kilograms but over six months old were not tested and treated as per current policy." We also added a sentence addressing one of Reviewer 2's comments from line 667 to 669 (page 30): "Though our study focused on the Biosensor, findings relating to the test's implementation may be applicable to and relevant for other PoC G6PD tests that might be available and introduced later in Cambodia."
Reviewer 2:
Comment 1: First, a disclaimer: I am not familiar with methods for qualitative research and suggest that a specialist in this area should also evaluate the manuscript.
However, as a malariologist, I can say that the paper is very well written and addresses a question of major public health interest at times of tafenoquine introduction in vivax malaria treatment. Over more than five decades, primaquine has been deployed in Plasmodium vivax-endemic settings with no previous G6PD testing, but in Cambodia and elsewhere in Southeast Asia the relatively high prevalence of severe G6PD deficiency variants in human populations is a major barrier to primaquine use, and surely to tafenoquine use in the near future.
Reply: Thank you for the positive feedback and support regarding the relevance and importance of this research.
Comment 2: There are several findings in this study that are of interest for a broad audience of malariologists and public health specialists. For example, a community health worker (CHW) says: "(…) if we ask [patients] to come back to the village for the treatment [which is only available after testing], they will not come because they can't leave their personal belongings in the forest, [...], they will not come because they earn nothing yet." As a consequence, only 49.2% of eligible patients seen by CHWs reached health centers for G6PD testing. It remained unclear to me what happened to those patients who remained G6PD-untested? Were they treated with chloroquine alone for 3 days (either followed or not by weekly chloroquine) or left untreated? This is a key point, as the policy of requiring previous G6PD testing before providing radical vivax malaria treatment might actually prevent some patients from receiving any treatment!

Reply 2: Thank you for highlighting this important consideration. Individuals who are diagnosed with vivax malaria by community health workers in Cambodia are provided with schizontocidal treatment (ASMQ) before being referred to the health center for G6PD testing and primaquine treatment, to make sure they are receiving at least schizontocidal treatment. We have clarified this in the "Overview of routine vivax case management policy in Cambodia" sub-section of the methods section (lines 197-198, page 8): "Routine G6PD testing was only introduced at health centers, hence patients diagnosed with vivax malaria by community health workers (CHWs) were treated with schizontocidal drugs in the community but required referral to a health center for testing prior to primaquine administration." We also added clarification in the discussion section (lines 743-746, page 33): "Despite health center staff adopting the Biosensor at the health center in over 90% of cases, access to G6PD testing and radical cure was still limited by significant challenges with referral from the community where patients were diagnosed with vivax malaria and provided schizontocidal treatment; less than half of eligible patients were successfully referred, putting into question the overall feasibility and appropriateness of G6PD testing at the health center."

Comment 3: Overall, the discussion is well balanced. I particularly like the following comment: "Portrayal of primaquine [I would add: also of tafenoquine] as a cure or 'magic bullet' for vivax malaria could erode trust in both the treatment and healthcare providers". However, I miss comments about the barriers to the (future) implementation of tafenoquine in settings like Cambodia. Tafenoquine was the actual trigger of the renewed interest of diagnostic companies in developing point-of-care diagnostics for G6PD deficiency. If G6PD testing is not made widely available to all eligible patients, even those living in remote villages or deep in the forest, tafenoquine will never be successfully implemented in Cambodia.
Reply: Thank you for your thoughtful comment regarding the study findings' relevance to tafenoquine and appreciation of the 'magic bullet' discussion. We agree with the reviewer that the findings are relevant for tafenoquine introduction as well. However, in Cambodia the current use of ACTs precludes consideration of tafenoquine introduction. To address this comment we have added the following to the discussion section (lines 779-783, page 34-35): "The availability of PoC G6PD diagnostics enables the consideration of higher, more effective primaquine doses (67) as well as single dose tafenoquine (68). While the use of tafenoquine is not currently an option in Cambodia because of the drug's restriction for use with chloroquine only, ensuring adequate access to G6PD testing will be crucial for the broad roll out of these novel treatment options."

Comment 4: I am a bit annoyed by the tendency to equate "point-of-care G6PD testing" with a particular product (STANDARD G6PD test, SD Biosensor), although several other relatively simple (and quantitative) diagnostic methods have been developed and can be used, if not by CHWs in remote villages, at least by technicians with very basic laboratory skills in small towns and cities. Relying on a single product, from a single manufacturer, to implement a countrywide policy of radical malaria cure seems too risky, given potential issues with timely procurement, distribution, and maintenance of the handheld devices and consumables. A comment on this topic might perhaps be appropriate, at the authors' discretion.
Reply:
We have added a sentence addressing the applicability and relevance of our study findings to other potential G6PD test options to acknowledge that the Biosensor is one option, but the one that is currently being used. The sentence highlighted in italics was added to the beginning of the discussion from line 667 to 669 (page 30): "Cambodia's national malaria program has implemented G6PD testing using the Biosensor as part of vivax case management up to the health center level. Though our study focused on the Biosensor, findings relating to the test's implementation may be applicable to and relevant for other PoC G6PD tests that might be available and introduced later in Cambodia." Furthermore, how the test came to be chosen and implemented in Cambodia is mentioned in the introduction highlighting previous testing of the qualitative CareStart RDT.
Please let us know if you require any more clarification with regards to the amendments made. All authors have reviewed the revised manuscript and approved it for re-submission. Thank you very much for your time.
Sincerely, Sarah Cassidy-Seyoum PhD Candidate and corresponding author | 2024-07-21T05:22:08.681Z | 2024-07-19T00:00:00.000 | {
"year": 2024,
"sha1": "7671eec11e471118c0c9254cd6c40c32cd5b3eca",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "6de023b0ec2444f6323e9d5d5d134c148f807b8a",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
245153358 | pes2o/s2orc | v3-fos-license | Arginine Depletion in Human Cancers
Simple Summary Thousands of cancer genomes are now publicly available which has led to new insights into the underlying features of cancers. These include the identification of mutational signatures at both nucleotide and amino acid levels. Here, we discuss C > T transitions as a key nucleotide-level mutational signature that leads to a dramatic overrepresentation of arginine substitutions in cancers. We propose that this underlying C > T mutational signature canalizes possible arginine substitution outcomes, favoring histidine, cysteine, glutamine, and tryptophan. This initial asymmetry is then acted on at the amino acid level by purifying selection. Thus, a model of “sequential selection” could explain the documented bias towards arginine substitutions in multiple cancers. Abstract Arginine is encoded by six different codons. Base pair changes in any of these codons can have a broad spectrum of effects including substitutions to twelve different amino acids, eighteen synonymous changes, and two stop codons. Four amino acids (histidine, cysteine, glutamine, and tryptophan) account for over 75% of amino acid substitutions of arginine. This suggests that a mutational bias, or “purifying selection”, mechanism is at work. This bias appears to be driven by C > T and G > A transitions in four of the six arginine codons, a signature that is universal and independent of cancer tissue of origin or histology. Here, we provide a review of the available literature and reanalyze publicly available data from the Catalogue of Somatic Mutations in Cancer (COSMIC). Our analysis identifies several genes with an arginine substitution bias. These include known factors such as IDH1, as well as previously unreported genes, including four cancer driver genes (FGFR3, PPP6C, MAX, GNAQ). We propose that base pair substitution bias and amino acid physiology both play a role in purifying selection. This model may explain the documented arginine substitution bias in cancers.
Introduction
Mutation is an essential feature of biology. It is the most important contributor to the cellular transformations that cause cancer and other diseases and is the primary source of variation acted on by evolution [1][2][3]. Point mutations caused by base pair substitutions, insertions, or deletions are common in human cancers and many other diseases [4]. Base pair substitutions alter the sequence of 64 different codons that code for the 20 amino acids. Point mutations can be silent (no amino acid change), missense (one amino acid is changed to another), nonsense (one amino acid is changed to a stop codon), or frameshifts (insertion or deletion of one or two base pairs).
Of the twenty amino acids, arginine appears to have a central role in gene expression, protein structure and function, and genome evolution. For example, arginine codons play a major role in determining the rate of protein translation [5][6][7], and the positive charge of the arginine side chain is critical for stabilizing protein tertiary structure [8,9]. Further, arginine is subject to a number of post-translational modifications including methylation, acetylation, ubiquitylation, citrullination, and mono-ADP-ribosylation, that impact a wide range of cellular processes such as epigenetics, signal transduction, and DNA damage response [10][11][12][13][14][15]. At the evolutionary level, differences in usage of the six arginine codons can be used as a species classification tool across the three domains of life [16]. Finally, in human cancers, the CGA arginine codon is most frequently mutated to a stop codon (nonsense) [17]. Thus, arginine is arguably one of the most important amino acids in biology.
In the last 10-15 years, numerous analyzed cancer genomes have been made available to the public. These include projects such as The Cancer Genome Atlas (TCGA), The Sanger Cancer Genome Project, and The Cell Lines Project. Genomes have also been made available through various user-friendly databases and collaborations such as the Catalogue of Somatic Mutations in Cancer (COSMIC) and the International Cancer Genome Consortium (ICGC), which compile these data and link them to independent studies from the literature [18][19][20]. This has allowed a largely unbiased analysis of mutation patterns in cancer cells. One observation is that arginine is the most frequently mutated amino acid in human cancers, with a tendency towards arginine loss [21]. In this paper, we review key findings in the literature and provide independent validation and additional data supporting some of these observations.
Data Processing
A file with arginine mutations in all cancer tissues was downloaded as a comma-separated values (.csv) file from the COSMIC database (https://cancer.sanger.ac.uk/cosmic accessed on: 1 May 2021, version 94, hg38). For this analysis, only point mutations such as missense, nonsense, and silent mutations were studied and included in the working dataset. COSMIC provides data fields such as chromosome number, genomic position, mutated amino acid residue, and the specific nucleotide change. However, the arginine codon that was mutated, and the codon of the resulting mutated amino acid residue, are not provided in COSMIC. Codon information for each point mutation was retrieved from Ensembl using the Newman application program interface (API) requesting program. The code for this program is included in Figure S1, and can be accessed through a GitHub repository with additional documentation (https://github.com/devinelakurti/Newman-API-Requesting-Program/tree/main deposited on: 2 November 2021). Figure S2 illustrates the details of the retrieval process of the arginine codon using the specific data fields given in COSMIC. This schematic illustrates how the data fields provided in COSMIC (chromosome number and genomic position), and the calculated position of mutations, were translated into genomic position ranges of codons. For instance, variable x in the genomic position range represents the genomic position of the point mutation. Mutation sense determines whether the output codon from the API requires analysis of the reverse and complement. Genomic position ranges are thus determined both by whether mutations fall on the Watson or the Crick strand and the position of the mutation in the arginine triplet codon. Based on the retrieved arginine codon, the specific nucleotide change, and the mutated amino acid residue, a program was developed to automate the process of identifying the codon of the resulting mutated amino acid (Figure S1). The logic of this program is based on possible changes of all arginine codons (Figure 1A, Table S1). After the retrieval of both the arginine codon and the codon of the resulting mutated amino acid, both sets of information were integrated into the rest of the COSMIC dataset for further analyses. We also identified genes with a clear skew towards cysteine, histidine, glutamine, or tryptophan. Genes were called "skewed" if a minimum of 60% of all arginine substitutions produced one amino acid (e.g., histidine) at the expense of the others. To minimize statistical aberrations, genes with fewer than 40 independent tumor samples contributing to this skew were excluded.
Figure 1. (A) Each arrow points to a base pair change within the six codons and its corresponding amino acid substitution. For simplicity, each base of the six codons is color-coded to correspond with the arrow. For example, if the first red C base of the CGG codon is changed to a T, it will produce a TGG codon which substitutes tryptophan for arginine. Note that some substitutions are more likely than others. Synonymous substitutions are not shown. (B) Arginine codon usage in human cancer and non-cancer cells.
The graph shows all the reported arginine substitutions in cancer cells on COSMIC (765,956 counts in 69,455 unique cancer samples, blue bars) compared with observed arginine codon usage in human non-cancer cells (orange bars) as reported on GenScript (https://www.genscript.com/tools/codon-frequency-table, accessed on 5 November 2021). Values are plotted as a percentage of each codon usage compared with total usage. These data largely agree with [22].
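The codon-retrieval step described above, which maps a COSMIC chromosome/position pair and the mutation's position within the triplet to a genomic range, fetches the reference sequence, and reverse-complements it for genes on the minus strand, can be sketched as follows. This is a minimal illustration of that logic under stated assumptions, not the authors' Newman program: the Ensembl REST sequence endpoint is real, but the input conventions and the example coordinates are placeholders.

```python
# Minimal sketch of retrieving the reference codon overlapping a point
# mutation, following the logic described in the Data Processing section.
# Assumptions (hypothetical): GRCh38 coordinates, pos_in_codon is 1, 2, or 3,
# and strand is +1 or -1 for the coding strand.
import requests

COMPLEMENT = str.maketrans("ACGT", "TGCA")

def fetch_codon(chrom: str, pos: int, pos_in_codon: int, strand: int) -> str:
    # Translate the mutation position into the genomic range of its codon.
    # On the plus strand, a mutation at codon position k means the codon
    # starts at pos - (k - 1); on the minus strand the codon runs the
    # opposite way along the genome.
    if strand == 1:
        start = pos - (pos_in_codon - 1)
        end = start + 2
    else:
        end = pos + (pos_in_codon - 1)
        start = end - 2
    url = (
        f"https://rest.ensembl.org/sequence/region/human/"
        f"{chrom}:{start}-{end}?content-type=text/plain"
    )
    seq = requests.get(url, timeout=30).text.strip().upper()
    if strand == -1:
        # Reverse-complement so the codon reads in the coding direction.
        seq = seq.translate(COMPLEMENT)[::-1]
    return seq

# Example call for a second-position mutation on a minus-strand gene;
# the coordinates here are illustrative and not verified values.
print(fetch_codon("2", 208248389, 2, -1))
```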
To calculate the control percentages reported in Figure 3, all point mutations reported in COSMIC Mutation were compiled in a dataset called coding control. All silent mutations were filtered out to create a separate dataset called silent coding control. Coding control percentage for each amino acid was calculated with the numerator being the number of point mutations that resulted in that particular amino acid and the denominator is all the coding point mutations reported in the coding control dataset. Similarly, silent coding percentage for each amino acid was calculated with the numerator being the number of silent mutations that code for a particular amino acid and the denominator is all the silent mutations reported in the silent coding control dataset. Additionally, another dataset called "non-coding control" was compiled with all point mutations from the "noncoding variants" data in COSMIC. With this noncoding control dataset, control percentages were calculated for each nucleotide change. The numerator is the number of point mutations with the specific nucleotide change of interest and the denominator is the total number of noncoding point mutations in the dataset. All data were analyzed in IBM SPSS, v27.
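The control percentages just described are simple conditional proportions over the mutation tables. Below is a hedged sketch of the same computation in pandas (the authors used IBM SPSS); the file and column names ('mutation_type', 'resulting_aa', 'nucleotide_change') are assumptions for illustration.

```python
# Sketch of the control-percentage calculations described above, using
# pandas in place of SPSS. File and column names are hypothetical.
import pandas as pd

coding = pd.read_csv("cosmic_point_mutations.csv")        # hypothetical export
noncoding = pd.read_csv("cosmic_noncoding_variants.csv")  # hypothetical export

# Coding control: share of all coding point mutations yielding each amino acid.
coding_control = coding["resulting_aa"].value_counts(normalize=True) * 100

# Silent coding control: restrict to silent mutations, then take shares.
silent = coding[coding["mutation_type"] == "silent"]
silent_control = silent["resulting_aa"].value_counts(normalize=True) * 100

# Non-coding control: share of non-coding point mutations per nucleotide
# change (e.g., "C>T"), a baseline free of protein-level selection.
noncoding_control = noncoding["nucleotide_change"].value_counts(normalize=True) * 100

print(coding_control.head())
print(silent_control.head())
print(noncoding_control.head())
```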
Non-Synonymous Substitution Bias of Arginine Codons
Six synonymous codons are used for arginine (Figure 1A), and base pair substitutions in these codons can generate twelve different amino acids (not including synonymous changes) and a stop codon. In addition, arginine is one of only two amino acids for which substitutions in the first codon position can result in synonymous change (the other is leucine). There are 54 possible substitutions from the six arginine codons. Four codons (AGA, AGG, CGA and CGG) can produce synonymous substitutions from mutations in the first position (Table S1). All other synonymous changes result from mutations in the third position. However, in mutations leading to amino acid substitution, over 75% of all arginine substitutions occurring in a cancer context are histidine, cysteine, tryptophan, or glutamine [21]. Interestingly, this skew also resembles evolutionary mutation profiles for arginine [23], suggesting similar selection biases operating in both cancer and evolutionary (speciation) contexts. These findings point to a non-random pattern of amino acid substitutions in human cancers [24].
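The combinatorics above, namely the 54 possible single-base substitutions across the six arginine codons, the eighteen synonymous changes, and the two stop codons, can be verified directly against the standard genetic code. The short script below is an independent check of those counts, not part of the authors' pipeline.

```python
# Enumerate every single-nucleotide substitution of the six arginine codons
# and classify the outcome (synonymous, missense, or nonsense) using the
# standard genetic code.
from collections import Counter

BASES = "TCAG"
# Standard genetic code as a 64-character string in TCAG codon order.
AA = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODE = {a + b + c: AA[16 * i + 4 * j + k]
        for i, a in enumerate(BASES)
        for j, b in enumerate(BASES)
        for k, c in enumerate(BASES)}

ARG_CODONS = ["CGT", "CGC", "CGA", "CGG", "AGA", "AGG"]
outcomes = Counter()
for codon in ARG_CODONS:
    for pos in range(3):
        for base in BASES:
            if base == codon[pos]:
                continue  # not a substitution
            mutant = codon[:pos] + base + codon[pos + 1:]
            aa = CODE[mutant]
            if aa == "R":
                outcomes["synonymous"] += 1
            elif aa == "*":
                outcomes["nonsense"] += 1
            else:
                outcomes["missense:" + aa] += 1

print(sum(outcomes.values()), "total substitutions")  # expect 54
print(outcomes)  # expect 18 synonymous, 2 nonsense, 12 missense amino acids
```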
Organisms commonly display preferences for certain synonymous codons over others [25][26][27] (Figure 1B). This codon usage bias has a major role in gene expression, regulating translation speed and protein folding [28][29][30], as well as mRNA structure, processing, and stability [31][32][33][34]. Additionally, in cancer cells, codon usage is optimized to accommodate high translation of cell cycle regulatory genes [35]. Codon usage bias is species-specific [36], with biases in arginine usage correlating with speciation [16]. In humans, four codons (AGA, AGG, CGG, CGC) are each used approximately 20% of the time whereas two (CGA and CGT) are used only ~10% of the time (Figure 1B). In vertebrates, an increased preference for G/C-ending codons (base at third position) correlates with an increase in G/C bias across the genome [37,38]. With the exception of AGA, arginine codons generally follow this pattern. For instance, mutations in the CGC, CGG, and CGT codons are most likely to produce non-synonymous substitutions of arginine, and previous analyses of COSMIC v78 (~18,000 cancer samples) show these three codons (CGC, CGG, CGT) and a fourth (CGA) account for most arginine substitution biases [22]. Our present analyses of COSMIC v94, which contains over 68,000 samples, came to a similar conclusion (Figure 1B), supporting the idea that this observation reflects a biological rather than a technical bias. Remarkably, we also find that CGC, CGG, and CGT are three of the four most likely codons to generate synonymous arginine substitutions (the other is CGA) (Table S1).
Molecular evolution of genomes was initially proposed to occur through a combination of neutral evolution and genetic drift [39][40][41]. This theory postulates that most deleterious mutations are eliminated by natural selection whereas genetic drift fixes mainly neutral mutations that do not drastically change the phenotype. Conversely, fixation of mutations that greatly change the phenotype is very rare. Other models have argued that substitution is subject to a combination of purifying (or negative) selection, which eliminates deleterious mutations, and positive selection, which promotes fixation of beneficial mutations. Page and Holmes argue that these two models are distinct, as evolution would occur by chance with neutral selection and by necessity with purifying selection [42]. The substitution bias of arginine in both cancers and evolution supports a model in which purifying selection drives the evolution of human cancer genomes. In practice, this would mean that certain amino acids are not tolerated when substituted in the wild-type position of arginine [43], and are subsequently "purified" or eliminated. Thus, the only observable mutations in the population would be ones which replaced arginine with a tolerated amino acid.
This analysis, in conjunction with the fact that four of the six arginine codons account for most mutations, indicates that the increased frequency of non-synonymous amino acid substitutions of arginine in human cancers is not merely a statistical consequence of usage bias or the mutation possibility of its six codons. Instead, it suggests that it is an outcome of selection on specific amino acid substitutions in key codons that promote cellular transformation and cancer progression. Arginine substitutions in human cancers thus appear to be driven by purifying selection rather than neutral selection.
Arginine Substitutions in Human Cancers Are Driven Mainly by C/G > T/A Transitions
Base pair changes fall into two general categories: transitions (purine-to-purine or pyrimidine-to-pyrimidine) and transversions (purine-to-pyrimidine or pyrimidine-to-purine) [44]. Despite twice as many possible transversions, most mutations that drive evolution are transitions [45][46][47]. A statistical study also showed that transitions outnumbered transversions in human evolution, at least since the divergence from rodents [23]. This parallels mutation signatures in cancers [48] and even quiescent cells [49] which have a higher burden of transitions than transversions. Further, mutation may accumulate independently of DNA replication, suggesting errors during cell division are not the only determinant of mutation [50]. Our analysis of COSMIC v94 shows that most base pair substitutions in cancer genomes are C > T and G > A (Figure 2A) and there is no strand bias for either C > T or G > A mutations (Figure 2B), which agrees with previous analyses [22]. Note that if we consider G > A to be the same as C > T, it is possible to envision how most arginine substitutions could be generated by a C > T transition. In this model, the G > A would become C > T within one round of DNA replication; therefore, the two would look like the same mutation.
Base pair substitution frequencies of the six arginine codons immediately reveal a selection preference for C > T and G > A [22]. The C > T substitution, in particular, is indicative of a selection bias, as C occurs in the first position of four of the codons (CGA, CGG, CGT, CGC; Figure 1A). Substitution of C for T in this first position produces stop (CGA to TGA), tryptophan (CGG to TGG), or cysteine (CGT to TGT; CGC to TGC). Note that a third position C > T mutation in CGC (to CGT) is silent (Figure 1A), which may explain why CGC is the most frequently mutated codon. However, the high frequency of CGG to TGG and CGT to TGT indicates a clear selection bias for tryptophan and cysteine, respectively, whereas the CGA to TGA mutation occurs very rarely because it introduces the stop codon [17]. G > A mutations can produce substitutions in all six codons. Remarkably, a high percentage of mutations convert the CGA codon to CAA (arginine to glutamine). The second most mutated codon is AGG which can be converted to lysine (AAG) or is silent (AGA). Mutations in the other codons produce histidine (CGC to CAC; CGT to CAT), glutamine (CGG to CAG), silent (CGG to CGA), or lysine (AGA to AAA) (Supplementary Table S1). G > T mutations are most frequent for the AGA codon, which results in isoleucine (AGA to ATA), or the AGG codon, which results in methionine (AGG to ATG) or serine (AGG to AGT).
These analyses uncover a strong codon substitution bias, in which 75% of arginine substitutions are driven by C/G > T/A transitions in cancer genomes (Figure 2A and [22]). Remarkably, only four of the six arginine codons contribute to these substitutions (Figure 2C). These C/G rich codons permit C/G > T/A transitions that substitute arginine for four different amino acids (cysteine, glutamine, histidine and tryptophan). COSMIC lists six signatures rather than 12 [19,48,51] as the other half can be generated by mutations on the other strand. In other words, a C > T transition on one strand yields a G > A transition on the other strand, and when coupled with replication, both transitions generate the same mutation [52]. Certain cancers seem to show a bias for coding vs. non-coding strands, as suggested by other studies [49]. However, as our analyses compile data across all cancers, this bias is not obvious.
Instead, our analyses indicate a bias for both C > T and G > A transitions in coding regions (Table S2). Specifically, we find approximately twice as many transitions in coding regions (30.88% for C > T and 44.44% for G > A) as in non-coding regions (16.73% for C > T and 17.02% for G > A). In addition, these transitions are more likely to occur within arginine codons than other codons (30.88% arginine vs. 25.18% total for C > T; 44.44% arginine vs. 26.49% total for G > A).
C > T transitions can be produced by deamination of CpG sites [53]. Indeed, mutational signatures due to deamination have been identified in cancer cells [48]. "Clock-like" mutational signatures (i.e., mutations that occur during the lifetime of a cell irrespective of its identity) appear to be a major producer of C > T transitions in cancer cells [51]. However, a study in yeast found that decreased processivity of polymerase delta resulted in primarily C > T transitions, suggesting several mutation processes may be at work [54]. G > A transitions, on the other hand, can be produced by guanine oxidation [22,52]. Regardless of mechanism, we do not find a strand bias for either C > T or G > A when compiling data for all cancers (Figure 2B), which largely agrees with previous findings [22].
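The strand equivalence noted above, whereby a C > T on one strand is a G > A on the other (the reason COSMIC reports six substitution classes rather than twelve), amounts to re-expressing every substitution with a pyrimidine reference base. A minimal sketch of that normalization follows; the convention is standard in mutational signature analysis, but the function and variable names are ours.

```python
# Collapse any single-nucleotide substitution to one of the six canonical
# pyrimidine-reference classes (C>A, C>G, C>T, T>A, T>C, T>G) by taking the
# complement when the reference base is a purine.
COMP = {"A": "T", "C": "G", "G": "C", "T": "A"}

def canonical_class(ref: str, alt: str) -> str:
    if ref in "AG":  # purine reference: read the change off the other strand
        ref, alt = COMP[ref], COMP[alt]
    return f"{ref}>{alt}"

assert canonical_class("G", "A") == "C>T"  # G>A and C>T are the same event
assert canonical_class("C", "T") == "C>T"
# All twelve raw substitutions reduce to six canonical classes:
print(sorted({canonical_class(r, a) for r in "ACGT" for a in "ACGT" if a != r}))
```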
Purifying Selection at the Amino Acid Level May Be Strongly Biased by Selection at the Nucleotide Level
It has been observed that amino acid substitutions in cancer cells are not completely random [21,22,43,55,56]. These analyses revealed that arginine mutations in cancer genomes are strongly biased towards cysteine, glutamine, histidine, and tryptophan. Given that 33% of base pair substitutions in arginine are synonymous (Table S1), a silent mutation should have been the most frequently observed change. Instead, arginine synonymous substitutions are found with approximately three-fold lower frequency than predicted [22]. One argument for the bias in arginine mutations, particularly the most prominent Arg > His mutation, is that these mutations are adapted to the elevated pH in cancer cells [57].
However, an analysis of the arginine codons responsible for this amino acid bias reveals that they are CGC and CGT (cysteine), CGC and CGT (histidine), CGA and CGG (glutamine), and CGG (tryptophan) (Figure 3). Arginine substitutions to each of these amino acids are possible from two different codons (Table S1), and cysteine and histidine have a roughly equal chance of being generated from either codon. Interestingly, this is not the case for glutamine and tryptophan. Our reanalysis, which takes into consideration individual codons, found that for glutamine, substitutions from the CGG codon occur at less than half the frequency of the CGA codon (Figure 3). This reveals strong selection for CGA-driven glutamine substitutions, especially considering the genome usage bias of the CGA codon is half that of the CGG codon (Figure 1B). Similarly for tryptophan, only ~10% of the substitutions are due to mutations of the AGG codon despite virtually no difference in genome usage biases of AGG vs. CGG. Taken together, these data indicate that mutation bias occurs first at the base pair level (i.e., nucleic acid and codon) followed by potential purifying selection at the amino acid level.
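The codon-level asymmetry described here, for example glutamine arising far more often from CGA than from CGG, falls out of a simple two-way tally of source codon against resulting amino acid. A hedged pandas sketch with hypothetical file and column names follows; the one-letter amino acid labels are also an assumption.

```python
# Tally arginine substitutions by source codon and resulting amino acid to
# expose codon-level asymmetries such as the CGA vs. CGG glutamine skew.
# File and column names ('arg_codon', 'resulting_aa') are hypothetical.
import pandas as pd

muts = pd.read_csv("arginine_substitutions_with_codons.csv")
table = pd.crosstab(muts["resulting_aa"], muts["arg_codon"])

# For each resulting amino acid, the percentage contributed by each codon.
share = table.div(table.sum(axis=1), axis=0) * 100
print(share.loc[["Q", "W", "H", "C"]].round(1))  # assumes one-letter codes
```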
Cancer or Gene Specific Arginine Mutation Bias
Arginine depletion is common to all cancers and is a hallmark of multiple tumor suppressors [22]. However, cancer types can be characterized by different mutational signatures [48,58], with some showing strong and specific biases towards certain amino acid substitutions. For instance, many cancer types show a clear arginine to histidine bias [55], occurring in both tumor suppressor and non-tumor suppressor proteins. One excellent example of this is gliomas, which show a strong preference for the R132H mutation of isocitrate dehydrogenase (IDH1; [56]). This particular substitution produces a metabolic byproduct that appears to increase the oncogenic potential of gliomas by interfering with histone demethylases and increasing oxidative species-related DNA damage [59]. Perhaps counterintuitively, the presence of this mutation is associated with better prognoses compared with glioma patients with a wildtype IDH1 [60][61][62][63]. This appears to be related to the low NADPH production levels in IDH1 mutant cells which renders patients sensitive to therapy [64].
We generated a complete list of genes showing skews in arginine substitution biases for histidine, cysteine, glutamine, and tryptophan (Table 1 and Table S3; see Materials and Methods for criteria). We classified genes as driver and non-driver based on a recently published characterization [65]. Under this classification, IDH1 is a driver gene. How can this be reconciled with the fact that IDH1 mutations are associated with favorable prognosis? It appears that if IDH1 mutations occur early, they have an adverse effect on DNA damage repair [66], as well as other cellular transformation processes [59], including TERT reactivation [67] and chromatin remodeling [68]. Our analysis also identified four other driver genes (FGFR3, PPP6C, MAX, GNAQ; Table 1). Fibroblast growth factor receptor 3 (FGFR3) is a well-established cancer driver gene in several cancers, and small-molecule inhibitors of this gene are used as therapeutic agents [69]. Protein phosphatase 6 (PP6, encoded by PPP6C) encodes the catalytic subunit of a PP2A-like phosphatase [70], a molecular regulator of RAS and other RAS-associated pathways (e.g., BRAF/MEK/ERK) involved in cell proliferation [71]. PP6 participates in many processes including DNA damage repair, inflammation, and the immune response, and PP6 mutations are associated with tumor progression [72]. MAX is a cofactor of MYC and other MYC-related transcription factors involved in cell proliferation [73,74]. MAX has tumor-suppressive functions [75]. GNAQ encodes the alpha subunit of a heterotrimeric G-protein and mutations are associated with certain melanoma cancers [76].
Table 1 footnotes: The table colors correspond to arginine substitution biases towards the four amino acids (cysteine, histidine, glutamine and tryptophan). 2 Only genes with a substitution bias over 60% are shown. Please see Supplementary Table S3 for further details. 3 Genes in bold are characterized as driver by Martinez-Jimenez et al. [65].
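The skew criterion used to build Table 1 (at least 60% of a gene's arginine substitutions converging on a single amino acid, supported by at least 40 independent tumor samples) is straightforward to express as a grouped filter. A minimal sketch with hypothetical column names ('gene', 'resulting_aa', 'sample_id'):

```python
# Sketch of the "skewed gene" filter described in Materials and Methods:
# flag genes where one amino acid accounts for >= 60% of arginine
# substitutions, backed by >= 40 independent tumor samples.
import pandas as pd

muts = pd.read_csv("arginine_substitutions.csv")  # hypothetical file

skewed = []
for gene, grp in muts.groupby("gene"):
    shares = grp["resulting_aa"].value_counts(normalize=True)
    top_aa, top_share = shares.index[0], shares.iloc[0]
    # Count distinct tumor samples contributing to the dominant substitution.
    n_samples = grp.loc[grp["resulting_aa"] == top_aa, "sample_id"].nunique()
    if top_share >= 0.60 and n_samples >= 40:
        skewed.append((gene, top_aa, round(top_share * 100, 1), n_samples))

print(pd.DataFrame(skewed, columns=["gene", "aa", "pct", "samples"]))
```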
Skewed genes include a number of additional factors with established roles in cancer (e.g., BMP8A and BUB1 [77,78]), as well as others (Tables 1 and S3). The candidates identified in this study did not show any obvious protein class preferences (e.g., kinases versus transcription factors). Indeed, skewed genes impact a wide range of cellular processes such as intracellular signaling, cytoskeletal architecture, metabolism, and mitosis (Tables 1 and S3). Arginine depletion in cancers thus appears to target genes that are likely to increase the transformation and proliferative potential of cells. In addition, our analyses identify a number of other poorly characterized genes (e.g., OR1L6 or OR4CC3) which may be high-confidence candidates to modulate tumor progression, proliferation, and/or metastasis.
Conclusions and Perspectives
The combination of thousands of publicly available cancer genomes and advanced computational techniques has enabled unprecedented insight into common and distinct features of cancers. These include mutational signatures at the nucleotide level and skews or biases at the amino acid level. For instance, multiple studies, including this one, identify C > T transitions as the dominant mutational signature underlying the dramatic overrepresentation of arginine substitutions in cancers. We propose that these two features are linked. Specifically, an underlying C > T mutational signature canalizes possible arginine substitution outcomes, creating an initial asymmetry in favor of histidine, cysteine, glutamine, and tryptophan. Purifying selection acting at the amino acid level then reinforces this asymmetry, which can occur in a protein-or tissue-dependent manner. For example, stomach cancers show a pronounced Arg > His bias, whereas skin cancers have a strong Arg > Cys bias [35]. Determining why such context-dependent behaviors happen, and whether this model of "sequential selection" extends to other signatures, are important next steps for the field. | 2021-12-16T16:10:15.211Z | 2021-12-01T00:00:00.000 | {
"year": 2021,
"sha1": "e7f7ebb9cd3e97eb773fb413d80d04b59b691dc0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/13/24/6274/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "154da5261136e9b611a7a55a09bd67310da8c995",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
266132052 | pes2o/s2orc | v3-fos-license | Children’s multisensory experiences in museums: how olfaction interacts with color
This case study was designed to engage children's sense of smell through a story-related museum exhibition. Children's responses to the exhibition, with particular attention to their olfactory perceptions of the odors at the exhibition, were solicited through researcher-child interviews and children's drawings. Responses from 28 children (girls N = 14, boys N = 14) aged 4.5-8 years were analyzed, using cross-modal association and multisensory theories, after they visited the exhibition. Interview data showed that dark (brown and black) colors elicited children's negative olfactory associations for both positive and negative odors. Children's drawings did not seem to make references to the odors at the exhibition but rather to their preferences for the different story characters. We theorize about the associations between smell and colors in children's responses and distil some key learnings for multisensory museology.
Introduction
The vision of accessible, inclusive and universal museum spaces has been at the core of participatory approaches since the 1970s, and has recently been reinvigorated with the focus on visitors' multisensory stimulation. The complex ways in which senses are combined (multisensory integration) in appreciating the aesthetic qualities of objects and environments (Howes, 2006) have caught the attention of museum researchers and curators. The specific interplay of "hidden senses," such as smell, and the "higher senses," such as vision, are only beginning to be elucidated by research and were of specific interest to us in this project. In particular, we build on previous work that highlighted the lack of attention to the engagement of the olfactory sense in museum exhibitions (Ehrich et al., 2021) and the need to study the complex ways in which senses inter- and intra-act in children's everyday experiences (Kucirkova, 2022). The aim of this study was to examine in detail the ways in which children respond to olfactory stimuli in relation to color and other sensory stimuli in a purposefully designed story exhibition for children.
The ways in which senses combine to impact, for example, stability and balance are little known. For instance, haptic input can lead to increased stability, and this is affected by the child's age, with older children (7-9-year-olds) being more stable in their gait and posture than younger (3-5-year-olds) children (the age factor was independent of different levels of touch, such as not touching, holding an object, lightly touching, and firmly touching; Schmuckler and Tang, 2019). What is less known is the impact of multiple sensory stimulations on children's movements in space.
Embodied learning emphasizes the role of the entire body (and not just the brain) in learning and how bodily interactions can be merged and fused with different interactive technologies that incorporate multi-sensory stimuli and augmented or virtual reality (Dourish, 2001). Ale et al.'s (2022) review of embodied cognition studies in child-computer interaction found that in the past 11 years, no studies focused on smell and taste as primary stimuli for embodiment, possibly stemming from the contextual nature and subjectivity of these stimuli, or limited resources and expertise in this area. The authors recommend that future research prioritizes interdisciplinary collaborations to study these senses, a call that we heeded in our project. Furthermore, Verbeek et al.'s (2022) recommendation for more empirical examples of multisensory museology motivated our focus on the role of olfaction within children's multisensory experiences in museums.
Multisensory experience
Museums offer multisensory experiences as visitors move around exhibition spaces with their whole bodies, thus engaging their visual and hearing senses, proprioception (sense of movement and bodily awareness of space), olfaction (sense of smell), and in some exhibits, also touch and taste [see for example Park et al.'s (2022) report of food tourists enjoying varied taste experiences in museum restaurants].
Neurological findings confirm that humans perceive their environments through a converged and combined interaction of individual senses (Spence, 2011). The intensity, enjoyment and memory of an experience depend on the extent to which the relationships between individual senses (the so-called cross-modal correspondences) match or mismatch along some physical, semantic or cognitive characteristics (Driver and Spence, 1998; Knoeferle et al., 2015; Wang et al., 2022). According to the multisensory integration theory (Durgin et al., 2007), the congruence or match between individual senses needs to be spatially and temporally aligned for a smooth and enjoyable sensory experience (Spence, 2009). Senses might also counteract each other's influence (the so-called cognitive load theories, see Kirschner, 2002). In addition, the individual's own assumptions, which are based on previous experiences but also inborn differences (e.g., Iarocci and McDonald, 2006), influence the totality of an experience. It follows that the stimulation of individual senses needs to be balanced with the needs of individual visitors who might have sensory sensitivities, which require adjustments to either avoid or enhance the sensory input in museums (Schwartzman and Knowles, 2022).
For children, who are in the early stages of calibrating their sensory apparatus, attention to the ways in which individual senses inter- and counter-act each other is especially pertinent. These complex processes have been predominantly studied with children who have an impairment in one or more senses, often in the context of technologies and remediation approaches. For example, Güldenpfennig et al. (2020) studied how specially designed tactile prototypes supported the haptic engagement of visually impaired children and triggered more advanced sensory and cognitive functions. For children with the autism spectrum disorder, who exhibit sensory hypersensitivity or hyposensitivity, innovative, research-based approaches that can improve these children's participation in daily activities are vital (Schaaf et al., 2015). The current research and development of accessible sensory technologies can be divided into tools that offer sensory substitution (compensating for lost senses like vision), sensory expansion (broadening current sensory experiences, such as detecting non-visible electromagnetic radiation), and sensory addition (introducing a new sense like magnetoreception, see Eagleman and Perrotta, 2023). But while it is of considerable interest to researchers and software engineers to design environments and resources that would accommodate diverse sensory responses of diverse children, not all types of sensory stimulation (visual, auditory, tactile, taste/smell stimulations) are equally well-studied. Olfactory stimulations are particularly understudied, and that is the case especially for young children.
It is widely known that marketing and consumer research with adults and their olfactory experiences of specific products and places is well-advanced (see Spence and Gallace, 2011 for an overview). For example, the use of ambient scents in galleries has been explored theoretically (Spence, 2020) as well as practically (e.g., in the recent Prado Museum exhibition with odours enhancing Jan Brueghel's painting). However, research and practical examples of a fully multisensory experience that engages all six senses of museum visitors, and especially child visitors, are lagging behind. Following the critique that children's educational experiences are focused on the higher senses of vision and hearing, and linguistic and cognitive forms of engagement, the argument for more multimodal (e.g., Jewitt, 2008) and sensory-oriented (e.g., Mills, 2015) learning has been made.
The sensory turn in children's studies has stimulated scholars' interest in the role of bodily movements across museum spaces (e.g., Hackett, 2016) and children's sonic and music experiences in public areas (Gallagher, 2011). The potential of olfaction for children's literacy learning in particular has been recently highlighted (e.g., Mills et al., 2022). As museums increasingly rest on participatory activities and position children as active and cultural citizens and social participants (Harris and Manatakis, 2013), the need for creating inclusive and empowering multisensory exhibitions that engage all six of children's senses (vision, hearing, touch, smell, taste and proprioception) is even more important. The role of olfaction has been seldom studied in children's museum studies and is the specific focus for our study.
The potential role of olfaction
The loss of olfaction in COVID-19 patients during the global pandemic in 2020-2022 led to increased public awareness of the role of smell in fully experiencing the world and leading a fulfilling life (e.g., see Otte et al., 2022). Olfactory researchers have documented the important predictive value of smell in degenerative diseases such as Parkinson's (e.g., Doty, 2012), as well as the direct link between olfactory processing and emotions (Ehrlichman and Bastone, 1992) and between smell and memory (see Wilson and Stevenson, 2003). The close link between the sense of smell and emotional processing has been harnessed for associative learning and studied in relation to learner motivations (e.g., Herz et al., 2004). For example, research shows a positive contribution of peppermint to alertness and memory (Moss et al., 2008), and positive effects of rosemary, lemon and peppermint on memory and learning performance in videogames (Choi et al., 2022). Students' preferences for individual smells mediate the learning effect, a concept known as the hedonic value of odors. Overall, odors are likely to be an important feature to integrate into museum exhibitions, both in terms of enjoyment and the potential for learning.
The extent to which individuals like specific odors or not has been the subject of several studies concerned with cultural and individual differences in olfactory hedonics. The reason that the hedonic perception of odors is so important is that it constitutes the primary response to odors as acceptable or repulsive (Yeshurun and Sobel, 2010), and that it plays an important survival role in avoiding dangerous places (for example, those that smell of gas or smoke) and finding suitable mating partners (Herz, 2002). While earlier cross-cultural studies documented significant cultural differences between some odors (e.g., with participants from Germany and Japan who differed on which odors they perceived as pleasant or disgusting, Schleidt et al., 1988), more recent systematic comparative data report many similarities (Arshamian et al., 2022; Oleszkiewicz et al., 2022).
Although hedonic perception might differ across populations, the lack of language and olfactory vocabulary to describe different odors (Cain, 1979; Engen, 1987; Majid et al., 2018) is shared across cultures (at least western cultures, see Majid, 2021). The lack of olfactory language is particularly present among young children, who are at the early stages of developing their general awareness of the world and words to describe it (Doty et al., 1984; Cain et al., 1995; Lehrner et al., 1999). Children's olfactory preferences are detectable early on after birth (Schaal, 1988) but change and further develop until adult age (Ventura and Worobey, 2013). Children's olfactory preferences are varied, though often food-related (as shown in our studies in Malawi, Kucirkova and Mwenda Chinula, 2023, and Norway, Kucirkova and Bruheim Jensen, 2023). Recent studies show similarities in children's olfactory perception across countries (Oleszkiewicz et al., 2022) and provide evidence for the predictive value of early odor perceptions for later life (Lindroos et al., 2022). The recent evidence is thus gradually strengthening the argument that children's olfactory perceptions, preferences and experiences need to be more intensively studied and stimulated.
With a few exceptions, such as the Montessori kindergarten curriculum, there is a distinct lack of activities that would engage children's sense of smell and increase their awareness of odors in their environment. Smell remains a largely untapped sense for learning, play and interaction possibilities in early childhood. This gap presents museums with a valuable opportunity, which we were keen to explore and reflect on.
Museums and children's olfaction
Museum studies on children and odors are currently few and far between, yet integrating odors into museum exhibitions has the potential to enhance the overall museum experience and what is learned from the experience (Verbeek et al., 2022). It has been shown that odors dispersed throughout a Viking museum acted as retrieval cues for memories of a museum visit several years later (Aggleton and Waskett, 1999). To the best of our knowledge, our research-based exhibition, which integrated odors with a fictional children's story in a public exhibition at a children's museum, was the world's first. We decided to integrate odors with the story in order to strategically make children aware of their sense of smell during a story experience.
We have described the participatory approach of the academia-museum collaboration in conceptualizing, developing and curating the exhibition (Kucirkova and Gausel, 2023) and the story-related findings (Kucirkova, forthcoming). In this article, we reflect on the lessons learnt from the multisensory and cross-modal integration theories.
To gain insight into hedonic perception in children, we analyzed children's responses about their favorite smells at the exhibition and the possible reasons for their hedonic preferences. Each odor was paired with a color and a story character. This allowed us to explore the ways in which crossmodal associations may influence children's experience of the exhibition. In what follows, we outline the findings based on data collected as part of a research week before the official opening of the exhibition, which we interpret here in light of multisensory and cross-modal integration theories, with attention to their implications for museum curatorship.
Exhibition design
The public exhibition was a collaboration between our university research center and a local children's museum, as well as several other organizations and their representatives, including an olfactory expert, two children's librarians and a children's publisher. Capitalizing on the power of story to guide experiences, and on insights from a prior study in which we interviewed children about their multisensory preferences (Kucirkova and Kamola, 2022), the team members conceptualized an exhibition rooted in a fiction story and augmented with a selection of odors. The story was an adapted version of the traditional fairy tale The Three Little Pigs, which we embedded into an adventure trail that children could follow in the exhibition area. The trail was aligned with the main storyline and consisted of the houses where the three little pigs hid away from the bad wolf chasing them (the straw house, the tree house and the brick house), as well as added props and areas, such as plastic trees, cushions, a pigpen for the Mother Pig, and cushions and books inside the houses.
The design of the exhibition followed the Nordic tradition of nature-based materials wherever possible, with fairly muted colors and reasonable space between the individual props (the exhibition needed to be regulated for the number of visitors to allow for sufficient space). The exhibition was specifically designed for children aged 3-8 years, so all props were child-sized. Nevertheless, there were elements that adults accompanying children could choose to use, such as QR codes on posters above the piglets' houses. The QR codes activated a voice-over for the individual story parts.
Unlike the pigs and their houses, the wolf's story character was not visually represented in any of the posters or images at the exhibition. However, sounds of the wolf's whines and his threat that he would blow the piglets' house down were played throughout the exhibition at regular intervals from the ceiling loudspeakers. While the exhibition was clearly multisensory, with the possibilities for children to move around, sit on cushions and touch all props with their various materials and textures, we were particularly keen to integrate olfactory stimulation into the adventure trail. This was achieved through the selection of five specific odors (aromas) that were embedded in specially designed smell boxes.
FIGURE 1 Children's drawing area.
Olfactory stimulus design
The smell boxes were made of wood and were of 16 × 16 × 16 cm size. Each box corresponded to a specific place in the story, conceptually and spatially, on the adventure trail. The odors were combined as odor mixtures with the following associations: (1) Mother Pig's pigpen with the unpleasant smell of a pig farm, pee and poo, and a yellow color; (2) Pretty Pig with a sweet smell of fruit and candies and a pink color; (3) Reading Pig with a somewhat neutral smell of pine and forest and a green color; (4) Clever Pig with a positive smell of chocolate and cocoa and a brown color; and (5) Wolf's Smell with a negative smell of a wet dog and animal fur and a black color. Each box was easy to open and close with a wooden handle and was screwed to a fixed place inside the pigs' houses or, in the case of the wolf, a tree stump. We called these places "stations" and designed the adventure trail along them, with pink piglets' and red wolf's footstep stickers on the floor. The odors were infused in cotton balls placed at equal distances under a perforated plate that was screwed to the bottom of each box. The colors of the boxes' handles and the perforated plate inside were intended to visually and olfactorily represent the characteristics of the story characters (for example, sweet and pink for the vain personality of the Pretty Pig).
Study design
Participants
While the museum employees, five of whom were active project team members, could informally observe children's interactions during the exhibition, we did not conduct a formal evaluation of the public response to the exhibition. We judged the response very positively, based on the daily footfall and the fact that the exhibition was extended by 4 months by the museum, covered favorably in national and international media, and requested for replication by two other European children's museums in 2023. Our reflection here is based on observations of children who participated in the research week before the exhibition opened and whose responses we had ethical permission to use for research articles. These children were local children from two kindergartens located in the museum's proximity. The children lived in Norway and all spoke Norwegian. Fourteen girls and 14 boys, aged between 4.5 and 8 years, took part. They visited the exhibition in two groups: the first had 8 boys and 6 girls on day one, and the second had 6 boys and 8 girls on day two of the research week.
To understand children's odor hedonic perception, that is, whether they liked or disliked the five odors presented in the smell boxes at the exhibition, we used two non-verbal methods: drawings and pointing.
Drawing method
Drawing is a well-established and popular visual method in qualitative research studies with children, especially if children might struggle to verbalize their feelings and thoughts (Literat, 2013). We selected the drawing method because of the impoverished language both adults and children have for various smells, as well as for its documented power in design evaluation studies with children (e.g., Barendregt and Bekker, 2013), in supplementing researcher observations (e.g., Plowman, 2015) and in representing social dynamics (Martikainen and Hakoköngäs, 2022).
We supplied the children with an A4 paper with black-and-white printed faces of the main characters in the Three Little Pigs story (the Mother Pig, the Pretty Pig, the Reading Pig and the Clever Pig), presented in a vertical column with their key props (e.g., the Reading Pig holding a book). Children were supplied with a stack of pencils that corresponded to the colors of the stations. These were: yellow for the pig farm and Mother Pig, pink for the Pretty Pig, green for the Reading Pig, brown for the Clever Pig and black for the wolf. We also added the colors orange and purple, which did not appear in any of the stations. The children were invited to color in their favorite pig and draw whatever they liked on a separate or the same sheet of paper after they visited the exhibition. There were six pencils in each color so that, if children did not want to share, everyone had equal access to the colors and sheets of paper. The drawing area was set up at the exhibition's entrance with a table big enough for the group of children who visited the area at that time. In addition to the drawing materials, we placed a set of smell boxes on the table. The boxes were exact replicas of the smell boxes inside the exhibition but in a smaller size. This was to prompt children's memory of the smells and to facilitate the pointing method. Figure 1 captures the drawing area with all boxes as prompts for children's experiences.
Pointing method
Our placing of the mini smell boxes in the drawing area was motivated by the objective to ascertain children's olfactory preferences and stimulate a conversation about their olfactory memories from the exhibition. So that children did not need to describe in words or colors which smell they liked most and which they liked least, the researcher asked them to point to the relevant smell box. If the children did not remember the box's smell from the color of the handle, the researcher encouraged them to open the box and asked them directly whether they liked or disliked the smell. These answers were noted down, together with children's verbatim responses about the associations they had with the smells.
Pointing method
Children's answers to the researcher's interview question about which box they liked and which one they disliked, together with their additional comments, are summarized in Table 1.
Children's responses to the individual smells indicated that, for most of them, the fragrances in the brown, black and yellow boxes were perceived as unpleasant, while the fragrance in the pink box was perceived as pleasant. Children generally described the odors with one of two strategies: they either evaluated the smell (e.g., good) or identified the source of the smell (e.g., candies). Only one child used both strategies. Children's pointing to the boxes with the darker colored handles and their accompanying comments indicated that they did not distinguish the chocolate fragrance as positive but rather perceived all three boxes (yellow, brown, and black) as negative. The imitation of the poo smell in the yellow box placed at the beginning of the adventure trail seemed to have influenced children's perception and primed them for a negative perception of all subsequent smells. Furthermore, the choice of colors on the handles and inside the boxes may have influenced children's responses.
Drawing method
Figure 2 shows an arbitrary selection of children's drawings, illustrating the diversity with which children interpreted the coloring task. We systematically analyzed all 28 drawings in relation to the presence and choice of colors and children's choice of their favorite story character. We also looked for any references to smell in children's drawings. In total, 26 children engaged with the task, and of these, two children chose to use only the black color. Four children's drawings showed a clear match with the colors of the story characters as depicted at the exhibition; another four showed a somewhat ambivalent match, with several characters colored in as favorites, some of them in the same color as the exhibition story depictions. However, for the vast majority (N = 18), there was a clear mismatch between the colors children chose and the colors we chose for the individual story characters at the exhibition. We could not identify any olfactory references or qualities in children's drawings. As for the characters, the Mother Pig and the Clever Pig were the most popular characters among the children, according to their drawings.
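To make the tallies above reproducible, the coded drawings can be tabulated programmatically from a simple coding sheet. The sketch below is a hypothetical illustration only: the category labels and the flat list of codes are our own construction, reproducing the counts reported above, and are not part of the study materials.

```python
from collections import Counter

# Hypothetical coding sheet, one match category per engaged drawing,
# reproducing the counts reported above (4 + 4 + 18 = 26 engaged drawings).
codes = ["match"] * 4 + ["ambivalent"] * 4 + ["mismatch"] * 18

tally = Counter(codes)
total = sum(tally.values())
for category, n in tally.most_common():
    print(f"{category:>10}: {n:2d} ({100 * n / total:.0f}%)")
```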
Discussion
Overall, the combination of odors with the story trail as part of the museum exhibition was enjoyed by the children. We used the coloring/drawing method as a way to assess the children's associations with odors without requiring them to verbalize them, which is difficult for both children and adults. However, on reflection, we note that this method was not well suited to the purpose of detecting children's associations and memories from the exhibition. Children's drawings seemed to be a reflection of their spontaneous engagement in the drawing task and their memory of the story of the Three Little Pigs, not of the olfactory qualities of the exhibition. Apart from two comments associating the wolf with all bad smells, none of the children commented on the different smells in terms of story characters. This does not imply that the odors were not beneficial to the museum experience, but perhaps that such an association task needs to be more clearly related to the odors the children smell, for example by asking for the associations immediately after the children smell the odors.
Indeed, it seemed that other parts of their experiences with the exhibition, particularly with the story characters, dominated children's drawings. In a related project in Malawi, where the researchers asked children to portray smells via drawings, we noted a similar methodological limitation of drawings, in that the children captured their story experiences but not olfactory references, as was intended by the researchers (see Kucirkova and Mwenda Chinula, 2023). The embodied cognition theory provides a useful explanatory framework for these findings in that it explains that bodily interactions generate various cognitive responses (see Wilson, 2002). In particular, spatial movements (in our case, children moving in the exhibition space) connect to specific images, places and colors in the brain, the results of which we noticed in children's drawings.
The reactions to the odors and the verbalized associations are informative about hedonic odor perception in children. Children's strong emotional response to the yellow, brown and black boxes, with their mentions of and pointing to the boxes and their adamance that all smelled like 'poo', can be explained by their young age and reflects their current preoccupations and interests (Brown, 2016). The influence of peers at the exhibition, who poked each other and laughed about the unpleasant poo smells, would have further intensified the social influence on children's responses. Nevertheless, there was a clear pattern in children's liking of the pink box and dislike of the brown box. One explanation is that these abstract smells were particularly well composed for this particular group of children. Another explanation, one that aligns with the multisensory theory, is that of sub-additive effects of simultaneous color and odor stimulation. All odors had matching colors, but while in the case of the black color and the negative wolf's smell this was a reinforcing effect, in the case of cocoa and the brown color this was a sub-additive effect, with the color interfering with olfactory perception. This explanation carries implications for future museum studies and we elaborate on it below.
The explanatory value of the multisensory theory
The associations conveyed through the posters and olfactory boxes did not seem to be picked up by the children, as none of their comments regarding the individual odors mentioned the pigs and very few of the drawings matched the colors of the pig characters. What did seem to be a clear pattern was children's responses to their most and least favorite smells and their corresponding colors in the pointing task. In the drawings, 18 of the children's drawings had arbitrary colors. The black and brown colors were attributed to the wolf character, even though in the exhibition the brown color was associated with the positive chocolate smell in the brick house of the Clever Pig.
Unexpectedly, the odor of chocolate was perceived as unpleasant. This is striking considering that it is typically experienced as highly pleasant by adults (Dravnieks et al., 1984) and is likely to be a smell encountered frequently by children in positive contexts. According to the multisensory theory, this was a sub-additive effect of color and olfaction, where the sensory dominance of the visual sense (the color brown) took over the olfactory sense (the smell of chocolate). Whilst there was sensory congruence in that the color matched the smell quality (semantic congruence), and there was temporal and spatial congruence in that the colors were directly on the handles and the plate, thus not far from the olfactory experience in time or space, the dark brown color prompted children to think that the smell inside the box was the same as, or similarly unpleasant to, that in the black wolf box (see Figure 3). Our finding corresponds to studies with adult participants, which found a crossmodal correspondence between odor and color, whereby a change in one directly impacts the other. For example, a wine-tasting study with adult participants found that the color of the wine changed the wine tasters' evaluation of the wine's odor (Morrot et al., 2001). The color brown is likely to have negatively influenced the perceived pleasantness of the odor due to existing associations between brown and disgusting objects (e.g., poo and rotten food; Palmer and Schloss, 2010). It is possible that seeing the color brown before smelling the odor already led to negative associations. Another possible influence of color that we had not anticipated was the color congruence between the color of the pigs depicted on the posters (pink) and the pink color of the smell box for the candy odor. Whilst the odor of candy is likely to be perceived as highly pleasant anyway, it is possible that the color congruence heightened such pleasant associations. Other visual features of the smell boxes may also have influenced perception of the odors. Angular shapes, such as the one used in our smell boxes, tend to be associated with more intense and unpleasant odors (Demattè et al., 2006). If our unpleasant smells had been presented in round boxes, children may have perceived them more positively (see also Adams and Doucé, 2017). Furthermore, the fact that children physically manipulated the box to open and close it added haptic stimulation, which could have intensified the perception of the odors by potentially making them too intense at the first encounter. Supportive evidence for this interpretation comes from adult studies (e.g., Delwiche and Pelchat, 2002), so we can only speculate about it here. It has been suggested, however, that some associations between odor and shape, texture, and color may not develop until after age 6 (Speed et al., 2021). Future work should aim to disentangle the effect of such multisensory features on children's odor experience in museum contexts.
Limitations and recommendations
Although we carefully planned the exhibition in a way that avoided sensory overload, and only selectively represented sounds and touch (and no taste stimulation) in the exhibition, we underestimated the crossmodal correspondences between colors and odors, and this omission led to children's negative perception of the chocolate odor in the story/exhibition. Children's knowledge of the world and the presence of their peers at the exhibition may have further influenced our results. It could also be that the odor and color were incongruent along other dimensions, such as texture or shape, but what seems the most plausible explanation is the design misalignment between the brown color and the chocolate's positive olfactory qualities representing the warm home of the Clever Pig.
The drawing method seemed adequate for capturing children's perceptions of their favorite story characters, but it was limited in gauging their sensory preferences (beyond their visual and color preferences). Children's pointing responses could have been influenced by the presence of other children in the drawing area. Although each child was requested to individually point to the box with their most and least favorite odor, some children could see each other's responses, and this may have influenced their own judgment, a methodological limitation well documented in child food consumption studies (e.g., Cullen et al., 2001; Olsen et al., 2019).
These study limitations are not uncommon in design-based studies, although the dynamic nature of innovative research and design-based studies means that they are often underreported. Future studies could employ experimental or quasi-experimental methods to formally study these patterns. Here, we highlight the correspondences between colors and odors as an understudied and little understood area in children's interactions with the environment and stories. Colors have been studied in marketing research since the 1970s in relation to the emotional value of colors in appealing to customers and making products stand out (Wheatley, 1973). The inclusion of colors is natural in any public exhibition and is considered in relation to a number of dimensions, including music and spatial appeal (Monti and Keene, 2016), but rarely in relation to olfaction. Yet studies, including the present research, show that an immediate discrepancy between colors and odors might play a role (Welch and Warren, 1980). Future research could explore how culture-specific odors are and how they might add to children's visual experiences at museum exhibitions.
What is clear is that odor-color associations are an important area of research and an aspect to take into account when designing a multisensory exhibition for children. Colors, rather than odors, seem to drive children's hedonic perceptions, something that Ernst and Banks' (2002) maximum likelihood estimation (MLE) account of multisensory integration can explain. According to the theory, there is a clear dominance of some sensory modalities over others, and our findings provide indirect support for it. Our findings are also in alignment with Spence's (2011) conclusion that it is not only how close in time and space two sensory stimuli are, but also the correspondence between the stimuli's qualitative attributes, that leads to the totality of an experience.
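Ernst and Banks' MLE account has a simple quantitative core that may help clarify the dominance claim: each modality's estimate is weighted by its reliability (the inverse of its variance), so a reliable visual cue such as color can dominate a noisier olfactory cue. The sketch below is a textbook illustration with invented variance values, not an analysis of our data; the function name and all numbers are our own assumptions.

```python
# Minimal sketch of Ernst and Banks' (2002) MLE cue combination.
# All numbers are illustrative, not empirical estimates.

def mle_combine(estimates, variances):
    """Combine unimodal estimates weighted by reliability (1/variance)."""
    reliabilities = [1.0 / v for v in variances]
    total = sum(reliabilities)
    weights = [r / total for r in reliabilities]
    combined = sum(w * s for w, s in zip(weights, estimates))
    combined_variance = 1.0 / total  # lower than either single cue's variance
    return combined, combined_variance, weights

# Hypothetical pleasantness estimates (-1 unpleasant .. +1 pleasant):
# vision says "brown = unpleasant" with low variance (reliable);
# olfaction says "chocolate = pleasant" with high variance (noisy).
visual, olfactory = -0.8, +0.6
combined, var, w = mle_combine([visual, olfactory], [0.05, 0.40])
print(f"weights: visual={w[0]:.2f}, olfactory={w[1]:.2f}")
print(f"combined estimate: {combined:+.2f} (variance {var:.3f})")
# The reliable visual cue dominates, pulling the percept toward "unpleasant".
```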
Conclusion
As we engage in "sensploration" and as the science on the intimate connection among all senses advances (Spence, 2022), museums need to be more cognizant and knowledgeable about sensory additions, sensory incongruencies and overload. Finding the right sensory balance will vary from exhibition to exhibition, as the combination of sensorial inputs and their perception by visitors is unique to each context. Nevertheless, museums would benefit from keeping abreast of the insights from multisensory studies and following the general principles of multisensory theories in designing their exhibitions. This recommendation is particularly pertinent for children's museums, as children with emerging linguistic and cognitive capacities are more vulnerable than adults to sensory overload (Veenendal, 2009) and to the sensory dominance of the visual sense.
FIGURE 3 Smellbox inside the straw house.
TABLE 1
Summary of children's favorite smells from the exhibition, as prompted by the smellboxes (translated from Norwegian). | 2023-12-10T16:18:21.371Z | 2023-12-08T00:00:00.000 | {
"year": 2023,
"sha1": "f50df797d0415bb61e3d67a5d477c4a1e70e0b9e",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/feduc.2023.1242708/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "78c16cb35b6b7505b7872c7b3fa7330d29dcfc17",
"s2fieldsofstudy": [
"Psychology",
"Art",
"Education"
],
"extfieldsofstudy": []
} |
17669244 | pes2o/s2orc | v3-fos-license | A Rare Cause of the Cough: Primary Small Cell Carcinoma of Esophagus—Case Report
Primary small cell carcinoma of the esophagus is a relatively rare malignancy. It is highly aggressive and has a poor prognosis when untreated. In western populations, the rate of primary small cell carcinoma among all esophageal cancer types is between 0.05% and 2.4%, while it rises endemically up to 7.6% in eastern populations. Most of the cases are at an extensive stage at the time of diagnosis. Surgery is the treatment of choice in limited stages, but treatment must be multimodal in primary small cell carcinoma of the esophagus. A 47-year-old woman was referred to our clinic with a gradually increasing severe dry cough and slight difficulty in swallowing for 20 days. The chest radiograph was normal, and computed tomography of the chest showed multiple mediastinal lymph nodes and hepatic metastases. Endoscopic examination revealed an endoluminal vegetative mass between 20 cm and 23 cm of her esophagus. The case was reported as small cell carcinoma of the esophagus on histopathological examination. The case was deemed inoperable, and chemotherapy and radiotherapy were planned. In this paper, we present a rare cause of cough, primary esophageal small cell carcinoma.
Introduction
Small cell carcinomas (SCCs) are most often described in the lungs, but laryngeal, pancreatic, gastric, prostatic, uterine, sweat gland, and esophageal locations are rarely reported [1,2]. Esophageal and extrapulmonary small cell carcinoma (EPSCC) was first described by McKeown in 1952 [3]. Primary small cell carcinoma of the esophagus (PSCCE) is a rare, rapidly progressive, and highly metastatic disease with a poor prognosis. The incidence of PSCCE among all esophageal malignancies is from 0.05 to 2.4% in western populations, and this rate rises up to 7.6% in the Chinese and Japanese literature [1,4,5]. As seen in our case, patients with tracheal invasion due to the rapid progression of PSCCE may present to the hospital with the complaint of cough, without dysphagia in the foreground. From this aspect, we present a case of extrapulmonary intrathoracic SCC, both as a rare etiology of severe dry cough and as an indicator of the rapid progression of PSCCE.
Case Report
A 47-year-old woman was referred to our clinic with a gradually exacerbating dry cough and slight dysphagia for twenty days. There was no abnormality on the chest radiograph. Thoracic computed tomography (CT) (Figures 1(a), 1(b), and 1(c)) revealed a mass and multiple mediastinal lymph nodes up to 2-3 cm, and also hepatic metastases. Bronchoscopic exploration (Figure 2(a)), carried out for the severe dry cough and to evaluate the subcarinal mediastinal lymph node, showed submucosal tumoral infiltration at the left anterolateral wall of the distal trachea. Esophageal endoscopic evaluation revealed an endoluminal vegetative mass between 20 and 23 centimeters of her esophagus. Barium-contrasted esophagography (Figure 1(d)) showed mucosal irregularity and thickening in a long esophageal segment. A biopsy was obtained, and the pathological specimen was reported as small cell carcinoma of the esophagus. In the histopathologic examination (Figures 2(b) and 2(c)) of the biopsy materials taken endoscopically from the patient's esophagus, cell accumulations had formed in the lamina propria without showing remarkable glandular or squamous organization, and sporadic small round neoplastic foci were observed within the squamous epithelium. The cells forming the neoplastic tissue, in which intense crush artefacts and mitotic figures were observed, were round to oval-shaped, with fine granular chromatin and narrow cytoplasm with poorly defined borders, and their nuclei did not overlap one another. On immunohistochemical examination, these tumoral cells showed positive immunoreactivity for chromogranin, synaptophysin, NSE, and CD-56. Immunoreactivity for Pan-CK and LCA was not observed. On this basis, the case was reported as PSCCE.
The case was considered inoperable, and chemotherapy and radiotherapy were planned. The patient received concurrent chemotherapy and radiation therapy with a total dose of 50 Gy in 25 fractions, five fractions per week. The chemotherapy consisted of 75 mg/m² cisplatinum given intravenously on the first day and 1 g/m² 5-FU given by continuous infusion for the first 4 days of weeks 1, 5, 8, and 11. Radiotherapy and antitussive therapy led to the regression of the patient's complaints. At the end of 6 months, a brain metastasis developed and the patient died.
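For readers unfamiliar with body-surface-area (BSA) dosing, the prescribed regimen translates into absolute doses as sketched below. The Mosteller BSA formula is a standard choice, but the patient's height and weight are hypothetical, since they are not reported in this case.

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Mosteller formula: BSA (m^2) = sqrt(height_cm * weight_kg / 3600)."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

bsa = bsa_mosteller(165, 60)   # hypothetical anthropometrics, ~1.66 m^2

cisplatin_mg = 75 * bsa        # 75 mg/m^2 i.v. on day 1
fu_g_per_day = 1.0 * bsa       # 1 g/m^2/day continuous infusion, days 1-4
gy_per_fraction = 50 / 25      # 50 Gy in 25 fractions -> 2 Gy per fraction

print(f"BSA: {bsa:.2f} m^2")
print(f"Cisplatin, day 1: {cisplatin_mg:.0f} mg")
print(f"5-FU, days 1-4 of weeks 1, 5, 8 and 11: {fu_g_per_day:.2f} g/day")
print(f"Radiotherapy: {gy_per_fraction:.1f} Gy/fraction, 5 fractions/week")
```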
Discussion
SCC, which constitutes 15-20% of all bronchial carcinomas, mostly arises in the lungs. EPSCCs have been identified in various organs outside the lung, including the esophagus. PSCCE is a rare tumour characterized by early dissemination and a poor prognosis if untreated [1,2,5,6].
The eastern part of Turkey is an endemic region for esophageal cancers. For instance, the incidence has been reported as 3/100,000 in Europe and the USA, while it is 165-200/100,000 in Eastern Turkey, Northern Iran, and China [7,8]. Between October 2004 and January 2010, 294 patients with esophageal carcinoma were admitted and treated in our clinic, with therapies including esophageal resection, stent application, conservative therapy in patients with tracheobronchial or esophagopleural fistulas, and chemoradiotherapy in patients considered inoperable. In a retrospective analysis, small cell carcinoma was found in only two cases (0.68%).
Endoscopic and radiological findings of PSCCE resemble those of squamous cell carcinoma or adenocarcinoma of the esophagus. However, progressive dysphagia, rapid weight loss, and distant metastasis in the early period reflect its poor prognosis. A definitive diagnosis of PSCCE is made by cytological examination with an esophageal abrasive balloon and by endoscopic punch biopsy. This tumour is mostly reported in men, with a male-to-female ratio of 2:1. It has most often been reported between the fourth and the seventh decades. The major symptoms are progressive dysphagia, retrosternal pain, and rapid weight loss. In some cases, hoarseness and upper gastrointestinal tract bleeding have been reported as the primary symptoms. As seen in our case, even if rarely, severe cough can be the primary and leading symptom. Lesions are usually confined to the middle and lower esophagus. Hematogenous metastases of PSCCE mainly extend to the liver, lungs, and bones [1,2,4,5].
There are two viewpoints on the histological origin of PSCCE. The first is that PSCCE originates from neuroendocrine cells of the submucosal glands or the stratum basale, that is, the amine precursor uptake and decarboxylation (APUD) cells, as histologically confirmed. The second is that PSCCE originates from multipotential stem cells of the endoderm. Most of these cells may differentiate into squamous cell carcinoma, and some differentiate into adenocarcinoma or small cell carcinoma. This would account for the diversity of the morphological, immunohistological, and electron microscopic features of PSCCE, in addition to the coexistence of PSCCE with squamous cell carcinoma and/or adenocarcinoma [5].
The standard of treatment for PSCCE has not yet been established due to the paucity of cases. Treatments such as operation alone [6], local radiotherapy [9], chemotherapy alone [10], or operation with adjuvant therapy [11] have been reported. In limited disease, after surgical resection, the short-term results of chemotherapy and radiotherapy are good, although long-term results are still poor. In a series of 29 patients with limited disease treated with surgery alone, average survival was 8 months [12]. Also, in a series of 20 patients with limited disease treated only with radiotherapy, average survival was 5 months [13]. On the basis of the biological behavior, chemosensitivity, and radiosensitivity of small cell lung carcinomas, and some success in their treatment, systemic chemotherapeutic agents came to the fore for PSCCE. In early detected cases, surgical resection combined with radiotherapy and chemotherapy is the best way to treat PSCCE. In advanced stages, multiagent chemotherapy is the treatment of choice, and radiotherapy can be used for palliation. PSCCE is an extremely rare, rapidly progressive, and highly malignant esophageal pathology prone to early metastasis. In these cases, treatment must be decided quickly and started as soon as possible. The treatment is multimodal. Surgery is the standard treatment in limited stages. In advanced stages, radiotherapy with multiagent chemotherapy is a treatment choice. Despite all treatment principles, the prognosis is still poor in these cases. As in our case, it is possible to detect newly and mildly symptomatic patients already in advanced stages. In these cases, we believe that multiagent chemotherapy and radiotherapy are the correct treatment options.
Approximately 5% of all small cell carcinomas are extrapulmonary. Extrapulmonary small cell carcinoma (EPSCC) is classified as limited disease (LD) or extensive disease (ED), as in pulmonary SCC. LD was defined as a localized tumour with or without regional lymph node involvement; cases with distant organ or lymph node invasion were classified as ED. Treatment protocols in EPSCC are similar to those in the lungs, and it can be treated with cisplatinum-based regimens for chemotherapy. Surgery is of benefit in LD. Multimodal therapy including chemotherapy and radiotherapy should be preferred in EPSCC even if the diagnosis was established in the early period and surgery was performed. In 34 EPSCC cases studied by K. O. Kim et al., 23 of the cases had LD and 11 had ED; 6 (17.6%) of these were reported as being of esophageal origin and 6 (17.6%) of thymic origin. Ten cases with LD underwent surgery. Overall survival was 19.8 months in LD and 7 months in ED, and was estimated as 14 months for all the cases. Multimodal therapy principles were applied, depending on the patient's suitability, in both LD and ED cases. The most commonly used chemotherapy regimen was the combination of etoposide and platinum compounds (cisplatin or carboplatin) [14].
Extrapulmonary-intrathoracic SCCs (esophageal, thymic, etc.) and pulmonary SCCs are rapidly progressive malignancies [14]. As observed in our case, which was ED, the disease can already be metastatic when it first becomes symptomatic. In a healthy individual, a persistent cough should always be taken into account. Similarly to pulmonary small cell carcinoma, esophageal small cell carcinoma remains a challenge for medical therapy. | 2014-10-01T00:00:00.000Z | 2012-02-22T00:00:00.000 | {
"year": 2012,
"sha1": "2d1ffcdb4d96402011a1163e346978b2d89f04ea",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/crim/2012/870783.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a6342210709b722be8e9f3d3bd3da1e91952465a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
225931123 | pes2o/s2orc | v3-fos-license | Quantum Theory and Its Effects on Novel Corona-Virus
Emerging infectious viral diseases are a major threat to humankind on earth; their emerging and re-emerging pathogenic characteristics have raised great public health concern. This study aimed at investigating the global prevalence and the biological and clinical characteristics of the novel Corona-virus from Wuhan, China (2019-nCoV), Severe Acute Respiratory Syndrome Corona-virus (SARS-CoV), and Middle East Respiratory Syndrome Corona-virus (MERS-CoV) infection outbreaks [1]. Currently, the novel Corona-virus disease COVID-2019 is already pandemic and causing havoc throughout the world. The scientific community is still struggling to come out with concrete therapeutic measures against this disease, and the development of a vaccine is far from sight in the immediate near future. However, humanity will be put to such pressures very often in the near future and, given the present circumstances, what can we expect from the scientific world now? I think QIT (Quantum Information Theory) has an answer to this question. One of the very basic mechanisms that every infectious virus follows to infect is the entry of the virus through cell surface receptors, engulfing, un-coating of the viral genome, its transcription to form multiple copies, translation to form viral proteins, coating of the viral genome to form multiple copies of the viral particles, and then of course the bursting of the cell to infect other cells. This very basic mechanism does not occur randomly but through a regulated and more dynamic process which we may call the coding and decoding of information through the reduction of error or noise.
archaea. Now the question is that, even in the least possible form, its work is beyond imagination. It is hard to understand the nature of the virus; grasping its configuration and the cause of its manifestation is an ordeal and even dreadful. Although bacteria are the tiniest living organisms found yet, this virus is even tinier and is found even in bacteria [2]. Now, scientifically, it has been proven that it contains RNA, which works like a code and acts as a source of its formation. In other words, it basically forms from the RNA. But the point is how the RNA actually initiates, later acquires mass and takes the shape of the virus [4]. After that, this virus works in two ways: initially in a recessive mode and then in a dominant mode; that is, this virus keeps on forming an unmatched track and at the same time does its work. The other virus, which is primarily in the recessive mode, also looks for the host and then works as the dominant factor, i.e., it looks for the target to complete its task [5]. But the question is: how does the RNA, which is the basic source of its creation, build itself? To understand this, one has to take the aid of quantum information theory, which helps in understanding the internal information of the RNA, its configuration and its mode of manifestation. The internal information of the RNA is extremely different from the laws which are identified and acknowledged; it has its own laws and mechanisms that play a key role in acquiring mass, wherein the super energy mechanism/super time mechanism and the super dimensional mechanism are amazing. After interpreting these three laws, let us see how super energy, which is in the non-baryonic form, converts into baryonic form with the help of the super time and super dimensional mechanisms. Until these three laws are cognized, the formation of this virus won't be grasped.
Quantum Information Theory
It is such information which plays an important role in the formation of matter; it actually is an energy which later, with other formulas like the super energy, super time and super dimension concepts, changes from a non-existing/non-baryonic form to an existing form [6]. This information is actually in a hidden geometrical form which we need to understand; e.g., it is necessary that every matter contains some information, and it is because of this information that the matter exists. We have also framed a mathematical equation for this, which actually works the same as the mechanism found inside matter at the micro level, and it has the same functions as the hereditary information has at the micro level. Since it is in a non-existing form, on the quantum level we can form a quantum information equation that helps in the formation of hereditary information. As we know, hereditary information contains codes which internally are of different charges or have a dual nature. They work as small bits which are linked to each other in order to make a larger bit, which has been named a cubit; we can also claim that information is formed on the basis of the smaller bits. These bits of different nature combine to form a sort of pattern, i.e., the equation that we call Nano-bit and Iobit, meaning iobit-1 and iobit-2 form a Nano-bit, and these Nano-bits later form the
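The "larger bit ... named a cubit" in this passage appears to correspond to the standard qubit of quantum information theory. For orientation only, the sketch below shows the textbook qubit, a normalized superposition of two basis states with Born-rule measurement probabilities; the author's "iobit" and "Nano-bit" constructions have no established formalism and are therefore not modeled here.

```python
import math

# Textbook qubit: |psi> = a|0> + b|1>, with |a|^2 + |b|^2 = 1.
a, b = 1 / math.sqrt(2), 1 / math.sqrt(2)   # equal superposition of two "bits"

assert abs(a**2 + b**2 - 1.0) < 1e-12       # normalization check

p0, p1 = a**2, b**2                         # measurement probabilities (Born rule)
print(f"P(0) = {p0:.2f}, P(1) = {p1:.2f}")  # -> P(0) = 0.50, P(1) = 0.50
```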
QIT and Its Role
Actually, the quantum information which is present in it also carries the shape of its manifestation, or we can say that quantum information becomes the source of its designing, which later takes a geometrical shape [7]. Even though it is present as geometrical information, it is initially in a non-baryonic/non-existing form and later converts into a baryonic form. It means that if we talk about the world of particles, these particles play an important role in the formation of a structure, but along with its structure it also has a particular design/pattern which is present in these quantum information bits and plays a vital role in the designing of the structure; or we can say that in any living or non-living matter there is a particular code for its structure [8]. The plant, right from its first particle, i.e., the nucleic code, till it gets developed into a complete plant, is controlled by the genetic code, and the heredity of this plant works through the quantum information, which means that the structure/manifestation of all living organisms/matter is present in their codes and its configuration is hidden in its information.
Statistical Analysis
Whenever the error or noise becomes more absurd, recombination or amplifica-
Conclusion
That quantum phenomena might be observable in the messy world of living systems ... others [9].
1) Indeed, researchers studying quantum phenomena often isolate particles at temperatures approaching absolute zero (at which almost all particle motion grinds to a halt) just to quash the background noise [10].
2) "The warmer the environment is, the busier and noisy it is, the quicker these quantum effects disappear", says University of Surrey theoretical physicist Jim Al-Khalili, who coauthored a 2014 book called Life on the Edge that brought so-called quantum biology to a lay audience. "So it's almost ridiculous, counterintuitive, that they should persist inside cells. And yet, if they do-and there's a lot of evidence suggesting that in certain phenomena they do-then life must be doing something special [11]".
3) Al-Khalili and Vedral are part of an expanding group of scientists now arguing that effects of the quantum world may be central to explaining some of biology's greatest puzzles (from the efficiency of enzyme catalysis to avian navigation to human consciousness) and could even be subject to natural selection [12].
4) In the words of Chiara Marletto, a University of Oxford physicist who collaborated with Coles and Vedral on the bacteria-entanglement paper: "The whole field is trying to prove a point. That is to say, not only does quantum theory apply to these [biological systems], but it's possible to test whether these [systems] are harnessing QIT to perform their functions [13]."
5) Outbreaks of emerging and reemerging pathogens across the globe can be prevented with the help of QIT to minimize the disease burden locally and globally [14].
Conflicts of Interest
The author declares no conflicts of interest regarding the publication of this paper. | 2020-06-11T09:11:06.892Z | 2020-04-24T00:00:00.000 | {
"year": 2020,
"sha1": "f72866e75c06b4f03fd8d426667496bb821f18a3",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=101223",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "15d4300a7c2231bced51e73e247f21faedc0b90d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
225636630 | pes2o/s2orc | v3-fos-license | Smart technologies as a type of intangible assets of a construction organization and mechanisms for their implementation
The company’s assets include intangible assets, a distinctive feature of which is the absence of physical assets. Today, it is becoming more obvious that the so-called tangible assets are not the only factor in ensuring the profitability of the organization, and that there are other types of them that do not have such a classic feature as a material substance, but can play a crucial role in the process of making a profit for the enterprise. The relevance of this topic is obvious, because in modern conditions, the formation of complete information about economic processes is almost impossible without information about intangible assets. Moreover, with the development of innovative technologies, organizations in the construction industry need to step forward and use such type of intangible assets as Smart platforms. This type of intangible assets allows construction companies not only to meet a high innovative level of development, but also provides a reduction in costs and generate more revenue.
The efficiency of any construction project depends not only on the level of technical competence and responsibility of the implementing agencies, but also on the quality control of a huge number of interrelated and interdependent processes, as well as on the coordination of all project participants. In accordance with development trends in all sectors of business, digital technologies are beginning to take an increasing place, and their possession will eventually become a mandatory requirement for the implementation of each project. The construction industry is the most conservative with regard to the use of digital technologies. At the same time, it has a high potential for applying digital and innovative technologies.
In 2018, the president of the RF issued an instruction "on modernizing the construction industry and improving the quality of construction", which is aimed at introducing digital technologies into the construction industry [1]. This project should ensure the digital transformation of the industry by 2024. Within the framework of this instruction, a set of measures called "digital construction" is being developed, aimed at fulfilling it. The digitalization of construction is expected to reduce the cost and time of constructing objects by up to 20% [2].
Digitalization of the construction industry is developing in many directions. Participants in the construction market are actively implementing digital information technologies that cover almost all business processes: recruitment, accounting, internal document management, planning, development and placement of advertising, customer search and support, procurement, production, work, services, monitoring of contracts, and many others. Prospects for the development of digitalization consist in a radical transformation of industrial relations and the creation of a digital ecosystem, which is characterized by the following: all elements of the investment and construction system are present simultaneously in the form of physical objects, products and processes, as well as in the form of their digital copies (mathematical models); all physical objects, products and processes become part of an integrated IT system due to the presence of a digital copy and the "connectivity" element; through the presence of digital copies (mathematical models) and being part of a single system, all elements of the investment and construction system continuously interact with each other in a mode close to real time, simulate real processes and predicted states, and ensure constant self-optimization of the entire system.
The key advantage of digital transformation is the ability to automatically manage the entire system (or its individual components), as well as to scale it almost without limit and without loss of efficiency, which makes it possible to significantly improve the efficiency of economic management (of economic activities and resources of the country in various industries) at the micro and macro levels.
However, the construction industry does not fully realize the potential of implementing digital technologies. For example, project planning often remains inconsistent between the office and the field and is often done on paper, without the use of digital devices. Also, contracts do not include incentives for risk sharing and innovation. The industry has not yet introduced new digital technologies that require prior investment, even if the long-term benefits are significant. R&D spending in construction is significantly lower than spending in other industries: less than 1% of revenue, compared to 3.5-4.5% for the automotive and aerospace industries [3,4].
However, investment and construction projects are becoming more complex and large-scale, both in terms of construction volumes and investment. This means that traditional methods must change. It is obvious that the deep problems of the construction industry, such as the lack of personnel and resources, will also require new ways of thinking and working. Traditionally, this sector has focused on incremental improvements, in part because many believe that each project is unique, that it is impossible to scale up new ideas, and that the introduction of new technologies is impractical. The McKinsey Global Institute estimates that the world as a whole will need to spend $57 trillion on infrastructure by 2030 to keep up with global GDP growth [5]. This is a huge incentive for construction industry actors to find solutions to transform productivity and implement projects using new technologies and improved methods.
In the RF, when implementing digitalization technologies in the construction industry, the greatest attention is given almost exclusively to building information modeling. This approach is not correct, because the digitalization of construction is not only the information modeling of buildings; it is also the formation of electronic information systems for urban planning, digital libraries of standard elements, the creation of machine-readable regulatory documents, and the introduction of Smart platforms in all business processes of a construction company.
The author of the article identified five areas of the Smart platform that will be practical and relevant for the subjects of the construction sector:
1. High-resolution surveying and geolocation
2. Next-generation 5-D BIM information modeling
3. Digital collaboration and mobility
4. The Internet of things and advanced analytics
5. Modern construction materials
Let's look at each element of the Smart platform in detail. The occurrence of errors during geodetic work is the main reason for delays in construction projects and excesses of estimated costs. New technologies that combine high-definition photos, three-dimensional laser scanning and geographic information systems, thanks to recent improvements in unmanned aerial vehicle technology, can significantly improve the accuracy and speed of geodetic work. Advanced shooting techniques are complemented by geographic information systems that allow maps, images, distance measurements, and GPS positions to be overlaid. This information can then be uploaded to other analysis and visualization systems for use in project planning and construction.
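As a concrete illustration of the overlay step, drone-surveyed GPS fixes can be packaged as a GeoJSON layer that most GIS tools ingest directly. The sketch below uses only the Python standard library; the coordinates, property names and file name are hypothetical.

```python
import json

# Hypothetical drone survey fixes: (longitude, latitude, elevation in meters).
fixes = [
    (49.106, 55.796, 64.2),
    (49.107, 55.797, 64.9),
    (49.108, 55.796, 65.1),
]

layer = {
    "type": "FeatureCollection",
    "features": [
        {
            "type": "Feature",
            "geometry": {"type": "Point", "coordinates": [lon, lat, elev]},
            "properties": {"source": "uav_survey", "elevation_m": elev},
        }
        for lon, lat, elev in fixes
    ],
}

# GIS packages can load this file as an overlay layer on maps and plans.
with open("site_survey.geojson", "w") as f:
    json.dump(layer, f, indent=2)
```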
The use of BIM technologies in construction has been supported and promoted by the government of the RF since 2010, but so far it has not been possible to achieve high results in this area. Today, the world's leading construction companies use a new generation of information modeling 5-D BIM. This is a five-dimensional representation of the physical and functional characteristics of any project. It takes into account the cost and schedule of the project in addition to the standard spatial design parameters in 3-D [6,7]. This also includes details such as geometry, technical characteristics, aesthetics, thermal and acoustic properties. The 5-D BIM platform allows owners and contractors to identify, analyze, and record the impact of changes on project cost and planning. To take full advantage of BIM technology, project owners and contractors must enable its use right at the design stage, and all stakeholders must adopt standardized design and data presentation formats that are compatible with BIM. In addition, owners and contractors should allocate resources to implement BIM and invest in capacity building.
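At a minimum, a 5-D BIM element binds 3-D geometry to cost and schedule attributes, so that a change in one dimension is immediately visible in the others. The sketch below shows one way such a record could be structured; the class and field names are illustrative assumptions, not a vendor schema.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Bim5DElement:
    """One building element carrying the 3-D, cost and schedule dimensions."""
    element_id: str
    bounding_box_m: tuple    # simplified geometry: (x, y, z) extents in meters
    unit_cost: float         # currency units per element
    start: date              # planned installation window
    finish: date

wall = Bim5DElement("W-014", (6.0, 0.3, 3.0), 12_500.0,
                    date(2020, 5, 4), date(2020, 5, 8))

# A schedule change propagates straight into the combined cost/time view:
wall.finish = date(2020, 5, 11)  # three-day slip
duration = (wall.finish - wall.start).days
print(f"{wall.element_id}: duration now {duration} days, "
      f"cost basis {wall.unit_cost:,.0f} unchanged")
```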
The digitization process means moving from paper to real-time online information sharing to ensure transparency and collaboration, timely progress and risk assessment, quality control, and ultimately better and more reliable results. One of the system-forming elements of the Smart platform is the joint work of all project performers. The main reason for low productivity in the industry is that it still relies primarily on paper to manage its processes and results, such as drawings, project blueprints, purchase orders and supply chains, equipment logs, daily progress reports, and punch lists. Due to the lack of digitization, information exchange is delayed and cannot be universal. This is why owners and contractors often work with different versions of reality. Using paper makes it difficult to collect and analyze data; this is important because, in purchasing and contracting, historical performance analytics can lead to better results and risk management. Errors on paper also usually cause disagreements between owners and contractors on issues such as construction progress, change orders, and claims management. Finally, paper trails just take longer. Based on this, contractors need to develop software, within the company's intangible assets, that runs on a cloud-based mobile field supervision platform integrating project planning, design, physical control, budgeting, and document management for large projects (Picture 1) [8,9]. In fact, the digital collaboration and mobile solutions segment has attracted almost 60% of all venture capital funding in the construction technology sector [10]. One of the startups has developed apps for tablets and smartphones that allow changes to design drawings and plans to be transmitted in real time to local crews; photos of the site can be linked to construction plans. This solution supports a core set of documents with automatic version control and cloud access. Other companies offer mobile timekeeping, real-time cost coding, employee location detection, and logging and problem tracking.
As advanced users such as project managers, foremen, and operators implement real-time crew mobility applications, they can change the way the industry does everything: managing work and change orders, tracking time and materials, dispatching, scheduling, measuring performance, and incident reporting.
The digital solutions segment described above is based on the Internet of things. On the construction site, the Internet of things allows construction machinery, equipment, materials, structures, and even formwork to "communicate" with a central data platform to collect critical performance parameters [11,12]. The Internet of things is the concept of a computer network of physical objects equipped with built-in technologies to interact with each other or with the external environment, which allows them to collect, analyze, and transmit data among themselves using software and technical devices. Let's look at the main applications of the Internet of things in the construction industry:
1. Monitoring and repair of equipment. Advanced sensors allow equipment to detect and report maintenance requirements, send automatic preventive maintenance alerts, and collect usage and maintenance data [13,14].
2. Inventory management and ordering. Connected systems can predict and notify site managers when inventory runs out and when orders need to be placed (a minimal threshold-alert sketch follows this list). Marking and tracking materials using NFC tags can also accurately determine their location and movement, and help coordinate physical and electronic inventory.
3. Quality assessment. "Smart structures" that use vibration sensors to test the strength and reliability of a structure during the construction phase can detect flaws early so that they can be corrected in time.
4. Energy efficiency. Sensors that monitor environmental conditions and fuel consumption for assets and equipment can help improve energy efficiency on site.
5. Safety. Wearable devices can send alerts if drivers and operators fall asleep, or if a vehicle or asset is stationary or inactive for a set amount of time during shift hours.
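The following is a minimal sketch of the inventory rule from item 2: depletion is predicted from the consumption rate, and the site manager is notified before stock runs out. All item names, quantities, thresholds, and the notification hook are hypothetical and not taken from any specific product.

```python
# Threshold alert: flag items whose remaining stock covers less than the
# supplier lead time, based on current quantity and average daily usage.
def days_of_stock(current_qty: float, daily_usage: float) -> float:
    return float("inf") if daily_usage <= 0 else current_qty / daily_usage

def check_inventory(items: dict, lead_time_days: float, notify) -> None:
    for name, (qty, usage) in items.items():
        remaining = days_of_stock(qty, usage)
        if remaining <= lead_time_days:
            notify(f"Reorder '{name}': ~{remaining:.1f} days of stock left")

check_inventory({"cement, t": (12.0, 2.5), "rebar, t": (40.0, 1.0)},
                lead_time_days=7, notify=print)
```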
All of the above applications are combined in a single platform, where devices and sensors that track and analyze all construction processes work in real time. This approach reduces not only the duration and labor intensity of the work but also its cost. Special attention should be paid to control. Already, large companies are using drones, GPS devices, and all sorts of scanners to verify compliance with plans, construction schedules, and quality requirements. End-to-end analytics will also affect the physical work of staff on the construction site [15].
The use of the above programs and tools is aimed at increasing the role of the organization's intangible assets (IA). Competent and timely implementation and use of IA significantly transforms the economic and industrial profile of the organization [16]. By implementing the IA management cycle, an organization can increase the efficiency of its activities by making the most complete and rational use of the information, reputation, and knowledge that it has the rights to use. It should be noted that IA are fully involved in the economic turnover of the organization: they are used in sales and purchase transactions, they are subject to property claims, they are contributed by legal entities and individuals to the share capital when purchasing shares of the company, and their cost is included in fixed assets and amortized as part of costs [17].
In modern realities, the use of the Smart space in construction organizations reduces production costs, generates additional income, raises the level of business reputation after a severe decline caused by economic difficulties, and helps attract an investor or an important partner.
"year": 2020,
"sha1": "8e264227c619f1797c3811abf8c45c253182672a",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/880/1/012081",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "75e0c330068603f4e9e1f30cb84de6d5e99c8db3",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": [
"Physics",
"Business"
]
} |
The Rcs-Regulated Colanic Acid Capsule Maintains Membrane Potential in Salmonella enterica serovar Typhimurium
ABSTRACT The Rcs phosphorelay and Psp (phage shock protein) systems are envelope stress responses that are highly conserved in gammaproteobacteria. The Rcs regulon was found to be strongly induced during metal deprivation of Salmonella enterica serovar Typhimurium lacking the Psp response. Nineteen genes activated by the RcsA-RcsB response regulator make up an operon responsible for the production of colanic acid capsular polysaccharide, which promotes biofilm development. Despite more than half a century of research, the physiological function of colanic acid has remained elusive. Here we show that Rcs-dependent colanic acid production maintains the transmembrane electrical potential and proton motive force in cooperation with the Psp response. Production of negatively charged exopolysaccharide covalently bound to the outer membrane may enhance the surface potential by increasing the local proton concentration. This provides a unifying mechanism to account for diverse Rcs/colanic acid-related phenotypes, including susceptibility to membrane-damaging agents and biofilm formation.
Transmission of the food-borne pathogen Salmonella enterica requires survival of the bacterium in the environment. The cell envelope forms a permeability and structural barrier that maintains cellular homeostasis and is essential for environmental persistence. Five regulatory systems sense and respond to extracytoplasmic stress: the CpxAR and BaeSR two-component systems (TCSs), the σE alternative sigma factor, the Psp (phage shock protein) response, and the Rcs (regulator of capsule synthesis) phosphorelay system (1–6).
Expression of the Rcs system is observed in response to osmotic shock, growth on a solid surface, or exposure to β-lactam antibiotics (6–9). Effectors of the innate immune system, including cationic antimicrobial peptides (CAMPs), complement, and lysozyme, can also induce the Rcs system (10,11). The Rcs response is initiated by autophosphorylation of the RcsC sensor kinase and proceeds via phosphotransfer by the RcsD protein to the RcsB response regulator (12–14). The Rcs system also includes RcsF, an outer membrane lipoprotein that acts upstream of RcsC (15). The phosphorylated RcsB response regulator can activate the transcription of downstream genes either as a homodimer or as a heterodimer with the auxiliary regulator RcsA (16), which is unstable because of degradation by the Lon protease (17). RcsA-RcsB heterodimers and RcsB homodimers regulate distinctive subsets of genes. RcsA-RcsB-regulated genes are primarily involved in exopolysaccharide (EPS) production and include the 19-gene colanic acid capsular operon and the yjbEFGH operon, which encodes the biosynthesis of a distinct EPS (18,19). RcsB is required for expression of the Rcs response, and increased RcsB expression can compensate for the absence of RcsA with regard to capsular synthesis (14). Genes regulated by RcsB independently of RcsA include ftsZ, osmC, and rprA (20–22).
The Rcs system was first identified by its role in the transcriptional regulation of colanic acid biosynthesis in Escherichia coli (23,24). Although colanic acid was discovered more than half a century ago (25), its physiological function has remained poorly defined. Colanic acid is composed of glucose, galactose, glucuronic acid, and fucose and forms a highly negatively charged capsule (25,26). Colanic acid capsule production is not required for systemic Salmonella infection in mice (27–29). In contrast to many EPS capsules, colanic acid does not protect against phagocytosis by polymorphonuclear leukocytes (PMNs) or from killing following PMN uptake (26) and confers only minimal resistance to the bactericidal actions of serum complement (10,26). Adherence of uropathogenic E. coli to T84 colonic epithelial cells is impaired by the presence of colanic acid (26). Collectively, these observations do not suggest a primary role for colanic acid in Salmonella pathogenesis. Moreover, the production of colanic acid is increased at lower temperatures, consistent with an environmental function (7).
Biofilms are utilized by bacteria to persist in many environmental niches and during chronic infections (30). The ability to form biofilms has been observed in numerous Salmonella isolates from environmental, clinical, food, and animal sources (31). The colanic acid capsule has been shown to contribute to biofilm formation in E. coli and Salmonella (31,32). Colanic acid has been reported to confer resistance to environmental stresses, including hyperosmolarity, acid pH, desiccation, oxidative stress, and extreme temperatures (33–35). The yjbEFGH-encoded EPS (18) is less well characterized but appears to contribute to resistance to hyperosmotic stress (36) and is not required for biofilm formation (8).
Although the individual extracytoplasmic stress responses comprise largely discrete subsets of genes, inactivation of one response can lead to the compensatory expression of others (37–39). In the present study, we observed dramatic induction of the colanic acid capsular regulon following metal deprivation of an S. enterica serovar Typhimurium mutant lacking the Psp response. In Salmonella, the Psp response is required for virulence in mice expressing natural-resistance-associated macrophage protein 1 (Nramp1) (40), a proton-dependent phagosomal divalent metal transporter (41,42). Nramp1 enhances host resistance to intracellular pathogens by limiting metal availability within the phagosomal compartment (42,43). Salmonella competes with Nramp1 by expressing energy-dependent metal transport systems (43). By maintaining membrane bioenergetics, the PspA response allows S. enterica serovar Typhimurium to acquire essential metals despite the presence of Nramp1 (40). As the Psp response has been shown to preserve proton motive force (PMF) during extracytoplasmic stress (38,44,45), we evaluated a possible role for the Rcs stress response and colanic acid capsule biosynthesis in PMF maintenance.
RESULTS
EPS production is enhanced in metal-restricted pspA mutant S. Typhimurium. The Psp system facilitates metal uptake by S. Typhimurium transport systems, including SitABCD, MntH, and ZupT (40). Mutant strains deficient in metal transport exhibit impaired growth following treatment with the chelator 2,2′-dipyridyl (40). In previous studies, the introduction of a pspA (P) mutation into an S. Typhimurium strain lacking the ABC transporter SitABCD (S), the Fe²⁺ transporter FeoB (F), and the ZIP family permease ZupT (Z) resulted in cell death following treatment with dipyridyl, indicating that the Psp system is necessary for cell survival during metal deprivation (40). To obtain insights into the mechanism of cell death, a microarray analysis was performed to analyze the transcriptional response of SFZP (sit feo zup psp) mutant S. Typhimurium treated with dipyridyl for 2 h. As expected, induction of the Psp operon in an SFZP mutant was observed in the absence of the negative regulator PspA (see Table S1 in the supplemental material). Expression of the pspA gene could still be detected in the pspA deletion mutant, as the oligonucleotide probe contains sequences outside the deleted region. In addition to the psp genes, the most strongly induced loci were those comprising the colanic acid capsule operon (see Table S1), regulated by the RcsA-RcsB heterodimer (16). Other genes in the Rcs regulon, including rcsA and the yjbEFGH operon (18,46), were also strongly induced. The induction of rcsA, yjbG, and genes from the colanic acid capsule operon was confirmed by quantitative PCR (qPCR) analysis (Fig. 1A) and observed only under conditions of iron depletion (see Fig. S1). As an SFZP mutant exhibited induction of the auxiliary regulator rcsA, we determined whether an rcsA-expressing plasmid could restore growth in chelated medium. Wild-type (WT) or mutant strains containing either a vector control or pRcsA were grown in chelated Luria-Bertani (LB) medium, and growth was monitored by measurement of optical density at 600 nm (OD600) (Fig. 1B). As previously observed (40), the viability of an SFZP mutant declined after 12 h of growth. The rcsA-expressing plasmid allowed the SFZP mutant to reach a higher cell density and eliminated the decline in viability at 12 h, indicating that the induction of the Rcs response and EPS production following metal deprivation of SFZP mutant S. Typhimurium is adaptive.
The Psp response maintains membrane integrity under metal-restricted conditions. The cell morphology of SFZP mutant S. Typhimurium during metal restriction was examined. SFZ and SFZP mutants grown in LB supplemented with 625 µM dipyridyl were sampled at 3 and 4 h and visualized by either differential interference contrast (DIC) or transmission electron microscopy (TEM). DIC images of cells from SFZ mutant cultures (Fig. 2A and C) appeared smooth, without surface defects, and TEM (Fig. 2B and D) revealed that the cell and outer membrane remained intact after 3 or 4 h of metal restriction. In contrast, DIC images of SFZP mutant cells at 3 h (Fig. 2E) showed blebbing of the cell surface, and TEM (Fig. 2F) revealed cytoplasmic extrusion. Cell blebbing was frequently located near the septum of dividing cells. After 4 h of growth in dipyridyl, the surface blebs of SFZP mutant cells had increased in size (Fig. 2G), with evident leakage of the intracellular contents (Fig. 2H). These results show that metal restriction of SFZ mutant Salmonella does not compromise the integrity of the cell envelope, provided that the Psp response is intact. In the absence of the Psp response, membrane integrity is compromised when essential metal uptake is restricted.
The Rcs system maintains Δψ in metal-restricted pspA mutant S. Typhimurium. The membrane abnormalities observed in SFZP mutant S. Typhimurium adjacent to the division septum resemble those described for cells lacking the outer membrane lipoprotein Pal (47). Pal is part of the Tol-Pal complex that bridges the inner and outer membranes via protein-protein and protein-peptidoglycan interactions (48). The Tol-Pal complex is required for cell envelope integrity and is dependent on PMF (48,49). Thus, the morphology of SFZP S. Typhimurium cells during metal deprivation suggests that PMF is compromised under these conditions. As the Psp response is known to maintain PMF under stress conditions and the Rcs system is strongly induced in metal-restricted SFZP mutant S. Typhimurium (Fig. 1A), we investigated whether the Rcs system helps to sustain the Δψ (membrane potential) component of PMF. WT and mutant Salmonella cultures were grown for 2 h in LB with or without 625 µM dipyridyl, and aliquots were removed and treated with DiOC2(3) dye for 15 min. DiOC2(3) bound to the cell surface emits green fluorescence, whereas internalized DiOC2(3) aggregates and emits red fluorescence. As DiOC2(3) internalization is Δψ dependent, the red-to-green fluorescence ratio is a measurement of Δψ. Fluorescence was measured by flow cytometry, with the red-to-green fluorescence ratio interpreted as proportional to Δψ (Fig. 3). No difference in Δψ was observed between WT and mutant strains grown in LB under nonstress conditions or SFZ (sit feo zup) or SFZR (sit feo zup rcsA) mutants under metal-restricted conditions. In contrast, the Δψ of a pspA mutant was significantly lower than that of the WT during metal restriction, demonstrating that PspA is required to maintain Δψ under stress conditions, in agreement with the earlier observation that the Psp response facilitates metal transport (40). An SFZPR (sit feo zup psp rcsA) mutant under metal-restricted conditions had the lowest Δψ, indicating that the Rcs system can sustain the Δψ during stress when the Psp response is absent. Collectively, these observations suggest that the Psp response has a primary role in the conservation of Δψ and that the Rcs system can partially compensate for the absence of the Psp response to maintain Δψ.
Construction of S. Typhimurium yjb and colanic acid capsule operon mutants. The 19-gene colanic acid capsule operon and the yjbEFGH operon are regulated by the RcsA-RcsB heterodimer (19). To determine the contributions of colanic acid and the yjbEFGH-encoded EPS to Rcs-related phenotypes, deletion mutations of each operon were constructed. Recently, Ranjit and Young reported that a mutation in the colanic acid capsule operon downstream of the initiating glycosyltransferase WcaJ can result in the accumulation of toxic pathway intermediates (50). This is of concern because prior investigations have used the disruption of single pathway genes or genes downstream of WcaJ to infer the biological role of colanic acid (50). Complete operon deletions were constructed to avoid this problem. Most E. coli and S. enterica strains are able to produce the colanic acid capsule (51,52). Complete deletions of the colanic acid and yjbEFGH operons were constructed in S. enterica by λ-Red-mediated recombination (53,54). The mutations were verified by molecular (see Materials and Methods) and functional assays involving the measurement of capsular carbohydrates (Fig. 4A). The colanic acid capsule is composed of glucose, galactose, fucose, and glucuronic acid (25,26). The structure of the Yjb EPS has not been precisely determined, but it is known to contain a uronic acid component and to lack fucose (18).
FIG 3 The Psp and Rcs responses maintain Δψ in metal transport-deficient mutants during growth in metal-limited medium. Δψ was measured by flow cytometry of aliquots from cultures incubated for 2 h in LB with 625 µM dipyridyl. Flow cytometry was performed with live bacterial cells following 15 min of incubation with the Δψ-sensitive dye DiOC2(3), which exhibits green fluorescence that shifts toward red fluorescence following Δψ-dependent intracellular aggregation. Data are expressed as the mean red-to-green emission ratio of a population of 2 × 10⁴ cells; a representative plot of three biological replicates is shown. Statistical significance was determined with an unpaired t test (*, P < 0.05; **, P < 0.01). Abbreviations: S, sitA; F, feoB; Z, zupT; P, pspA; R, rcsA.
FIG 4 Mutations in the wza colanic acid and yjbEFGH operons eliminate EPS production. Expression of the colanic acid and yjbEFGH regulons is dependent on the RcsCDB phosphorelay. EPS production was induced by RcsA expressed in trans on a pBR322 replicon. (A) Purified EPS was subjected to a spectrophotometric assay for fucose and uronic acid. Values represent the mean of three biological replicates ± the standard deviation, and significance was determined with an unpaired t test (*, P < 0.05; n.s., not significant; n.d., not detected). (B) Representative images of colonies on LB agar plates formed by the WT strain or EPS-deficient mutants. Each strain contains either the pRcsA expression plasmid or the vector control. Colonies were allowed to grow for 3 days at 25°C. Scale bar, 5 mm.
WT cells overexpressing RcsA in trans showed a significant increase in both uronic acid and fucose relative to a vector control. The increase in uronic acid and fucose was eliminated by wza and yjb mutations. Colanic acid capsule overproduction results in mucoid colonies (25). Colony morphology further confirmed the lack of colanic acid production by a wza yjb mutant (Fig. 4B). Additional confirmation that neither the colanic acid capsule nor the Yjb EPS was being produced was provided by the failure to observe an increase in uronic acid (colanic acid and Yjb) or fucose (colanic acid) or the generation of mucoid colonies (colanic acid) in wza yjb mutants overexpressing RcsA.
EPS deficiency enhances the sensitivity of pspA mutant S. Typhimurium to CAMPs. CAMPs are amphipathic molecules that disrupt bacterial membranes and dissipate the PMF. The cationic P2 peptide derived from bactericidal/permeability-increasing (BPI) protein, BPI-P2, permeabilizes the bacterial outer membrane and disrupts energy-dependent processes (55). An rpoE pspA mutant S. Typhimurium strain has been previously shown to exhibit enhanced sensitivity to BPI-P2 (38). To determine whether the colanic acid and Yjb EPS protect cells from PMF-dissipating agents, we tested the susceptibility of WT and mutant strains to BPI-P2. The pspA and wza yjb mutants survived as well as the WT following exposure to 8 µg ml⁻¹ BPI-P2 for 45 min at 37°C (Fig. 5A). However, pspA yjb, pspA wza, and pspA wza yjb mutants were significantly more sensitive to BPI-P2 than an isogenic pspA mutant strain.
The Rcs system has been previously implicated in sensitivity to the CAMP polymyxin B (PMB), independent of the colanic acid capsule (27). The sensitivity of WT and mutant strains to PMB was tested to determine if the capsules are required for PMB resistance in a pspA mutant background. Exposure to 1 µg ml⁻¹ PMB for 1 h at 37°C did not affect the survival of a pspA, wza yjb, or pspA yjb mutant strain (Fig. 5B). However, pspA wza and pspA wza yjb mutants were significantly more sensitive to PMB than a pspA mutant, indicating that colanic acid but not the Yjb EPS promotes cell survival following PMB-mediated membrane damage in a pspA mutant. As the antimicrobial activity of CAMPs is dependent in part on PMF disruption, these observations are consistent with a role for colanic acid in PMF maintenance.
The Psp and Rcs stress responses maintain Δψ in stationary phase. The importance of the Psp response during stationary phase is well established. PspA is one of the most abundant proteins in stationary-phase cells (1,38). The Δψ of WT and mutant bacteria was measured to determine whether the Rcs system, colanic acid capsule, and Yjb EPS contribute to the maintenance of this component of PMF in early stationary phase. Although our initial experiments focused on phenotypes dependent on RcsA, some residual capsule synthesis can be observed in rcsA mutant strains as a result of capsular operon activation by the RcsB homodimer (14). Therefore, Δψ was measured in rcsB mutants that are completely incapable of capsule production. Overnight cultures were diluted 1:1,000 in fresh LB and grown at 37°C with agitation to an OD600 of 1.5. Aliquots were taken, and the Δψ was measured by using DiOC2(3) and flow cytometry. The Δψ of individual cells (Fig. 6A and C) is depicted as histograms representing the distribution of red-to-green fluorescence ratios in populations of 2 × 10⁴ cells. The distribution of a WT population treated with the protonophore carbonyl cyanide m-chlorophenylhydrazone (CCCP) was included as a control, showing a left-shifted histogram with a lower red-to-green fluorescence ratio, indicating depolarization of the Δψ. The Δψ was measured in four biological replicate experiments, and the average mean fluorescence intensities (MFIs) were calculated from histograms for statistical analysis (Fig. 6B and D). The WT histogram (Fig. 6A) appears normally distributed, with an average MFI of 570 ± 85 (Fig. 6B). All mutants showed left-shifted distributions relative to the WT, with mean MFIs significantly different from that of the WT, confirming the requirement of PspA for the maintenance of Δψ during stationary phase and demonstrating a role for the Rcs response system in WT cells. Although the histogram of an rcsB mutant appears slightly left shifted in comparison to that of a pspA mutant, the mean MFIs are comparable (pspA, 413 ± 110; rcsB, 411 ± 118). The pspA yjb mutant histogram was only slightly left shifted compared to that of a pspA mutant, and the mean MFIs were not significantly different, indicating the Yjb EPS is not required for Δψ maintenance in stationary phase. The pspA rcsB, pspA wza yjb, and pspA wza mutant histograms were all left shifted relative to that of a pspA single mutant and slightly left shifted in comparison to that of an rcsB single mutant. Statistical analyses of the mean MFIs showed significantly lower red-to-green fluorescence ratios in pspA rcsB, pspA wza yjb, and pspA wza mutant strains than in pspA and rcsB single mutant strains. Together, these observations demonstrate that the Psp and Rcs stress responses contribute independently to the maintenance of Δψ in stationary phase and that colanic acid capsule production is specifically required. Expression of rcsB on a plasmid fully complemented an rcsB mutation and also restored Δψ in a pspA mutant strain (Fig. 6C and D).
The Psp response and colanic acid capsule contribute to biofilm formation. Salmonella can form biofilms on biotic and abiotic surfaces, including glass, plastic, gallstones, HEp-2 cells, and chicken intestinal epithelium (31,56–58). The contribution of the colanic acid capsule to biofilm formation is well established (31,32), and induction of the Psp response has been observed in biofilms (59). Therefore, we tested whether the inability to mount the Psp response and produce the colanic acid capsule impacts biofilm formation in microtiter plates. For biofilm formation, overnight cultures were adjusted to an OD600 of 1.0, diluted 1:100 in LB, and then added to microtiter plate wells and grown statically for 48 h at 25°C. Biofilms were quantified by the amount of crystal violet (CV) bound to EPS as measured by absorbance at 595 nm. No defect in growth was observed in any of the strains under these assay conditions, as determined by OD600 measurement (data not shown). The pspA mutant formed significantly less biofilm than WT cells (Fig. 7), demonstrating the importance of the Psp response for Salmonella biofilm formation. Biofilms formed by pspA and pspA yjb mutants were similar, whereas pspA wza and pspA wza yjb mutants formed significantly less biofilm than a pspA mutant. These observations indicate that the Psp response and the colanic acid capsule contribute to S. Typhimurium biofilm formation, whereas the Yjb EPS does not, as previously observed (8). Decreased biofilm formation by the wza yjb mutant is also likely to result from absence of the colanic acid capsule, further confirming that this EPS is essential for Salmonella biofilm formation. The red, dry, and rough (RDAR) colony morphotype is indicative of biofilm formation by Salmonella on agar plates containing the dyes Congo red and Coomassie blue (57). WT RDAR colonies are not formed by pspA, wza yjb, or pspA wza yjb mutants (see Fig. S2), providing additional evidence that the Psp response and the colanic acid capsule support biofilm development.
Colanic acid-deficient mutants are more susceptible to ampicillin. β-Lactam antibiotics induce the colanic acid capsule and Yjb EPS, as well as the Psp operon (6,9,60). Sensitivity to ampicillin was measured to determine if the Psp response, colanic acid, or the Yjb EPS is protective against this clinically relevant antibiotic. Overnight cultures were used to inoculate fresh LB, and strains were grown to logarithmic phase before the addition of 200 µg ml⁻¹ ampicillin and determination of survival by dilution, plating, and enumeration of CFU. A pspA mutation did not affect Salmonella sensitivity to ampicillin, nor did the introduction of a yjb mutation into a pspA mutant background (Fig. 8), at any time point measured. The wza yjb, pspA wza, and pspA wza yjb mutant strains, which lack the colanic acid capsular operon, all exhibited impaired survival following ampicillin treatment. Therefore, colanic acid supports Salmonella survival following ampicillin exposure.
FIG 8 Cultures were allowed to continue growth in the presence of antibiotic, samples were taken and plated at the time points indicated, and CFU were enumerated after 24 h. Susceptibility was determined by dividing the CFU count at the posttreatment times indicated by the CFU count of cultures immediately before antibiotic addition. The average survival ± the standard deviation of four biological replicates are shown. Significance was determined with a paired t test (*, P < 0.05; **, P < 0.01).
DISCUSSION
Although the extracytoplasmic stress responses of enteric bacteria react to different signals and control largely nonoverlapping sets of genes, there is substantial evidence that these responses can act in an integrated fashion. For example, the stress response controlled by the alternative sigma factor σE is expressed in response to the presence of misfolded outer membrane proteins in the periplasm and preserves cell envelope integrity (61). Abrogation of this response by the creation of an rpoE null mutation in Salmonella results in compensatory expression of the Cpx and Psp responses (37,38). The CpxAR TCS senses the accumulation of misfolded proteins in the periplasm and responds by inducing protein-folding and -degrading factors (61) that have some functional overlap with the σE regulon (62). CpxR also cooperates with the BaeSR TCS in the regulation of certain genes (61) that respond to drug-induced envelope damage by activating the expression of efflux pumps and ameliorating oxidative stress (61,63). Integration of the extracytoplasmic stress responses allows Salmonella to respond to a diverse array of environmental signals that threaten cell envelope integrity (39). Here, we describe the compensatory role of the Salmonella Rcs stress response system and colanic acid capsule production in the absence of the Psp response and the role of colanic acid in preserving PMF.
The Psp response was originally described as a system that preserves PMF in response to cell envelope disruption by filamentous bacteriophages (1). Our laboratory subsequently demonstrated that the essential role of the Psp response in Salmonella virulence is to support energy-dependent metal importation in the host environment (40). Unexpectedly, we observed that metal deprivation of a Salmonella strain lacking the Sit, Feo, and ZupT (SFZ mutant) metal transport systems caused the cells to lose viability if the Psp system was also inactivated (SFZP mutant) (40) (Fig. 1B). In the present study, we demonstrate that the loss of viability of an SFZP mutant is accompanied by loss of cell envelope integrity (Fig. 2). We hypothesize that metal depletion of an SFZP mutant impairs electron transport and results in energy depletion with a heightened dependency on the phage shock response to maintain PMF. The membrane instability observed in an SFZP mutant may result from disruption of the PMF-dependent formation of the cell envelope-stabilizing Tol-Pal complex (47), resulting in loss of viability.
A transcriptomic analysis of an SFZP mutant under metal-deprived conditions revealed expression of the Rcs system (see Table S1), which was confirmed by qPCR (Fig. 1A). Expression of the RcsA regulator from a plasmid is able to restore growth to the SFZP mutant in metal-deprived medium (Fig. 1B), indicating that the Rcs system is playing a compensatory role in the absence of PspA. Salmonella SFZP mutants continue to exhibit envelope structural defects (Fig. 2E to H) despite Rcs induction, indicating that the endogenous expression of Rcs is insufficient to completely compensate for the loss of the Psp response under these environmental conditions.
In view of the established role of the Psp response in PMF maintenance during envelope stress (1), we investigated whether the Rcs system also affects the Δψ component of PMF. We observed that metal deprivation of Salmonella results in depolarization of the Δψ, which is sustained by the Psp response (Fig. 3). Under these conditions, the Δψ of an SFZPR mutant lacking both the Psp and Rcs stress responses is significantly lower than that of mutants lacking only the Psp response. This suggests that the Rcs system helps to preserve PMF in the absence of the Psp response. As an S. Typhimurium pspA mutant was previously found to have attenuated virulence for mice expressing the metal transporter Nramp1 (40), we determined whether a pspA rcsB mutant was less virulent than a pspA mutant during S. Typhimurium infection of Nramp1-expressing C3H/OuJ (Nramp1+) mice. However, a competitive-infection experiment showed no effect of an rcsB mutation on virulence in this model (see Fig. S3). Further attenuation of the virulence of a pspA rcsB mutant was not observed, possibly because the effects of a pspA mutation on virulence are sufficiently marked that further attenuation could not be detected. Other investigators have found that rcsB mutants can be outcompeted by WT S. Typhimurium after 3 weeks of competitive infection of 129SvC6 mice (27).
Other conditions that stimulate colanic acid production are also known to perturb membrane energetics. For example, IgA monoclonal antibody Sal4 impairs membrane integrity, transiently reduces PMF (64), and induces colanic acid synthesis (65). Low concentrations of the CAMP PMB permeabilize the cell membrane and disrupt respiration, and higher PMB concentrations result in Δψ depolarization (66). PMB also induces colanic acid synthesis (67), and we observed that absence of the colanic acid capsule renders pspA mutant Salmonella more susceptible to this antimicrobial agent (Fig. 5B).
Although elimination of the Psp response by itself did not affect the survival of cells exposed to the CAMP PMB or BPI-P2, elimination of both the Psp response and colanic acid synthesis enhanced susceptibility to both peptides (Fig. 5). Deletion of the yjbEFGH operon did not increase the susceptibility of a pspA mutant to PMB but enhanced its sensitivity to the BPI-P2 antimicrobial peptide (67), suggesting that the Yjb EPS subserves a similar function.
Both the Psp (1) and RcsB (68) responses are induced during stationary phase. Stationary-phase cultures of pspA or rcsB mutant Salmonella exhibited significantly lower Δψ than the WT (Fig. 6A and B), and the Δψ of a pspA rcsB double mutant was even lower, demonstrating that both the Psp and Rcs stress responses maintain Δψ in stationary phase. Measurement of Δψ in pspA mutants lacking either the wza or yjb operon indicated that colanic acid, but not the Yjb EPS, is essential for Δψ maintenance during stationary phase. With the construction of an rcsB mutation, which completely abolishes expression of the Rcs regulon (14), we also found that the Rcs system is required for Δψ preservation, even in the presence of the Psp response (Fig. 5), and can be restored by the expression of RcsB in trans (Fig. 6B). We did not observe a decrease in Δψ in a wza yjb rcsB mutant beyond what is observed in the rcsB mutant, suggesting that no additional RcsB-regulated factors are required for PMF maintenance (see Fig. S4).
Colanic acid is highly expressed in Salmonella biofilms, most likely to address membrane bioenergetic requirements during slow growth and nutrient limitation (30,59,69). Decreased biofilm formation by a pspA mutant (Fig. 7) suggests that PMF is reduced in biofilms, and Salmonella mutants lacking both pspA and the colanic acid capsule formed even less biofilm, suggesting that, in addition to its proposed structural role, colanic acid may function to maintain membrane energetics in biofilms as well. Bacteria in biofilms are notable for their resistance to killing by antibiotics (70), and we observed that colanic acid capsule biosynthesis contributes to resistance to ampicillin (Fig. 8), an antibiotic used to treat Salmonella infections. Thus, the colanic acid capsule contributes to the antibiotic tolerance of Salmonella in biofilms.
The strongly negative charge of colanic acid (26) is likely to account for its ability to maintain Δψ under stress conditions. A negative charge adjacent to the bacterial cell surface requires protons as counterions. A local increase in the proton concentration at the cell surface can enhance both the surface potential and ΔpH, which has been shown to increase ATP generation in E. coli (71).
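A back-of-envelope sketch of this argument, under the standard definition of PMF (Δp = Δψ − (2.303RT/F)·ΔpH, with ΔpH = pHin − pHout): protons accumulating as counterions at a negatively charged capsule lower the local external pH and so increase the ΔpH term. The numbers below are illustrative assumptions, not measurements from this study.

```python
# PMF contribution of a lower local surface pH at 37 degrees C.
F = 96_485.0   # Faraday constant, C/mol
R = 8.314      # gas constant, J/(mol*K)
T = 310.0      # 37 degrees C in kelvin

def pmf_mv(delta_psi_mv: float, ph_in: float, ph_out: float) -> float:
    z = 2.303 * R * T / F * 1000.0       # ~61.5 mV per pH unit at 37 C
    return delta_psi_mv - z * (ph_in - ph_out)

print(pmf_mv(-120.0, 7.6, 7.0))  # bulk medium pH: ~ -157 mV
print(pmf_mv(-120.0, 7.6, 6.5))  # lower local pH at the capsule: ~ -188 mV
# A more negative Delta p means a larger proton-driven driving force.
```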
The yjbEFGH operon appears to be associated with the production of a distinct type of EPS, but its structure and cell association have not been defined (8). We therefore cannot say why it is unable to maintain PMF in the absence of colanic acid. In the future, the analysis of other types of capsule whose structures and charge are characterized may provide further insight into the mechanism of PMF maintenance by colanic acid.
Our observations corroborate Model's original hypothesis that the Psp response conserves PMF under stress conditions and provides evidence that the Rcs system and specifically colanic acid also contribute to this function. This demonstrates a novel physiological role for the colanic acid capsule that may provide a unifying mechanism to account for its diverse contributions to stress resistance in enteric bacteria.
MATERIALS AND METHODS
For additional information regarding our materials and methods, see Text S1, and for information about the strains, plasmids, and primers used, see Table S2.
Bacterial growth conditions. All strains were routinely cultured in LB with shaking at 250 rpm at 37°C unless otherwise stated. Antibiotics were used at the following concentrations, as indicated: ampicillin, 100 µg ml⁻¹; kanamycin, 50 µg ml⁻¹; chloramphenicol, 20 µg ml⁻¹; tetracycline, 25 µg ml⁻¹.
Strain and plasmid construction. Mutant strains were constructed with the λ-Red recombinase system (54). The wza colanic acid capsule mutant was constructed by the λ-Red tetRA replacement method (53). All mutations were verified by PCR with gene-specific primers and transduced into a clean 14028s background with bacteriophage P22. To generate plasmid JP102, plasmid pATC118 (17) was digested with EcoRI and HindIII to generate an 860-bp DNA fragment containing the Δ37 rcsA complementing fragment. The 860-bp fragment was then cloned into pJK392 at the EcoRI and HindIII sites and ligated with T4 DNA ligase (New England Biolabs, Ipswich, MA). To generate plasmid JP103, primers JPP249/250 were used to PCR amplify the rcsB promoter and coding region (68). Primers were designed to include the −35 and −10 elements of PrcsB, which are located within the rcsD coding region (68). The rcsB gene was cloned into the stable low-copy cloning vector pRB3-273C (72) at the SmaI site and verified by sequencing.
Flow cytometry. Overnight cultures were diluted 1:1,000 in fresh LB containing the metal chelator 2,2′-dipyridyl (Sigma-Aldrich) at 625 µM in a volume-to-flask ratio of 9:25. After 2 h of growth, approximately 1 × 10⁶ CFU were added to a 5-ml flow cytometry tube containing 1 ml of permeabilization buffer (10 mM Tris [pH 7.5], 1 mM EDTA) and 30 µM DiOC2(3) (Sigma-Aldrich) and incubated in the dark for 15 min at room temperature. A total of 2 × 10⁴ cells were assayed with an LSRII flow cytometer with a 488-nm excitation wavelength. Green emission was detected through a 505-nm long-pass filter with a 530/30-nm bandpass filter, and red emission was detected through a 600-nm long-pass filter with a 610/20-nm bandpass filter. Gates for bacterial populations were based on the WT population by using forward versus side scatter and red versus green emission. For measurements of stationary-phase cultures, overnight cultures were diluted 1:1,000 in fresh LB in a volume-to-flask ratio of 1:5, grown to an OD600 of ~1.5, and then assayed by flow cytometry as already described. Flow cytometry data were processed with FlowJo v 10.0.7 software (TreeStar, Inc.) and analyzed by using the red-to-green fluorescence ratio as previously described (73). Flow cytometry was performed at the University of Washington Pathology Flow Cytometry Core Facility.
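A hedged sketch of the downstream ratio analysis described here: per-event red and green intensities are combined into a red-to-green ratio taken as proportional to Δψ, and the mean of the ratio distribution serves as the MFI compared between strains. The arrays below are simulated stand-ins for gated FCS events; the actual gating and FlowJo export are not reproduced.

```python
import numpy as np

# Simulated per-event fluorescence for 2 x 10^4 gated cells.
rng = np.random.default_rng(0)
green = rng.lognormal(mean=5.0, sigma=0.3, size=20_000)  # surface-bound dye
red = rng.lognormal(mean=6.3, sigma=0.4, size=20_000)    # aggregated dye

ratio = red / green          # Delta-psi-dependent red-to-green ratio per cell
mfi = ratio.mean()           # population MFI compared between strains
print(f"mean red/green ratio: {mfi:.1f} +/- {ratio.std(ddof=1):.1f}")

# A depolarized population (e.g. CCCP-treated) would show a left-shifted
# ratio histogram, i.e. a lower mean than the untreated WT control.
```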
Capsule purification and quantification. An overnight culture was diluted 1:1,000 in 50 ml of fresh LB with ampicillin and grown to an OD600 of ~2.0. One milliliter of the 50-ml culture was used to enumerate CFU by dilution and plating on LB agar, and 25 ml was pelleted, resuspended in an equal volume of phosphate-buffered saline (PBS), and then boiled for 15 min to inactivate EPS-degrading enzymes and completely release EPS from the cell surface. The boiled sample was allowed to cool to room temperature and then centrifuged at 25,400 × g for 30 min at 4°C, and the supernatant was combined with 3 volumes of 70% ethanol and incubated overnight at 4°C. Following overnight incubation, the sample was centrifuged at 25,400 × g for 30 min at 4°C and the resulting pellet was resuspended in 1 ml of sterile water and dialyzed against distilled water for 48 h. The final sample was stored at 4°C until quantification. Total fucose and uronic acid contents were quantified in accordance with established protocols (74,75). Total sugar contents were normalized to CFU counts and expressed in micrograms per CFU per milliliter.
Susceptibility assays. Measurement of PMB sensitivity was performed in glass culture tubes as previously described (76). PMB stock was made in a glass tube, stored at 4°C, and used for no longer than 1 week.
Synthesis of the BPI-P2 peptide was previously described (38). P2 sensitivity was determined by a previously developed method (77). Briefly, cultures were grown in Trypticase soy broth (TSB), diluted 1:100 in fresh TSB, and then grown to an OD600 of ~1.0. A total of 10⁶ bacteria ml⁻¹ were treated with 8 µg ml⁻¹ of BPI-P2 peptide, and cells were kept stationary at 37°C for 45 min. Input CFU counts were determined at time zero by plating unexposed samples on LB agar and counting colonies after 24 h at 37°C. Percent survival was determined by dividing the CFU count obtained after antimicrobial exposure by the input CFU count and normalizing the result to WT percent survival.
Susceptibility to 200 and 100 µg ml⁻¹ ampicillin was determined as previously described (60). Briefly, overnight cultures were diluted 1:1,000 in fresh LB and grown for 3 h before plating to enumerate CFU before the addition of ampicillin and at the time points indicated following antibiotic treatment. Percent survival was calculated by dividing the CFU count obtained after ampicillin exposure by the unexposed input CFU count.
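The following is a hedged sketch of the survival arithmetic used for the PMB, BPI-P2, and ampicillin assays: CFU after exposure divided by the input CFU, optionally normalized to the WT value as done for the peptide assays. The CFU numbers in the example are invented.

```python
from typing import Optional

def percent_survival(cfu_after: float, cfu_input: float,
                     wt_survival: Optional[float] = None) -> float:
    # Raw survival as a percentage of the unexposed input CFU count.
    survival = 100.0 * cfu_after / cfu_input
    # For the peptide assays, survival is expressed relative to the WT.
    return survival if wt_survival is None else 100.0 * survival / wt_survival

wt = percent_survival(3.2e5, 1.0e6)        # WT survival, % (32.0)
print(percent_survival(4.0e4, 1.0e6, wt))  # mutant survival, % of WT (12.5)
```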
Growth kinetics in the presence of dipyridyl were measured as previously described (40), with a Bioscreen C Microbiology microplate reader (Growth Curves USA).
Microscopy. To prepare cells for microscopy, overnight cultures were diluted 1:1,000 in 1 liter of fresh LB with the metal chelator 2,2′-dipyridyl (Sigma-Aldrich) at 625 µM in a volume-to-flask ratio of 9:25. Cultures were grown with shaking at 37°C, and aliquots were taken at the time points indicated, pelleted, and kept on ice. For DIC microscopy, pelleted cells were resuspended in 0.85% NaCl and 2 µl was immobilized on an agarose pad and imaged with a Nikon Eclipse TE200 inverted microscope. For TEM, cells were pelleted, washed two times with PBS, and resuspended in 1 ml of 0.5× Karnovsky fixative. TEM imaging was performed at the University of Washington Electron Microscopy Center.
RNA preparation, cDNA synthesis, and qPCR. Overnight cultures were diluted 1:1,000 in 1 liter of fresh LB with the metal chelator 2,2′-dipyridyl (Sigma-Aldrich) at 625 µM in a volume-to-flask ratio of 9:25, and 200 ml of cells was pelleted after 2 h of growth. The pellet was resuspended in 2.5 ml of Trizol reagent. Contaminating DNA was removed by a 1-h DNase (Fermentas) treatment. Following the DNase treatment step, the RNA was further purified by the acid-phenol method and stored at −80°C. RNA purity was determined on a 2% agarose gel and with a NanoDrop spectrophotometer. The Qiagen QuantiTect reverse transcription kit was used to synthesize cDNA with 500 ng of RNA as the input. qPCR was performed with the SYBR green kit (Qiagen, Valencia, CA) and the CFX96 real-time system (Bio-Rad, Hercules, CA) with rpoD as an internal control.
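The paper normalizes qPCR targets to the rpoD internal control; a common way to compute fold induction from Ct values is the 2^−ΔΔCt method, sketched below under that assumption. The method choice and the Ct numbers are illustrative, not taken from the study.

```python
def fold_change(ct_target_treated: float, ct_ref_treated: float,
                ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    # Normalize the target Ct to the rpoD reference in each condition,
    # then compare treated (dipyridyl) vs. untreated cultures.
    d_ct_treated = ct_target_treated - ct_ref_treated
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_treated - d_ct_ctrl)   # 2^-(delta-delta-Ct)

# Invented Ct values: e.g. a strongly induced Rcs-regulon gene, ~70-fold.
print(fold_change(18.1, 16.0, 24.3, 16.1))
```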
CV-based biofilm assays.
Overnight cultures were brought to an OD600 of ~1.0 with fresh LB. Adjusted cultures were then diluted 1:100 in fresh LB in a 96-well polystyrene microtiter plate. Plates were sealed with Parafilm and incubated at 25°C for 48 h. The OD600 was measured to determine growth, and then culture supernatants were decanted, and unbound bacteria were removed by washing with PBS (pH 7.4). Remaining cells and cell-associated material were stained with 0.1% CV for 10 min. After staining, wells were washed twice with PBS and the dye was solubilized with an 80:20 (vol/vol) ethanol-acetone mixture. CV absorbance was quantified at 595 nm.
"year": 2017,
"sha1": "aa63112d64486edda9b5437d3fed0474a6fb5e6c",
"oa_license": "CCBY",
"oa_url": "https://mbio.asm.org/content/mbio/8/3/e00808-17.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91dc7ad4944446480977e5a126854f02d6bcb409",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
New mouse models for metabolic bone diseases generated by genome-wide ENU mutagenesis
Abstract
Metabolic bone disorders arise as primary diseases or may be secondary due to a multitude of organ malfunctions. Animal models are required to understand the molecular mechanisms responsible for the imbalances of bone metabolism in disturbed bone mineralization diseases. Here we present the isolation of mutant mouse models for metabolic bone diseases by phenotyping blood parameters that target bone turnover within the large-scale genome-wide Munich ENU mutagenesis screen. A screening panel of three clinical parameters, also commonly used as biochemical markers in patients with metabolic bone diseases, was chosen. Total alkaline phosphatase activity and total calcium and inorganic phosphate levels in plasma samples of F1 offspring produced from ENU-mutagenized C3HeB/FeJ male mice were measured. Screening of 9,540 mice led to the identification of 257 phenodeviants of which 190 were tested by genetic confirmation crosses. Seventy-one new dominant mutant lines showing alterations of at least one of the biochemical parameters of interest were confirmed. Fifteen mutations among three genes (Phex, Casr, and Alpl) have been identified by positional-candidate gene approaches and one mutation of the Asgr1 gene, which was identified by next-generation sequencing. All new mutant mouse lines are offered as a resource for the scientific community.
Introduction
Metabolic bone diseases originate from endocrine dysfunctions as well as from heterogeneous determinants, including age, lifestyle, and environmental influences. Bone turnover is physiologically regulated by hormones, cytokines, and growth factors and is under the control of numerous signaling pathways (Chavassieux et al. 2007). Metabolic diseases may have primary or secondary impact on bone mineralization. For investigating disease development and progression and to understand the underlying mechanisms, mice have been shown to serve successfully as model organisms (e.g., Abe et al. 2007; Kurima et al. 2002; Marklund et al. 2010; McGowan et al. 2008). Random N-ethyl-N-nitrosourea (ENU) mutagenesis is a promising approach to obtain mouse models for inherited human diseases (Hrabě de Angelis and Balling 1998). This has been shown in worldwide ENU mutagenesis programs, including screens for bone metabolism, using dual-energy X-ray absorptiometry (DEXA), X-ray analysis, biochemical markers, or the SHIRPA protocol for the phenotyping of ENU mutagenesis-derived C3H/HeJ, BALB/cCRLAnn, and C57BL/6J mice (Barbaric et al. 2008; Smits et al. 2010; Srivastava et al. 2003).
Within the large-scale Munich ENU mutagenesis screen more than 850 mutant mouse lines have been isolated, derived from a large-scale genome-wide screen (Hrabě de Angelis et al. 2000) or from an implemented modifier screen on Dll1 lacZ knockout mice (Rubio-Aliaga et al. 2007). Our Dysmorphology Screen focuses on the isolation of new mouse models for hereditary metabolic bone diseases (Lisse et al. 2008).
In previous studies in mice, the reliability of biochemical markers for skeletal disorders, including alkaline phosphatase (ALP), has been demonstrated (Srivastava et al. 2001). Combined ALP, total calcium (Ca), and inorganic phosphate (Pi) measurements in serum or plasma are routinely performed in patients with metabolic bone diseases (Table 1).
Ca and Pi homeostasis is balanced by intestinal absorption, mobilization or binding in bone, and renal excretion. Ca levels directly and indirectly influence intestinal phosphate absorption. Much less is known about the influences on Pi homeostasis (Bergwitz and Jüppner 2010). A key process in maintaining phosphate homeostasis is the reabsorption of phosphate from urine in the renal proximal tubules. A previously identified phosphaturic factor, FGF23 (fibroblast growth factor 23), acts as an endocrine hormone on the regulation of Pi reabsorption in the kidney and on renal vitamin D metabolism (ADHR Consortium 2000; Strom and Jüppner 2008).
Mice
For this study we used C3HeB/FeJ (C3H) inbred mice purchased originally from the Jackson Laboratory (Bar Harbor, ME, USA) and bred in our animal facility. The mice were housed and handled according to the federal animal welfare guidelines and the state ethics committee approved all animal studies. The mice were kept in a 12/12-h dark-light cycle and provided standard chow ad libitum (TPT total pathogen-free chow #1314: calcium content 0.9 %, phosphate 0.7 %, vitamin D3 600 IE; Altromin, Lage, Germany) and water. Hygienic monitoring was performed following FELASA recommendations (Nicklas et al. 2002). Mutant mouse lines derived from our screen were given internal lab codes and were assigned with official gene symbols and names after the mutation was identified.
ENU mutagenesis
ENU mutagenesis treatment of inbred strain C3H males was as described previously (Aigner et al. 2011). Litters produced from the ENU-treated C3H males (G0) are designated F1 in the following, while offspring produced from confirmed mutant F1 animals are designated G2.
Generation of F1 mice and confirmation of phenotypes in a dominant breeding strategy
The F1 animals investigated for this study were derived from a total of 893 G0 males from 15 different ENU-treated groups. Blood samples of 9,540 F1 animals (4,606 females and 4,934 males) were screened for alterations of total ALP, Ca, and Pi blood plasma levels. F1 mice showing alterations of blood-based parameters were retested after 14 days. Breeding for confirmation of a dominant phenotype was performed as described previously (Aigner et al. 2007).
Blood measurements
Blood samples (250 µl) were obtained from 12-week-old nonfasted anesthetized mice by puncture of the retro-orbital sinus, as already described. All samples were collected between 9:00 and 11:00 a.m.
Plasma analysis of ALP, Ca, and Pi was done using an Olympus AU400 autoanalyzer (Olympus, Hamburg, Germany) and adapted test kits (Klempt et al. 2006). Descriptive data are expressed as mean ± standard deviation. PTH values were analyzed with a Mouse Intact PTH ELISA Kit (TECOmedical, Bünde, Germany).
DXA and X-ray measurement
DXA (pDEXA Sabre, Norland Medical Systems Inc., Basingstoke, Hampshire, UK, distributed by Stratec Medizintechnik GmbH, Pforzheim, Germany) and X-ray (Faxitron, Hewlett Packard, Palo Alto, CA, USA) measurements were performed for in-depth analysis in selected mouse lines as described previously (Fuchs et al. 2011).
Genetic mapping
To map the mutations, ENU-derived mutant mice were outcrossed to wild-type C57BL/6J (B6) mice, as described previously (Aigner et al. 2009). For linkage analysis, SNP (single-nucleotide polymorphism) genotyping by high-throughput MALDI-TOF (matrix-assisted laser desorption/ionization time-of-flight) technology supplied by Sequenom (San Diego, CA, USA) was performed with a panel containing 158 markers evenly distributed over the whole genome (Klaften and Hrabě de Angelis 2005). We developed the internal MyGenotype database for statistical SNP data analysis.
Mutation analysis
Casr, Phex, Alpl, and Asgr1 exons were amplified with intronic primers and directly sequenced using BigDye v3.1 cycle sequencing (Applied Biosystems, Life Technologies, Foster City, CA, USA). Casr consists of 7 exons (NM_013803), Phex (NM_011077) consists of 22 exons, and Alpl (NM_007431) consists of 12 exons. All primer sequences are available upon request. The mutation of the BAP005 mutant line was detected by chromosome sorting (CHROMBIOS, Raubling, Germany) and whole-chromosome sequencing on a Genome Analyzer IIx (Illumina, San Diego, CA, USA). DNA extraction from sorted chromosomes 11 was performed overnight at 42°C with 0.25 M EDTA, 10 % Na lauroyl sarcosine, and 50 µg proteinase K. Extracted DNA was precipitated and resuspended in TE buffer. Paired-end libraries were constructed with the Illumina paired-end DNA sample preparation kit according to the manufacturer's protocols and as described previously (Eck et al. 2009). Alignment of the reads was performed with the BWA software, and subsequent analysis was performed with the SAMtools package. In total, ~82 million reads and ~157 million reads were generated for the mutant and control strain, respectively, of which 64 % mapped to the target chromosome 11 for the mutant strain, while 26 % of the control strain reads were on target. The identified nonsynonymous sequence variation in Asgr1 was confirmed in mutant mice by capillary sequencing.
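As a hedged illustration of the on-target statistics quoted above (64 % of mutant-strain reads versus 26 % of control reads mapping to chromosome 11), the sketch below computes the mapped-read fraction for a target chromosome from `samtools idxstats`-style output. The parsing is generic and the example counts are invented.

```python
def on_target_fraction(idxstats_text: str, target: str = "chr11") -> float:
    # Each idxstats line: reference name, length, mapped reads, unmapped reads.
    mapped = {}
    for line in idxstats_text.strip().splitlines():
        ref, _length, n_mapped, _unmapped = line.split("\t")
        mapped[ref] = int(n_mapped)
    total = sum(mapped.values())
    return mapped.get(target, 0) / total if total else 0.0

example = "chr11\t121982567\t52500000\t0\nchr12\t120129022\t29500000\t0"
print(f"{on_target_fraction(example):.0%}")  # ~64% for the sorted library
```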
Statistical analysis
Statistical analyses of parameters of F1 animals and sex- and age-matched wild-type C3H mice were performed using the software package JMP Release 5.1 (SAS Institute, Cary, NC, USA). The reference values were obtained from untreated age-matched C3H wild-type control groups (50 males and 50 females). Single F1 variants for ALP activity and Ca levels were defined by a Z score ≥3 or ≤−3 compared to the age-matched control groups. Mice showing hypophosphatemia were tested three times to confirm Pi changes. A Z score of ≤−2 was taken to select variants for hypophosphatemia. Statistical differences (P values) of the means of ALP, Ca, or Pi blood values between all tested affected mice and nonaffected littermates of a mutant line were assessed by one-way analysis of variance (ANOVA), t test (giving mean ± SD values), and the Mann-Whitney rank sum test (giving median values) using SigmaStat 3.5 (Systat Software Inc., Chicago, IL, USA).
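A minimal sketch of the variant call described above: a Z score is computed against the age- and sex-matched C3H control distribution, with |Z| ≥ 3 flagging ALP or Ca phenodeviants and Z ≤ −2 (confirmed in three repeat measurements) flagging hypophosphatemia candidates. The control values below are invented for illustration.

```python
import numpy as np

def z_score(value: float, controls: np.ndarray) -> float:
    # Standardize against the wild-type control group (sample SD).
    return (value - controls.mean()) / controls.std(ddof=1)

controls_alp = np.array([148.0, 152.0, 139.0, 160.0, 145.0, 150.0])  # U/l
z = z_score(266.4, controls_alp)
print(f"Z = {z:.1f}; ALP phenodeviant: {abs(z) >= 3}")
# The Pi cutoff differs: a candidate needs Z <= -2 in three repeat tests.
```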
Overall results and statistics
In order to identify early stages of disturbed bone turnover, we investigated the diagnostic value of routine assays for ALP activity and Ca and P i levels in the plasma of mice derived from ENU-treated males for its comparability to their use in human patients (Table 1). This table also shows other mouse lines obtained for selected metabolic bone diseases and the observed alterations of plasma parameters in these models. Since we were interested only in mouse lines showing alterations of the bone ALP (bALP) isoform of the measured total ALP enzyme, variants with additional alterations of ALAT (alanine-amino-transferase) and ASAT (aspartate-amino-transferase) levels were excluded from this study. Two hundred fifty-seven phenodeviants (2.7 %, 87 females and 170 males) of 9,540 F1 animals showed alterations in at least one of the three parameters of interest (ALP, Ca, and P i ) in two repeated blood measurements. One hundred ninety of the 257 (74 %) phenodeviants were mated to wild-type C3H mice in confirmation crosses. In 71 of the mated 190 (37 %) (25 females and 46 males), the observed phenotype was genetically transmitted as a dominant trait (Table 2); however, six of these mutant lines were lost because no mutant male offspring was produced for sperm cryopreservation. For 110 of the mated 190 (58 %) phenodeviants, inheritance could not be confirmed because of sterility (n = 22/110, 20 %), the mice died due to unknown reasons (n = 15/110, 14 %), or the hypothesis of a dominant mutation was excluded (n = 73/110, 66 %). Confirmation crosses for the remaining 9 of the 190 phenodeviants are still underway. Sixty-seven of the 257 (26 %) phenodeviants were not mated due to space limitations; however, their sperm was frozen. Founder F1 mice with a similar phenotype and derived from the identical G0 male were expected to carry the identical mutation. Fifteen mutations have been identified resulting in new alleles of the Phex, Casr, and Alpl genes (Table 3).
New mouse lines carrying mutations of the Phex (phosphate-regulating gene with homologies to endopeptidases on the X-chromosome) gene

Affected animals of the BAP012 (Bone screen Alkaline Phosphatase No. 012) mutant line displayed a significant (P ≤ 0.001) decrease in plasma P i levels. Female mutant mice (n = 42) exhibited a P i value of 1.3 ± 0.2 mmol/l compared to female wild-type mice (2.0 ± 0.3 mmol/l, n = 11). Male mutant mice (n = 7) had a P i value of 1.2 ± 0.1 mmol/l compared to 2.0 ± 0.3 mmol/l in male wild-type mice (n = 44). Mean ALP activity was significantly elevated (P ≤ 0.001) in female mutants (266.4 ± 35.3 U/l) compared to wild-type littermates (147.9 ± 17.9 U/l), and also in mutant male mice (370.9 ± 88.5 U/l) compared to their wild-type littermates (120 ± 8.5 U/l). In addition to these biochemical alterations, all mutants showed reduced body size, shortened hind limbs, and mild head-tossing behavior as described in other Phex mouse models (Lorenz-Depiereux et al. 2004; Moriyama et al. 2011). Genetic crosses revealed X-linked inheritance of the phenotype: mutant mice of both sexes were derived from mated mutant females, whereas matings of male mutants produced only female mutants. Based on the phenotypic data, the causative mutation was hypothesized to be in the Phex gene. DNA sequence analysis of the Phex gene revealed a new hemizygous nonsense mutation in exon 2 (c.148A>T, p.Lys50X) (Fig. 1a). The mutation is located within the large extracellular domain of the protein close to the transmembrane domain. The Phex gene in mice is syntenic to the human PHEX gene, which is organized into 22 exons and encodes a type II transmembrane protein with homology to zinc metallopeptidases (HYP Consortium 1995). Inactivating mutations of the PHEX gene cause X-linked dominant hypophosphatemic rickets (XLHR), which has an incidence of 1:20,000 and is the most common familial form of hypophosphatemic rickets in humans (Burnett et al. 1964; Tenenhouse 1999). Mice of the BAP024 mutant line express similar phenotypes, with the same gender influences on inheritance as the C3HeB/FeJ-Phex BAP012 mice. In BAP024 we found a new missense mutation in exon 22 of the Phex gene (c.2197T>C, p.Cys733Arg) (Fig. 1b), also located in the large extracellular catalytic domain of the protein. The cysteine at position 733 is highly conserved among other vertebrate species (Du et al. 1996). A cysteine-to-serine substitution at the position corresponding to the C3HeB/FeJ-Phex BAP024 mutation has been described recently in a patient with XLHR (Filisetti et al. 1999). No spontaneous Phex point mutations on the C3H strain have been isolated previously.
New mouse lines carrying mutations of the Casr (calcium-sensing receptor) gene

The BCH002 (Bone screen Calcium High No. 002) line showed a statistically significant increase of Ca levels in mutant animals compared to wild-type littermates (P ≤ 0.001). The female mutants' Ca level was 2.9 ± 0.1 mmol/l (n = 23) compared to 2.43 ± 0.1 mmol/l for wild-type littermates (n = 19). The male mutant value was 2.87 ± 0.1 mmol/l (n = 20) compared to the wild-type littermates' value of 2.41 ± 0.1 mmol/l (n = 20). Fifty-three percent of female and male mutant BCH002 mice had slightly reduced P i levels. Histological analysis showed enlarged parathyroid glands in heterozygous mutant mice (Fig. 2a). A group of 11 female (6 mutants, 5 wild types) and 20 male mice (10 mutants, 10 wild types) was tested for PTH values, resulting in significantly raised median PTH values for mutant mice (P ≤ 0.001): female mutants, 214.9 pg/ml (25th percentile 203.3 pg/ml and 75th percentile 265.7 pg/ml), and wild types, 85.7 pg/ml (25th percentile 79.6 pg/ml and 75th percentile 113.1 pg/ml). Male mutants showed 235 pg/ml (25th percentile 191 pg/ml and 75th percentile 409.9 pg/ml) compared to wild types showing 102.7 pg/ml (25th percentile 78.8 pg/ml and 75th percentile 117.2 pg/ml). So far eight pups were derived from a first heterozygous intercross, but no homozygous mutant was found. Mapping analysis of 40 mutant and 20 wild-type BCH002 animals derived from the dominant backcrosses to the B6 strain revealed linkage to chromosome 16 (Table 4), with the highest χ² value at the marker rs4186801 (51.47 Mb, mouse genome Build 37.1, UCSC). In this region Casr was the most promising candidate gene for the observed phenotype. DNA sequence analysis of the Casr gene revealed a new heterozygous missense mutation (c.2579T>A, p.Ile859Asn) within the protein-coding region of exon 7 (Fig. 2b) of the gene that was not present in wild-type C3H and B6 mice. CASR belongs to the family of G-protein-coupled receptors (GPCRs) and is an integral membrane protein that signals changes in the extracellular calcium concentration to parathyroid cells.
In addition, six new alleles of the Casr gene were isolated in other mouse lines (BCH003, BCH004, BCH007, BCH011, BCH013, and BCH014) (Table 3). For these mouse lines with mutations of the Casr gene, PTH data are underway. The missense and nonsense mutations of these mouse lines were located in exons 3, 4, 5, and 7 of the Casr gene (Table 3). [Table 2 fragment: TRE002, high ALP, all mutants trembling (high ALP probably a secondary effect), 100 % transmission; all mouse lines are listed in alphabetical order of the internal lab names; according to dominant inheritance, 50 % mutant offspring corresponds to 100 % transmission of the phenotype.]
New mouse lines carrying mutations of the Alpl (alkaline phosphatase, liver/bone/kidney) gene

In mutant mice of the BAP032 line, statistically significant (P < 0.001) low mean ALP activity was found in female mutants (47 ± 5.8 U/l, n = 9) compared to wild-type littermates (157.8 ± 7.9 U/l, n = 8), and in male mutants (38.4 ± 6.3 U/l, n = 12) compared to wild-type littermates (129.5 ± 10.1 U/l, n = 10) (Fig. 3a). The significantly reduced ALP activity suggested a mutation in the Alpl gene encoding the tissue-nonspecific ALP (TNSALP). We sequenced this gene in BAP032 mice and revealed a new heterozygous missense mutation in exon 11 located within the protein-coding region of the Alpl gene on chromosome 4 (c.1217A>G, p.Asp406Gly) (Fig. 3b). This mutation was not found in wild-type C3H littermates or in wild-type B6 mice. We isolated five additional mouse lines carrying new alleles of the Alpl gene (Table 3). Four sequence variations were located in exons 7, 10, or 12 (BAP020, BAP023, BAP027, SAP007) and one affects the splice site in intron 9 (BAP026).
Other mouse lines and mutations
Female and male mutant mice of the BAP005 line showed statistically significant (P ≤ 0.001) increases in mean ALP activity. The value in female mutants (n = 25) was 233 ± 21 U/l compared to 136 ± 14 U/l in wild-type littermates (n = 32), and in male mutants (n = 39) ALP activity was 188 ± 19.81 U/l compared to 104.5 ± 10.43 U/l in their wild-type littermates (n = 36). The mouse line breeds homozygous offspring with very high ALP activities. Homozygous females derived from heterozygous intercrosses showed mean ALP activity of 587 ± 39 U/l (n = 12), and ALP activity in homozygous males was 482 ± 51 U/l (n = 21). SNP mapping revealed a region between the markers rs26982471 and rs27000576 (53.99-114.33 Mb, mouse genome Build 37.1, UCSC) on chromosome 11. Sorting of chromosome 11 and whole-chromosome 11 sequencing on a GAIIx next-generation sequencing machine revealed a new missense mutation in the Asgr1 (asialoglycoprotein receptor 1) gene within the translated region (c.815A>G, p.Tyr272Cys). The mutation was confirmed by sequencing in 16 BAP005 mutant mice but was found neither in 4 wild-type littermates nor in 4 additional wild-type mice from different inbred strains (BALB/c, DBA/2, FVB, SJL). For eight additional mouse lines (BAP002, BAP003, BAP004, BAP014, BPL004, BPL006, BPL008, and TRE002) showing high ALP activity, low P i, and high or low Ca values as a phenotype, genetic mapping has been finished (Table 3) and sequencing of candidate genes is in progress. For selected mouse lines we will include exome sequencing to find the causative mutation.
Discussion
In this study we described a large-scale ENU mutagenesis screen (Soewarto et al. 2009), with the main focus on malfunctioning bone turnover. In other projects, murine models for disturbed bone metabolism were obtained by gene targeting (Daroszewska et al. 2011; Ducy et al. 1996; Forlino et al. 1999; Kato et al. 2002), transgene insertions (Imanishi et al. 2001; Rauch et al. 2010), or spontaneous mutations (Eicher et al. 1976; Marks and Lane 1976). [Fig. 1 labels: wt +/Y, mut −/Y; BAP012: Phex exon 2, c.148A>T, p.Lys50X.] Here, we isolated 71 new mouse models by screening for alterations of total ALP activity and total Ca and P i values in plasma of 9,540 F1 mice. Our results demonstrate that malfunctions of bone metabolism in mice may be efficiently detected by the analysis of human standard clinical chemical parameters.
In this study the highest fraction of new mouse lines revealed alterations of total ALP activity (Table 3). Since these mouse lines differed in phenotype expression and in the occurrence of additional phenotypes, the phenotypes presumably depend on different molecular mechanisms. Total ALP was chosen as a parameter of interest since elevated ALP activity is the most frequently measured parameter for human Paget's disease (Langston and Ralston 2004), X-linked hypophosphatemic rickets (XLHR) (Jonsson et al. 2003; Mäkitie et al. 2003), autosomal dominant hypophosphatemic rickets (ADHR) (Econs and McEnery 1997; Imel et al. 2007; Kruse et al. 2001), and type I osteoporosis (Avbersek-Luznik et al. 2007; Pedrazzoni et al. 1996). Screening for alterations of total Ca and P i values without changes of ALP activity resulted in 34 mutant lines with confirmation of the observed phenotype (Table 2). While the Ca parameter was easy to measure, P i values were artificially elevated after plasma storage for longer than 1 day, freezing of the samples, or hemolysis. Metabolic bone diseases may be reflected in changes of more than one parameter, and very often two or three of the parameters of interest showed alterations in the same individual mouse line, as is commonly observed in human patients (Table 1).
In our screen we obtained new mouse models for hypophosphatemia, hyperparathyroidism, and hypophosphatasia. Despite the large number of existing mouse models for XLHR, there are still open questions on the mechanism of PHEX in renal phosphate wasting, abnormal vitamin D metabolism, and matrix mineralization (Addison et al. 2010;Brownstein et al. 2010). The C3HeB/FeJ-Phex BAP012 and C3HeB/FeJ-Phex BAP024 mutant lines represent two new mutant mouse lines with novel point mutations modeling XLHR in addition to previously published models (Carpinelli et al. 2002;Lorenz-Depiereux et al. 2004;Xiong et al. 2008).
New point mutations of the Casr gene were found in seven mouse lines. The large extracellular domain of the receptor contains clusters of amino acid residues which may be involved in calcium binding. Exon 7 encodes the seven transmembrane domains and four intracellular loops of CASR (Chang et al. 2008). Human CASR mutations are known to be causative for primary hyperparathyroidism (HP) (Bilezikian et al. 2005) and familial benign hypocalciuric hypercalcemia (FHH) (Pollak et al. 1994). [Fig. 3 caption: (a) C3HeB/FeJ-Alpl BAP032 ALP blood activities (mean ± SD, U/l) in female mutant (N = 8), female wild-type (N = 9), male mutant (N = 12), and male wild-type (N = 10) mice; mean ± SD ALP activities were: female mutants 47 ± 5.8 U/l (P < 0.001), female wild types 157.8 ± 7.9 U/l, male mutants 38.4 ± 6.3 U/l (P < 0.001), male wild types 129.5 ± 10.1 U/l (t test). (b) DNA sequence analysis of Alpl exon 11 revealed a new heterozygous missense mutation (c.1217A>G, p.Asp406Gly); the variant is marked by an asterisk.] Approximately two thirds of FHH patients showed loss-of-function mutations involving the 3,234-bp coding region of the CASR gene (D'Souza-Li et al. 2002). Individuals with HP and FHH differ in creatinine clearance and serum magnesium values, both being higher in FHH (Marx et al. 1981). It has been demonstrated that individuals with FHH are heterozygous, and children within these families with severe neonatal primary hyperparathyroidism (NSHPT) are homozygous, for CASR mutations (Janicic et al. 1995; Pollak et al. 1993). Mice with tissue-specific deletion of Casr in the parathyroid gland and bone exhibited profound bone defects (Chang et al. 2008). C3H;102-Casr Nuf/H mice carry an activating ENU-derived Casr point mutation and exhibit hypocalcemia, hyperphosphatemia, cataracts, and ectopic calcifications (Hough et al. 2004). We obtained the first presumed loss-of-function point mutation, isolated in C3HeB/FeJ-Casr BCH002 mice, that is expected to model human FHH. Since the mouse line has been bred for more than 15 generations, and because we found six other independent Casr mutations for this phenotype, it is more than likely that the consistent phenotype is due to the isolated point mutation of the Casr gene. Heterozygous C3HeB/FeJ-Casr BCH002 mice exhibited high Ca and PTH values similar to targeted Black Swiss/129SvJ Casr +/− and Casr −/− mice (Ho et al. 1995), but, in addition, they showed enlarged parathyroid glands, described only for Casr −/− mice. Further heterozygous intercrosses are required to find out if homozygous C3HeB/FeJ-Casr BCH002 mice are viable, which is not the case for Casr −/− mice. This would raise the opportunity to obtain a mouse model for NSHPT. More than 270 mutations have been described so far in the human CASR mutation database (www.casrdb.mcgill.ca; Nakajima et al. 2009), and interestingly most of the human mutations were found in exons 4 and 7. The mouse lines carrying Casr mutations obtained in our screen showed slight differences in the expression of the phenotype. Additional studies on phenotypical and histological traits will help to discriminate between the different effects of each point mutation on the severity of hyperparathyroidism and concomitantly to improve our understanding of CASR mutations in human patients. Heterozygous C3HeB/FeJ-Alpl BAP032 mice showed a statistically significant reduction of ALP activity in plasma without additional phenotypes, as observed in heterozygous Akp2 Hpp/+
mice derived in an ENU mutagenesis screen on the C3H/HeH background (Hough et al. 2007). In Akp2 Hpp/+ mice, an Alpl loss-of-function mutation led to the rare disease hypophosphatasia (HPP), which displays a reduction of plasma ALP activities to about 50 % in Akp2 Hpp/+ and a stronger reduction in Akp2 Hpp/Hpp mice. Akp2 Hpp/+ mice were radiographically and histologically indistinguishable from wild-type mice at different time points, as were 16-week-old C3HeB/FeJ-Alpl BAP032 mice in DEXA and X-ray analysis. Interestingly, we observed a stronger ALP reduction in heterozygous C3HeB/FeJ-Alpl BAP032 mice than in Akp2 Hpp/+ mice, with ALP activities in female and male mutant mice reduced to 29 % of that found in wild-type littermates. Severe HPP forms are characterized by hypomineralization, rickets, seizures, and nephrocalcinosis due to hypercalciuria (Beck et al. 2009). Alpl −/− mice showed a reduction in body size, no detectable ALP levels, and lethality prior to weaning, whereas Alpl +/− mice appeared healthy (Narisawa et al. 1997). The identical point mutation of C3HeB/FeJ-Alpl BAP032 mice has also been described for a patient with HPP (Taillandier et al. 2000). Heterozygous C3HeB/FeJ-Alpl BAP032 mice presumably model mild adult HPP. The mouse line was bred for more than ten generations, showing full penetrance of the phenotype in all litters. A multitude of diverse point mutations, deletions, and insertions of the human TNSALP gene causing HPP are listed in the hypophosphatasia database (www.sesep.uvsq.fr/03_hypo_mutations.php). The diversity of published human point mutations emphasizes the importance of mouse models for further investigations of physiological functions and cellular mechanisms of Alpl regions involved in collagen and Ca binding. Interestingly, we isolated in addition one silent mutation in the BAP020 mouse line (Table 4) showing the expected phenotype. No additional Alpl mutations were found in this mouse line. Alpl mRNA and translation of ALP have not been analyzed so far.
Since only total ALP can be tested in mice so far, we will probably isolate mouse lines showing alterations of ALP isoforms other than the bone ALP isoform. High alterations of plasma ALP activities without any additional phenotypes, as observed in homozygous animals of the C3HeB/FeJ-Asgr1 BAP005 line, have not been published in mice before. It has been described in patients with chronic liver disease that the adult intestinal ALP isoenzyme was increased due to the reduced efficiency or numbers of asialoglycoprotein receptors (Moss 1994). Thus, the mutation of the gene in BAP005 mice seems to cause alterations of the intestinal ALP isoform as a secondary effect. ASGR1 mutations may be responsible for high ALP activities of so far unknown origin in humans without any skeletal disorders (Panteghini 1991) or may cause benign familial hyperphosphatasemia (Siraganian et al. 1989).
We have to consider bone as an active metabolic organ with a possible influence on metabolism in diseases of disturbed bone turnover (Ferron et al. 2010;Fulzele et al. 2010). For this reason, systematic analysis of all organ systems, as in the German Mouse Clinic (Gailus-Durner et al. 2005), might provide new insights into the actions in these pathways. Our mouse models will be archived by the European Mouse Mutant Archive (EMMA) and are available (www.emmanet.org) for the scientific community. | 2017-08-02T19:18:58.026Z | 2012-04-21T00:00:00.000 | {
"year": 2012,
"sha1": "3dbc68769c18308aa554417a472d4769f38fb84f",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00335-012-9397-z.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ca70d6bcf72a673fc844ccb7e455d9e5e13e183d",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
253107747 | pes2o/s2orc | v3-fos-license | Inflation from Multiple Pseudo-Scalar Fields: PBH Dark Matter and Gravitational Waves
We study a model of inflation with multiple pseudo-scalar fields coupled to a U(1) gauge field through Chern-Simons interactions. Because of the parity-violating interactions, one polarization of the gauge field is amplified, yielding an enhanced curvature perturbation power spectrum. Inflation proceeds in multiple stages as each pseudo-scalar field rolls towards its minimum, yielding distinct multiple peaks in the curvature perturbation power spectrum at various scales during inflation. The localized peaks in the power spectrum generate Primordial Black Holes (PBHs) which can furnish a large fraction of the Dark Matter (DM) abundance. In addition, gravitational waves (GWs) with non-trivial spectra are generated which are within the sensitivity range of various forthcoming GW observatories.
Introduction: Inflation is the leading paradigm for early universe cosmology and the mechanism behind the generation of large scale structures. Among the basic predictions of models of inflation are that the primordial perturbations are nearly scale invariant, adiabatic, and Gaussian, all of which are well consistent with cosmological observations [1]. While the simplest models of inflation are based on a single scalar field, having inflation driven by multiple scalar fields along with other types of fundamental fields in the spectrum is well motivated in models of high energy physics [2][3][4]. In particular, there has been growing interest in axion models [5][6][7][8][9][10][11][12][13] to amplify the primordial power spectrum for PBH formation and to generate detectable GW signals.
PBHs are distinct from their astrophysical counterparts in several ways. Among all, PBHs could form in the early universe from the collapse, upon horizon re-entry, of perturbations generated during inflation and may comprise a large fraction of the DM energy density [14][15][16][17][18]. However, PBHs can form through different channels in the early universe as well [19,20]. Remarkably, unlike astrophysical black holes, PBHs can cover a vast range of masses. Therefore, the recent observations of GWs from merging binary systems with about 30 solar masses [21], together with the lack of observational signals of particle DM, have renewed interest in PBHs from inflation [22][23][24]. To produce PBHs from inflation one requires that the amplitude of the primordial curvature perturbation be large enough, at least 10^7 times larger than its CMB value. Over the past years, a variety of single-field models have been studied to provide such an enhancement; for a review see [24][25][26] and the references therein.
Among multiple-field inflation scenarios, N-flation is an interesting example which is based on many axion fields, providing a simple, radiatively stable realization of chaotic inflation [27]. In this model, the collective contributions of N axion fields yield a long enough period of inflation to solve the flatness and horizon problems. In this picture, inflation is divided into N slow-roll phases where each phase is driven by one axion while the others are nearly frozen. Inspired by the N-flation model, in this work we study an inflationary model with multiple pseudo-scalar fields coupled to a U(1) gauge field through Chern-Simons types of interaction. We examine the enhancement of the curvature power spectrum to form PBHs at small scales. We show that PBHs can be formed abundantly (in the allowed window where PBHs could provide a substantial part of the DM, if not all) without introducing specific features on the inflationary potentials. In addition, tensor perturbations with a nontrivial spectrum are generated which may be detected in upcoming GW experiments.

The Model and Background Dynamics: We consider N pseudo-scalar fields Φ_a (a = 1, 2, ..., N) driving inflation in N stages. While our starting discussion is general, for the specific examples studied below we consider the cases N = 2, 3. In each stage, only one pseudo-scalar field slow-rolls and then decays, while the others remain frozen. The next inflationary stage is driven by the second field before it decays, and so on. For this picture to be realized, we need a working hierarchy of the masses of Φ_a, such that the most massive field starts rolling first, then the second most massive field, and so on [28]. For example, if the ratio of the mass of Φ_1 to Φ_2 is of order 10 or so, then we can safely assume that the first period of inflation is driven by Φ_1. As in the single-field axion model, we demand that all pseudo-scalar fields couple to a U(1) gauge field A_µ through Chern-Simons interactions [29,30] in the action below, in which M_Pl is the reduced Planck mass and R is the Ricci scalar associated with the spacetime metric g_µν. In addition, F̃^µν ≡ ε^µνρσ F_ρσ/2 is the dual of the gauge field strength tensor F_µν = ∇_µ A_ν − ∇_ν A_µ. Finally, α̃_a is a dimensionless parameter controlling the coupling of the a-th pseudo-scalar field to the electromagnetic field [29].
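The action itself did not survive text extraction. A minimal sketch consistent with the surrounding description is given below; the normalization of the dimensionless couplings α̃_a by M_Pl is our assumption, not necessarily the paper's convention:

```latex
S = \int d^4x \sqrt{-g} \left[ \frac{M_{\rm Pl}^2}{2} R
  - \sum_{a=1}^{N} \left( \frac{1}{2} \partial_\mu \Phi_a \partial^\mu \Phi_a + V_a(\Phi_a) \right)
  - \frac{1}{4} F_{\mu\nu} F^{\mu\nu}
  - \sum_{a=1}^{N} \frac{\tilde{\alpha}_a}{4 M_{\rm Pl}} \, \Phi_a F_{\mu\nu} \tilde{F}^{\mu\nu} \right]
```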
To simplify the analysis below, we assume that all α̃_a have the same sign. This is a technical tuning which simplifies the analysis significantly, but it can be relaxed in a more general consideration.
In the presence of the couplings α̃_a, the gauge field quanta exhibit a tachyonic instability sourced by the rolling pseudo-scalar fields. More precisely, during the a-th stage of inflation, only the field Φ_a rolls slowly. The rolling of Φ_a amplifies one polarization (e.g. the negative helicity) of the gauge field, leading to an exponentially amplified mode function [31], where k is the comoving Fourier mode of the gauge field, a and H ≡ ȧ/a respectively are the scale factor and the Hubble expansion rate during inflation, and φ_a is the homogeneous part of the pseudo-scalar field Φ_a. The dot denotes the derivative with respect to cosmic time.
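The mode function (Eq. (2)) also did not survive extraction. The standard large-ξ result quoted in this literature (e.g., the Anber-Sorbo solution) is, schematically, and with the instability parameter normalized consistently with the coupling assumed above:

```latex
A_{-}(k,\tau) \simeq \frac{1}{\sqrt{2k}} \left( \frac{k}{2\xi a H} \right)^{1/4}
  \exp\!\left( \pi\xi - 2\sqrt{\frac{2\xi k}{aH}} \right),
\qquad
\xi \equiv \frac{\tilde{\alpha}_a \, |\dot{\phi}_a|}{2 M_{\rm Pl} H}
```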
The above solution describes well the growth of the mode functions in the interval (8ξ)^−1 ≲ k/(aH) ≲ 2ξ [8]. Note also that the other polarization state (here the positive helicity) is not amplified and can therefore be ignored. The so-called instability parameter ξ can be considered nearly constant, as its time variation is subleading in a slow-roll expansion. It is worth mentioning that the gauge quanta (2) not only affect the background dynamics of φ_a and the scale factor but also source scalar perturbations via inverse decay [7,8,32]. We assume that φ̇_a < 0 during inflation, as in large-field models like (5), so for α̃_a > 0 the negative helicity is amplified. As mentioned before, we assume that all α̃_a are positive, so only A^(−)_k is amplified at each stage of inflation. Since the gauge field has no background value, one can calculate its effects on the background dynamics via the mean-field approximation method [11,13]; the resulting source terms on the right-hand sides of the background equations come respectively from E² + B² and E·B, where the electric and magnetic fields are defined in the Coulomb-radiation gauge. The exponential enhancement reflects significant non-perturbative gauge particle production in the regime ξ ≳ 1 [6]. To ensure that the tachyonic growth of gauge field fluctuations does not spoil the inflationary dynamics, we demand H²/|φ̇_a| ≲ O(10²) ξ^(3/2) e^(−πξ) at each a-th stage [7,8,13]. From Eq. (4), one finds that the growth of ξ comes to a halt when the back-reaction term becomes large enough. Note that the system enters a nonlinear phase for large coupling, e.g. α̃ ≳ 20. This regime is known as the strong back-reaction regime [33]. Furthermore, ξ does not experience the oscillatory epoch discussed in [33][34][35][36], because before entering this phase at the end of the previous stage, the next rolling field dictates the evolution of ξ. We work in the regime of negligible back-reaction, such that the system never enters this phase and the evolution of ξ cannot destroy the inflationary dynamics driven by the pseudo-scalar φ_a.
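The mean-field expectation values entering Eqs. (3)-(4) were likewise dropped in extraction; the large-ξ estimates standard in this literature are, up to sign conventions tied to the sign of φ̇_a:

```latex
\langle \vec{E} \cdot \vec{B} \rangle \simeq 2.4 \times 10^{-4} \, \frac{H^4}{\xi^4} \, e^{2\pi\xi},
\qquad
\frac{1}{2} \langle \vec{E}^2 + \vec{B}^2 \rangle \simeq 1.4 \times 10^{-4} \, \frac{H^4}{\xi^3} \, e^{2\pi\xi}
```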
A simple choice for the inflaton potential is the chaotic-type potential [27][28][29], as in N-flation, such that during the a-th stage only the field φ_a rolls down towards its potential minimum for some e-folds, oscillating rapidly at the bottom of its potential until its amplitude has effectively died out and the next field starts rolling. However, chaotic potentials are ruled out by current Planck constraints [1] even in the multi-field configuration [37,38] due to the large value of the tensor-to-scalar ratio, r_t. For example, for the two-field and three-field (N = 2, 3) axion models with pure chaotic potentials, our numerical results indicate that r_t ≳ 0.1, which is in conflict with large-scale CMB observations [1]. One possible way to avoid this issue is to consider the simple potential form of Eq. (5) [39][40][41][42][43], where V_0, m_1, and m_a are constant parameters. In the two-field case, this potential represents the well-known dilaton-axion inflation [40,44]. We arrange that on CMB scales the first pseudo-scalar field (the dilaton field) drives inflation while the remaining pseudo-scalar fields (N ≥ 2) with the standard chaotic-type potential (now specifically called axionic fields) drive the rest of inflation. Recently, in [41][42][43] the authors have shown that PBHs and GWs might be generated by considering a non-flat field space with negative curvature in the absence of the Chern-Simons coupling. In comparison, here we work with a flat field space, while the instabilities induced by the Chern-Simons coupling are responsible for the amplification of the power spectra. Also note that instead of the potential (5) one may consider different examples as well. We only need to assume that the first stage of inflation is driven by a potential different from the simple chaotic potential, such that on CMB scales the value of r_t is small enough. Having presented the general setup, in the following we consider two specific models: dilaton-axion (model I) and dilaton-axion-axion (model II). Table I presents the initial conditions and model parameters. The parameters are fixed to reproduce the correct COBE normalization at the CMB pivot scale k_CMB = 0.05 Mpc^−1 [1]. As shown in Figs. 1 and 2, the background experiences several inflationary phases in both models. [Fig. 1 caption: evolution of the fields, the instability parameter, and P_R for model I with two pseudo-scalars (one dilaton and one axion); the black dashed curve presents the PBH bound [12]; a rise in ξ yields a localized peak in P_R; the green dashed curve shows that the power cannot be enhanced when α̃_i = 0.] The first inflationary phase is driven by the dilaton Φ_1 while the other fields remain frozen. After Φ_1 has reached its minimum and
its energy has died out after a few rapid oscillations, the axion fields drive the next inflationary phases, each in turn. The evolution of the Hubble parameter is also presented in Figs. 1 and 2. Turning to perturbations, the equation of motion for the field perturbations Q̂_a(k) in momentum space is given in Eq. (6) [11,45,46], where τ is the conformal time, dτ ≡ a(t)dt. The solution for Q̂_a can be separated into two uncorrelated parts, Q̂_a = Q̂_a^(v) + Q̂_a^(s): Q̂_a^(v) represents the solution to the homogeneous part of Eq. (6), which reduces to the Bunch-Davies vacuum on small scales, whereas Q̂_a^(s) is the particular solution obtained by the Green function [7]. Also, since there is no direct interaction between the fields, the equations for Q̂_a are decoupled. Finally, the power spectrum of the curvature perturbation R_k (constructed from the field perturbations at the horizon-crossing time N_k) takes the form of Eq. (7) [7,8], where ε_H is the first slow-roll parameter and the dimensionless function f_2(ξ) can be estimated for large ξ as 10^−5/ξ^6 [7]. The first term in Eq. (7) stands for the standard vacuum contribution to the power spectrum [4,28].
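Eq. (7) was lost in extraction; its standard form in this class of models (consistent with the f_2(ξ) estimate just quoted) is:

```latex
\mathcal{P}_{\mathcal{R}}(k) = \mathcal{P}^{(v)}_{\mathcal{R}}(k)
 \left[ 1 + \mathcal{P}^{(v)}_{\mathcal{R}}(k) \, f_2(\xi) \, e^{4\pi\xi} \right],
\qquad
\mathcal{P}^{(v)}_{\mathcal{R}}(k) = \frac{H^2}{8\pi^2 \epsilon_H M_{\rm Pl}^2}
```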
The curvature perturbation power spectra for the models in Table I are illustrated in Figs. 1 and 2. As can be seen, a rise in ξ amplifies the scalar power spectrum for the mode that leaves the Hubble radius at the transition time between two stages. [Fig. 3 caption fragment: observational PBH constraints [20,25,53]; we have considered the threshold density contrast δ_c ≃ 0.4 for PBH formation [54,55].] The location of the a-th peak, N_a (the number of e-folds since the start of inflation),
is given by the initial condition φ_a*, while the amplitude of P_R(k) is controlled by α̃_a. Interestingly, for large values of α̃_a, the enhancement in the power spectrum is large enough to seed PBH formation through the gravitational collapse of large density fluctuations after horizon re-entry during the radiation-dominated era [18]. The mass of the formed PBHs is approximately 0.2 of the mass enclosed in the horizon at the time of re-entry of the corresponding mode [18,47]. Assuming instant reheating at the end of inflation, the mass corresponding to the a-th peak can be estimated from Eq. (8) [12], where M_⊙ is the solar mass and H_end and H_a are the Hubble rates at N_end and N_a, respectively. The formed PBHs can contribute to the DM density. The fraction of PBHs relative to the total DM density at present is given by Eq. (9) [18], where β is the mass fraction of PBHs at the time of formation [12,[48][49][50]], which depends on the probability density function (PDF) of the curvature perturbation [51,52]. Since Q̂_a^(s), originating from the convolution of two Gaussian gauge fields, generates the peaks in P_R, the PDF of the curvature perturbation obeys χ² statistics [11]. In Fig. 3 we have depicted f_PBH for the models introduced in Table I. As illustrated, the formed PBHs can furnish a large fraction of the total DM abundance. In particular, for model I we obtain f_PBH ≃ 1 corresponding to M_PBH ≃ 2 × 10^−14 M_⊙, whereas for model II we see two distinct peaks associated with M_PBH ≃ 10^−12 M_⊙ and M_PBH ≃ 0.8 M_⊙, which jointly yield f_PBH ≃ 1.
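Since Eqs. (8)-(9) did not survive extraction, a hedged numerical sketch is given below using relations commonly quoted in PBH reviews (e.g., Sasaki et al. 2018), not the paper's exact expressions. It maps a comoving scale to the PBH mass via the horizon mass at re-entry and computes f_PBH from β; β is estimated here in a Gaussian Press-Schechter approximation, whereas the paper itself uses χ² statistics for the sourced perturbations.

```python
# Hedged sketch: PBH mass from horizon re-entry and dark-matter fraction
# from the formation mass fraction beta (literature normalizations).
import math

GAMMA = 0.2     # fraction of the horizon mass collapsing into the PBH
G_STAR = 10.75  # relativistic degrees of freedom at formation
DELTA_C = 0.4   # threshold density contrast used in the paper

def pbh_mass_solar(k_mpc: float) -> float:
    """PBH mass (solar masses) for mode k (Mpc^-1) re-entering in the RD era."""
    return 30.0 * (GAMMA / 0.2) * (G_STAR / 10.75) ** (-1 / 6) * (k_mpc / 2.9e5) ** (-2)

def beta_gaussian(sigma: float) -> float:
    """Press-Schechter mass fraction for Gaussian density perturbations."""
    return math.erfc(DELTA_C / (math.sqrt(2.0) * sigma))

def f_pbh(mass_solar: float, beta: float) -> float:
    """Present-day PBH fraction of the dark matter."""
    return 1.68e8 * (GAMMA / 0.2) ** 0.5 * (G_STAR / 10.75) ** (-0.25) \
        * mass_solar ** (-0.5) * beta

# Example: a peak at k ~ 1e12 Mpc^-1 with sigma ~ 0.05 at re-entry
m = pbh_mass_solar(1e12)
print(f"M_PBH ~ {m:.2e} M_sun, f_PBH ~ {f_pbh(m, beta_gaussian(0.05)):.2e}")
```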
Primordial and Induced GWs: In addition to the quantum vacuum fluctuations of the metric during inflation, there are two distinct populations of stochastic GWs in our inflationary scenario. The first contribution is related to the GWs generated from the amplified gauge fields. [Fig. 4 caption fragment: the shaded regions represent the sensitivity curves of various GW detectors [56,57]; model II with two rolling axions yields two separated sets of curves (black); the peaks of these curves are due to the primordial GWs given in (11), while the oscillations originate from the convolution integrals in Eq. (12).]
Due to the parity-violating nature of the system, the right and left helicities of the tensor modes have different amplitudes [58]. The equation of motion for the two canonical tensor helicities ĥ_λ is given by Eq. (10), where Π^λ_ij is the transverse-traceless projector [58]. Similar to the scalar fluctuations, we decompose ĥ_λ into a vacuum mode, ĥ_λ^(v), and a sourced mode, ĥ_λ^(s). Adding up these two contributions, one finds the tensor power spectrum for each polarization mode, Eq. (11) [58], where the superscript (p) stands for the primordial contribution and the first term represents the contribution from vacuum fluctuations. Moreover, the dimensionless function f_λ(ξ) at large ξ for the right and left helicities is approximately 10^−7/ξ^6 and 10^−9/ξ^6, respectively [58]. Correspondingly, the main contribution to the primordial tensor power spectrum, P_λ, comes from the right-helicity GW modes [32,58].
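Eq. (11) did not survive extraction either; a schematic form for the per-helicity primordial tensor spectrum, with O(1) normalization factors that vary by convention, is:

```latex
\mathcal{P}^{(p)}_{\lambda}(k) \simeq \frac{H^2}{\pi^2 M_{\rm Pl}^2}
 \left[ 1 + \frac{H^2}{M_{\rm Pl}^2} \, f_{\lambda}(\xi) \, e^{4\pi\xi} \right]
```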
As stated earlier, GWs can also be induced by the amplified curvature perturbations of Eq. (7) [61]. Indeed, large second-order scalar fluctuations on small scales induce tensor perturbations after horizon re-entry during the radiation-dominated era. With regard to this population of GWs, one deals with multiple convolution integrals of the form of Eq. (12) [69], where f(k, k', t̄) is an oscillating function and t̄ describes the time when the GW is sourced by the scalar modes [69][70][71][72]. Finally, the total present-day energy density of GWs is given by the sum of the two contributions [61,67,68], in which Ω_GW^(p)(k) and Ω_GW^(ind)(k) represent the fractional energy densities of the primordial GWs sourced by the tachyonic gauge field modes and of the GWs induced by the second-order scalar perturbations, respectively.
In Fig. 4, we have plotted the quantity Ω_GW h² against frequency, with h² = 0.49, together with the sensitivities of various forthcoming GW experiments, e.g. LISA [73], BBO [74][75][76], SKA [77][78][79], and PPTA [80,81]. Clearly, for model I, Ω_GW h² falls within the sensitivity of BBO and peaks well inside the range of detectability of LISA. Remarkably, for model II, we observe that the double rises in the GW spectrum are detectable by LISA and SKA. A similar feature has been observed in [82] as a signal of non-thermal baryogenesis from evaporating PBHs. Future GW observations can put constraints on the parameters of our model.

Summary and Discussions: We have studied a model of inflation with multiple pseudo-scalar fields coupled to a gauge field via Chern-Simons type interactions.
There are multiple stages of inflation, each driven by one scalar field. To evade the constraint on the tensor-to-scalar ratio, we have considered a setup where the first stage is driven by a dilaton field, while the remaining stages of inflation below CMB scales are driven by multiple axionic fields with standard chaotic-type potentials. However, our setup can be extended to more complicated potentials, such as the α-attractor model [40]. The enhanced power spectrum from the gauge field instability can generate PBHs with various masses which can furnish a large fraction of the total DM while satisfying the bounds on PBH formation [12]. In addition, GWs can be generated both from second-order scalar perturbations and from the tachyonic gauge field perturbations, with distinct features in the locations of the peaks and their oscillatory behaviours. These signals are within the detection range of future GW observatories. There are a number of directions in which the current investigation can be extended. These include investigating the non-Gaussianity of the perturbations [83] and its effects on the induced GWs [84,85] and on PBH formation [86].
"year": 2022,
"sha1": "b0983bb4a92f9c9dd502884e4137e0e54d6ed78c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b0983bb4a92f9c9dd502884e4137e0e54d6ed78c",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
251187402 | pes2o/s2orc | v3-fos-license | Wastewater-Based Epidemiology for COVID-19: Handling qPCR Nondetects and Comparing Spatially Granular Wastewater and Clinical Data Trends
Wastewater-based epidemiology (WBE) is a useful complement to clinical testing for managing COVID-19. While community-scale wastewater and clinical data frequently correlate, less is known about subcommunity relationships between the two data types. Moreover, nondetects in qPCR wastewater data are typically handled through methods known to bias results, overlooking perhaps better alternatives. We address these knowledge gaps using data collected from September 2020–June 2021 in Davis, California (USA). We hypothesize that coupling the expectation maximization (EM) algorithm with the Markov Chain Monte Carlo (MCMC) method could improve estimation of “missing” values in wastewater qPCR data. We test this hypothesis by applying EM-MCMC to city wastewater treatment plant data and comparing output to more conventional nondetect handling methods. Dissimilarities in results (i) underscore the importance of specifying nondetect handling method in reporting and (ii) suggest that using EM-MCMC may yield better agreement between community-scale clinical and wastewater data. We also present a novel framework for spatially aligning clinical data with wastewater data collected upstream of a treatment plant (i.e., distributed across a sewershed). Applying the framework to data from Davis reveals reasonable agreement between wastewater and clinical data at highly granular spatial scales—further underscoring the public-health value of WBE.
■ INTRODUCTION
Wastewater-based epidemiology (WBE) has become widely recognized as a useful complement to clinical testing for managing COVID-19. Relative to large-scale diagnostic testing, WBE offers a less resource-intensive way to monitor COVID-19 infections and spread among large numbers of people. WBE is also unbiased, capturing data on entire populations rather than just the subset of individuals who come in for clinical testing. 1 Most studies to date comparing wastewater and clinical data have focused on the community scale, that is, comparing trends in data collected from the influent to a given wastewater treatment plant (WWTP) to trends in data collected from clinical tests of a subpopulation served by that WWTP. 2−4 Such studies have frequently found good agreement between the two data sources. But little is known about relationships between wastewater and clinical data at more granular spatial scales. Part of the challenge in elucidating such relationships is the fact that to protect privacy, clinical-testing results are often geographically aggregated, for example, at the census-block level. A first objective of this study was to develop and test a framework for probabilistically disaggregating clinical-testing data to facilitate comparison with wastewater data collected from sampling sites that strategically isolate different parts of a community sewershed.
Separately, SARS-CoV-2 RNA in wastewater samples is typically quantified using either reverse transcription-quantitative polymerase chain reaction (RT-qPCR) or RT-droplet digital PCR (RT-ddPCR). 5 While RT-ddPCR is becoming more popular for wastewater surveillance 6 because of its greater specificity and sensitivity, 7,8 many laboratories continue to use RT-qPCR due to the higher cost and time requirements of RT-ddPCR and the large upfront capital investment of ddPCR instrumentation. Bivins et al. (2021) recently drew attention to how variability in RT-qPCR methods and reporting affects results and interpretation. 9 An additional and important source of variability not considered by these authors is how nondetects are handled. qPCR nondetects occur routinely for reasons including low or zero target abundance, poor assay design/ performance, or human error. 10,11 There is no current consensus on how to best manage qPCR nondetects. Researchers, whether through scientific software or manual analysis, typically handle nondetects either using single imputation (setting all nondetects equal to a constant value, such as the mean of detected replicates, half the detection limit, or zero) or by censoring (excluding nondetects from analysis altogether). 11 Unfortunately, both single imputation and censoring can substantially bias qPCR results. 11 The biasing effect is amplified when, as is often the case for wastewater data, the target is present in low concentrations to begin with. A second objective of this study was to demonstrate how different nondetect handling methods can affect apparent wastewater data trends and to explore whether multiple imputation of nondetects can improve on more commonly used but less sophisticated approaches.
■ MATERIALS AND METHODS
Study Setting and Design. We used wastewater data collected through the Healthy Davis Together (HDT) program in Davis�a small city of approximately 69,000 located in northern California�to (1) explore the value of multiple imputation for handling qPCR nondetects and (2) examine relationships between wastewater and clinical data at multiple spatial scales.
HDT was a joint, multipronged initiative between the city of Davis and the University of California, Davis (UC Davis) for local management and mitigation of COVID-19. Beginning in November 2020, HDT made free, saliva-based PCR tests for COVID-19 available to anyone living or working in Davis. Uptake of the clinical-testing program was considerable. The fraction of Davis residents who reported receiving at least one COVID-19 test rose from 30% to 73% from September 2020 to March 2021. As of April 2021, Yolo County had performed the most tests per capita of California's 58 counties, at a rate quadruple the state median.
HDT also conducted wastewater surveillance at the community, subregional, and building/neighborhood scales (Figure 1). At the community scale, samples were collected from the influent to the City of Davis Wastewater Treatment Plant (COD WWTP). The COD WWTP captures all of Davis's municipal wastewater, with no contributions from UC Davis or from neighboring jurisdictions. At the subregional scale, samples were collected from sewershed nodes isolating the wastewater contributions of different geographic areas in the city. At the building/neighborhood scale, samples were collected from sewershed nodes isolating high-priority building complexes or neighborhoods identified through discussion with local officials. The HDT WBE program began in September 2020 with weekly samples collected from the COD WWTP. Zones were added and sampling frequency increased over the course of the sampling campaign (Figure S1). At full scale-up, the surveillance program sampled daily from the COD WWTP and 3x/week from each of 16 subregional and seven building/neighborhood zones.
Sample Collection. 24-h time-weighted composite samples were collected from each zone using insulated Hach AS950 Portable Compact Samplers (Thermo Fisher Scientific, USA) programmed to collect 30 mL of sample every 15 min. The bulk of samples were processed immediately, with a small number stored at 4°C for up to 1 week before processing.
Sample Processing. Samples were pasteurized for 30 min at 60°C to reduce biohazard risk while preserving RNA quality. Samples were then spiked with a known concentration of φ6 bacteriophage (strain HB104; generously provided by Samuel Díaz-Muñoz, UC Davis) as an internal recovery control. 12,13 The φ6 spike solution was prepared using previously described methods, 14 modified slightly by using ATCC Medium 129 in place of LB media. The final steps in the processing pipeline were sample concentration and extraction. From September 2020 through the end of February 2021, concentration was performed via ultrafiltration through 100 kDa Amicon Ultra-15 centrifugal filter devices, and column-based extraction was performed manually using either the NucleoSpin RNA Stool Kit (Macherey-Nagel) or the AllPrep PowerViral DNA/RNA Kit (Qiagen). From February 2021 through June 2021, concentration was performed using Nanotrap Magnetic Virus Particles (Ceres Nanosciences) and the MagMAX Microbiome Ultra Nucleic Acid Isolation Kit (Thermo Fisher) coupled with the KingFisher Flex liquid-handling system (Thermo Fisher). The particle-based method was far more conducive to automation and higher throughput than the ultrafiltration-based method, and the switch was necessary to accommodate greater numbers of samples as the sampling campaign scaled up. Further details on the concentration and extraction protocols are available in SI Materials and methods.
We performed a four-sample comparison of the two methods, utilizing three process replicates and three qPCR technical replicates per method per sample (SI Methods comparison). Two-way ANOVA showed that the ultrafiltration method yielded higher concentrations of the fecal-strength indicator PMMoV while the magnetic-particle method yielded higher concentrations of both the N1 and N2 regions of the SARS-CoV-2 nucleocapsid gene; however, average concentrations of positive replicates for all targets across all samples (Table S1) were generally of the same order of magnitude (with the exception of N1 for Sample 4 and PMMoV for Sample 1, where slightly more than an order of magnitude separated average concentrations of positive replicates for the two methods).
RT-qPCR. Sample extracts were analyzed by one-step RT-qPCR for four targets: N1 and N2, targeting regions of the nucleocapsid (N) gene of SARS-CoV-2; φ6 bacteriophage (an RNA virus used as an internal quality control); and pepper mild mottle virus (PMMoV; used for normalization of SARS-CoV-2 results). Additional information on the RT-qPCR assay designs is available in Tables S1−S3. Per Bivins et al. (2021), 9 the Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) checklist for this study is included as SI MIQE. Inhibition testing (see SI MIQE) was performed on a representative set of six sample extracts, using three different dilutions per sample. Cts increased commensurately with dilution factor, indicating lack of inhibition. Triplicate (technical replicate) wells were run for each target of each sample. Each qPCR run included duplicate no-template controls (NTCs) and a known concentration of positive plasmid to verify consistency of Ct values between plates.
Because our WBE campaign required preparation and analysis of a large number of qPCR plates, we did not run a standard curve on every plate. Rather, we constructed standard curves for each target (Table S4) on separate plates using seven-point serial dilutions of plasmid containing the targets (ordered from Eurofins Scientific and IDT, who supplied information on starting plasmid concentrations measured through spectrophotometry), with each dilution assayed in triplicate or quadruplicate. Per Kralik and Ricchi (2017), the limit of detection (LOD) values were considered "as the minimum concentration of nucleic acid or number of cells, which always gives a positive PCR result in all replicates tested, or in the major part (over 95%) of them". 15 The measured LODs were 2.5 gene copies (gc) per reaction for N1, 10 gc per reaction for N2 and φ6, and 1 × 10^3 gc per reaction for PMMoV (Table S3).
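A minimal sketch of the standard-curve inversion (assuming the usual log-linear qPCR calibration; the slope and intercept below are placeholders, not the master-curve parameters of Table S4) is:

```python
# Invert Ct = slope * log10(concentration) + intercept to recover
# gene copies per reaction, and report the implied amplification efficiency.
import math

def ct_to_gc_per_reaction(ct: float, slope: float = -3.3, intercept: float = 38.0) -> float:
    """Convert a Ct value to gene copies per reaction via the standard curve."""
    return 10.0 ** ((ct - intercept) / slope)

def efficiency(slope: float = -3.3) -> float:
    """Amplification efficiency implied by the standard-curve slope."""
    return 10.0 ** (-1.0 / slope) - 1.0

print(f"{ct_to_gc_per_reaction(35.0):.1f} gc/reaction at Ct 35 (placeholder curve)")
print(f"{100 * efficiency():.0f} % efficiency")
```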
However, Kralik and Ricchi also note that "LOD is not a limiting value and, therefore, that C q values below the LOD are absolutely valid in terms of microorganism presence; however, the probability of their repeated detection is less than 95%." Given this, and given that our study assesses how handling of nondetects influences results for very low concentrations of N1 and N2 targets, we also defined a statistical limit of detection (LOD 99 ) for N1 and N2 using the highest Cts statistically distinguishable from negative controls (Table S3). The LOD 99 values were based on uncertainty in the standard curve as the upper 99th percent confidence interval of the Ct values of either the negative controls (for which signal was detected only for PMMoV) or the Ct of the total number of cycles minus 1 (that is, 44) if no signal was present in the negative controls. 16 These values were 42.44 for N1 and 43.07 for N2 (corresponding to LOD 99 values of 0.1 and 0.2 gc/reaction, respectively). All N1 and N2 Cts less than these values were included as valid in our analysis, while Ct values greater than these values were treated as nondetects.
Nondetect Handling Methods. Inspired by McCall et al. (2014), 11 we developed and applied an expectation maximization-Markov chain Monte Carlo (EM-MCMC) model for multiple imputation of nondetect Ct values in wastewater qPCR data. 17 We began by grouping results by sampling zone, separately for each target (i.e., N1 and N2). Within each zone, we modeled the Ct values (X_i,t) for each technical replicate (index i) and sampling date (index t) as independent and identically distributed. The values were modeled with a normal distribution characterized by a common variance σ² and a common prior probability distribution on the mean Ct parameter for a given sampling date across all technical replicates (i.e., the prior on θ_t). The normal distribution is truncated such that it is positive.
We then used an empirical Bayesian approach to learn the prior for the model parameters, enabling discovery of hyperparameters shared by all samples from the same zone via the EM algorithm. The approach reduces variability in the estimated mean Ct value for each sample date by specifying a common prior for all samples from a given location. Specifically, we modeled the priors for all θ_t and the common σ as two gamma distributions with shape and rate hyperparameters, iterating between calculating the posterior distribution for the latent (i.e., model-inferred) parameters given the current hyperparameters (E step) and updating the hyperparameters using maximum likelihood based on the posterior expectation (M step). Because closed forms for the posterior distribution do not exist for this application, we sampled from the posterior using MCMC via Python's Stan package (pystan). The EM-MCMC algorithm can be summarized as: (1) initialize the hyperparameters; (2) draw T samples of the latent parameters θ_t and σ within the group using MCMC with the current hyperparameters; (3) compute the maximum likelihood estimates of the hyperparameters given the T sampled latent parameters (solved numerically via the scipy.stats.gamma.fit method); (4) repeat steps 2 and 3 until convergence of the hyperparameters. We carried out this process independently for each target and group, initializing the shape hyperparameters at 1. The model was run for 20 iterations, generating 10^4 MCMC samples per iteration, of which the first 500 were dropped. The model was then run again for one iteration (again with 10^4 MCMC samples and 500 dropped samples) using the final hyperparameter estimates. The Python script used for implementation, along with a sample data set, is available at https://tinyurl.com/Safford-et-al-EM-MCMC. The model output contained estimated posterior mean N1 and N2 Cts for each sample. The Ct values were converted to concentration values (in gc/reaction) using the master standard curves presented in Table S4 and the effective volumes analyzed.
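To make the algorithm concrete, a simplified, runnable sketch of the EM-MCMC idea follows. It is our reading of the description above, not the authors' released pystan script: detected replicate Cts for date t are modeled as Normal(θ_t, σ), gamma hyperpriors shared across dates are learned by EM, the E step is approximated by a crude Metropolis sampler, and the M step uses scipy.stats.gamma.fit. Nondetect replicates contribute no likelihood, so dates with no detections shrink toward the learned prior (matching the behavior described for Table 1).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Cts per sampling date; the empty array marks a date with all nondetects.
data = [np.array([36.2, 37.1, 36.8]), np.array([39.5]), np.array([])]

def log_post(thetas, sigma, hyp):
    a_t, b_t, a_s, b_s = hyp
    lp = stats.gamma.logpdf(sigma, a_s, scale=1 / b_s)   # prior on sigma
    for th, cts in zip(thetas, data):
        lp += stats.gamma.logpdf(th, a_t, scale=1 / b_t) # shared prior on theta_t
        if cts.size:                                     # nondetects add no likelihood
            lp += stats.norm.logpdf(cts, th, sigma).sum()
    return lp

def mcmc(hyp, n=4000, step=0.4):
    thetas, sigma = np.full(len(data), 37.0), 1.0
    samples, lp = [], log_post(np.full(len(data), 37.0), 1.0, hyp)
    for _ in range(n):
        prop_t = thetas + rng.normal(0, step, len(thetas))
        prop_s = abs(sigma + rng.normal(0, 0.1))         # crude positivity reflection
        lp_new = log_post(prop_t, prop_s, hyp)
        if np.log(rng.uniform()) < lp_new - lp:
            thetas, sigma, lp = prop_t, prop_s, lp_new
        samples.append((thetas.copy(), sigma))
    return samples[500:]                                 # drop burn-in

hyp = (1.0, 1.0 / 37.0, 2.0, 2.0)                        # crude initialization
for _ in range(10):                                      # EM iterations
    samp = mcmc(hyp)
    th = np.array([s[0] for s in samp]).ravel()
    sg = np.array([s[1] for s in samp])
    a_t, _, sc_t = stats.gamma.fit(th, floc=0)           # M step: ML gamma fits
    a_s, _, sc_s = stats.gamma.fit(sg, floc=0)
    hyp = (a_t, 1 / sc_t, a_s, 1 / sc_s)

means = np.array([s[0] for s in mcmc(hyp)]).mean(axis=0)
print("posterior mean Cts per date:", np.round(means, 2))
```

The date with no detections ends up near the pooled prior mean, while well-replicated dates stay close to their observed Cts.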
We compared the EM-MCMC method with the following three (more conventional) methods for handling qPCR nondetects in wastewater data: (1) [LOD 0.5], single imputation with half the detection limit; (2) [Ct max], single imputation with the maximum qPCR cycle number; and (3) [Ct avg], censoring, i.e., excluding nondetects from analysis entirely.
As stated above, serial dilutions of plasmids containing the N1 and N2 targets indicated that the LODs of our N1 and N2 qPCR assays were ≤10 gc/reaction. For the LOD 0.5 method, then, we set the single imputation value at 0.5 gc/reaction, i.e., half the theoretical LOD of 1 gc/reaction. 15 This value was substituted as the target concentration for any technical replicate yielding a nondetect. For the Ct max method, we similarly substituted 0.010 gc/reaction and 0.047 gc/reaction (values calculated from the master standard curves using the assay's maximum Ct of 45) as the target concentrations. For the Ct avg method, nondetect values were simply dropped from N1 and N2 concentration calculations (and average concentrations of samples with no positive replicates were set to zero).
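The three conventional rules can be compared side by side in a few lines; the sketch below uses toy replicate concentrations (None marks a nondetect) and the imputation constants stated above:

```python
import numpy as np

replicates = [4.2, None, 1.8]          # gc/reaction; one nondetect replicate
CT_MAX_N1_GC = 0.010                   # gc/reaction at Ct 45 (N1 master curve)

def lod_half(reps):                    # single imputation with half the LOD
    return np.mean([r if r is not None else 0.5 for r in reps])

def ct_max(reps):                      # single imputation with the max cycle value
    return np.mean([r if r is not None else CT_MAX_N1_GC for r in reps])

def ct_avg(reps):                      # censoring: drop nondetects entirely
    detected = [r for r in reps if r is not None]
    return np.mean(detected) if detected else 0.0

for method in (lod_half, ct_max, ct_avg):
    print(f"{method.__name__}: {method(replicates):.3f} gc/reaction")
```

Even on this toy sample the three rules yield visibly different averages, which is the biasing effect the comparison in this study quantifies.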
Data Analysis. N1, N2, and PMMoV concentrations calculated using each nondetect handling method were converted to gc/L of initial sample based on effective volumes analyzed. MATLAB software (version R2021a; MathWorks) was used for subsequent analysis. N1 and N2 concentrations were averaged into a single concentration (C_N1N2) per sample to facilitate data visualization and trend analysis. C_N1N2 values were normalized by the PMMoV concentration (C_norm = C_N1N2/C_PMMoV) to facilitate comparison of wastewater and clinical data, and the MATLAB "smoothdata" function was applied using a centered 7-day moving average.

Probabilistic Assignment of Clinical Data to Sampling Zones. All clinical data collected by HDT's asymptomatic community-testing program since program inception were provided as an anonymized data set indicating the date that each test was administered, the ZIP code and census block corresponding to the testee's address, and whether the test was positive. Use of these data was deemed exempt from IRB review by the University of California, Davis, IRB Administration. To compare clinical and wastewater data at the city/WWTP scale, we selected a subset of these data comprising all clinical-testing results for Davis ZIP codes (95616, 95617, and 95618). We designed a Python tool (available at https://tinyurl.com/Safford-et-al-Predictive) that combines information on municipal wastewater flows (provided as a File Geodatabase by the City of Davis Public Works Department) with U.S. Census Bureau data to probabilistically assign HDT asymptomatic testing results to sewershed sampling zones via three steps. First, we used the geospatial coordinates of all maintenance holes (MHs) in the Davis sewer system, along with information indicating the relative positions (upstream/downstream) of each MH, to build a graph capturing directional connections among all MHs (Figure 2A). Second, we used 2019 American Community Survey (ACS) data from the U.S. Census Bureau (USCB) to estimate the number of people living in each census block included in the HDT clinical-testing data set. We assume that each person in each census block produces the same amount of wastewater (a "unit") each day, and that each person has an equal probability of discharging the wastewater unit to each MH located within the block. Finally, we used the connection graph to probabilistically assign positive clinical-testing results from census blocks to sewershed monitoring zones, as illustrated and explained in Figure 2B.

■ RESULTS AND DISCUSSION
We calculated the recovery efficiency (based on the φ6 spike) for each sample but did not attempt to use this value to correct the concentration data. 14 At least one sample from each monitoring site and a total of 377 samples across all sites tested positive for SARS-CoV-2 (i.e., N1 or N2 above the LOD in at least one technical replicate). Nondetect replicates were common even among positive samples; only 32 samples were positive for all N1 and N2 technical replicates. N1 and N2 nondetect percentages were similar and inversely proportional to sampling scale (Table S5). This suggests either that reliable detection of SARS-CoV-2 may become more challenging the further upstream in a sewershed that sampling is conducted or that nondetects are simply more frequent for smaller monitoring areas due to lower population sizes. Pepper mild mottle virus (PMMoV) nondetects were never observed.
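A hedged sketch of the assignment logic (our reconstruction of the released tool's approach, with illustrative node and zone names) is shown below. Cases in a block are spread uniformly over the block's MHs and then routed down the directed sewer graph to the first monitoring zone reached; community-scale totals would additionally credit everything flowing to the WWTP.

```python
import networkx as nx

# Directed sewer graph: edges point in the direction of wastewater flow.
G = nx.DiGraph([("MH1", "MH2"), ("MH2", "MH3"), ("MH4", "MH3")])
zone_of = {"MH2": "ZoneA", "MH3": "WWTP"}   # MHs with samplers installed

def first_zone_downstream(mh):
    """Walk downstream until a sampled monitoring node is reached."""
    node = mh
    while node is not None:
        if node in zone_of:
            return zone_of[node]
        succ = list(G.successors(node))
        node = succ[0] if succ else None
    return None

def assign_cases(block_cases, block_mhs):
    """Split a block's positive cases equally across its MHs, then route."""
    counts = {}
    share = block_cases / len(block_mhs)
    for mh in block_mhs:
        z = first_zone_downstream(mh)
        if z:
            counts[z] = counts.get(z, 0.0) + share
    return counts

print(assign_cases(3, ["MH1", "MH4"]))  # e.g., {'ZoneA': 1.5, 'WWTP': 1.5}
```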
Because PMMoV serves as both an indicator of fecal strength and an internal control, consistent detection of PMMoV suggests that the high percentages of N1 and N2 nondetects are more likely attributable to frequently low abundance of SARS-CoV-2 in the wastewater samples rather than a systematic problem with the viral RNA extraction protocols used. We acknowledge, however, that different qPCR assays can behave differently in response to the same conditions. Confirmation of successful qPCR assays is supported by (1) inclusion of N1 and N2 positive controls for every qPCR run, and (2) the fact that samples yielding higher numbers of positive technical replicates also exhibited lower Cts on average for those replicates (Table S6), that is, nondetects were more common when the target was present at lower concentrations. We did not observe any systematic correlation between PMMoV concentration and the number of N1 or N2 nondetects.
EM-MCMC Model Performance. Trace plots of posterior means generated by the EM-MCMC method showed good convergence over the course of sampling. Trace plots of the MCMC samples exhibited no obvious patterns, indicating strong mixing of the Markov chains (Figure S2). Table 1 summarizes the model output. The table shows that the number of positive replicates for a given sample exhibits a weak negative correlation with the average standard deviation of the imputed N1 and N2 mean Cts. This indicates that as the number of positive replicates increases, so too does the model's confidence in its estimate of the "true" Ct. The table also shows that, as we would expect, the more positive replicates of a sample there are, the closer the average of those replicates is likely to be to the imputed mean Ct. The very large values for samples with zero positive replicates indicate that the model, having no information about those samples, simply defaults to the prior specifications placed on it.
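To make the imputation idea concrete, the following is a minimal Gibbs-style data-augmentation sketch for a single sample's censored Ct replicates. It is not the authors' EM-MCMC model: the normal likelihood, the known replicate standard deviation sigma, and the normal prior (prior_mu, prior_sd) are simplifying assumptions made here for illustration.

```python
# Minimal sketch only; priors and sigma are assumed, not the study's specifications.
import numpy as np
from scipy.stats import truncnorm

def impute_mean_ct(detected, n_censored, c_limit, sigma=1.0,
                   prior_mu=38.0, prior_sd=5.0, n_iter=5000, seed=0):
    """detected: Ct values of positive replicates; n_censored: number of nondetects,
    treated as right-censored at the detection limit c_limit (in Ct units)."""
    rng = np.random.default_rng(seed)
    detected = np.asarray(detected, dtype=float)
    mu, draws = prior_mu, []
    for _ in range(n_iter):
        # 1. Data augmentation: draw each nondetect from N(mu, sigma) truncated to Ct > c_limit
        a = (c_limit - mu) / sigma  # standardized lower truncation bound
        z = truncnorm.rvs(a, np.inf, loc=mu, scale=sigma,
                          size=n_censored, random_state=rng)
        y = np.concatenate([detected, z])
        # 2. Conjugate normal update for the sample's "true" mean Ct
        prec = 1 / prior_sd**2 + len(y) / sigma**2
        post_mu = (prior_mu / prior_sd**2 + y.sum() / sigma**2) / prec
        mu = rng.normal(post_mu, np.sqrt(1 / prec))
        draws.append(mu)
    half = draws[n_iter // 2:]      # discard first half as burn-in
    return np.mean(half), np.std(half)  # posterior mean and sd of the imputed mean Ct
```

Consistent with the pattern in Table 1, more detected replicates shrink the posterior standard deviation here, while a sample with zero positives simply returns the prior.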
Comparison of Nondetect Handling Methods. We used COD WWTP data to compare the EM-MCMC method with three other commonly used methods for handling nondetects in wastewater qPCR data: LOD 0.5 (single imputation with half the detection limit), Ct max (single imputation with the maximum qPCR cycle number), and Ct avg (censoring nondetects entirely). Figure 3 coplots the community-level clinical data with the relative normalized SARS-CoV-2 concentrations calculated using each method. We see from this plot that while apparent relative normalized virus concentrations are similar when calculated using different nondetect handling methods, they are not the same. Toward the end of the study period, for instance, relative normalized virus concentrations calculated using the LOD 0.5 method are generally higher than concentrations calculated using the other methods. There are also particular dates when one calculation method yielded much higher values than the other methods (e.g., December 9 for the EM-MCMC method, February 14 for the LOD 0.5 method, April 20 for the Ct max method). We applied Spearman's rank-order correlation to quantitatively assess how well the clinical-data trends match the wastewater-data trends for results obtained using each of the nondetect handling methods tested. The results (Table 2) show a considerably lower correlation when using the LOD 0.5 method and a slightly stronger correlation when using the EM-MCMC method. These correlation coefficients suggest that the LOD 0.5 method may not be as robust as the alternatives, while also indicating the potential value of the EM-MCMC method.
Subcommunity Comparison of Clinical and Wastewater Data. Figure 4 coplots the clinical data and relative normalized virus concentrations (calculated using the EM-MCMC nondetect handling method) for each sampling zone. Table 3 presents the accompanying Spearman's rank-order correlation coefficients. For these data, coupling visual and quantitative inspection yields a holistic assessment of how well subcommunity trends in the clinical and wastewater data do (or do not) match. Visual inspection enables rapid though subjective identification of interesting features in the data. The Spearman correlation analysis, on the other hand, provides a useful objective framework for interpreting the data but suffers from limitations. For instance, trends in clinical data collected from symptomatic individuals have sometimes been observed to lag trends in wastewater data. 20,21 But a systematic lag is less likely when clinical data derive from large-scale asymptomatic testing. 22 Moreover, because Davis is a small community that experienced a relatively low COVID-19 burden during this study, daily numbers of HDT-reported cases were generally low. Double-digit numbers of confirmed cases were reported on only 11 of the 234 days included in this study, and days on which the number of confirmed cases was zero or one were common. Probabilistically assigned case levels at the subregional and building/neighborhood scales were frequently fractional and near zero as a result. For these sampling zones characterized by sparse positive data, the results of the Spearman analysis can be significantly affected by just one or a few data points.
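A rough sketch of how the four-way comparison can be scripted is shown below. The standard-curve parameters in ct_to_conc and the container layout are assumptions for illustration only; the EM-MCMC branch would call an imputation routine such as the one sketched above.

```python
# Illustrative sketch; standard-curve values and inputs are assumed, not the study's.
import numpy as np
from scipy.stats import spearmanr

def ct_to_conc(ct, slope=-3.3, intercept=40.0):
    """Convert Ct to relative concentration via an assumed standard curve."""
    return 10 ** ((np.asarray(ct, float) - intercept) / slope)

def handle_nondetects(cts, detected, method, lod_conc=1.0, ct_max=45.0):
    """Return a single per-sample concentration under one handling method."""
    detected = np.asarray(detected, bool)
    conc = ct_to_conc(cts)
    if method == "LOD0.5":          # single imputation with half the detection limit
        conc[~detected] = lod_conc / 2
    elif method == "Ctmax":         # single imputation with the maximum cycle number
        conc[~detected] = ct_to_conc(ct_max)
    elif method == "censor":        # drop nondetect replicates entirely
        conc = conc[detected]
    return conc.mean() if conc.size else np.nan

# Trend agreement per method: wastewater is a per-date series of handled values,
# cases is the matching clinical series.
# rho, p = spearmanr(wastewater, cases, nan_policy="omit")
```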
Despite these caveats, we find reasonably good agreement between subcommunity clinical and wastewater data in certain instances, for sampling zones at both the subregional and building/neighborhood scales. Visual inspection shows that zones and time periods exhibiting greater activity (i.e., more frequent detections) in clinical data tended to also exhibit greater activity in wastewater data. Moreover, we generally observed much higher Spearman correlation coefficients for the 10 zones where wastewater surveillance began prior to the winter COVID-19 surge. This may be explained by greater activity (in the wastewater and clinical data alike) during the winter surge, as well as by the fact that sampling zones added later in the campaign were generally smaller (and hence less active) than zones added earlier. The larger data sets available for zones where sampling began early also strengthen the robustness of data comparisons (as indicated by the universally low p-values of correlation coefficients for these zones). A notable exception to this trend is zone SR-G; we note that this zone largely comprises apartment complexes targeted at low-income residents. In multiple zones (e.g., BN-D, BN-E, SR-C, SR-E, and SR-I), even relatively small and isolated spikes in clinical data were mirrored in the wastewater data. Parallel spikes in wastewater virus concentrations and clinical case rates recorded at the community and regional levels during the winter 2020/2021 COVID-19 surge indicate that wastewater monitoring can provide information on changes in disease burden. 24 Our results indicate that wastewater monitoring may also be valuable at the subregional and building/neighborhood levels. Wastewater data from most zones were characterized by major peaks and valleys (with a high positive result frequently occurring right after a low positive or nondetect result and vice versa) rather than smooth trends. This phenomenon may be due to relatively low-frequency sampling during the period of highest disease burden. On the basis of daily sampling of wastewater from multiple WWTPs in Wisconsin, Feng et al. (2021) concluded that "a minimum of two samples collected per week [is] needed to maintain accuracy in trend analysis". 25 Because of staffing and lab-capacity constraints, however, wastewater samples for this study were only collected on a weekly basis from November through late January; sampling frequency was increased in late winter/early spring. Increased sampling frequency made it easier to put high-positive results in context. For instance, zones SR-F, SR-H, and SR-J yielded similarly high positive results on April 30, March 24, and March 10, respectively; but while results for zone SR-J increased gradually leading up to the high-positive date and decreased gradually after, results for zones SR-F and SR-H were at or near zero on either side of the high-positive date. This indicates that the SR-F and SR-H high positives were aberrations, while the SR-J high positive was part of a meaningful trend.
Even after sampling frequency increased, we occasionally observed isolated high-positive results (in both the wastewater and clinical data) that did not appear to be part of broader trends (e.g., for zone SR-H in late March and zone SR-F in late April). These isolated positives could be due to aberrations (such as an infected group of individuals temporarily visiting a zone, or coincidental passage of a large amount of virus-rich fecal matter near an autosampler actively drawing up volume) rather than sustained community spread. This possibility cautions against basing public-health interventions on individual data points.
There are multiple explanations for mismatches between wastewater and clinical data trends (e.g., the spike observed for clinical, but not wastewater, data in early April for Zone SR-B). One explanation is that while the predictive probability model performs reasonably well, it is still at best an approximation of the number of clinically confirmed cases in each wastewater sampling zone. Furthermore, generally low COVID-19 levels in Davis yielded sparse or weak positive signals in the clinical data, which in turn made it difficult to perceive trends at more granular spatial levels. A more precise comparison of wastewater and clinical data would require disclosing the addresses of individuals testing positive, an unacceptable privacy violation.
A second explanation is that the HDT data set used in this study is incomplete. The data set does not include results from other COVID-19 testing opportunities available to Davis residents (e.g., tests conducted in medical settings or through county-run testing programs). The HDT data set also does not include results from the parallel on-campus testing program for UC Davis students and employees, even though these individuals frequently reside off campus. This explanation could account for the February spike in wastewater, but not clinical, data observed for Zone BN-D, since Zone BN-D includes an apartment complex targeted at students.
A final explanation is that neither WBE nor clinical testing reliably captures the "true" level of COVID-19 infections in a sampling zone. WBE results can be affected by many factors, including variability in SARS-CoV-2 excretion rates, 26 wastewater composition and temperature, average in-sewer travel time (coupled with viral decay in sewer lines 14 ), per-capita water use, 27 autosampler settings, 28 and movement of people in and out of sampling zones. Clinical-testing results can be further biased by various types of self-selection. 29,30 Though it is impossible to precisely determine the relative contributions of these factors and biases, context can suggest which are likely to have the greatest influence in a given instance. For example, an unexplained spike in wastewater, but not clinical, data for a zone housing disproportionate numbers of individuals with characteristics that could cause lower propensity to test (e.g., limited access to transportation; low English proficiency) could be a sign of the presence of infected individuals detected through WBE but not clinical testing.
(Table 3 note: bolded rows indicate zones where wastewater surveillance began prior to the winter COVID-19 surge; p-values are in parentheses: *** p < 0.01, ** p < 0.05, * p < 0.1.)
Implications for Future WBE Deployment. In this study, we hypothesized that (i) conventional methods of handling qPCR nondetects could substantially bias apparent trends in wastewater data and that (ii) such bias could be minimized by instead using a combined expectation maximization-Markov Chain Monte Carlo (EM-MCMC) strategy to estimate nondetect values. We tested this hypothesis with data collected from November 2020 to June 2021 at the City of Davis Wastewater Treatment Plant. Specifically, we compared trends in city/community-level clinical data to trends in WWTP data obtained using four different nondetect handling methods: single imputation with half the detection limit, single imputation with the maximum qPCR cycle number, censoring, and the EM-MCMC method. While results obtained using the different nondetect handling methods were more similar than expected, they were not the same. This indicates the importance of specifying the nondetect handling method in WBE studies. Though our methods comparison would need to be extended to additional sites to convincingly identify the optimal strategy for handling nondetects, Spearman's rank-order correlation did show slightly stronger agreement between the clinical and wastewater data examined herein when using the EM-MCMC method. Refinements to the algorithm, tuning parameters, and variable groupings presented herein could further recommend this method for wastewater-data analysis in the future.
We also found that WBE can provide useful information about disease prevalence and trends at granular spatial scales. Visual and quantitative comparison of subcommunity-level data from a large, asymptomatic clinical-testing initiative in Davis, CA, with data from a parallel WBE campaign revealed significant correlations, especially in sampling zones for which greater numbers of data points were available and where COVID-19 burden was relatively high. Our results suggest that strategically geotargeted WBE could support pandemic response by, for instance, informing allocation of resources such as testing, personal protective equipment, and vaccination outreach. In addition, the predictive probability model we developed for spatially aligning clinical and wastewater data by wastewater-sampling zone provides a framework that can be easily extended to support similar analyses in other regions and communities.
We acknowledge two limitations of our work. First, some comparisons presented herein are incomplete because sampling zones were added over time. Only two of the seven sampling zones at the building/neighborhood scale, for instance, were active during the winter pandemic surge. Though this means that our results do not provide deep insight into the value of spatially granular wastewater surveillance during periods of peak disease spread, we note that wastewater surveillance is particularly valuable outside of such periods, for example, as an early warning system when background case levels are low. Second, we did not rigorously test the effect of different data groupings when running the EM-MCMC model. Though grouping data by sampling zone is a logical choice, it is possible that alternate groupings (e.g., grouping by sampling scale, grouping temporally, pooling results from adjacent sites, etc.), coupled with appropriate tuning of model parameters, could further improve model performance.
■ CONCLUSIONS
• Because different methods of handling nondetects in wastewater qPCR data yield different results, researchers should specify the nondetect handling method used in the technical details of future studies.
• Preliminary evidence presented in this Article indicates that coupling the expectation maximization (EM) algorithm with the Markov chain Monte Carlo (MCMC) method to estimate "missing" qPCR values may yield better results than more common and less sophisticated nondetect handling methods. Further work is needed to validate this hypothesis.
• Privacy considerations mean that clinical test results for diseases like COVID-19 may only be available in aggregate, for example, at the census-block level. The predictive probability model presented herein is a useful general framework for researchers seeking to spatially align wastewater data from sewershed sampling sites with aggregate clinical data to compare trends.
• Results obtained by applying the predictive probability model to data from the Healthy Davis Together initiative in Davis, CA, show parallels between wastewater and clinical data at the subregional and building/neighborhood scales. These results further underscore the value of wastewater-based epidemiology as a public-health tool, suggesting that sewershed data could help inform allocation of resources (e.g., outreach, mobile testing/vaccination units, distribution of personal protective equipment) on hyperlocal scales.
■ ASSOCIATED CONTENT
Supporting Information
The Supporting Information is available free of charge at https://pubs.acs.org/doi/10.1021/acsestwater.2c00053. Additional materials and methods, including information on qPCR assay, methods comparison, and sampling zone populations (PDF) Raw data and metadata from sample collection and analysis (XLS) Probabilistic assignments of clinical-testing results (XLSX) Raw data from methods comparison (XLSX) MIQE checklist (XLSX) | 2022-07-31T15:17:10.832Z | 2022-07-29T00:00:00.000 | {
"year": 2022,
"sha1": "1a7f0d1a57ac98cbf9cf082b69f438619a73ee73",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "2d794edab41d426c6d75d2f57fe6df13b9ff85fc",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
246649400 | pes2o/s2orc | v3-fos-license | A Simulation Method for Assembly Process of Branch Cables Based on the Minimal Energy Principle
To solve the simulation problem of branch cables in the virtual assembly process, a cable simulation method based on the minimal energy principle is proposed. Firstly, to describe the postures and deformations of the branch cables, a discrete Cosserat rod model is established. Next, according to the minimal energy principle, each system is in its most stable state when its energy takes a minimum, so the postures of the cables can be solved by calculating the extremum of an energy optimization model. After that, on the assumption that the low-speed movement of the cables can be ignored, the energy is obtained as the sum of the tensile, bending and torsional potential energies. Then, the Levenberg-Marquardt algorithm is applied to solve the energy optimization model, and the postures of the cables are calculated through Hermite interpolation. Finally, a verification example was established to simulate the deformations of the branch cables, verifying the high computing speed of the algorithm and the fidelity of the simulation.
Background
Cables play an important role in the signal and energy transmission in electronic equipment, and their layouts directly affect the working stability of the devices. To economize on the labor and time cost of the cable design and debugging process, assembly process simulation can be applied to discover design defects and to verify design results in advance, providing suitable guidance for the actual process. This simulation studies the deformed shapes of cables when they are moving, and its core lies in the establishment of a physical cable model and the theory used to solve the simulation.
Literature review
In the course of the assembly process, cables often undergo tensile, bending and torsional deformations. To balance simulation accuracy and computing speed, the physical model of the cables should fully describe these deformations, and an appropriate theory should be applied to meet the needs of interactive display.
Terzopoulos et al [1] proposed a dynamic spline model, which treated the centreline as a D-NURBS curve and solved the cable model by means of Lagrangian dynamics. Wakamatsu et al [2] expressed the cables using parametric equations and simulated drag operations accurately by the finite element method. Loock et al [3] established a mass-spring model which used torsional springs to describe the anti-bending characteristic of the cable, enabling fast simulation of bending deformations. Selle et al [4] added torsional springs on the basis of the mass-spring model to express the anti-torsional characteristic of the model, and successfully simulated multiple hairs. Pai [5] applied the Cosserat rod model, using different coordinate systems along the thin rod to express the bending and torsional deformations, and simulated the deformed shapes of surgical sutures.
With respect to the modelling of branch cables, Bergou et al [6] established a centreline-angle coupled model of the rope and obtained good deformation results for tree branches by solving the kinetic equations. Zhao et al [7] treated plants as triangular meshes, then used the finite element method to solve the deformations of complex plants.
For the theory used to solve the simulation, Wang et al [8] built the kinetic equations based on the Kirchhoff rod theory [9] and solved the model numerically by the differential quadrature method. Spillmann et al [10] considered the speed and angular velocity of the rope and used the semi-implicit Euler method with respect to time to update the speed and position, thus completing a dynamic simulation of the rope.
The simulation theories above provide high precision but at low computing speed, since too much time is spent solving the nonlinear mechanical equations. The minimal energy principle is an energy optimization method which states that every system tends to develop toward the minimum energy state; the system is in its most stable state when its energy takes its minimum. In this way, it is clear that the postures of branch cables can be solved by calculating the extremum of the energy, which is what we study below.
Cable modelling
The cables are expressed by the Cosserat rod model [5], which attaches multiple coordinate systems along the cable. The postures are then described by the origin points of the coordinate systems, while the deformations are described by their axes. To avoid complex partial differential and integral problems, the cables are discretized into a series of end-to-end cable segments, with each coordinate system lying at the middle of a segment. In figure 1, the cable is discretized into N segments. By using Rodrigues' rotation formula and the quaternion q = (q0, q1, q2, q3)^T, each coordinate system d_i can be described by the corresponding rotation matrix. According to Cosserat theory, the bending and torsion vectors are used to describe the strains of the rod; these refer to the angles of bending and torsional deformation along each axis, and their values depend on the derivative of each d with respect to the arc-length coordinate s of the cable. At this point, the position of each segment is given by the points on both sides, and the deformation between adjacent segments is described by the bending and torsion vector.
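The mapping from a unit quaternion to the segment's directors is the standard quaternion-to-rotation-matrix formula; a sketch (with assumed component ordering q = (q0, q1, q2, q3)) is:

```python
# Standard quaternion -> rotation matrix; columns are the directors d1, d2, d3.
import numpy as np

def frame_from_quaternion(q):
    """q = (q0, q1, q2, q3); normalized internally to guard against drift."""
    q0, q1, q2, q3 = np.asarray(q, float) / np.linalg.norm(q)
    return np.array([
        [1 - 2*(q2**2 + q3**2), 2*(q1*q2 - q0*q3),     2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),     1 - 2*(q1**2 + q3**2), 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),     2*(q2*q3 + q0*q1),     1 - 2*(q1**2 + q2**2)],
    ])
```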
Potential energy
After the cable has been treated as discrete segments, some assumptions should be made to simplify the solution steps. The tensile deformations have only a small impact on the total length, and therefore only a small influence on the bending and torsional deformations; conversely, the latter do not have much impact on the former. We can therefore assume that the tensile deformations exist only within each segment, while the bending and torsional deformations occur between adjacent segments.
Given the relationship between the moment M and the bending and torsion vector [9], a stiffness matrix K is introduced to describe the ability of the cable to resist deformation, so that the moment is M = Kω. The parameters k_b and k_t in K refer to the anti-bending stiffness and the anti-torsional stiffness, which are related to the radius r, the Young's modulus E and the shear modulus G.
Since the potential energy is equivalent to the work done by the external force, the bending and torsional potential energy can be expressed as the integral of the moment over the deformation angle. For the tensile potential energy, the deformation includes a directional part and a length part, so the square of the strain is written as the dot product of the gap vector between adjacent points (equation (5)). For all N cable segments, ignoring the low-speed motion, the total energy E is the sum of the tensile, bending and torsional potential energies over all segments. The postures of a single cable can now be calculated by finding the minimum of this energy.
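A minimal sketch of the resulting optimization is given below. It keeps only the tensile term and a discrete bending term (torsion would additionally require the frame angles), clamps the cable endpoints, and encodes each energy contribution as a square-root residual so that scipy's Levenberg-Marquardt solver, used later in the paper, minimizes the total energy; the stiffness names, segment rest length L0 and initial guess x0 are all assumptions for illustration.

```python
# Simplified energy model (tension + discrete bending only); not the paper's full model.
import numpy as np
from scipy.optimize import least_squares

def energy_residuals(x, p_start, p_end, L0, k_s, k_b):
    """x packs the interior node positions; endpoints are clamped. Residuals are
    square-root energy terms, so least_squares minimizes the total elastic energy."""
    pts = np.vstack([p_start, x.reshape(-1, 3), p_end])
    res = []
    for i in range(len(pts) - 1):
        e = pts[i + 1] - pts[i]
        res.append(np.sqrt(k_s) * (np.linalg.norm(e) - L0))   # tensile term per segment
    for i in range(1, len(pts) - 1):
        t0 = (pts[i] - pts[i - 1]) / np.linalg.norm(pts[i] - pts[i - 1])
        t1 = (pts[i + 1] - pts[i]) / np.linalg.norm(pts[i + 1] - pts[i])
        res.extend(np.sqrt(k_b) * (t1 - t0))                  # discrete bending terms
    return np.asarray(res)

# Levenberg-Marquardt solve from an initial (e.g., straight-line) guess x0:
# sol = least_squares(energy_residuals, x0, method="lm",
#                     args=(p_start, p_end, L0, k_s, k_b))
```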
The operations on branch cables
The posture of each discrete cable segment depends on its quaternion and endpoint positions, so the core of handling branch cables lies in the treatment of the branch point and the branch segment.
All configurations of branch cables can be decomposed into the simplest situation shown in figure 2, which contains the branch parts B2 and B3 and the main part B1. The three parts intersect at the common segment C between p_s and p_e, and segment C belongs to all three parts simultaneously. After separating the branch cable into three parts, there are clearly two contributions of bending and torsional potential energy at p_e, and the radius of part B1 should be larger than that of the other two. When computing the total energy of the branch cable, we can calculate the energy of each part separately; the sum, however, counts the tensile potential energy of the common segment C three times, so the total energy is obtained by subtracting the duplicate contributions of C. Based on the points and tangent vectors, the curve C3 is described by Hermite interpolation (a sketch follows), where the parameter k_q = 5000 refers to the coefficient of the penalty term.
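The cubic Hermite form used to render a smooth branch curve between solved endpoints is standard; a sketch (with endpoints p0, p1 and tangents m0, m1 as assumed inputs) is:

```python
# Standard cubic Hermite interpolation between p0 and p1 with tangents m0, m1.
import numpy as np

def hermite(p0, p1, m0, m1, t):
    """t: array of parameters in [0, 1]; returns points along the curve."""
    t = np.asarray(t, float)[:, None]
    h00 = 2*t**3 - 3*t**2 + 1    # Hermite basis functions
    h10 = t**3 - 2*t**2 + t
    h01 = -2*t**3 + 3*t**2
    h11 = t**3 - t**2
    return h00*p0 + h10*m0 + h01*p1 + h11*m1

# Example: curve = hermite(p0, p1, m0, m1, np.linspace(0, 1, 20))
```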
Algorithm and result
To solve the least-squares optimization, the Levenberg-Marquardt algorithm is used, which combines the least-squares (Gauss-Newton) method with the trust-region method. Each iteration starts from the current point: a trusted step length is assumed, and within the region centred at the current point with that step length as radius, the actual iterative increment is obtained by finding the optimum of a local approximation of the objective function. | 2022-02-08T20:08:43.724Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "e359b181a6d80ab7123568351ce57669f0af3f36",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/2181/1/012059",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e359b181a6d80ab7123568351ce57669f0af3f36",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
118635661 | pes2o/s2orc | v3-fos-license | Jet structure modifications in heavy-ion collisions with JEWEL
Key features of jet-medium interactions in heavy-ion collisions are modifications to the jet structure. Recent results from experiments at the LHC and RHIC have motivated several theoretical calculations and Monte Carlo models towards predicting these observables simultaneously. In this report, the recoil picture in Jewel is summarized and two independent procedures through which background subtraction can be performed in Jewel are introduced. Information on the medium recoil in Jewel significantly improves its description of several jet shape measurements.
Introduction
The qualitative effect of jet quenching, seen through measurements of the jet nuclear modification factor (R_AA), is confirmed with Run 1 data at the LHC [1,2,3]. While the R_AA shows a clear and expected trend in the medium-induced effects from central to peripheral events, the energy loss on a jet-by-jet level is not characterized. This motivated several measurements that probed the inner structure of jets, such as jet shapes [4,5,6], searches for the quenched energy away from the jet axis [7], and fragmentation functions [8,9], to name a few. From these detailed measurements, jets that propagate through the quark gluon plasma (QGP) were perceived as getting broader, losing energy inside the jet cone, and showing an increased multiplicity of low transverse momentum (p_T) particles around the periphery of the jet, among several other consistent observations when compared to jets in pp collisions. The subjet groomed momentum fraction was recently measured as a function of the jet p_T and event centrality at CMS [10], concluding that jets were more asymmetrically split in heavy-ion collisions as opposed to those in pp collisions.
All these aforementioned results point to quantitative features of jet quenching, hence it is imperative that any phenomenological model describing the physics of QGP has to predict the general behavior and trend of these observables. Currently there are several theoretical calculations and monte carlo models available on the market (see [11] for a comprehensive review). These results offer a unique way of discriminating between these models.
Jewel [12] is a Monte Carlo framework utilizing a perturbative quantum chromodynamics (pQCD) implementation of in-medium energy loss. The latest version of Jewel is interfaced with Pythia6 [13] and the full framework operates as follows:
• Pythia produces the di-jet hard scattering and initial state radiation
• Jewel takes these hard scattered partons and proceeds with the final state shower
• Pythia takes over for hadronization and hadron decays
• These events are then analyzed with the Rivet [14] analysis framework
The medium in Jewel is characterized as a thermal distribution of scattering centers. The hard scattered partons undergo microscopic interactions that are estimated with perturbative matrix elements and the aforementioned parton shower modifications, giving rise to elastic and inelastic energy losses. At each interaction a recoiled parton is created and, in the current version of Jewel, propagates without any further interaction. This is a limiting case, since in reality these recoiled partons can further interact with the medium. Detailed descriptions of the Monte Carlo implementation in Jewel and corresponding studies are available here [12,15].
The recoiled parton's energy comprises both the collisional part from the hard scattered parton and the thermal component of the scattering centers. This extraneous energy gets clustered along with the other particles in the event and becomes part of the jets. Hence, when comparing data with Jewel (including recoils), a slight mismatch appears for inter-jet observables due to the experimental implementation of the background subtraction. Since Jewel does not simulate a full heavy-ion event, the exact method utilized by different experiments cannot be reliably implemented. However, due to the microscopic nature of the interactions, the exact amount of background energy and momentum is easily estimated as the thermal component of the scattering centers before interaction. Any such subtraction technique is only viable for infrared-safe observables, since the energy corresponding to the scattering centers is defined before hadronization effects.
Background subtraction in Jewel
The information of the scattering centers before the interaction is first included in the event record with a separate tag, so as not to disturb the jet clustering. Simultaneously, dummy particles with very small momenta and positions corresponding to the scattering centers are introduced to the final-state particle collection. With this information, one can associate the scattering centers involved with the corresponding jet and hence proceed with background subtraction with either of the following methods:
• 4MomSub: Once the jet constituents are matched in position to their corresponding scattering centers, a simple four-momentum vectorial subtraction is performed to remove the background contribution to the respective jet.
• GridSub: A finite-resolution grid is superimposed on the event, confining the jet constituents and their scattering centers in grid cells. Inside each cell, the momenta of the scattering centers are vectorially subtracted and the jet is finally clustered with each cell as an input pseudo-particle. For the case when cells contain only scattering centers, their momentum is set to zero before the clustering procedure.
The 4MomSub is recommended when possible since it is closer to a true background subtraction. The GridSub method is employed in cases when observables require the information of jet constituents, and is very similar to experiments where the results are constrained by the detector's finite resolution. Detailed studies of the systematic uncertainties introduced by the background subtraction procedures are in preparation.
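A minimal sketch of the GridSub idea (the more involved of the two procedures) follows. It assumes simple (E, px, py, pz, eta, phi) tuples for particles and scattering centres and a flat cell grid; Jewel's actual implementation details may differ.

```python
# Minimal GridSub sketch; particle containers and the negative-energy handling
# are assumptions for illustration, not Jewel's exact implementation.
import numpy as np

def grid_sub(particles, thermals, cell=0.05):
    """particles/thermals: iterables of (E, px, py, pz, eta, phi) tuples.
    Returns one subtracted pseudo-particle four-momentum per occupied cell,
    ready to be passed to the jet clustering."""
    cells = {}
    for E, px, py, pz, eta, phi in particles:
        key = (int(np.floor(eta / cell)), int(np.floor(phi / cell)))
        cells[key] = cells.get(key, np.zeros(4)) + np.array([E, px, py, pz])
    for E, px, py, pz, eta, phi in thermals:
        key = (int(np.floor(eta / cell)), int(np.floor(phi / cell)))
        if key in cells:                                 # cells with only scattering
            cells[key] -= np.array([E, px, py, pz])      # centres are effectively zeroed
    return [p4 for p4 in cells.values() if p4[0] > 0]    # drop unphysical remainders
```

4MomSub is simpler still: the four-momenta of the matched scattering centres are summed and subtracted directly from the jet four-momentum.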
Comparisons with data
The ratio of the track yield in annuli around the jet axis in the most central PbPb collisions at 2.76 TeV compared to pp collisions is shown in Fig. 1, together with the CMS data points [4]; the Jewel+Pythia result is obtained considering both neutral and charged particles in the event. The data systematic uncertainty is presented in the yellow shaded region. The general trend of the data is reproduced by Jewel+Pythia after subtraction using the 4MomSub method for anti-k_t R = 0.3 jets clustered with the Fastjet [16] toolkit. As expected, the agreement with the data gets better at large r when the density is estimated with all final-state particles. The Jewel+Pythia predictions for the ratios of the subjet groomed momentum fractions in PbPb to pp collisions are compared with CMS data [10], as shown in Fig. 2. The subjet groomed momentum fractions are estimated via the SoftDrop framework [17], for low (left) and high (right) p_T anti-k_t R = 0.4 jets, respectively. The ratio of Jewel+Pythia to data is shown in the bottom panels, where the yellow shaded region represents the total uncertainty in the data points. The systematics in the Jewel+Pythia predictions are shown by varying the grid resolution by a factor of two. Jewel+Pythia also reproduces the general trend corresponding to more asymmetric jet splittings in PbPb at low p_T and more symmetric splittings as the jet p_T increases.
Conclusions
Including the recoils in Jewel along with the background subtraction procedures allows the recovery of the jet energy acquired by the medium and its response to jets. This interplay makes Jewel capable of reproducing the qualitative trend in data related to jet structure modifications. This new class of jet observables that probe the medium-jet interaction highlights the next era in jet tomography in heavy ion collisions.
This work was done in collaboration with Dr. Korinna Christine Zapp. RKE thanks the CERN theory department for its hospitality. | 2016-10-28T19:58:26.000Z | 2016-10-28T00:00:00.000 | {
"year": 2016,
"sha1": "3d4da40eba511165de7115eface3f20bdb6dbff6",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/832/1/012004",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "3d4da40eba511165de7115eface3f20bdb6dbff6",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
139746495 | pes2o/s2orc | v3-fos-license | Registration of the cavitation regime in liquids under the action of ultrasonic vibrations
Ultrasonic vibrations are widely used in various industries and production: in metallurgy, in the chemical and food industries, in mechanical engineering, and in medicine. This is due to the physicochemical changes in a substance when sound fields are superimposed. Cavitation in the ultrasonic field causes dispersion and emulsification of certain substances, promotes coagulation and degassing, and affects the processes of crystallization and dissolution; it is known that ultrasonic vibrations cause a variety of chemical transformations of a substance, including oxidation, reduction, polymerization and depolymerization reactions. Researchers find the explanation of these phenomena in the diverse effects of cavitation on matter: shock waves, microstreams, and acoustic wind. The experiments were carried out in a liquid medium (undistilled water). The volume of the experimental test was 10 dm3. To obtain ultrasonic vibrations, the method of magnetostriction was used, the principle of which is to convert electric oscillations into mechanical ones. To assess the cavitation regime, the level of cavitation was recorded in two stages: by the degree of erosion of an artificial obstacle, and by measurement of the intensity of the cavitation noise in the volume.
Introduction
The physical nature of ultrasound
Ultrasound consists of elastic vibrations. Ultrasonic waves of high intensity are described only by the laws of nonlinear acoustics. The propagation of ultrasonic waves in liquids is accompanied by acoustic flow and by compression and rarefaction of the medium. Acoustic cavitation is among the most important nonlinear phenomena in an ultrasonic field.
Investigation of the mechanism of the action of acoustic fields on matter is complicated by the fact that different processes occur simultaneously in the ultrasonic field and can influence one another; it is therefore difficult to interpret the experimental results. At the present time, theoretical grounds for the use of physical methods, in particular ultrasonic vibrations, in water technologies are being developed. The use of an ultrasonic field to intensify oxidation-reduction processes in an aqueous medium and the precipitation of coarsely dispersed impurities expands the area of possible use of this physical method [1,2,3,4].
Sound, as a physical phenomenon, is characterized by the sound pressure, the density of sound energy, the flow of sound energy, and the level of intensity (power) of sound. Liquids in the static state do not have a shear viscosity and are incapable of withstanding or transmitting tangential stresses. Therefore, only longitudinal waves propagate in liquids and gases, in which the direction of the oscillatory motion of the particles coincides with the direction of propagation of the waves. The propagation velocity depends on the density of the medium ρ and the adiabatic compressibility coefficient βc and is calculated from the formula c = 1/√(ρβc) (1). The propagation of elastic waves is described by the harmonic displacement ξ = A sin(ωt) (2), where A is the displacement amplitude, ω the cyclic frequency, and t time.
Acoustic vibrations in the medium create additional pressure [5,6,7]. The sound wave, passing through the liquid, creates zones of compression and rarefaction that change places in each half-period of the passage of the wave. This gives rise to an alternating pressure P (equation (3)). Sound pressure varies periodically: at a given point in the medium, during a period, the pressure P falls from a maximum to zero and then rises again to the maximum value, corresponding to the harmonic oscillation P = Pmax sin(ωt), where Pmax is the maximum sound pressure (pressure amplitude), defined by the formula Pmax = ρcωA (4). When ultrasound propagates in the medium, part of its energy is absorbed and the medium heats up. Absorption of acoustic energy is governed by the frequency of sound, the viscosity, the thermal conductivity and the wave resistance of the medium, i.e. the product of the density of the medium ρ and the speed of sound c. The relationship between the sound pressure developed in the medium and the wave impedance determines the vibrational velocity [8,9]. The wave impedance of the medium is the ratio of the sound pressure in a travelling plane wave to the vibrational velocity of the particles of the medium, ρc = P/V, i.e. V = P/(ρc) (6). Vibrational velocity and sound pressure do not depend on frequency; the amplitude of the oscillations is inversely proportional to frequency, and the acceleration is directly proportional to it. Usually the amplitudes of the vibrational velocities are many orders of magnitude lower than the sound velocity in the unperturbed liquid (~10^3 m/s).
For moderate acoustic fields, the sound pressure usually does not exceed 1 MPa, but the pressure gradient, especially at high frequencies, can reach large values. The amplitudes of the acceleration of liquid particles in the field of ultrasonic waves are large; they can exceed the acceleration of free fall by several orders of magnitude. In technology such accelerations are achieved only in special ultracentrifuges. Taking into account also that these accelerations change sign twice during each period, one can consider ultrasonic waves a very powerful and peculiar physical factor affecting a substance even in the absence of nonlinear effects.
Physicochemical and chemical effects in a liquid medium under the action of ultrasonic cavitation
The main physicochemical and chemical effects that arise in a liquid under the action of acoustic fields are due mainly to nonlinear effects, of which the most important is cavitation [10,11,12,13]. One of the characteristic features of ultrasonic cavitation is that it is a peculiar and effective mechanism for the local concentration of the relatively low average energy of the acoustic field in very small volumes, which leads to the creation of exceptionally high energy densities. The detailed mechanism of this effect is not yet completely clear. Since the phenomenon of cavitation is of great theoretical and practical interest, considerable attention will be paid below to the basic physicochemical problems of cavitation. The wave resistance of the medium is a very important characteristic that determines the conditions for the radiation, absorption, reflection and refraction of acoustic vibrations. Particles of the elastic medium in which ultrasonic waves propagate vibrate and therefore possess kinetic and potential energy.
The amount of energy transferred by sound vibrations per second through an area of 1 cm², perpendicular to the direction of their propagation, characterizes the intensity of sound and is determined by the formula I = (1/2)Pmax·Vmax (7). The density of sound energy at each point varies with time; its average value at a given point is E = (1/2)ρA²ω² (8). Combining equations (7) and (8), we obtain expression (9): I = (1/2)A²ω²ρc. In the propagation of sound waves in a liquid medium, the intensity of sound I decreases with increasing distance x from the source of radiation according to I = I0·e^(−2αx) (10). The absorption coefficient α depends on the physical properties of the substance; it is a parameter characteristic of the substance, and it also depends on external conditions (temperature, pressure) and on the frequency of the oscillations.
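As a numerical illustration of the relations above, the short script below evaluates the pressure amplitude and intensity for assumed, representative values (water at 22 kHz with a 1 μm displacement amplitude); these inputs are examples, not measurements from this study.

```python
# Illustrative evaluation of eqs (4) and (9); the input values are assumed examples.
import numpy as np

rho, c = 1000.0, 1500.0      # density (kg/m^3) and sound speed (m/s) of water
f, A = 22e3, 1e-6            # frequency (Hz) and displacement amplitude (m)
omega = 2 * np.pi * f
V_max = omega * A            # vibrational velocity amplitude (m/s)
P_max = rho * c * omega * A  # pressure amplitude, eq. (4), in Pa
I = 0.5 * rho * c * omega**2 * A**2   # intensity, eq. (9), in W/m^2
print(f"P_max = {P_max / 1e3:.0f} kPa, I = {I / 1e4:.2f} W/cm^2")
```

With these inputs the intensity comes out near 1.4 W/cm², the same order of magnitude as the cavitation threshold quoted later for water at atmospheric pressure.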
Although the physical nature of ultrasound and the basic laws describing its propagation are the same as for sound waves of any frequency range, it has a number of specific features. These features are due to its relatively high frequencies and correspondingly small wavelengths [14,15,16]; thus, for high ultrasonic frequencies, the wavelengths are short both in liquids and in metal. Ultrasonic waves decay much faster than waves of the low-frequency sound range, since the coefficient of "classical" sound absorption (per unit distance) is proportional to the square of the frequency. In the low-frequency region, the relaxation absorption coefficient also increases in proportion to the square of the frequency, but as the frequency increases this growth slows, and the absorption coefficient tends to a constant value.
The relationship between the nature of the propagation of ultrasound, and in particular of its high-frequency region (hypersound), and the structure of matter and the elementary excitations in it is one of the most important features of ultrasonic waves. It allows one to judge the structure of matter on the basis of measurements of the velocity and absorption in it as functions of frequency, as well as of certain external factors such as temperature and pressure.
A feature of ultrasound in the high-frequency and hypersonic ranges is the possibility of using the methods of quantum mechanics, since the wavelengths and frequencies of sound in these ranges become of the same order as the parameters and frequencies characterizing the structure of matter.
An elastic wave of a certain frequency is then associated with a phonon quasiparticle, a quantum of sound energy. Representations from quantum mechanics are convenient when considering various interaction processes in matter.
In a high-intensity ultrasonic field, considerable acoustic currents develop, the velocity of which, as a rule, is small in comparison with the vibrational velocity of the particles [17,18,19]. Currents can be due to sound absorption and can occur in standing waves or in the boundary layer near obstacles of various types (Figure 1).
Figure 1. Photograph of the acoustic flow that occurs during the propagation of ultrasonic oscillations with a frequency of 5 MHz in benzene.
Radiation pressure also increases with increasing frequency, since its magnitude is proportional to the intensity of sound; in the ultrasonic frequency range it is used in the practice of acoustic measurements to determine the intensity of sound.
In order for the parameters determining the various effects of the sound field (sound intensity, sound pressure, vibrational velocity, radiation pressure) to reach noticeable values, an ever smaller amplitude of the vibrational displacement is required as the frequency increases.
The most important nonlinear effect in the ultrasonic field is cavitation: the appearance in the liquid of a mass of pulsating bubbles filled with vapour, gas or a mixture thereof [20]. The complex motion of the bubbles, their collapse, their merging with one another, etc., generate compression pulses (micro-shock waves) and micro-streams in the liquid, and cause local heating of the medium and ionization. These effects act on the substance: solid particles in the liquid are destroyed (cavitation erosion), fluid mixing occurs, and various physical and chemical processes are initiated or accelerated. By changing the conditions of cavitation, the various cavitation effects can be intensified or weakened; for example, with increasing ultrasound frequency the role of microflows increases and cavitation erosion decreases, while with increasing hydrostatic pressure in the liquid the role of micro-impacts increases.
An increase in the frequency of oscillations usually leads to an increase in the threshold intensity corresponding to the onset of cavitation, which depends on the nature of the liquid, its gas content, temperature, etc. For water in the low-frequency ultrasonic range at atmospheric pressure, the threshold intensity is usually 0.3-1 W/cm2 (figure 2). The diverse applications of ultrasonic vibrations, in which its different features are used, can be conditionally divided into three directions: the first is associated with the acquisition of information by means of ultrasonic waves, the second with active effects on a substance, and the third with the processing and transmission of signals [21].
The aim of the work. The aim of this work is to confirm the existence of a cavitation regime and to identify the nature of the distribution of the cavitation intensity in the volume of the liquid.
The task of the work. The task of the work is to determine the developed cavitation in terms of the degree of erosion (destruction) of an artificial obstacle, and also to obtain a graphic representation of the intensity of ultrasonic cavitation in the volume of the reactor of the experimental apparatus for treating liquids.
To carry out experimental studies on the effect of ultrasonic vibrations on water purification processes in the NRU of the MSCU at the Chair of Water Management and Water Supply, the ultrasonic reactor was made of XI8HICT stainless steel, with acoustic oscillations introduced from below upwards through the thickness of the liquid in the volume of the reactor at atmospheric pressure to the water surface.
The ultrasonic reactor is rectangular in shape and equipped with a magnetostrictive converter PMS-6-22; the reactor dimensions in plan are 400 mm x 400 mm (membrane size 300 mm x 300 mm), and the liquid depth in the reactor reaches 300 mm. The transducer of acoustic oscillations is placed in the lower part of the reactor, under the layer of liquid. The scheme of the experimental installation for liquid treatment in an ultrasonic field is shown in Fig. 3. The basis of the processes of ultrasonic action on the liquid medium is the cavitation regime; therefore, reliable registration of the course of the cavitation phenomena is desirable. It is expedient to carry out the experimental studies in two stages.
First stage. The collapse of cavitation cavities in a liquid under the influence of ultrasonic vibrations is evaluated by the degree of erosion of an artificial obstacle (indicator). For this purpose, a foil sheet 400 mm x 300 mm in size is used as the indicator; it is immersed in the liquid in which the cavitation effect is observed.
The bubbles that form in the ultrasonic field collapse and create micro-explosions, shock waves, and microflows. Cavitation cavities that form near the foil web inevitably lead to erosion of the web itself. In the case of a powerful action of ultrasonic vibrations, the foil web can be partially or completely destroyed.
The intensity of the cavitation regime depends on the chemical composition of the liquid, on the intensity of the ultrasonic field, and on the emitting ability of the source of ultrasonic oscillations. In our case the emitter remains constant, while the chemical composition of the medium, the exposure to ultrasound and the sample temperature may vary.
As samples for visual registration of the cavitation regime, we chose two liquid media: water and industrial wastewater from the primary processing of wool.
A foil web 400 mm x 300 mm in size and 15 μm thick, held in a special frame, is placed in the reactor perpendicular to the radiating surface over the full height of the sample, effectively bisecting the liquid depth in the reactor from the radiator to the upper edge of the free surface of the liquid.
The temperature of the initial water sample was 13 °C, and the duration of the ultrasonic exposure of the web was 10 min. For the sewage, the exposure varied from 10 seconds to 10 minutes, and the water temperature was 38 °C.
Second stage. During the propagation of ultrasonic vibrations in the water, microcavities begin to pulsate in phase with the applied acoustic field. At the same time, in the process of cavitation, sound is emitted both at the fundamental frequency and with additional harmonic components radiated by the collapsing cavitation cavities. These sound effects can be registered with the aid of a device that measures the acoustic radiation while filtering out and eliminating the noise at the fundamental frequency. Such a device was created and tested at the Chair of Water Supply and Sanitation of the Moscow SRI MG.
As a result of the research it was determined that, under the action of an ultrasonic field, zones with different cavitation intensity and density exist in the reactor. To identify the cavitation zones, this device was used: it makes it possible to measure the sound pressure at any point in the reactor when an ultrasonic field is applied, and it is capable of recording the sound pressure resulting from the collapse of cavitation cavities.
Results
The results of the first stage of the experimental studies, registration of the cavitation regime by erosion of the foil web (web size 300 x 400 mm), are shown in Fig. 4 (a, b, c, d, e). The lower horizontal edge of the foil was in close proximity to the radiator; accordingly, the upper horizontal edge of the foil web was located at the free surface of the liquid.
Below are photographs of the webs with traces of cavitation erosion. The resulting images, shown in Figure 4, confirm the presence of a cavitation regime in the liquid. Cavitation destruction (erosion) is recorded as holes in the central part of the web. The degree of change in the physicochemical properties of the water affects the intensity of the cavitation regime in the ultrasonic field: in the sewage sample, whose physicochemical properties are strongly altered, the effect of the cavitation action surpasses the corresponding effect in water with less altered physicochemical properties.
Thus, the resulting image on the web helps to determine the locations of cavitation destruction and to register the cavitation regime in the test sample visually, with reasonable certainty.
In the second stage, experimental data on the variation of the sound pressure were obtained. The developed device allowed the cavitation pressure of the collapsing cavities to be recorded at selected points of the reactor. A diagram of the isosurfaces of the sound pressure distribution in the reactor volume was constructed from the points obtained (Fig. 5).
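A sketch of how such an isosurface/contour diagram can be built from pointwise pressure measurements is given below; the probe-position and pressure arrays are assumed inputs, and a single horizontal plane is shown for simplicity.

```python
# Illustrative sketch; measurement arrays are assumed inputs, not the study's data.
import numpy as np
import matplotlib.pyplot as plt
from scipy.interpolate import griddata

def pressure_map(xy, p, n=100):
    """xy: (N, 2) probe positions in a horizontal plane; p: (N,) sound pressures."""
    xi = np.linspace(xy[:, 0].min(), xy[:, 0].max(), n)
    yi = np.linspace(xy[:, 1].min(), xy[:, 1].max(), n)
    X, Y = np.meshgrid(xi, yi)
    P = griddata(xy, p, (X, Y), method="cubic")   # interpolate scattered measurements
    cs = plt.contourf(X, Y, P, levels=12)          # pressure contours in the plane
    plt.colorbar(cs, label="sound pressure")
    plt.show()
```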
Figure 5. Diagram of the acoustic pressure distribution in the volume of the ultrasonic reactor in the cavitation regime.
The axonometric image makes it possible to appreciate the inhomogeneity of the cavitation zones in the volume of the reactor. In the central part of the diagram there is a region of increased cavitation effect, reminiscent of a "torch" with a pronounced "core" of maximum acoustic pressure. The "torch" is suspended above the radiating membrane, and its central part does not reach the free surface of the liquid medium.
The central part of the diagram closely resembles a spinning top ("yula") and is characterized by a region of lower acoustic pressure than the "core" of the "torch", which corresponds to the physical parameters of the process: the acoustic wavelength, the frequency and the radiation intensity. | 2019-04-30T13:07:41.494Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "e89055b4046bd8281bd35c4967d565a9aaa10707",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/365/3/032002",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "8382e34123dd29e240f873fb0d1ed4427a879dbb",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
228900115 | pes2o/s2orc | v3-fos-license | Understanding children’s harmful work: a review of the methodological landscape
Children’s engagement with work has been widely researched using a wide variety of methods. However, the extent to which such methods and their combination provides insight into forms of children’s harmful work (CHW) is not obvious. This paper reviews and assesses respective opportunities and challenges of the main methods that have been used to study children’s engagement with work. It proposes research design principles and a methodological landscape for an integrated approach to child-centred, inclusive, and ethical research of CHW.
About ACHA:
The research informing this Working Paper as well as its publication was made possible thanks to the Foreign, Commonwealth & Development Office (FCDO)-funded research on Action on Children's Harmful Work in African Agriculture (ACHA). The aim of the programme is to build evidence on:
• the forms, drivers, and experiences of children's harmful work in African agriculture; and
• interventions that are effective in preventing harm that arises in the course of children's work.
It is currently assumed that the majority of children's work in Africa is within the agricultural sector. However, the evidence base is very poor in regard to: the prevalence of children's harmful work in African agriculture; the distribution of children's harmful work across different agricultural value chains, farming systems and agro-ecologies; the effects of different types of value chains and models of value chain coordination on the prevalence of harmful children's work; and the efficacy of different interventions to address harmful children's work. These are the areas that ACHA will address.
ACHA is a collaborative programme led by the Institute of Development Studies (IDS), Brighton, UK, together with a number of partner organisations. ACHA is directed by Professor Rachel Sabates-Wheeler (r.sabates-wheeler@ids.ac.uk) and Dr James Sumberg.
We are grateful to Carolina Szyp for her invaluable support to this paper by providing background research and streamlining all references.
Figures and tables
Table 1 Overview of national household surveys and measurement of child labour
Table 2 Opportunities and challenges of methods to gain insight into prevalence of forms of CHW
Table 3 Opportunities and challenges of methods to gain insight into drivers and dynamics of forms of CHW
Table 4 Opportunities and challenges of methods to gain insight into the impact of (interventions on) forms of CHW
Table 5 Opportunities and challenges of mixed-methods design to gain insight into forms of CHW
1 Some countries also adopt their own definitions of hazardous child labour, such as Côte d'Ivoire.
Forms of children's harmful work (CHW) are notoriously difficult to identify, assess and understand. Common definitions of child labour such as 'worst forms of child labour' and 'hazardous child labour', as put forward by the International Labour Organization (ILO), are premised on notions of hazard and risk 1 but do not include an explicit consideration of harm (Maconachie, Howard and Bock 2020). Harm can be considered 'an identifiable negative impact on an individual or household arising from a specific workplace hazard' (ibid: 8) and CHW 'refers to any work that children undertake that actually results in harm to the child and/or their household' (Sabates-Wheeler and Sumberg 2020: 8). Forms of CHW are often hidden from sight, and their prevalence, drivers and impacts are highly context specific (Maconachie et al. 2020). Research on CHW therefore requires careful consideration of its methodological approach and individual methods. This paper provides a review of methods that are commonly used for studying child labour and children's engagement with work (sections 1-4) in order to inform the research design of the Action on Children's Harmful Work in African Agriculture (ACHA) programme (section 5). In doing so, it explores the opportunities and challenges of common methods and proposes an innovative methodological landscape for studying CHW. This review is guided by several principles. First, we consider methods that are used for studies across the spectrum of child labour and children's work. However, in keeping with the focus of ACHA, we reflect specifically on how those methods are used for studying children's engagement with work that may be considered hazardous or harmful. Second, we adopt an interdisciplinary approach, considering methods that are used across the social sciences, including anthropology, childhood studies, economics and geography, among others. Third, we pay special attention to child-centred research. Methodologies that adopt a child-centred approach typically try to understand different types of harm in relation to emic notions of childhood. Studies without such an approach tend to pay little attention to the nuances of work, view children as victims of circumstances or ignorance, and push for abolishing children's work through a wholesale ban. Fourth, we consider the extent to which methods are inclusive and incorporate views and voices across identities and groups, notably gender, age, disability, religion and faith, and ethnicity.
We review three types of methods: survey methods; qualitative and participatory methods; and certification methods. Survey methods range from nationally representative multi-purpose household surveys to purposive child labour surveys that are administered to smaller populations. Generally, surveys collect information with relatively large thematic and population coverage but with relatively limited participant involvement. They are rarely administered directly to children.
Qualitative and participatory methods range from focus group discussions and individual interviews to participant observation and visual methods. These methods tend to involve the research population more actively than surveys do, although levels of involvement depend on the specific method. Although qualitative methods are commonly applied to smaller samples, it is important to note that some participatory methods themselves can operate well at scale. These include individually oriented and group-based activities such as drawing, structured visuals, qualitative interviews and focus groups carried out with larger groups.
Certification methods are tools used within certification schemes (Ton et al. 2020). Firms use certification to reduce reputational risk in relation to children's work within their supply chains. To check compliance, data on production and household livelihoods are collected by producers and auditors, usually with little engagement with children or their caregivers. While the methods within such mechanisms are akin to surveys and qualitative methods, we review them separately because their use is prescribed as part of voluntary standards and data are collected indirectly (through producers). Researchers' inability to feed directly into processes of research design and implementation makes these methods less flexible than primary academic research.
We also review mixed methods as an overall approach to research design. Studies that have adopted mixed-methods research designs explicitly seek to achieve both breadth and depth by combining a variety of methods. Methods can be mixed in parallel or sequentially. Mixing methods can yield both broader analytical coverage and greater participant involvement.
Inevitably, this framing presents an oversimplification of the range of methods and their many intricacies. Furthermore, many studies adopt a combination of methods and data, often in implicit ways without making reference to a mixed-methods approach (such as using different qualitative and participatory tools in small-scale studies). This categorisation serves as a framework for organising this review rather than a strict delineation.
The remainder of this paper is structured as follows. First, we provide an overview of methods as outlined above, exploring their use within studies of child labour and children's work. Second, we assess the merits and challenges of specific methods for assessing the prevalence of forms of children's harmful work, drivers and dynamics, and impact (in line with ACHA's core research foci). Finally, we propose research design principles and a methodological landscape for studying CHW, aiming to inform research design for the ACHA programme.
Review of methods
This section provides an overview of methods, using the categorisation listed above.
Survey methods
A wide range of survey methods exist for studying children's engagement with work, ranging from large-scale surveys that collect information about work alongside many other topics, to purposive small-scale and child-centred surveys. We explore some of the most common survey methods below.
National multi-purpose household surveys
National multi-purpose household surveys collect information across a range of issues and are representative at country level. Living Standards Measurement Studies (LSMS), Multiple Indicator Cluster Surveys (MICS), Demographic and Health Surveys (DHS) and Labour Force Surveys (LFS) have been widely used to gain insights into the prevalence and patterns of child labour at a national level (Bhalotra and Tzannatos 2003; International Programme on the Elimination of Child Labour (ILO/IPEC) and Statistical Information and Monitoring Programme on Child Labour (SIMPOC) 2007; Understanding Children's Work 2017). These surveys often do not produce detailed information on child labour but collect information on employment of household members, characteristics of the household and its members, and wider household living standards, which can help to understand the context in which child labour takes place (Verma 2008). Definitions of child labour in such surveys are anchored in the relevant ILO conventions (notably Conventions No. 138 and No. 182); in turn, the International Conference of Labour Statisticians (ICLS) translates these conventions into statistical terms and sets standards for the measurement of child labour (ibid.).
The narrow focus of these conventions and their rigid standards result in a similarly narrow remit in most multi-purpose surveys. Nevertheless, surveys differ in their potential to explore children's engagement with work. Within an LSMS, for example, the ability to cross-reference information about children's work with data on school attendance and educational attainment, as well as demographic and socioeconomic characteristics of the household and its members, has contributed to the popularity of the LSMS for studying child labour (Bhalotra and Tzannatos 2003). An MICS provides insights into children's engagement with unpaid household chores, which are not captured in many other surveys (Dayıoğlu 2013). A notable downside of the MICS is that information about health and nutrition is only collected for children under five years of age, which limits the ability to link information about children's engagement in work to health and nutrition outcomes (ILO/IPEC-SIMPOC 2007). Similarly, the usefulness of the DHS is limited by its small range of questions about employment, which are only asked of individuals aged 15-49 years. The LFS is the most comprehensive in terms of capturing information about employment, but age brackets vary across surveys, with lower age thresholds ranging from 10 to 15 years (Desiere and Costa 2019). Table 1 provides a comparative overview of national household surveys and their potential use for studying child labour.
Child labour surveys
Child labour surveys include a wide set of purposively developed survey instruments, ranging from large-scale household-based surveys to small-scale surveys with street children (Verma 2008). The Statistical Information and Monitoring Programme on Child Labour (SIMPOC), the statistics and monitoring unit of the ILO's International Programme on the Elimination of Child Labour (IPEC), has played a key role in developing survey-based instruments and in advising national governments on how to generate high-quality data on child labour (SIMPOC n.d.).
Household-based child labour surveys use the household or family unit as an entry point into understanding patterns of child labour, with parents/guardians and children acting as respondents (ibid.). SIMPOC has developed questionnaires and methodologies in support of National Child Labour Surveys (NCLSs). Questionnaires can be implemented as a standalone survey or be attached to other surveys (SIMPOC/ILO n.d.), such as the LFS. Questionnaires commonly consist of three parts: (1) a household roster; (2) an adult questionnaire; and (3) a child questionnaire (children aged 5-17) (ILO 2017). Given their purposive nature, NCLSs provide detailed information about child labour, certainly in comparison to multi-purpose household surveys. For example, as they include children aged 5 and upwards, they allow for assessing the age at which children started working (ILO 2015). However, the questionnaires do not capture engagement in domestic chores or unpaid care work and therefore do not provide a full representation of children's engagement with work, particularly for girls, who are more likely to be engaged in housework.
Child-focused surveys include children and/or youth as respondents. A well-established survey is the School-to-Work Transition Survey (SWTS), which aims to gain better insights into transitions from school into work, and to understand transitions into the labour market for youth (Elder 2009). The survey is directed at youth aged 15-29 years, and its underlying sampling methodology aims for national representation. Although it is possible to use SWTS for producing child labour estimates, its main objective is to supplement the information collected through LFS or NCLS and provide detailed data about the supply of youth labour (ibid.).
Another category of child-focused surveys includes those that are developed and implemented as part of specific research studies. These vary widely in scope, sampling and types of questions asked. Examples include a six-country study that assessed whether child domestic work can be considered as a worst form of child labour, which administered questionnaires to more than 3,000 children aged 6-18 years (Gamlin et al. 2015), and a study of work and education in slum settlements in Dhaka among 2,700 children aged 6-14 years (Quattri and Watkins 2016).

School-based surveys collect information about how work affects school attendance or performance and attitudes to schooling (SIMPOC n.d.). Schools are used as primary sampling units, with questionnaires being administered to children in those schools. Surveys may also include interviews with teachers, administrative staff and parents, and may also include a control group of children out of school (Verma 2008). While other surveys generally limit questions to school enrolment and attendance, school-based surveys seek to generate data about how much time children spend in school, how often they miss school because of work, and their ability to engage in homework and extracurricular activities (Guarcello, Lyon and Rosati 2005). Large-scale school-based surveys were undertaken in the early 2000s with support from ILO/IPEC in Brazil, Kenya, Lebanon, Sri Lanka and Turkey, among other countries (ibid.).
Establishment surveys focus on the demand side of child labour and collect information from employers or labour intermediaries. These surveys seek to interrogate the situation in the workplace, with questions focusing on the nature of work, hours of work, remuneration and pay, injuries and illnesses, and other conditions of work. Establishment surveys are rarely representative of all establishments employing children, as identification of such establishments is inherently problematic (Verma 2008). However, because they use the place of employment as an entry point, they can be valuable for collecting information about forms of labour for children who live outside the household unit or at non-registered locations, such as children living on the streets (ILO/Special Action Programme to Combat Forced Labour (SAP-FL)/IPEC 2012).
Impact evaluation surveys
Impact evaluation represents a growing body of research within which surveys are used to collate information about children's engagement with work. They often employ multi-purpose surveys with varying degrees of detail on children's work, typically based on the examples reviewed above. While evaluations of programmes that seek to reduce child labour as a primary objective tend to include more detail about children's engagement with work, this is less often the case for evaluations of interventions in which reducing child labour is a secondary objective. The policy area of social protection is a case in point.
Social protection has become a key policy area for reducing child labour (ILO 2018). Consequently, an increasing number of studies consider the impact of social protection programmes - including schemes such as unconditional cash transfers, conditional cash transfers and public works programmes - on children's engagement in work (Dammert et al. 2018; de Hoop and Rosati 2014).
In the large majority of cases, evaluations aim to capture programme effects on an array of outcomes, and child labour tends to be only one such outcome, resulting in the collection of relatively limited information about children's work.
Small-scale surveys
The use of survey methods is not limited to collection of large-scale data. Qualitative researchers also use small-scale quantitative surveys to develop their knowledge of the research setting, introduce themselves, and to get specific data that are important to their analysis of children's lifeworld, work, education and social position (Dyson 2014a; Hashim 2004; Katz 2004; Reynolds 1991).
In her research on child labour in the Zambezi Valley, Reynolds conducted a census of 12 families in her research setting (Reynolds 1991). She had already worked in the community before commencing the study and thus had a broad knowledge of it. By contrast, in her study in south-eastern Sudan, Katz saw her village-wide household survey as a way to introduce herself and her research, while constructing a socioeconomic and cultural profile of the community. The survey illuminated the diversity of economic activities people were engaged in, both on- and off-farm, and their seasonality (Katz 2004). In the context of a child-centred study on everyday involvement in rural household labour in a remote village in the high Indian Himalayas, Dyson undertook a full village census on the age, educational background and occupation of all household members (Dyson 2014a).
Qualitative and participatory methods
Qualitative studies span a range of scales, from small case studies zooming in on a limited number of people to large-scale studies working with samples of several hundred. Some methods are used successfully with a limited input of time, while others require a substantial investment of time to develop dense relations within the community and to capture the culture and context (Lancy 2015: 387). Wessells, on the basis of his work with child soldiers in East Africa (interviewed in Johnson and Lewin, forthcoming), has suggested that 'mini-ethnographies' should be carried out as part of any research seeking to understand child protection in community settings. This is particularly relevant in gaining insights into CHW that may at first be invisible, and in understanding how intergenerational relationships play out in work-related family, community and employer/employee practices and decision making. Adequate time also needs to be allowed for adults to accept the participation of children and to support them to be part of the research (Chawla and Johnson 2004).
A wide range of methods is available within the qualitative and participatory toolbox. They are rarely used in isolation but rather in combinations that serve to elucidate different aspects of a research question. Increasingly, more traditional methods such as interviews and observations are used alongside creative methods (Boyden and Ennew 1997; Mitchell 2006; Punch 2001b). The sequencing of methods is flexible and depends on whether the aim is to map a set of factors that can be explored in depth or at scale later in the research, or to unpack processes surrounding children's work.

Participant and other types of observation

Participant observation is a key element of ethnographic studies throughout the time spent in the research setting. In early phases of the research, observations are broad, focusing on grasping the general organisation of everyday life, including the work that children do. Later in the research, observations become gradually more focused on specific aspects of children's lives. Observations can involve random observation of each child in the sample throughout an entire day to discover the range of activities they engage in, accompanying children (and adults) to learn from them and participate in their work, and talking to children and adults for many hours (Dyson 2014; Johnson, Hill and Ivan-Smith 1995; Katz 2004; Punch 2001a; Reynolds 1991). The time spent informally with various participants gives them time to engage with the research. In her study, Punch (2001b) questions the extent to which adult researchers can do participant observation with children. She argues that there are limits to participation because although researchers can join children's games and work, the researcher will always be a different type of player in the game (ibid.: 165; Atkinson 2019).
Time-use studies and diaries to map children's work are time consuming but among the best ways of gaining insights into the multiple tasks that children undertake during a day and their ability to combine different chores, work and play. Children's work can be recorded in different ways, such as random 'snapshots' of labour allocation, 24-hour reported recall, extended periods of detailed observation, and written diaries (Robson 2004; Reynolds 1991).
In recall interviews and diary-writing, children are asked to recount their activities in as much detail as possible, paying attention to the timing and duration of activities. However, both methods tend to under-report work because children forget tasks that they do not find important, tasks they are not allowed to do or find embarrassing, and tasks they do simultaneously with other work, such as childcare (Dyson 2014; Johnson et al. 1995; Robson 2004: 199). The recording of time use needs to be planned around the agricultural calendar, school holidays and even the time of day (Robson 2004; Tudge and Hogan 2005).
Photography has also been used to observe children's day-to-day activities, including their work. For example, Bolton, Pole and Mizen (2001) conducted research into the working and economic lives of 11-16-year-olds, who were tasked with 'making photographs' of their part-time jobs. PhotoVoice has become increasingly popular in the past decade for undertaking research with children on a wide range of topics, including in the global South. In South Africa, for example, the method was used with children to understand their concept of 'self' (Benninger and Savahl 2016) and their perceptions of the natural spaces around them (Adams, Savahl and Fattore 2017). However, the method goes beyond mere observation; it helps in 'making the familiar strange' to both researchers and participants and thus serves as a useful mediation tool to broaden discussions with participants, 'complementing, augmenting, confirming and enlarging insight from other methods' (Bolton et al. 2001: 517; Mizen and Ofosu-Kusi 2010). The method can also be adapted so that it can be used in participatory research with disabled children, such as in Sri Lanka and India (Wickenden and Elphick 2016).
Participatory and creative methods
Creative methods are often used to engage children and other research participants, to democratise the research process by encouraging participants to become collectors of evidence, and to encourage free expression (de Benítez 2011; Johnson, Hart and Colwell 2014; Mizen and Ofosu-Kusi 2010). These methods are considered more inclusive than other data collection techniques, in part because they involve play. They also often reveal 'subjugated knowledges' that participants would not articulate in oral or text-based media. Because images are multivalent, it is important that researchers do not make assumptions about their content, and that their creators are asked to explain and clarify their meaning(s) (Atkinson 2006; Ennew 2003).
Visual methods, including drawing, mapping and photography, are accessible methodological devices to use with children of all ages. Free drawing, open mapping and photo elicitation have been used with children and youth to gain a better understanding of place, space and young people's everyday lives (Bolzman, Bernardi and LeGoff 2017; Bowles 2017; Johnson 2011; Mitchell 2006). All three methods can be used in initial fieldwork stages to understand not only the context, but also central concepts of children's work. Later, more structured visual methods, such as 24-hour clocks of daily activities, body maps and Venn diagrams, can be used to gain more in-depth, nuanced understandings of causal pathways, connections and relationships. They can also capture children's and young people's feelings about their everyday lives and the social elements encouraging them to work (Hastadewi 2009: 481; Johnson, West and Gosmann, forthcoming).
Ruth Leitch (2008: 39) notes that, despite the common use of drawing in small in-depth studies, there are also good examples of its use for large-scale audits. A large-scale policy consultation process in Northern Ireland, for example, collected drawings from 1,100 children and young people aged 5-18 (Kilkelly et al. 2004). Other large-scale and qualitative research projects have used drawing within their qualitative methodology to complement quantitative data (Crivello, Camfield and Woodhead 2009; Crivello, Morrow and Wilson 2013).
Performative methods and drama can involve dramatic play, individual or group mime, improvisation, or even a rehearsed performance. Katz (2004) asked children to make a model of their village on a patch of ground, then gave them a set of miniature toys (animals, machinery and people), asked them to identify each toy and then to show her 'life in the village'. All the children (10-year-olds) engaged in extended 'geodramatic play' using their models and the toys, during which Katz involved them in a running commentary that provided insights into their social and environmental knowledge (ibid.: 283). Children often find it easier to communicate through drama than to answer direct questions. Often puppets or other role-play objects are used, creating a layer of distance and anonymity for the children (see Boyden and Ennew 1997; Johnson et al. 2014). This is particularly pertinent in research on CHW as it allows children to discuss forms of work that may be shameful in a more distanced manner. In South Africa, theatre-based research helped to unveil emotional challenges and notions of vulnerability among undocumented migrant youth in Cape Town (Opfermann 2020).
Written, workshop-based methods, including diaries, worksheets, activity tables and spider diagrams, are also methods to explore children's perspectives on their lives (Punch 2001a, 2001b; Thomson 2008). The success of these methods relies on a certain level of literacy, and on children having adequate time. Diaries, for example, can be more or less visual to include younger children or less literate children and young people, with increasing use of video diaries (Buchwald, Schantz-Laursen and Delmar 2009). Another option is to create a daily activity chart, or clock, in which activities are logged at different times, or recounted and drawn onto a timeline (Dachi and Garrett 2003). Worksheets can be prepared for children to complete on different aspects of their lives, complementing drawings and photographs, or being complemented by interviews. Worksheets allow for more detailed information to be obtained on the issues that children have identified as important in their lives. Spider diagrams and activity tables are methods to capture the range of activities and work children do and the local geography of their work (Punch 2001a, 2001b).

Focus group discussions (FGDs) represent a space for children to share their understandings and experiences in an interactive manner but without the pressure of having to engage with a researcher in a face-to-face interview (Gibson 2007; Hoban 2017). Participants in a focus group are often chosen because of their insights and views on different aspects of the research topic. For example, in a study of children's work in northern Ghana, Hashim organised FGDs with male and female senior secondary school students to discuss work, education, adulthood and childhood, and with adult women brewing pito (local beer) (Hashim 2004).
In Ethiopia, Abebe organised FGDs with adults to understand their views of children's work and childhood (Abebe 2008). Dyson's study involved several rounds of FGDs with key child informants and their friends in India, and two rounds with adults (parents of the key informants), focusing on children's work and on household budgeting (Dyson 2014).
In-depth interviews
In-depth interviews can help to explore a certain topic or issue in more detail. Life history or life cycle interviews, for example, aim 'to explore aspects of the social spaces of children and childhood' to understand the relationships that are central to children's psychosocial and material wellbeing (Abebe 2008: 57).
Participatory, creative and/or ethnographic methods can inform the structure and nature of in-depth interviews. Using these methods does not merely constitute a process of piloting but also one of co-construction, iteratively building on ideas throughout the research process. There are also many examples of interview processes that can be made more child-friendly and focused by, for example, carrying out interviews in peer pairs, or using interview props such as puppets, dolls and photos or pictures (Greene and Hill 2005; Johnson et al. 2014).
Semi-structured interviews focusing on children's everyday activities can be an abbreviated form of time-use allocation studies. This was the case in Abebe's study focusing on the activities that children had done, when, where and with whom, while also exploring gender and age differences and contextualising children's activities within the livelihoods of their families (Abebe 2008). Katz (2004) employed ethno-semantic interviews with 10-year-olds to elicit taxonomies of shared knowledge and an understanding of relationships and processes within their community. In these interviews, Katz probed children's practices. If a child mentioned that he/she had picked fruit in the course of a conversation about their daily work, she would seek more information about both the category of 'fruit' and of 'other things that are picked'. Through these interviews, she was able to establish a taxonomy of plant knowledge and place knowledge that helped illuminate the children's understanding of environmental processes and interrelationships (Katz 2004: 282).
Involving children in doing interviews may also work to break down the boundaries between the researcher and the researched. Hecht (1998) conducted what he called 'radio workshops' with street boys in Recife, Brazil. He gave them a tape recorder and microphone and asked them to interview each other. He found that children often responded better to their peers, who often asked better questions than the researcher, including questions that the researcher had not thought of (Boyden and Ennew 1997: 127). Chin (2007) found that the observations and discussions she had with children doing research with her produced more interesting and reflexive material than the interviews themselves. Involving children as researchers also entails risks that need to be carefully negotiated. A study that adopted 'participatory' docudrama with traditional Qur'anic students (almajirai) in Kano, northern Nigeria, found that the research afforded children the opportunity to voice their concerns and challenge stereotypes but also led to suspicion and accusations from within the community (Hoechner 2015).
Certification methods
Finally, we explore methods related to certification systems in agricultural value chains. These are mostly employed by the private sector, outside of research settings. In considering certification methods, we refer to tools that gather information about children's engagement with work within certification programmes.
Certification programmes emerged in the 1980s in response to consumer demands for sustainability and fairness and their willingness to pay for sustainably produced food items.
The first certification programmes concerned organic production, especially in Organisation for Economic Co-operation and Development (OECD) countries. Later, in the 1990s, Fairtrade emerged in response to a greater focus on fairness in value chain relations involving smallholder producers in developing countries. At the same time, the retail sector in Europe began developing certification schemes around food safety and good agricultural practices (GAP), which resulted in the EurepGAP and later GlobalGAP standards. Certification and voluntary standards schemes are centred on tropical export crops, especially bananas, cocoa, coffee, sugar and palm oil. A significant part of the total cocoa production of Ghana and Côte d'Ivoire is under one or more certification schemes (ISEAL 2019).
Four types of mechanisms for data collection can be used to glean insights into children's engagement with work, and these are discussed below.
Audit reports
Audit reports represent the main tool for information gathering within certification schemes and voluntary standards systems, such as Fairtrade and the Rainforest Alliance. Typically, control points in the audit differ for certification of individual producers (e.g. plantations or larger producers), and for group certification (e.g. where the production is scattered among many smallholder producers).
Group certification requires an accredited Internal Control System (ICS), within which data on quality are managed by each group or firm. Medium or larger producers are audited directly, without an ICS. The other process is Chain of Custody Certification, with requirements about how the product is processed and combined in value-added products, moving downstream from producers to consumers. For example, the Rainforest Alliance is currently developing a framework to enable routine audit processes to collect robust evidence without significantly increasing costs or administration for farmers, producer groups or companies. It is based on information that should already be available to auditors, such as field observations, maps, farm or group records, and interviews.

The quality of the audits (third-party certification) is a concern. Often there is a layered system that controls the accredited audit firms, who in turn control the compliance of certification holders (especially producers). For example, the Forest Stewardship Council (FSC) has an agency, Assurance Services International (ASI), which provides this control-on-control.
Common core indicators
Despite the diversity of data collection across schemes, there is a tendency to harmonise the information collected in certification schemes using common core indicators. ISEAL supported the development of linked, geographically referenced data sets for basic data collected in each scheme. The ISEAL common core indicators can be mapped against the indicators for the United Nations Sustainable Development Goals (SDGs). Some indicators directly refer to children's activities, including school attendance, distance to primary school, number of farms restricting the use of chemicals by pregnant women and children, food security (e.g. months and days of inadequate access to food), perceived change in quality of life, and perception of change in level of control over household decisions.
ISEAL works on a range of innovation projects to harmonise data flows within and between certification schemes (ISEAL 2019). This is partly to identify data that are common and easily collectable by implementers, auditors and evaluators, and to generate systems to store, link and analyse this information and open its access to researchers. In addition, ISEAL has developed guidance for structuring data-sharing agreements for personal and sensitive data.
Outcome and impact evaluations
In addition to data generated within certification schemes by producers and auditors, the minimum requirement for ISEAL members is that certification schemes (scheme owners) undertake at least one in-depth impact evaluation per year that addresses at least two questions:
• Is the intervention producing the desired and intended sustainability outcomes or impacts?
• What unintended effects (positive or negative) resulted from the intervention?
In this case, the intervention refers to the implementation of certification schemes or voluntary standards systems. Data on their intended and unintended outcomes constitute a potentially useful source of information in relation to children's work. Data cover either all certification holders or a sample of them, and a small number of studies are in-depth impact evaluations. For example, the Rainforest Alliance's approach to assessing its certification system (which was developed together with the Sustainable Agriculture Network, SAN) includes programme-wide monitoring, sampled monitoring and focused research. While data for the first two types of assessment are collected within operations and as part of audits, data for focused research tend to be collected by an independent third party (ISEAL Alliance 2017).
In the past 15 years, these requirements have resulted in a large (perhaps disproportionate) body of research on the impact of certification systems, including various systematic reviews (Blackman and Rivera 2010; Blackmore et al. 2012; Oya, Schaefer and Skalidou 2018; Schleifer and Sun 2020). Most of these studies focus on intended outcomes, such as income and yield. Only a few discuss the impacts or outcomes related to (children's) work as (intended or unintended) effects of certification.
Child Labour Monitoring and Remediation System
Another relevant data source follows from the Child Labour Monitoring and Remediation System (CLMRS), a relatively novel component of some certification schemes. A CLMRS tends to use local facilitators to collect in-depth information on all households in a region. Nestlé, Mars and other processing brands, for example, implement CLMRS as part of their voluntary standards systems. Nestlé (2019) reports that, by the end of 2019, it had identified more than 20,000 cases of child labour. This includes hazardous children's work, according to the definition used in Côte d'Ivoire, such as work with sharp tools and exposure to agro-chemicals. In other words, these systems offer purposive quantitative information on child labour or hazardous work within the production and processing of specific products.
3 Investigating prevalence, drivers and dynamics, and impact

Having discussed the range of methods in detail, we move on to discuss their use in relation to investigating three key questions concerning CHW, specifically: (1) prevalence; (2) drivers and dynamics; and (3) impact. For each of these questions, we explore the opportunities and challenges presented by each method.
Prevalence
The question of prevalence refers to gaining insights into the scale and scope of different forms of CHW. Table 2 presents an overview of the opportunities and challenges of the different methods.
Surveys have been widely used to gain insights into whether or not children participate in work, and to generate numbers about prevalence at a wider (national or sub-national) scale. The ability to collect information across a representative sample allows for quantification of the occurrence of children's engagement with work and understanding the scale of the problem across age, gender and other lines of disaggregation. Indeed, household surveys such as LSMS, MICS, LFS and others represent key instruments for generating estimates about child labour and to monitor progress towards SDG 8 (UNICEF and ILO 2019). Quantitative components within mixed-methods designs may also serve to provide insights into prevalence of engagement with certain activities. Mixed-methods studies that include quantitative data from large-scale household surveys usually include prevalence estimates.
However, surveys are relatively ill-equipped to provide more nuanced understandings of children's engagement with work, and particularly CHW. We identify three reasons for this.
First, the rigid nature of survey questionnaires generally limits opportunities for understanding CHW. As noted by Bhalotra and Tzannatos (2003), and based on our review of survey methods, questions regarding categories of work tend to be crude and generally only allow for distinguishing between work for wages, work on family farms or enterprises, or domestic work. Surveys that underpin impact evaluations of social protection programmes also vary in the level of detail and the type of data that is collected about children's work (de Hoop and Rosati 2014). Purposive child labour surveys tend to be less bounded by stipulations within the ICLS resolution and therefore offer more flexibility. A downside of most of these purposive surveys - in terms of estimating prevalence - is that they are not nationally representative and so will only provide a partial picture.
Second, a prerequisite for identifying whether or not children engage in certain types of activities is their inclusion in data collection exercises. National household surveys are notorious for excluding the most marginalised groups, including children living on the streets or in institutions, and refugee populations (Bhalotra and Tzannatos 2003; Global Coalition to End Child Poverty 2019). This is particularly problematic when studying CHW as these children tend to be at greater risk (Bhalotra and Tzannatos 2003) but their work may be hidden from view.
Third, information is often provided by a proxy respondent rather than by children themselves, with caregivers answering on children's behalf. This may lead to inaccurate information: while caregivers may be well informed about their children's engagement in work, they may not have precise information about how children allocate their time or about working conditions; social and cultural values may also lead to under-reporting (Dammert et al. 2018). Equally, children may overestimate time spent on certain work activities or domestic chores (Dziadula and Guzmán 2020). While self-reporting is generally seen as more accurate and therefore preferable (Desiere and Costa 2019), administering questionnaires to both adults and children will generate the most accurate results (Dziadula and Guzmán 2020).
Qualitative and participatory methods are vital for obtaining detailed and context-specific data about children's activities, their engagement with different forms of work, and the extent to which these are considered harmful and by whom. Prevalence mapping can be undertaken to gain a participatory understanding of how widespread certain forms of harmful child labour are. Such mapping can be built up and understood as an iterative process as more harmful work is made visible and trust is established with participants.
Data obtained through qualitative and participatory methods can also serve to develop survey questionnaires in order to improve their ability to gain insights into the prevalence of forms of CHW. Participatory and observation methods can help to develop categories of activities and time intervals that could be adopted in time-use surveys, for example. Tudge and Hogan (2005) describe an ecological approach to recording observations of children's lives, with different tasks and activities being categorised and then logged at defined time intervals by researchers who record what the child is doing. An example of a sequential qualitative-quantitative study to improve prevalence estimates of forced labour comes from the ILO's IPEC and SAP-FL (ILO/SAP-FL/IPEC 2012). This study started with qualitative investigations and later developed and trialled quantitative survey tools.
Finally, certification methods can also provide insights into children's participation in certain types of activities. These certification systems may offer information beyond standard categories (adapted to local context and needs). The CLMRS, for example, collects metrics about the extent to which children work in agriculture and how many are involved in hazardous tasks (based on the CLMRS's own definitions). This offers information about prevalence within a certain industry or value chain. However, reliability of data may be a concern when using this information.
Drivers and dynamics
Different methods offer different strengths and challenges for understanding drivers and dynamics of CHW, as presented in Table 3.
Surveys are widely used for studying drivers and dynamics of child labour and children's engagement with work. Macro-level studies focus on correlates at country level and are mostly premised on cross-country data. The Understanding Children's Work (UCW) programme, for example, considered country-level variables such as gross domestic product (GDP) per capita, ratification of ILO Convention No. 138, exports of clothing and textiles, and the Fragile States Index to understand differences in trends across countries (UCW 2017).
Micro-level studies are much more common and typically explore the role of demographic and socioeconomic characteristics of households and their members in explaining patterns of children's engagement with work. In Bangladesh, for example, the Household Expenditure Survey was used to investigate the role of household poverty and wealth in child labour. Regression modelling was used to estimate associations between independent variables, such as household income and educational achievement, and the dependent variable of children's work (Amin, Quayes and Rives 2004). The Young Lives study has led to research on the determinants of work participation and school attendance and their trade-off in Ethiopia (Haile and Haile 2012). School-based surveys have also been used to understand how children's engagement with work is associated with academic performance (Guarcello et al. 2005).
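To make the kind of regression modelling described above concrete, the sketch below fits a logistic regression of children's work participation on household characteristics. It is a minimal illustration only: the variable names, the synthetic data and the specification are hypothetical assumptions for this example, not the model or data used by Amin, Quayes and Rives (2004) or any other study cited here.

```python
# Minimal sketch, assuming hypothetical variables and synthetic data
# (not the actual survey data used in the studies cited above).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 1000
df = pd.DataFrame({
    "log_income": rng.normal(8.0, 1.0, n),       # log of household income
    "mother_schooling": rng.integers(0, 13, n),  # years of schooling
    "child_age": rng.integers(5, 18, n),
    "girl": rng.integers(0, 2, n),
})
# Synthetic outcome: work participation falls with income, rises with age.
logit_p = 2.0 - 0.4 * df["log_income"] + 0.15 * (df["child_age"] - 5)
df["works"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Logistic regression of work participation on household characteristics.
model = smf.logit("works ~ log_income + mother_schooling + child_age + girl",
                  data=df).fit()
print(model.summary())  # coefficients indicate associations, not causal effects
```

As the final comment signals, with cross-sectional data of this kind the estimated coefficients describe associations only; longitudinal data or quasi-experimental designs would be needed to support causal claims.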
The caveats outlined in section 3.1 in terms of the ability of surveys to capture the prevalence of children's hazardous or harmful work also hold for drivers and dynamics. The sets of questions that are included in surveys are often too limited to allow for a detailed understanding of factors that are associated with, or cause, CHW. It is also important to note that, due to the cross-sectional nature of many surveys, most studies allow for investigating associations but do not offer insights into causality. Exceptions include studies that use longitudinal data and econometric methods that allow for estimating causal effects. In Ghana, for example, three waves of the Ghana Living Standards Survey were used to investigate determinants of child labour (Blunch, Canagarajah and Goyal 2002).
Qualitative and participatory methods - particularly when used in combination - can uncover 'subjugated knowledges' and everyday granular realities and constraints that are not necessarily articulated in surveys but are vital to understanding why children engage in work. For example, photography and creative methods are suited to making different aspects of children's lives and their feelings more visible. Drama could be used to animate discussions about work, unequal power relations, expectations of labouring, and children's ability to influence their workday and load. FGDs, interviews, participant observation, diaries and mapping exercises all present vital tools for unveiling underlying choices and constraints in terms of work, both from the perspectives of children and others.
As is the case for survey methods, longitudinal data would allow for greater uncovering and understanding of factors playing into children's engagement with work. At present, there are no longitudinal mixed-methods studies specifically on the dynamics of child labour (e.g. how children's workloads change over time, or how changes in a household's poverty level may affect children's labour participation). This shortcoming has also been highlighted by other authors (Camfield 2014; Ibrahim et al. 2019; Kuimi et al. 2018).
Relatedly, narratives of change are necessary to gain insights into the drivers and dynamics of children's work. A CLMRS may provide part of these narrative accounts within households that are at risk. Local facilitators are well suited to identify illustrative cases, particularly where children are not attending school or may not be registered at the health post when injured or ill. In several of these CLMRSs, a social worker, paid by the company, is responsible for visiting households. They raise awareness about the tasks that children of that age are considered to be capable of doing or not doing. However, a downside of working with local facilitators is that they are relatively unskilled as researchers.
Impact
We explore how different methods can shape an understanding of how CHW impacts various aspects of children's lives, and how interventions impact CHW (see Table 4).
Impact of child labour on children's lives
Survey methods are commonly used to assess the impact of child labour or children's engagement with work on different aspects of their lives.
Many studies are particularly interested in associations between work and education. For example, NCLS data from 12 countries were used to investigate associations between child labour and educational attainment (ILO 2015). Young Lives data underpinned a study of the impact of child labour on educational attainment in Vietnam (Mavrokonstantis 2011). Several mixed-methods studies (Orkin 2012; Woldehanna, Jones and Tefera 2008) also explored the impact of child labour on school attendance in Ethiopia. Qualitative and participatory studies can uncover intended and unintended consequences of work, placing these within contextual understandings of harm.
Several observations are important. First, as noted above, most analyses are based on cross-sectional data and only allow for gaining insights into associations, not causality. Finally, the issue of temporality is key in understanding how work affects children, and whether or not it may be harmful (Maconachie et al. 2020). Work may only cause harm if children are exposed to a certain risk associated with that work over a longer period of time, and harm may also present itself long after children have stopped engaging in this work. For example, agro-chemicals may only cause harm if children are exposed to them over a longer period, while the resulting effects could be immediate (e.g. chemical burns), medium term (e.g. respiratory problems) and/or long term (e.g. affecting reproductive health). While this range of methods is relatively well equipped to pick up intensity of exposure through studying time use, few methods have enough of a longitudinal perspective to pick up on medium- to long-term effects, particularly if the potential for those effects is not yet known.
Impact of programmes on child labour
In many impact evaluations, surveys are central to the research design and constitute the primary data source for estimating programme effects, particularly in (quasi-)experimental settings. Evaluations cover programmes that have the reduction of child labour as a primary objective (e.g. educational interventions) and programmes that have it as a secondary objective (e.g. social protection).
A notable observation in relation to quantitative impact evaluations is that child labour (or children's work) tends to be loosely defined. Studies - and their underlying surveys - are often designed without clear reference to international guidelines or academic literature that problematises dominant understandings of child labour or children's engagement in work. This is certainly the case in relation to social protection. Evaluations of social protection programmes and their effects on child labour rarely follow the ICLS resolution (Dammert et al. 2018). Notions such as child labour or children engaged in productive activities are used interchangeably, with some evaluations denoting any type of work as child labour (ibid.). Even evaluations of programmes that focus squarely on reductions in child labour concede that there is no agreed definition of child labour and adopt their own operationalisation, such as in relation to educational programmes in Panama (Andisha et al. 2014).
Qualitative and participatory methods are crucial for uncovering intended and especially unintended impacts of interventions. Child-centred impact assessments can use a range of tried and tested visual, narrative and mobile methods, as discussed above, including photo narrative workshops, sorting and ranking different work activities, observation of children's and young people's work, and using matrices with children's indicators for health and wellbeing. Time allocation methods will inform the understanding of how work fits with children's everyday realities, and gender-disaggregated data and analysis will be important. These methods combine observation and statistics. Impact work can be carried out with children, parents and other stakeholders in employment and the informal sector to compare different intergenerational perspectives. Drama could also be used to elicit reactions to interventions aiming to abolish child labour.
A CLMRS that exists within certain certification systems may offer specific information or scope for collecting data about the impact of certification on children's engagement with hazardous or harmful work. These multi-stakeholder and area-based approaches are among the more promising interventions to address the issue of hazardous child labour (ICI 2011; ILO 2018). Remediation activities are at the heart of the efforts of a CLMRS and involve supporting children, their families and communities to remove children from a situation of risk. The purpose is twofold: to try to prevent children from doing hazardous work in the first place; and to help children who are engaged in hazardous work to stop. The majority of remediation activities to date have focused on education, activities to improve family income, and assistance with farm-related work (Nestlé 2019). With the CLMRS still in its piloting phase, implementers of these initiatives are looking for research that can help them to perform these activities more effectively, and in such a way that the costs are covered as part of business strategies and the sharing of risks and rewards in supply chains.
4 Mixed-methods design
Next, we consider the use of mixed-methods design in studies of child labour and children's work.
For the purposes of this review, we define the term 'mixed methods' as the combination of quantitative and qualitative approaches (excluding studies that only combined multiple qualitative or multiple quantitative approaches). For feasibility, we considered only studies that focused on child labour or children's engagement with work as a main outcome of interest. A total of 10 studies fitting these criteria were identified.
In this section, we provide an overview of mixed-methods designs that were used to underpin the studies identified, and assess the opportunities and challenges for using mixed methods when studying CHW.
Mixed-methods studies of child labour
We follow Creswell et al.'s approach to mixed-methods study designs for the reviewed studies, highlighting four major types of mixed-methods designs, each of which has many variants (Creswell, Plano Clark and Garrett 2003). First, in the triangulation design, both qualitative and quantitative data collection and analysis take place concurrently but separately. The findings from each method are then brought together in the final interpretation phase. Second and third, the explanatory and exploratory designs constitute sequential two-phase mixed-methods designs whereby qualitative and quantitative data collection and analysis take place at different times and the findings explicitly aim to build on each other. For example, in an explanatory sequential design, the qualitative data collection and analysis help to explain the initial quantitative results. Fourth, the embedded design is a mixed-methods design in which one data set provides a supportive, secondary role in a study based primarily on the other data type.
Two of the identified studies that used a mixed-methods design to study child labour as one of their main outcomes of interest (Ghorpade 2017; Zakar et al. 2015) employed a concurrent triangulation design, whereby quantitative and qualitative data collection took place at the same time and findings were combined in the final analysis phase.
Half of the studies used a sequential design, mostly starting with the analysis of quantitative survey data, followed by in-depth qualitative work for more nuanced insights. Two studies used sequential mixed-methods designs with multiple qualitative and quantitative rounds that sequentially informed and built on each other (Orkin 2012; Verité 2016). The latter study presented a major multi-country research project on forced labour conducted for an international civil society organisation (Verité). The project used a flexible approach with a variety of qualitative and quantitative methods. Country teams could select and adapt methods depending on their on-the-ground research needs and challenges. The authors of this study emphasised that the mixed-methods approach provided them with much-needed flexibility and the ability to adapt data collection efforts to dynamic and often insecure contexts.
Another study (Al Ganideh and Good 2015) used a sequential explanatory design for the data collection but employed a more pragmatic, iterative approach during the data analysis and triangulation. In practice, this meant that the researchers followed up hypotheses that emerged during the analysis of one data source with the other data source and vice versa. This more flexible approach helped the authors to provide more comprehensive insights and explanations than a strict one-way sequential approach might have done. A well-documented drawback of this type of mixing methods is the length of time necessary to develop and adapt methods in a sequential manner (Creswell et al. 2003).

One study (Bhatia et al. 2020) used an embedded design with the quantitative approach being the primary data source. A small number of qualitative stakeholder interviews were undertaken merely to expand on the quantitative findings. Another study used an embedded design with a primary qualitative component and a secondary quantitative survey (O'Kane, Barros and Meslaoui 2018).
Only one study used a more innovative approach to mixed-methods research (Kiss et al. 2020), employing a realist evaluation design with Bayesian network analysis to explore causal pathways to and drivers of forced labour in Nepal. The authors criticised traditional, linear mixed-methods approaches for simplifying the underlying causes of forced labour and failing to acknowledge its interlinked complexities. As a consequence, these traditional studies often produced inaccurate results and had limited explanatory power (ibid.).
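To give a sense of what Bayesian network analysis involves, the sketch below builds a toy discrete network and queries it by exact enumeration. It is purely illustrative: the nodes (household debt, migration), the network structure and all probabilities are hypothetical assumptions invented for this example, and are not taken from Kiss et al. (2020).

```python
# Minimal sketch of a discrete Bayesian network, assuming hypothetical nodes
# and made-up probabilities (not the network or data used by Kiss et al. 2020).
# Structure: Debt -> ForcedLabour <- Migration, queried by exact enumeration.
from itertools import product

p_debt = {True: 0.3, False: 0.7}        # P(household is indebted)
p_migration = {True: 0.2, False: 0.8}   # P(household member migrated)
# P(forced labour | debt, migration) -- illustrative numbers only
p_fl = {(True, True): 0.40, (True, False): 0.15,
        (False, True): 0.10, (False, False): 0.02}

def p_forced_labour(debt=None, migration=None):
    """P(ForcedLabour=True | optional evidence), by enumerating parent states."""
    num = den = 0.0
    for d, m in product([True, False], repeat=2):
        if debt is not None and d != debt:
            continue
        if migration is not None and m != migration:
            continue
        weight = p_debt[d] * p_migration[m]
        num += weight * p_fl[(d, m)]
        den += weight
    return num / den

print(p_forced_labour())            # marginal risk across all households
print(p_forced_labour(debt=True))   # risk conditional on indebtedness
```

In a real application, the network structure and conditional probabilities would be elicited or estimated from the mixed-methods evidence rather than asserted, which is what allows this kind of analysis to represent interlinked causal pathways instead of a single linear model.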
Opportunities and challenges of mixed-methods design
In this section, we assess the potential and challenges of using a mixed-methods research design for studying children's engagement with work, and particularly CHW (see Table 5). Overall, mixed-methods approaches can be powerful as they combine the strengths of various methods. They often help to challenge perceptions and assumptions about children's work and thus can facilitate a more holistic understanding of children's engagement in harmful work. The review of existing mixed-methods studies of children's work also shows that this potential has so far been largely underexplored. The degree of integration between the quantitative and qualitative components in the retrieved studies was generally weak. In the majority of mixed-methods studies, the quantitative and qualitative components were conducted separately and, to a large degree, independently of each other.
With respect to prevalence, mixed-methods design offers real potential for making estimates of children's work more meaningful and reliable. As noted in one study, NGO members stressed that national-level prevalence data were important for highlighting the magnitude of child labour in their advocacy work but were of limited use for gaining the fine-tuned insights needed to guide action and programmes; local-level data and qualitative approaches were perceived as necessary for this (Bhatia et al. 2020). It follows that mixed methods offer promising opportunities for estimating the prevalence of CHW by first gaining more detailed qualitative insights into working conditions and then quantifying prevalence on that basis.
As noted in section 4.1, various mixed-methods studies have used both qualitative and quantitative methods to gain insights into the conditions that children work in (Bhatia et al. 2020;Al Ganideh and Good 2015). Nevertheless, few studies have made full use of the opportunity to preface survey data collection with in-depth qualitative data generation to map the prevalence of children's engagement with work from a more nuanced perspective.
In terms of the drivers and dynamics of child labour, many purely quantitative studies neglect the heterogeneity of child labour, which can significantly reduce the usefulness of findings to inform policy and practice (Krauss 2017). Mixed-methods approaches can facilitate the identification of meaningful sub-groups of child workers and what influences their participation in work, thereby ensuring that research is more inclusive. In Ethiopia, Orkin (2012) employed a sequential, multi-phased mixed-methods design to explore the drivers of both child labour participation and school attendance. Qualitative fieldwork with parents and children was used to identify characteristics of work and school that influenced participation, which was then used to inform and improve analysis of quantitative models on intra-household bargaining with regards to children's time allocation to either school or labour. In other studies with sequential design, the quantitative analysis proposed one or more potential drivers for child labour while the qualitative data was then able to provide details on the potential causal mechanisms behind the observed association (Shaffer 2013). For example, based on the econometric analysis, Woldehanna et al. (2008) found that children with highly educated mothers were more likely to work. Qualitative findings indicated that educated mothers were often more likely to work outside the home, thereby increasing domestic work for their children at home.
A considerable shortcoming, also observed in relation to other methods, is the lack of longitudinal data. This hampers the ability to explore what drives children's engagement with work over time, and limits the ability to understand the impact of children's work. Several authors noted the lack of long-term mixed-methods studies on the medium- and long-term consequences of work on children's health and wellbeing (Ibrahim et al. 2019; Kuimi et al. 2018).
The Young Lives study is a notable exception to this and has underpinned various investigations into the impact of children's work. Several studies (Orkin 2012;Woldehanna et al. 2008) explored the impact of child labour on school attendance. Drawing on both qualitative and quantitative evidence, the authors found that work and school attendance may be successfully combined depending on the time each activity takes and the characteristics of each activity. A potential pitfall when it comes to mixing methods is that tools may be premised on different understandings of what constitutes child labour or harmful forms of work, thereby potentially limiting the extent to which findings can be combined and complement each other. At the same time, these alternative views can facilitate a deeper understanding of why impacts do or do not play out.
Finally, an obvious but necessary observation from this review points to the overall lack of mixed-methods studies on children's engagement with work. This seems to echo the perennial and persistent divide between quantitative and qualitative research observed within development studies (Jones and Sumner 2009). Findings suggest that quantitative studies still mainly focus on assessing the prevalence, drivers and impact of child labour. By contrast, qualitative and participatory research seems more concerned with investigating children's experiences of labour and the dynamics and complexities surrounding it. We also find that the majority of studies focus on obtaining larger-scale data that can be contextualised with more qualitative methods. Relatively few studies adopt fully integrated designs or make use of child-centred and participatory methods in combination with quantitative methods.
Implications for ACHA
This review leads to reflections about implications for ACHA and its research design. Generally, the review of methods shows that there is real potential for ACHA to do something new, innovative and exciting from a methodological point of view. The review identifies two research gaps that ACHA can begin to fill. First, despite the wealth of research on child labour and children's work, few studies use a truly integrated mix of methods. This integration would enable them to think beyond and challenge standard notions of children's engagement with work. Second, only a relatively small body of literature (across all research looking at forms of child labour and children's engagement with work) seems to be concerned with children's hazardous and harmful work. This literature is primarily informed by smaller-scale ethnographic and participatory research due to the complexities and sensitivities surrounding those types of work. ACHA has an opportunity to adopt a research design that integrates methods across disciplinary divides in more holistic ways and, in doing so, to begin to understand the breadth and depth of children's harmful work in agriculture.
Research design principles
Following the review of methodological experiences, we frame implications for ACHA as research design principles, which would inform a more detailed research design. This would involve the following steps.

• Giving space and weight to children's voices: Research that actively engages children and recognises their expert knowledge produces better data and identifies gaps in needs and priorities - and often produces unexpected findings (Johnson and Lewin forthcoming; Mizen and Ofosu-Kusi 2007; Van Blerk et al. 2009; Maconachie et al. 2020).

• Contextualising the research design: In Ghana, the research focuses on three relatively distinct supply chains (cocoa - international; inland fish - entirely national; shallots - entirely national). Research design needs to be adequately contextualised - for example, in terms of which stakeholders will be included as research participants. Contextualisation of research also requires taking account of differences across spaces and places (e.g. studying the forms of CHW in small-scale fisheries around Lake Volta in Ghana may also need to study the living conditions in areas from which children move to Lake Volta to engage in work).

[Displaced figure/table content: research questions (prevalence; drivers and dynamics; impact) mapped against certification methods and qualitative/Participatory Action Research (PAR) methods, under the banner CHILD-FOCUSED, INCLUSIVE, ETHICAL. Source: Authors' own.]
• Adhering to ethical protocol and principles: Research design and individual methods need to be fully in line with ethical protocols, procedures and practice (Johnson 2020). Such ethical protocols are to be developed through local dialogue in keeping with the principle of building on capacity of local researchers and participants.
Methodological landscape
In line with the research design principles, we propose a rough methodological landscape that offers parameters within which ACHA's research design can be developed (Figure 1). The proposed methodological landscape differs from the existing predominant use of methods in researching children's engagement with work in a few ways:

• The mixed-methods approach is more holistic and all-encompassing, fully integrating survey methods, qualitative and participatory methods, and certification methods.
• Relatedly, this landscape gives greater weight to qualitative and participatory methods.
The complexities and sensitivities involved in researching CHW merit the use of such methods, particularly in the early stages of the research and to establish prevalence.
• Stronger linkages are in place between methods, aiming towards an integrated mixed-methods design as opposed to purely sequential or parallel designs. The bi-directional arrows propose an iterative research process whereby, for example, data from qualitative and participatory methods feed into survey design and findings from survey data can feed into ongoing ethnographic activities. As already noted, the exact combination of methods may be adapted over the course of the programme.
• Methods are integrated across the research process to make full use of the learning from individual methods and the expertise of respective researchers from design through to uptake of research findings. Crucially, this requires ample allocation of time in order to make full use of learning opportunities created through the research. | 2020-11-12T09:05:46.816Z | 2020-11-06T00:00:00.000 | {
"year": 2020,
"sha1": "06719de2f60ef7b8e34304277a75dca9931aa67e",
"oa_license": "CCBY",
"oa_url": "https://opendocs.ids.ac.uk/opendocs/bitstream/20.500.12413/15761/1/ACHA_Working_Paper_3.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d5bdf2f525f35310fd6f6418fe2401a76098d98c",
"s2fieldsofstudy": [
"Education",
"Psychology",
"Sociology"
],
"extfieldsofstudy": [
"Sociology"
]
} |
1668084 | pes2o/s2orc | v3-fos-license | Carryover effects of larval exposure to different environmental bacteria drive adult trait variation in a mosquito vector
The adult phenotype of mosquitoes depends on the types of bacteria encountered environmentally during development.
INTRODUCTION
For many holometabolous insects (that is, with complete metamorphosis), the ecological niche of larval stages differs greatly from that of adults. For example, mosquito larvae develop in aquatic habitats, whereas the adults live in terrestrial habitats. Holometabolism allows larvae and adults of the same species to exploit different resources and avoid intraspecific competition (1). However, larval and adult stages of holometabolous insects are not independent from each other, because the biotic and abiotic larval environment can influence adult life-history traits (2, 3). In mosquito vectors of human pathogens, for example, it has been well documented that conditions such as temperature (4-7), diet (8-11), competition (12-15), soil substrate (16, 17), and predator exposure (18) experienced during larval development can carry over and affect adult traits related to vectorial capacity. Vectorial capacity is a measure of vector-borne pathogen transmission potential that encapsulates the dynamics of vector-pathogen and vector-vertebrate host interactions (19), including vector life span and vector competence (that is, the intrinsic ability to acquire and subsequently transmit a pathogen).
Host-associated microbes, collectively known as the host microbiota, have manifold effects on host biology. Like other animals, insects establish symbiotic relationships with microbial communities that shape their physiological functions (20, 21). In recent years, it has become clear that the symbiotic microbiota of insect vectors play an important role in their vectorial capacity (20, 22). The native bacterial microbiota of mosquitoes can modulate their immune response and vector competence for human pathogens (23-26). The relationship between mosquitoes and the endosymbiotic bacteria Wolbachia has been well documented, but the interactions between mosquitoes and their gut bacterial microbiota have not been described in such depth. Furthermore, our current understanding of how bacteria-insect vector interactions affect pathogen transmission is limited to adults. Little is known about whether the bacterial microbiota of larvae affect adult traits related to pathogen transmission. Our knowledge of bacterial communities in larval sites and between life stages is mainly descriptive (27-30), although it was recently shown that mosquito larvae rely on bacteria to develop (31-33).
Because the mosquito gut microbiota composition is dynamic and susceptible to environmental changes (20, 34-36), we hypothesized that habitat-related differences in bacterial communities in larval development sites could mediate environmental variation in vector-borne pathogen transmission. We addressed this question in the mosquito Aedes aegypti, an important worldwide vector of medically significant arboviruses such as dengue, Zika, yellow fever, and chikungunya viruses. In Sub-Saharan Africa, A. aegypti exist in the form of two ecotypes: a "sylvatic" ecotype of A. aegypti found in forested habitats, ecologically similar to the ancestral form of the species, and a human-adapted "domestic" ecotype that thrives in urbanized environments (37, 38). Whereas domestic A. aegypti larvae develop in artificial containers (cans, tires, jars, and flower pots) within or in close proximity to human habitation, the larvae of the sylvatic ecotype are typically found in natural breeding sites (rock pools, tree holes, and fruit husks).
First, we characterized the differences in the bacterial community composition between domestic and sylvatic A. aegypti larval development sites in Gabon and in the midguts of A. aegypti emerging from these sites. This initial description was used to justify our hypothesis that these differences may be functionally relevant at the adult stage. Second, we measured the variation in several adult traits related to vectorial capacity using gnotobiotic larvae (that is, sterile larvae subsequently exposed to a single bacterial isolate) made with native bacterial isolates from the same breeding sites in Gabon. In contrast with previous studies on bacteria-mosquito interactions that focused on the effect of a single bacterial isolate at the adult stage, our primary interest was to determine whether different bacteria-mosquito interactions at the larval stage could explain natural variation in adult traits. Dissecting the relative contribution of genetic and environmental factors in natural phenotypic variation is central to understanding the evolutionary potential and mechanistic basis of a trait. Our study provides the proof of concept that exposure to different bacterial isolates during larval development results in variation in pupation rate and several adult phenotypes such as life-history traits and antimicrobial phenotypes.
RESULTS
Bacterial communities differ between domestic and sylvatic larval breeding sites

We compared the bacterial communities between domestic and sylvatic larval development sites and the midguts dissected from surface-sterilized adult A. aegypti females using a metagenomics approach based on targeted sequencing of the 16S ribosomal RNA gene (Fig. 1; for details, see Materials and Methods). Sylvatic samples were collected in gallery forests inside Lopé National Park, central Gabon, whereas domestic samples were collected in a nearby village. In the sylvatic environment, we characterized the bacterial communities in water collected from eight different A. aegypti larval development sites, nine midguts from adult females emerging from these sites, and six midguts from adult host-seeking females caught by human-landing catch (HLC) next to the larval development sites in the gallery forests. In the domestic environment, we characterized the bacterial communities in water collected from six A. aegypti larval development sites, eight midguts from adult females emerging from these sites, and three midguts from adult host-seeking females caught by HLC in the village. Overall, we analyzed eight water samples and 15 midguts from the sylvatic environment and six water samples and 11 midguts from the domestic environment. To control for contamination of bacteria introduced during sample processing, aliquots of reagents and blank samples were included as negative controls (for details, see Materials and Methods). A total of 2851 operational taxonomic units (OTUs) were identified among all the samples, of which 2412 were included in the analysis after removing OTUs that were present in the negative controls (that is, likely resulting from laboratory contamination).
OTU richness was higher in the water samples than in the midgut samples, and it was higher in sylvatic water samples than in the domestic water samples (fig. S1A and file S1). Despite differences in the total number of distinct OTUs between sample type and habitat, there was no significant difference in the Shannon diversity index (fig. S1B and file S1). Note that because of the small sample sizes, the analyses of bacterial diversity were likely underpowered to draw any conclusions about the differences between the midguts from freshly emerged adults and those from HLC adults. Plotting the overlap of OTUs between sample types revealed that although some bacterial community members were shared, a large proportion was unique to each sample type (Fig. 2 and fig. S2). Within both the domestic and sylvatic habitats, there was only partial overlap between bacterial communities found in the midguts dissected following adult emergence, the midguts of HLC adults exposed to the natural environment, and water of larval development sites (Fig. 2, A and B). Notably, 28 OTUs (51%) found in the midguts from both freshly emerged and HLC adults were undetectable in the corresponding water samples. This limited overlap between midgut and water samples was not simply due to differences in rare OTUs. Among the 100 most abundant OTUs, six OTUs that were abundant in midgut samples were undetected in the water they emerged from (fig. S3). Reciprocally, most of the OTUs that were abundant in water samples were not found in the midguts of adults emerging from them (fig. S3). The midguts from freshly emerged adults harbored a larger number of unique OTUs and, therefore, a larger number of OTUs overall, compared to those from HLC adults (Fig. 2, A and B). Approximately a third of the OTUs identified in the midguts following emergence were shared between domestic and sylvatic habitats (Fig. 2C). While more than half of the OTUs found in sylvatic water sites were unique to this habitat, a majority of OTUs found in domestic water sites overlapped with OTUs found in sylvatic water sites (Fig. 2D). Among the OTUs that were shared between sample types, the abundance of 137, 2, 495, and 291 OTUs differed significantly (Wald test) between domestic and sylvatic water samples, domestic and sylvatic midguts, sylvatic midguts and water samples, and domestic midguts and water samples, respectively (file S2).
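For concreteness, the alpha-diversity summaries above can be reproduced with a short R sketch (R and the vegan package are used for these analyses; see Materials and Methods). This is a minimal illustration rather than the authors' script: the object names otu (a normalized sample-by-OTU count table) and meta (a data frame with habitat and type columns) are assumptions.

# Alpha diversity per sample, assuming rows are samples and columns are OTUs
library(vegan)
meta$richness <- specnumber(otu)                  # observed OTU richness
meta$shannon  <- diversity(otu, index = "shannon")  # Shannon diversity index

# Richness modeled with a quasi-Poisson GLM to correct for overdispersion,
# testing covariates sequentially on the deviance scale
rich_glm <- glm(richness ~ type + habitat, family = quasipoisson, data = meta)
anova(rich_glm, test = "Chisq")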
To determine whether the structure of bacterial communities differed between sample type and habitat, we used two different, complementary approaches (see Materials and Methods for details). In the first approach, a Bray-Curtis dissimilarity matrix was generated on the basis of OTU abundance and analyzed using nonmetric multidimensional scaling (Fig. 3A). In the second approach, a Bray-Curtis dissimilarity matrix was generated on the basis of k-mer presence/absence and analyzed by hierarchical clustering (Fig. 3B). Regardless of the approach, the structure of bacterial communities markedly differed (P = 0.001) between water and midgut samples (Fig. 3). In addition, the structure of bacterial communities was distinct between domestic and sylvatic larval habitats (Fig. 3). Analyses of β diversity based on OTU counts confirmed that bacterial communities significantly differed between combinations of habitat and sample type (P = 0.005). Lack of significant dispersion effects within habitat (P = 0.372) and sample (P = 0.652) types confirmed the validity of the statistical model. Without replicate water samples from the same larval breeding site, the degree of within-habitat heterogeneity could not be accurately quantified. However, one of the domestic water samples clustered with sylvatic water samples, pointing to some degree of heterogeneity among domestic sites (Fig. 3B). As noted above, because of the small sample sizes, these analyses were likely underpowered to draw any conclusions about the differences between the midguts from freshly emerged adults and those from HLC adults.
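The β-diversity tests can be sketched in the same spirit. Again, otu and meta are assumed placeholder names, and the exact grouping variables below are illustrative.

library(vegan)
bc <- vegdist(otu, method = "bray")           # Bray-Curtis dissimilarities

# NMDS constrained in two dimensions, as in Fig. 3A
nmds <- metaMDS(bc, k = 2, trymax = 100)
nmds$stress                                   # stress value reported with the plot

# Permutational multivariate ANOVA (999 permutations) and dispersion checks
adonis2(bc ~ habitat * type, data = meta, permutations = 999)
anova(betadisper(bc, meta$habitat))           # dispersion within habitats
anova(betadisper(bc, meta$type))              # dispersion within sample types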
Adult life-history traits vary between gnotobiotic larvae exposed to different bacterial isolates

To assess the functional relevance of differences in bacterial communities in larval development sites, we generated gnotobiotic A. aegypti larvae by exposing axenic (that is, bacteria-free) larvae to a single bacterial isolate during their development (Fig. 1; see Materials and Methods for details). We collected 168 bacterial isolates from the same larval development sites in Gabon in which we collected water for 16S targeted metagenomics. Of the 168 bacterial isolates, three were arbitrarily chosen for functional assays based on genetic dissimilarity, differences in the pupation rate of gnotobiotic larvae, and differences in the identity and proportion of midgut bacteria in adults emerging from gnotobiotic larvae (see Materials and Methods for complete description of the isolate screen). Two of the three isolates were isolated from domestic breeding sites, and on the basis of their full-length 16S sequence, they were assigned to the Salmonella (Ssp_ivi) and Rhizobium (Rsp_ivi) genera. The third isolate (Esp_ivi) was isolated from a sylvatic breeding site and belongs to the Enterobacteriaceae family; however, classification at the genus level was inconsistent among databases (alternatively Salmonella, Escherichia, or Shigella). In the 16S data set, the Enterobacteriaceae and Rhizobium taxonomical groups were present in both domestic and sylvatic breeding sites, whereas the Salmonella taxonomical group was only found in domestic breeding sites. Note that the isolates were not chosen to reflect the dominant taxa identified by the targeted metagenomics approach. To minimize the potential confounding effects of specific interactions between mosquito genotypes and sympatric bacterial isolates, gnotobiotic mosquitoes were created using a wild-type mosquito genetic background from Thailand.

Fig. 1. A. aegypti pupae (for later midgut dissection from emerging adults) were collected from sylvatic larval sites in gallery forests along the rivers and streams of Lopé National Park in Gabon and from domestic larval sites in a nearby village. (B) At domestic sites (lower pictures), samples were collected from artificial containers such as discarded plastic containers, tires, and metal tins. At sylvatic sites, samples were collected from rock pools (upper picture). (C) At each collection site, both water and pupae were collected into a sterile tube using a sterile pipette. The samples were brought back to the field station, and an aliquot of water was removed next to a Bunsen burner flame and frozen until processing. Back in the laboratory, the water samples were thawed and centrifuged, and the bacteria pellet was resuspended in sterile water and spotted on Whatman FTA cards for later DNA extraction. The pupae were held in the same collection tube until adults emerged. Midguts from adults were dissected within 12 hours of emergence next to a Bunsen burner flame and preserved for later DNA extraction. Midguts were also dissected from wild adult females caught by HLC. Deep-sequencing libraries were made using the V5-V6 hypervariable region of the 16S bacterial ribosomal RNA gene. The sequences were clustered into OTUs and used for analysis of taxonomical abundance and community structure. At the same time that an aliquot of water was frozen, another aliquot was also removed to make a glycerol stock. Upon return to the laboratory, the glycerol stocks were streaked out onto different medium types and individual colonies isolated. For functional assays in vivo, gnotobiotic larvae were created by adding a single bacterial isolate to sterile flasks containing axenic larvae. Adult mosquitoes that had undergone different gnotobiotic treatments as larvae were used to test for variation in life-history and antimicrobial phenotypes.

We measured pupation rates in gnotobiotic larvae to determine whether the interaction with different bacteria present in the water alters larval development. As previously reported (31, 32), when larvae were maintained as axenic, the larvae did not develop past the first instar stage (Fig. 4A). To assess the differences in the pupation rate of the different gnotobiotic treatments, we compared the growth rate (that is, slope of the exponential phase) and the time it took for 50% of the larvae to pupate using a three-parameter model of pupation dynamics (Fig. 4A). The nonaxenic larvae had a significantly faster growth rate than the gnotobiotic larvae, but no significant differences in growth rate were observed among the gnotobiotic larvae (file S3). This is in contrast to the initial screen of bacterial isolates, which is likely a result of greater statistical power in this data set; the initial screen consisted of three replicate flasks of larvae per treatment, whereas this data set included three replicate flasks per treatment from three independent experiments. Although the larval growth rate during the exponential phase was similar among gnotobiotic treatments, the time it took to reach 50% pupation significantly differed among treatments (file S3). The lag phase was shorter for larvae exposed to the Ssp_ivi isolate than for those exposed to the Esp_ivi or Rsp_ivi isolate, whereas there was no difference between larvae exposed to the Esp_ivi or Rsp_ivi isolate (Fig. 4A). The pupation rate at day 5, 9, or 12 of larval development was not dependent on the amount of bacteria in the flask on the corresponding day, nor was the pupation rate at day 9 dependent on the amount of bacteria inoculated into individual flasks upon egg hatching (file S4).
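A three-parameter pupation model of this kind can be illustrated with a self-starting logistic fit in R. This is a hedged sketch rather than the authors' exact model: pup is an assumed data frame with a day column and the cumulative proportion of larvae pupated (cum_pupae) for one treatment.

# SSlogis fits P(t) = Asym / (1 + exp((xmid - t) / scal)):
# xmid is the day at which half of the asymptote is reached (the lag phase
# compared above), and scal is inversely related to the exponential-phase slope.
fit <- nls(cum_pupae ~ SSlogis(day, Asym, xmid, scal), data = pup)
summary(fit)
confint(fit)  # treatments can then be compared through the xmid and scal estimates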
To assess the fitness of adult mosquitoes after being exposed to different bacterial isolates during larval development, we measured their life span and wing length (a proxy for body size). The life span of adult females did not significantly differ (P = 0.593) between gnotobiotic treatments (Fig. 4B), but there were significant differences (P = 8.6 × 10−6) in their wing length (Fig. 4C and Table 1). The larvae that were exposed to the Ssp_ivi isolate grew into adults with the largest wings, the Esp_ivi and nonaxenic treatments resulted in the smallest wing length, and the Rsp_ivi treatment resulted in an intermediate wing length.

Adult antimicrobial phenotypes vary between gnotobiotic larvae exposed to different bacterial isolates

To determine whether exposure to different bacterial isolates during larval development results in variation in susceptibility to microbes as adults, we measured the antibacterial activity of the hemolymph and the susceptibility to dengue virus of adult females emerging from gnotobiotic larvae. In mosquitoes, the hemolymph displays a strong immune response and is involved in immune priming and immune memory (39-41). To test for differences in the immune system of adult A. aegypti that had been exposed to different bacteria during larval development, we measured the antibacterial activity of the hemolymph based on its ability to clear Micrococcus luteus on an agar plate (see Materials and Methods for details), as previously described (42, 43). In individual mosquitoes whose hemolymph resulted in detectable clearance of M. luteus, there was no significant difference (P = 0.207) in the intensity of M. luteus clearance among the three gnotobiotic treatments. However, there was a significant difference (P = 0.025) in the proportion of individuals whose hemolymph demonstrated detectable M. luteus clearance: significantly fewer adult females that had been exposed to the Esp_ivi isolate as larvae had hemolymph able to clear M. luteus (Fig. 5A and Table 1).
Next, we examined whether the effect of Esp_ivi exposure during larval development only affected adult antibacterial immunity or was also involved in carryover effects on susceptibility to virus infection at the adult stage. We measured the variation in susceptibility to dengue virus (serotype 1) in adult A. aegypti females who had been exposed to different bacterial isolates during larval development. In two separate experiments, we measured the proportion of mosquitoes that became infected, the proportion of infected mosquitoes that developed a disseminated (that is, systemic) dengue virus infection, and the infectious titer of disseminated dengue virus in the head tissues 14 days after an infectious blood meal. Infection prevalence could be analyzed only in the first experiment because 94.5% of mosquitoes became infected in the second experiment. In the first experiment, 63.2% of mosquitoes were infected overall, and infection prevalence was not significantly affected by the gnotobiotic treatment (P = 0.3256; fig. S4). For the same reason as above, we only analyzed the dissemination prevalence in the second experiment because 95.8% of infected mosquitoes had a disseminated infection in the first experiment. Differences in infection and dissemination rates were the result of different infectious doses used in the two experiments (see Materials and Methods for details). In the second experiment, 70.2% of mosquitoes had a disseminated infection overall, and dissemination prevalence was not significantly affected by the gnotobiotic treatment (P = 0.8579; fig. S4). Among mosquitoes with a disseminated virus infection, we found modest but statistically significant differences in the infectious titer measured in the head tissues of adults exposed to the Esp_ivi and Ssp_ivi isolates as larvae (Table 1). Specifically, females exposed to the Esp_ivi isolate during larval development had fewer viral particles in the head than those exposed to Ssp_ivi (Fig. 5B). Lack of a significant experiment-by-isolate interaction with regard to virus titer indicated that the effect of the isolate on virus titer was consistent across both experiments (Table 1).

Fig. 3. In the NMDS plot, Spearman correlation (r) and stress values are indicated. The normalized OTU count table used to perform the NMDS analysis is provided in file S8. In the heat map, samples are labeled according to their type (M, midgut; W, water) and habitat of origin (sylvatic or domestic). Midguts dissected following emergence are labeled to match the corresponding water sample (for example, midgut a1 was dissected from a mosquito that emerged from water sample a). Midguts from the same breeding site are marked with matching symbols. Red color indicates high similarity, and green color indicates low similarity.
DISCUSSION
To the best of our knowledge, we provide the first empirical evidence that exposure to different bacteria during larval development can result in variation in adult traits related to pathogen transmission by an important insect vector. We observed differences in the bacterial communities that inhabit ecologically distinct A. aegypti breeding sites in Gabon. Using native bacterial isolates derived from these natural breeding sites, we created gnotobiotic mosquitoes to reveal the functional consequences of differential bacterial exposure at the larval stage. These results improve our understanding of environmentally mediated effects that carry over from one life stage to another life stage in holometabolous insects. They also emphasize the importance of accounting for larval ecology to unravel the determinants of pathogen transmission by insect vectors of human pathogens.
The first aim of our study was to describe habitat-related differences in the bacterial composition of A. aegypti larval development sites. Our observation of distinct bacterial communities between domestic and sylvatic habitats supports previous observations of habitat-related differences in bacterial communities in mosquito larval sites (27-30) and verifies an important assumption of our study. Another marked observation from our targeted metagenomics approach was the dissimilarity between bacteria found in the larval site water and those in the adult midguts emerging from these same sites. The bacterial composition is known to vary between the larval and adult mosquito midguts (44-49), and adult mosquitoes are thought to have lost most of their midgut bacteria during metamorphosis (50, 51). However, recent work has also shown that several bacterial community members in A. aegypti are transstadially transmitted (32) and that adult Anopheles may acquire their gut community from larval breeding sites (52). The limited overlap in bacterial OTUs between the matched water samples and the midguts from freshly emerged adults could be explained by at least five scenarios. (i) The OTUs that we detected in the adult midguts were also present in the water but at a frequency that was too low to be detected by our sequencing method. (ii) The bacteria found in the adult midguts were not acquired from the water during larval development and, instead, were inherited in alternative ways such as vertical transmission. Wolbachia can be inherited vertically in Culex species and Aedes albopictus (20, 53, 54), and vertical transmission of the bacterium Asaia has been observed in Anopheles mosquitoes (55). (iii) The bacteria found in the adult midguts were not present in the water but were acquired during the larval stage from larvae feeding on organic detritus such as leaves, wood, or other arthropods. (iv) The bacteria identified in the midguts were acquired from a different water depth than we sequenced. We did not control for the depth of the water that we collected for sequencing. It is possible that bacterial communities differ between water at the bottom and at the top of a pool (56) and that the bacteria that we identified in the mosquito midguts were acquired from the bottom of the pools or vice versa. (v) The low sample size of eight to nine midguts from freshly emerged adults per habitat may not capture the true taxonomical diversity. There is great variability in the composition of mosquito midgut bacterial microbiota, and we may need a larger sample size to capture all taxa present. Finally, we found that the number of OTUs detected in the midguts (that is, OTU richness) was significantly smaller than that in the water samples. This finding is consistent with previous reports of the low complexity of bacterial communities typically found in mosquito midguts (20, 57).

Fig. 4. Axenic larvae (gray line) were included as negative controls. Statistical significance of pairwise differences in pupation rate between treatments was determined by using a three-parameter model to compare the slope of the exponential phase and the day when 50% of larvae pupated. Statistical significance for each pairwise comparison is indicated by a star in the inset table. The shaded ribbon around each curve represents SEM. (B) Adult female life span was determined by counting the number of dead females in triplicate cages. No statistical difference in life span was detected between the different treatments (P = 0.54). (C) Boxplots represent the wing length of adult females from different gnotobiotic treatments. Statistical significance of pairwise differences between treatments was determined by t test. Letters above the graph indicate statistical significance in which treatments with a letter in common are not significantly different from each other.
The second aim of this study was to assess whether exposure to different bacteria during larval development resulted in changes in adult traits involved in vectorial capacity. It was previously observed that the addition of any bacteria to axenic mosquito larvae rescued their development and allowed them to pupate and become adults (31, 32). Although the authors of the study did not elucidate the mechanism underlying this observation, they hypothesized that the requirement for bacteria in the larval water was not nutritional because nonaxenic larvae maintained on sterile food in a sterile environment were still able to develop. Our observation of differences in pupation rate and adult body size hints at a possible role of nutrition and/or metabolism and deserves further work. Despite differences in pupation rate and adult size, the life span of adults exposed to different bacteria as larvae was similar regardless of the bacterial isolate. It is possible that other bacteria besides the three isolates that we used could alter adult life span, as was shown for the endosymbiont Wolbachia in Aedes mosquitoes (58, 59). We observed differences in adult susceptibility to systemic dengue virus dissemination and differences in the innate immune response to M. luteus. Mosquito hemolymph is regarded as an essential component of the immune system and plays an important role in pathogen recognition and elimination and in immune memory. We detected differences in the ability of hemolymph collected from adults exposed to different bacteria as larvae to clear M. luteus on an agar plate. Specifically, hemolymph from adults exposed to the Esp_ivi isolate during larval development was less efficient at clearing M. luteus compared to other gnotobiotic treatments. Protection against M. luteus does not imply uniform protection across all bacteria species, and further work remains to be carried out to determine whether the effect we observed extends to other types of bacteria. It was the Esp_ivi treatment that resulted in adults who were better at controlling systemic dengue virus dissemination (that is, fewer infectious viral particles in the head tissues), pointing to a potential trade-off between bacterial defense and the ability to control viral infections. Infectious titer of disseminated dengue virus is positively correlated with the probability of virus detection in A. aegypti saliva (60) and is often used as a proxy for transmission potential. Adult body size has been shown to influence the susceptibility of A. aegypti to viral infection (61), but differences in wing length did not explain the differences observed in dengue virus dissemination in our experiments. Both the nonaxenic and Esp_ivi treatments resulted in significantly smaller adults; however, the level of viral dissemination was high in the control and low in the Esp_ivi treatment.

Table 1. Test statistics of wing length, hemolymph lysozyme-like activity, and titer of disseminated dengue virus. Wing length was compared with an analysis of variance. The proportion of hemolymph extracts with detectable antibacterial activity was analyzed with a logistic regression and analysis of deviance. FFU counts in head tissues were log10-transformed and compared with an analysis of variance. With the exception of wing length, the model includes the effect of the isolate (Ssp_ivi, Esp_ivi, Rsp_ivi, and nonaxenic), the experiment (two repetitions), and their interaction. Df, degrees of freedom; LR, likelihood ratio. *P < 0.05; **P < 0.01; ***P < 0.001.
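For readers who wish to reproduce Table 1-style tests, the three models can be sketched in R as follows. The data frame d and its column names (isolate, experiment, wing, cleared, ffu) are illustrative assumptions, not the authors' code.

# Wing length: one-way analysis of variance
anova(lm(wing ~ isolate, data = d))

# Detectable antibacterial activity (0/1): logistic regression with
# likelihood-ratio (analysis of deviance) tests
clearance <- glm(cleared ~ isolate * experiment, family = binomial, data = d)
anova(clearance, test = "LRT")

# Titer of disseminated dengue virus: ANOVA on log10-transformed FFU counts
anova(lm(log10(ffu) ~ isolate * experiment, data = d))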
Differences in hemolymph antibacterial activity and titer of disseminated dengue virus, despite similar midgut infection rates among the gnotobiotic treatments, are consistent with hemolymph-mediated immune priming. Note that although our results suggest immune differences among different gnotobiotic treatments, further work will be necessary to establish a link between specific immune mechanisms and susceptibility to bacterial colonization of the midgut and dengue virus dissemination. An alternative explanation is that differences in bacterial exposure during larval development result in differences in the composition of the gut bacterial microbiota of adult mosquitoes, which could indirectly modulate the antibacterial and antiviral immune responses. In line with this hypothesis, we observed differences in the bacterial composition and the number of cultivable bacteria present in the midguts of adults who underwent different gnotobiotic treatments (file S3), although there was no clear correlation between different bacterial communities and the phenotypes tested.
Together, our results provide the proof of principle that exposure to different bacteria during larval development can influence the variation in adult traits in the holometabolous insect A. aegypti. By building on the observation that in Gabon, the composition of bacterial communities differs between ecologically distinct A. aegypti larval development sites, we demonstrated that experimental exposure to different natural bacterial isolates at the larval stage can influence the potential transmission of a medically relevant human pathogen. A. aegypti larvae in nature would not be exposed to a single bacterium as in our gnotobiotic experiments, which did not fully capture the true complexity of a natural situation. Because of the inability to culture all bacteria present in larval development sites and the logistical difficulties of recreating relevant natural bacterial communities under laboratory settings, our study rather establishes the proof of concept that habitat-mediated differences in the bacterial communities of A. aegypti larval sites can influence adult traits. Evidence that differential larval exposure to bacteria, and thus larval ecology, may contribute to phenotypic variation in mosquito vectorial capacity is an important step toward a more comprehensive understanding of how environmental conditions shape the risk of vector-borne disease.
MATERIALS AND METHODS

Field sampling
Water from larval breeding sites and midguts of A. aegypti females emerging from the same larval sites were collected in Gabon in November 2014. Sylvatic collections were made inside Lopé National Park (latitude, −0.148617; longitude, 11.616715), and domestic collections were made in Lopé village (latitude, −0.099221; longitude, 11.600330) approximately 6 km from the sylvatic collection sites (Fig. 1). All of the sylvatic collections originated from rock pools, and the domestic collections were from various types of artificial containers and tires (Fig. 1B and file S5). At each larval breeding site, pupae and water were collected into a sterile 50-ml conical tube with filter-top lid using a sterile plastic pipette and brought back to the Station d'Etude des Gorilles et Chimpanzés field station. Upon arrival at the field station, 10 ml of water was transferred to a new sterile 50-ml conical tube next to the flame of a Bunsen burner and immediately stored at −20°C. The remaining water and pupae were held until the adults emerged and were visually identified as A. aegypti. The frozen water was transported back to the Centre International de Recherches Médicales de Franceville facilities in Franceville, Gabon, where the water was thawed and centrifuged at 3400 rpm for 10 min. Under a laminar flow cabinet, the supernatant was removed and replaced with 500 μl of sterile water to resuspend the bacterial pellet. The resuspended bacteria were spotted onto Whatman FTA cards (WB120401, GE Life Sciences), allowed to dry, and then wrapped in sterile foil for transport to Institut Pasteur in Paris.
Within 12 hours of adult emergence, the midguts from A. aegypti females were dissected and stored in RNAlater (Qiagen) at +4°C. RNAlater was initially chosen to preserve the midgut tissue with the hope of being able to recover both RNA and DNA from the samples, but it was not possible to isolate a sufficient amount of RNA and DNA from each sample, so only DNA extractions were performed. Because of the lack of a laminar flow cabinet at the field station in Lopé National Park, the midguts were dissected within 50 cm of a Bunsen burner flame in an effort to maintain sterility. Before and between each dissection, the dissecting tools were disinfected with 3% bleach. Adult A. aegypti were removed from the tube in which they emerged and were cold-anesthetized. The mosquito was then surface-sterilized in 3% bleach and rinsed in sterile phosphate-buffered saline (PBS), and the midgut was dissected in a drop of sterile PBS. Because of the limited access to ethanol in the field, surface sterilization was only performed with 3% bleach. Negative controls were included in an attempt to control for potential contamination of bacteria introduced at this step (see below). The dissected midguts were placed in individual sterile tubes containing RNAlater that had been filtered through a 0.2-mm filter and aliquoted under sterile conditions. The midguts in RNAlater were stored at +4°C until being frozen at −20°C upon their arrival at Institut Pasteur in Paris until the DNA extraction was performed. Following the same procedure, the midguts from non-blood-fed, host-seeking adult females caught by HLC were also dissected. Human volunteers sat next to either the rock pools (sylvatic habitat) or the artificial containers (domestic habitat) where A. aegypti had been previously observed and caught the females as they landed and were preparing to probe.
DNA extractions
DNA from bacteria originating from the water samples was extracted from the Whatman FTA cards following the organic DNA extraction procedure provided by Whatman. DNA extractions were performed on two consecutive days, mixing samples each day to randomize a potential batch effect. Briefly, the filter paper was cut into small pieces using sterile scissors and soaked in 500 μl of extraction buffer [10 mM tris-HCl (pH 8.0), 10 mM EDTA, disodium salt (pH 8.0), 100 mM sodium chloride, and 2% (v/v) SDS] and 20 μl of proteinase K (20 mg/ml) prepared in sterile water overnight at 56°C with agitation. An equal volume of buffered phenol (pH 8.0) was added, vortexed briefly, and centrifuged for 10 min at maximum speed. The upper aqueous phase was transferred to a new microcentrifuge tube containing 500 μl of chloroform, vortexed thoroughly, and centrifuged for 10 min at maximum speed. The upper aqueous phase was transferred to a new tube containing 50 μl of 3 M sodium acetate (pH 5.2). Eight hundred microliters of 100% ethanol was added and vortexed. The DNA was allowed to precipitate at −20°C for at least 1.5 hours. The DNA was recovered by centrifuging for 30 min at maximum speed. The supernatant was discarded, and 1 ml of 70% ethanol was added to the pellet and centrifuged for 20 min at maximum speed. The supernatant was removed, and the pellet was air-dried for 30 min and dissolved in 50 μl of sterile water. Because of the large number of samples, the extractions were performed in two batches. To control for contamination of bacteria introduced during the DNA extraction, a negative control was made from each day of extraction by performing the extraction on a blank sample.
Upon thawing the samples, the midguts were removed from the RNAlater and added to 300 μl of lysozyme (20 mg/ml) dissolved in Qiagen ATL buffer in a sterile tube containing grinding beads. The samples were homogenized for two rounds of 30 s at 6700 rpm (Precellys 24, Bertin Technologies). The samples were incubated at 37°C for 2 hours, after which 20 μl of proteinase K was added and vortexed briefly, followed by another incubation of 4 hours at 56°C on a shaker (300 rpm). After the incubation, 200 μl of Qiagen AL buffer and 200 μl of 100% ethanol were added to the samples and vortexed to mix. The lysate was transferred to a Qiagen DNeasy Mini Spin column (except midguts from HLC adults that were passed through Qiagen AllPrep columns), washed with buffers Qiagen AW1 and AW2 following kit instructions, and eluted in 20 μl of sterile water. To control for contamination of bacteria introduced both during the midgut dissections in the field and during the DNA extraction in the laboratory, negative controls were made by performing the same DNA extraction procedure on an aliquot of RNAlater opened at the field station in Gabon, an aliquot of sterile PBS used for midgut dissections that was opened at the field station in Gabon, and a blank midgut sample.
16S sequencing
Libraries were made from 8 sylvatic water samples, 6 domestic water samples, 15 sylvatic midguts (9 freshly emerged and 6 HLC adults), and 11 domestic midguts (8 freshly emerged and 3 HLC adults). In addition, two technical replicates were prepared with different primer pairs used on the same water samples. Technical replicates were used to confirm the repeatability of the sequencing results. Custom-made polymerase chain reaction (PCR) primers targeting the hypervariable V5-V6 region of the bacterial 16S ribosomal RNA gene were designed following Fadrosh et al. (62). These custom primers were designed to include the necessary Illumina adapters and indexes so that only one round of PCR was needed, thereby avoiding multiple rounds of PCR that could lead to a sampling bias. To overcome the issues that arise when sequencing libraries with low-diversity sequences, such as PCR amplicons, heterogeneity spacers consisting of 0 to 7 base pairs were added to the custom primers so that the sequences would be sequenced out of phase (62). A total of eight forward and eight reverse primers were designed (file S6) and used in all 8 × 8 combinations to amplify all the breeding site water and midgut samples. Four microliters of each breeding site water sample and 6 μl of each midgut sample were used to amplify the V5-V6 16S region in triplicate using Expand High-Fidelity polymerase (Sigma-Aldrich) following the manufacturer's instructions, with the addition of 0.15 μl of T4gene32 and 0.5 μl of bovine serum albumin (20 mg/ml) per reaction to improve PCR sensitivity. Water samples were amplified for 30 cycles, and midgut samples were amplified for 40 cycles. The three PCRs were pooled, and the PCR products were purified using Agencourt AMPure XP magnetic beads (Beckman Coulter). The purified PCR products were quantified by Quant-iT PicoGreen dsDNA fluorometric quantification (Thermo Fisher Scientific) and pooled for sequencing on the Illumina MiSeq platform (Illumina). The sequencing run failed multiple times with no apparent explanation except for the inability of the sequences to bind to the flow cell. The failed custom sequencing tags were replaced with sequencing tags used successfully in previous projects (57), which required performing a second round of PCR because all extracted DNA from the midgut samples had been used in the initial PCR. The second round of PCR used new custom primers containing the same V5-V6 region to rescue the samples (file S7). One microliter of each library was amplified in triplicate using Expand High-Fidelity polymerase (Sigma-Aldrich) for eight PCR cycles. The three PCRs were pooled, purified using AMPure XP magnetic beads (Beckman Coulter), and quantified using Quant-iT PicoGreen dsDNA fluorometric quantification (Invitrogen). Library quality was checked by Bioanalyzer (Agilent Technologies), and 300-base pair paired-end sequences were generated on the Illumina MiSeq platform using a V3 kit (Illumina). The raw sequence data are available at the European Nucleotide Archive under accession number PRJEB16334.
Targeted bacterial metagenomics analysis

Read filtering, OTU clustering, and annotation were performed with the MASQUE pipeline (https://github.com/aghozlane/masque), as described by Quereda et al. (63). A total of 2851 OTUs were obtained at 97% sequence identity threshold. The statistical analyses were performed with SHAMAN (shaman.c3bi.pasteur.fr) based on R software (v3.1.1) and bioconductor packages (v2.14). Because bacterial communities were expected to differ substantially between mosquito midguts and water samples, the normalization of OTU counts was performed at the OTU level by sample type (midgut or water) using the DESeq2 normalization method. All samples including the negative controls and technical replicates were included in the normalization step. The technical replicates were removed from the data set before analysis. To account for possible contamination at various steps in the sample-processing pipeline, the OTU counts were corrected with the reads from the negative controls (see above). All OTUs found in the negative control samples were removed from the normalized OTU table unless the count in a real sample was >10 times higher than the mean OTU count in the negative controls. This operation was performed with a homemade script in R (64). This normalized OTU count table with the OTUs found in the negative controls removed (file S8) was used for the richness, Shannon index, Venn diagrams, abundance heat map, and NMDS analysis. Observed richness, Shannon index, and Bray-Curtis distances were calculated with the vegan package in R (65). The effects of sample types and ecotypes on the bacterial richness were tested by fitting a generalized linear model (GLM) with a Poisson distribution. The SEs were corrected for overdispersion using a quasi-GLM model where the variance is given by the mean multiplied by the dispersion parameter. A χ2 test was applied to compare the significance of deviance shift after adding the covariates sequentially. The effects of sample type, ecotype, and their interaction on Shannon index were tested by fitting a linear model with a normal error distribution. The response variable was power-transformed to satisfy the model assumptions. The significance of each variable was tested with an analysis of variance (ANOVA) after adding the covariates sequentially. The results of the two models were confirmed by the convergence of backward and forward selection based on the Akaike information criterion. The Bray-Curtis distances were plotted with an NMDS method constrained in two dimensions. The Spearman correlation with real distances and stress value was estimated with the vegan R package (65). Effects of habitat and sample type on β diversity were tested with the betadisper and adonis permutational multivariate ANOVA methods from the vegan R package with 999 permutations of the Bray-Curtis distance matrix derived from OTU counts.
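The negative-control correction described above amounts to a simple rule that can be sketched in R (this is an illustration, not the authors' script): norm is assumed to be the normalized sample-by-OTU table, and is_neg a logical vector flagging the negative-control rows.

# Drop any OTU seen in a negative control unless some real sample carries it
# at more than `fold` times its mean count across the negative controls.
filter_controls <- function(norm, is_neg, fold = 10) {
  neg_mean <- colMeans(norm[is_neg, , drop = FALSE])
  real     <- norm[!is_neg, , drop = FALSE]
  rescued  <- apply(real, 2, max) > fold * neg_mean
  real[, neg_mean == 0 | rescued, drop = FALSE]
}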
In SHAMAN, a GLM was fitted and vectors of contrasts were defined to determine the significance in abundance variation between sample types. The GLM included the main effect of habitat (sylvatic or domestic), the main effect of sample type (midgut or water), and their interaction. The resulting P values were adjusted for multiple testing according to the Benjamini and Hochberg procedure. All OTUs that were present in the negative controls were excluded from the final list of differentially abundant OTUs.
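The multiple-testing step can be illustrated simply; pvals is an assumed vector of raw Wald-test P values, one per shared OTU.

# Benjamini-Hochberg adjustment controls the false discovery rate across OTUs
padj <- p.adjust(pvals, method = "BH")
sum(padj < 0.05)   # number of differentially abundant OTUs at an FDR of 5%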
To confirm the OTU-based results with an OTU-independent method, a dissimilarity matrix was generated with the SIMKA software (66). Reads with a positive match against the sequences assembled from the negative controls were removed using Bowtie v2.2.9 (67). Then, k-mers of size 32 occurring more than twice were identified with SIMKA. Bray-Curtis dissimilarity was estimated between each pair of samples.
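On presence/absence data, the Bray-Curtis dissimilarity SIMKA computes reduces to a Sorensen-type index. The toy R sketch below illustrates the calculation only; it omits SIMKA's abundance filter and is far too slow for real read sets.

# Collect the unique 32-mers of a set of read sequences
kmers <- function(reads, k = 32) {
  unique(unlist(lapply(reads, function(s) {
    if (nchar(s) < k) return(character(0))
    starts <- seq_len(nchar(s) - k + 1)
    substring(s, starts, starts + k - 1)
  })))
}

# Bray-Curtis on presence/absence: 1 - 2 * shared / (size A + size B)
bray_pa <- function(a, b) 1 - 2 * length(intersect(a, b)) / (length(a) + length(b))
# Example: bray_pa(kmers(reads_sample1), kmers(reads_sample2))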
Bacterial isolation
At the same time water was removed to freeze for DNA extraction, an aliquot of the larval site water was added to 50% sterile glycerol to make 20% glycerol stocks of the larval site water. The glycerol stocks were frozen at −20°C until they were transported back to Institut Pasteur in Paris. Upon arrival in Paris, the glycerol stocks were streaked out onto agar plates made with LB medium [LBm; LB with NaCl (5 mg/ml)] and PYC medium [peptone (5 g/liter), yeast extract (3 g/liter), and 6 mM calcium chloride dihydrate (CaCl2·2H2O) (pH 7.0)] and incubated for 3 days at 30°C. LBm and PYC were chosen for being generalist media. Individual colonies were picked from the plates and used to inoculate 3 ml of the appropriate media, which were shaken at 30°C until bacterial growth occurred and used to create new glycerol stocks of the individual isolates. The same colony was also put into 20 μl of sterile water and subjected to two rounds of 95°C for 2 min followed by 2 min on ice. The samples were then centrifuged for 5 min at maximum speed to remove cell debris, and the supernatant was used to amplify the entire 16S region by PCR [5′-AGAGTTTGATCCTGGCTCAG-3′ (forward) and 5′-AAGGAGGTGATCCAGCCGCA-3′ (reverse)] using Expand High-Fidelity Polymerase (Sigma-Aldrich). The PCR products were purified using the QIAquick PCR Purification kit (Qiagen), quantified by NanoDrop (NanoDrop Technologies Inc.), and sequenced by Sanger sequencing. The sequences were aligned and classified at the genus level using the SILVA database (www.arb-silva.de/). The raw sequence data are available at the European Nucleotide Archive under accession number PRJEB16334. Individual colonies were chosen on the basis of size, color, and morphology. The purity of the colonies used in the gnotobiotic experiments was verified by restreaking the bacteria on multiple occasions.
Gnotobiotic larvae
Axenic larvae were created using the eighth generation of an A. aegypti laboratory colony derived from a natural population originally sampled in Thep Na Korn, Kamphaeng Phet Province, Thailand, in 2013. This mosquito strain was used to create gnotobiotic larvae as a common genetic background from a different geographical region that had, presumably, not encountered the specific bacterial isolates introduced. The rationale was to avoid potentially confounding effects of local adaptation between mosquitoes and bacterial isolates. Eggs were gently scraped off the paper they were laid on into a 50-ml conical tube. The eggs were incubated in 70% ethanol for 5 min, 3% bleach for 3 min, and 70% ethanol for 5 min. The eggs were then rinsed in sterile water three times and allowed to hatch in sterile water in a vacuum chamber. Upon hatching, the larvae were transferred to sterile 25-ml tissue-culture flasks with filter-top lids and maintained in 15 ml of sterile water. Larvae were seeded to a density of 10 to 15 larvae per flask. The larvae were fed 50 μl of sterile fish food suspension every other day. The water of the larval flask was not changed for the duration of the experiment. Fish food was made sterile by resuspending ground-up fish food with water and autoclaving it for 20 min. Axenic larvae were made gnotobiotic by adding a single bacterial isolate of choice. One to 3 days before inoculating the larval flasks, the bacteria were streaked out onto agar plates with their appropriate medium. They were allowed to grow 1 to 3 days until colonies of roughly similar size were obtained. A single bacterial colony was picked and added to each 25-ml flask. The sterility of the axenic larvae, as well as efficient colonization of the gnotobiotic larvae, was verified by PCR (see below). Five third-instar larvae were collected from each gnotobiotic treatment, and 10 axenic larvae were collected from three replicate flasks at the same time (5 days after hatching). Pooled larvae from each treatment were surface-sterilized by rinsing them once in sterile water, soaking in 70% ethanol for 10 min, and rinsing three times in sterile water. The larvae were then homogenized in Qiagen ATL extraction buffer, and DNA was extracted using the Qiagen DNeasy Blood and Tissue kit. The presence or absence of bacteria was qualitatively verified by PCR using the same primers listed above for bacteria identification. Homogenates from the surface-sterilized larvae were plated to confirm that the added bacteria had colonized the larvae and that only a single morphological colony matching that of the input bacteria was present. The water in which gnotobiotic larvae developed was also plated to confirm that only the expected morphological colony was present. The axenic larval flasks were maintained for the duration of the experiment and manipulated in the same way to serve as negative controls. The amount of bacteria measured in the water of gnotobiotic treatments was not correlated to pupation rate (file S4).
Selection of bacterial isolates for functional assays
Because cultivable bacteria only represent a small fraction of all bacteria present, and because specific bacterial isolates do not necessarily represent OTUs, the selection of isolates for functional assays was unrelated to the 16S metagenomics data. In particular, the choice of bacterial isolates did not depend on their relative abundance or habitat of origin. Instead, it was based on an arbitrary set of selection criteria described below. The original collection of 168 bacterial isolates was narrowed down to 37 isolates to test in an initial screen of pupation rate with the hypothesis that bacterial isolates that resulted in differences in larval growth kinetics would potentially induce phenotypic differences at the adult stage. The 37 test isolates were chosen on the basis of genetic dissimilarity to other isolates (<95% genetic similarity) and previously being associated with Aedes mosquitoes in the literature. To test the pupation rate of each of the 37 initial test isolates, individual colonies were inoculated into triplicate flasks of axenic larvae, as described above. The number of pupae was counted in each flask every day for 17 days. The list of 37 isolates was further narrowed down to 16 candidate isolates based on those that reached 60% pupation. Of the 16 candidate isolates, three isolates were chosen on the basis of differences in pupation rate (file S9) and differences in the cultivable bacterial composition found in adult midguts after 4 to 6 days in the insectary (file S10). On the basis of their full-length 16S sequence, two of the three isolates were assigned to Salmonella (Ssp_ivi) and Rhizobium (Rsp_ivi) genera. The third isolate (Esp_ivi) was assigned to the Enterobacteriaceae family, but classification at the genus level was inconsistent among databases (alternatively Salmonella, Escherichia, or Shigella). Whereas Escherichia was previously found in wild A. aegypti specimens, and Shigella and Rhizobium were found in wild A. albopictus specimens, Salmonella was not previously reported to be associated with Aedes mosquitoes (20,31). In all cases, colonies belonging to the bacterial genera that were added during the larval stage could not be recovered from the corresponding adult midguts. Even when the same bacterial genus was detected in adult midguts (file S10), the 16S sequence was distinct.
Adult life-history traits
After adult emergence from the different gnotobiotic treatments, 18 to 20 females were placed into triplicate 1-pint cardboard cups and maintained under standard insectary conditions (27 ± 1°C; relative humidity, 75 ± 5%; 12:12-hour light/dark cycle) on a sugar diet. The number of dead mosquitoes was recorded daily for 60 days, until >90% of mosquitoes had died. The wings of the individual females harvested in the second replicate of the vector competence experiment (see below) were kept for later analysis. Wing length was measured from the tip (excluding the fringe) to the distal end of the allula using an ocular micrometer and a dissecting microscope. When both wings were intact, the mean of the two wing lengths was used for the statistical analysis.
Lysozyme-like activity of hemolymph
Antibacterial activity of the hemolymph was measured by a bacterial growth inhibition zone assay. In this assay, mosquito hemolymph was spotted onto an agar plate containing M. luteus, and the antibacterial activity of the hemolymph was measured by the area of visible bacterial clearance around the hemolymph sample. Five to 7 days after adult emergence from gnotobiotic treatments, hemolymph was collected from females and placed on agar plates seeded with M. luteus. To make the agar plate, 10 ml of agar solution [2× agar (BD BactoAgar, Becton, Dickinson and Company), freeze-dried M. luteus (5 mg/ml; Sigma-Aldrich), streptomycin (0.1 mg/ml; Sigma-Aldrich), and 67 mM potassium phosphate buffer (pH 6.4)] was plated, and 3-mm holes were punched in the solidified agar. Twenty females (two replicates of 10 females each) from each treatment were cold-anesthetized and stored on ice until hemolymph was collected. To collect hemolymph, 2 µl of anticoagulant solution [60% Schneider's medium (Sigma-Aldrich), 10% fetal bovine serum (FBS), and 30% citrate buffer (pH 4.5) (98 mM NaOH, 186 mM NaCl, 1.7 mM EDTA, and 41 mM citric acid)] was injected into the thorax using a finely drawn glass capillary and a bulb dispenser (Microcaps, Drummond Scientific Co.). Ten microliters of the anticoagulant solution was then injected into the abdomen, and hemolymph was collected through capillary action by placing a capillary tube next to the injection site. The hemolymph was immediately placed on ice and then deposited in the cut-out holes on the agar plates. The plates were stored at 30°C for 24 hours, and the number of individuals with detectable M. luteus growth inhibition and the size of the M. luteus growth inhibition zone were recorded. The size of M. luteus growth inhibition was determined by using ImageJ (www.imagej.nih.gov/ij/) to calculate the diameter of the clear zone. The diameter of the clear zone for the hemolymph samples was converted to lysozyme-like activity using a standard curve generated by spotting 10-fold serial dilutions of lysozyme (200 mg/ml; Sigma-Aldrich) and measuring the diameter of the clear zone using ImageJ.
Vector competence
Following the gnotobiotic treatments, pupae were picked every day for 1 week, and adults were allowed to emerge under standard insectary conditions (27 ± 1°C; relative humidity, 75 ± 5%; 12:12-hour light/dark cycle). The adults were maintained in the insectary for 3 to 7 days after emergence on a standard sugar diet. Females were starved for 24 hours before the infectious blood meal. Vector competence assays were performed as previously described (68). Briefly, mosquitoes were experimentally exposed to a wild-type dengue virus serotype 1 isolate (KDH0026A) originally from Thailand (69). The isolate was passaged five times in A. albopictus C6/36 cells before its use in this study. The virus stock was diluted in cell culture medium (Leibovitz's L-15 medium + 10% heat-inactivated FBS + nonessential amino acids + 0.1% penicillin/streptomycin + 1% sodium bicarbonate) to reach a dose of 2.4 × 10⁵ FFU/ml in the first experiment and 7.15 × 10⁵ FFU/ml in the second experiment. One volume of virus suspension was mixed with two volumes of freshly drawn rabbit erythrocytes washed in distilled PBS and 60 µl of 0.5 M adenosine 5′-triphosphate. After gentle mixing, 2.5 ml of the infectious blood meal was placed in each of several Hemotek membrane feeders (Hemotek Ltd.) maintained at 37°C and covered with a piece of desalted porcine intestine as a membrane. After feeding, fully engorged females were sorted into 1-pint cardboard cups and maintained under controlled conditions (28 ± 1°C; relative humidity, 75 ± 5%; 12:12-hour light/dark cycle) in a climatic chamber for 14 days. After 4 days (experiment 1) and 14 days (experiments 1 and 2), detection of dengue virus RNA was performed with a two-step reverse transcription PCR assay. Heads and bodies were separated from each other, and bodies were homogenized individually in 400 µl of RAV1 RNA extraction buffer (Macherey-Nagel) during two rounds of 30 s at 5000 rpm (Precellys 24). Total RNA was extracted using the NucleoSpin 96 Virus Core Kit following the manufacturer's instructions (Macherey-Nagel). Total RNA was first reverse-transcribed to complementary DNA (cDNA) with random hexamers using M-MLV Reverse Transcriptase (Invitrogen). The cDNA was amplified by 45 cycles of PCR using a set of primers targeting the NS5 gene [5′-GGAAGGAGAAGGACTCCACA-3′ (forward) and 5′-ATCCTTGTATCCCATCCGGCT-3′ (reverse)]. Amplicons were visualized by electrophoresis on 2.5% agarose gels.
In both experiments 1 and 2, the heads from infected bodies were titrated by standard focus-forming assay in C6/36 cells, as previously described (68). Briefly, heads were homogenized individually in 300 µl of Leibovitz's L-15 medium supplemented with 2× Antibiotic-Antimycotic (Life Technologies). C6/36 cells were seeded into 96-well plates, and each well was inoculated with 40 µl of head homogenate and incubated for 1 hour at 28°C. Cells were overlaid with a 1:1 mix of carboxymethyl cellulose and Leibovitz's L-15 medium supplemented with 0.1% penicillin (10,000 U/ml)/streptomycin (10,000 µg/ml), 1× nonessential amino acids, 2× Antibiotic-Antimycotic (Life Technologies), and 10% FBS. After 3 days of incubation, cells were fixed with 3.7% formaldehyde, washed three times in PBS, and incubated with 0.5% Triton X-100 in PBS. Cells were incubated with a mouse anti-dengue virus complex monoclonal antibody (MAB8705, Merck Millipore), washed three times with PBS, and incubated with an Alexa Fluor 488-conjugated goat anti-mouse antibody (Life Technologies). FFU were counted under a fluorescence microscope.
Statistical analysis of mosquito phenotypes
All statistical analyses were performed in R v3.1.2 (www.r-project.org), unless otherwise noted. Analysis of pupation rate was based on a three-parameter logistic model, Cumulative_proportion = K/(1 + e^(−B(time − M))), describing the cumulative change in pupation rate over time for each condition, fitted by least-squares nonlinear regression with the minpack.lm R package (https://cran.r-project.org/web/packages/minpack.lm/minpack.lm.pdf). In this logistic model, K represents the saturation level of the pupation rate (that is, the final pupation rate), B is the growth rate (that is, the rate of change per unit time during the exponential phase), and M is the time at which the proportion of pupae equals 50% of the saturation level K. The extra sum-of-squares F test was used to compare single parameters between two curves representing the cumulative pupation rate over time for two conditions. The P value was derived from the F test based on the F distribution and the number of degrees of freedom.
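As an illustration of this model, the sketch below fits the three-parameter logistic to synthetic pupation data with SciPy. The paper performed the fit in R with minpack.lm, so this Python version is only an analogous sketch, and the day grid, noise level, and parameter values are hypothetical.

```python
import numpy as np
from scipy.optimize import curve_fit

# Three-parameter logistic from the text: K is the final pupation rate,
# B the growth rate in the exponential phase, and M the time at which the
# proportion of pupae reaches 50% of K.
def logistic3(t, K, B, M):
    return K / (1.0 + np.exp(-B * (t - M)))

# Hypothetical daily cumulative proportions over the 17-day window.
days = np.arange(1, 18)
rng = np.random.default_rng(0)
observed = logistic3(days, 0.85, 0.9, 7.0) + rng.normal(0, 0.02, days.size)

# Least-squares nonlinear regression, analogous to the minpack.lm fit in R.
p0 = [observed.max(), 0.5, float(np.median(days))]   # rough starting values
(K_hat, B_hat, M_hat), _ = curve_fit(logistic3, days, observed, p0=p0)
print(f"K = {K_hat:.3f}, B = {B_hat:.3f}, M = {M_hat:.2f} days")
```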
Survival data were analyzed using a time-to-event model and the Kaplan-Meier estimator in the survival R package (http://CRAN.R-project.org/package=survival). Continuous variables (wing length, CFU counts, and FFU counts in the head) were analyzed using a full-factorial linear regression model and type III ANOVA, followed by verification of the normal distribution of the residuals. Binary traits (CFU prevalence, lysozyme-like activity prevalence, and vector competence binary phenotypes) were analyzed using a full-factorial logistic regression model and analysis of deviance.
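As a minimal sketch of what the Kaplan-Meier estimator computes for such survival data, the following code implements the product-limit formula by hand on hypothetical 60-day follow-up data; the actual analysis used the R survival package, and the exponential lifetimes here are invented.

```python
import numpy as np

def kaplan_meier(durations, events):
    """Product-limit estimate of the survival function S(t).
    durations: day of death or of censoring for each mosquito
    events:    1 if death was observed, 0 if censored (alive at day 60)
    """
    durations = np.asarray(durations, float)
    events = np.asarray(events, int)
    surv, s = [], 1.0
    for t in np.unique(durations[events == 1]):        # distinct death times
        at_risk = np.sum(durations >= t)               # alive just before t
        deaths = np.sum((durations == t) & (events == 1))
        s *= 1.0 - deaths / at_risk                    # KM factor at time t
        surv.append((t, s))
    return surv

# Hypothetical cup of 20 females followed for 60 days.
rng = np.random.default_rng(1)
t = np.minimum(rng.exponential(35.0, 20).round(), 60.0)
e = (t < 60.0).astype(int)                 # day-60 survivors are censored
for day, s in kaplan_meier(t, e)[:5]:
    print(f"day {day:>4.0f}: S(t) = {s:.3f}")
```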
SUPPLEMENTARY MATERIALS
Supplementary material for this article is available at http://advances.sciencemag.org/cgi/content/full/3/8/e1700585/DC1
fig. S1. Bacterial communities are richer (but not more diverse) in larval breeding site water than in mosquito midguts.
fig. S2. Bacterial families differ between habitat and sample types.
fig. S3. Dominant OTUs differ between habitat and sample types.
fig. S4. No difference in the prevalence of midgut infection or systemic dissemination of dengue virus following different gnotobiotic treatments.
file S1. Test statistics of richness and Shannon diversity index between sample and habitat type.
file S2. Differentially abundant OTUs between sample types.
file S3. Test statistics for larval growth rate and time to 50% pupation.
file S4. Test statistics for the relationship between the amount of bacteria present in larval flasks and pupation rate.
file S5. Habitat description of larval breeding sites sampled.
file S6. Original oligonucleotide sequences used for 16S sequencing.
file S7. Final oligonucleotide sequences used for 16S sequencing.
file S8. Normalized OTU count table used for OTU-based analyses.
file S9. Pairwise comparisons of growth rates (that is, slope of the exponential growth phase) among the 16 candidate bacterial isolates.
file S10. Identity of cultivable bacteria present in midguts of adults exposed to different bacteria as larvae 4 to 6 days after emergence and maintained under standard insectary conditions as adults. | 2017-08-27T06:38:04.900Z | 2017-08-01T00:00:00.000 | {
"year": 2017,
"sha1": "70c86829daf549890f367d9ee12f1e79e81d29bb",
"oa_license": "CCBYNC",
"oa_url": "https://advances.sciencemag.org/content/advances/3/8/e1700585.full.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "63b1fc137d685476ae4c87295ca85088ce5e8e93",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
55039889 | pes2o/s2orc | v3-fos-license | The spatial sign covariance matrix and its application for robust correlation estimation
We summarize properties of the spatial sign covariance matrix and especially look at the relationship between its eigenvalues and those of the shape matrix of an elliptical distribution. The explicit relationship known in the bivariate case was used to construct the spatial sign correlation coefficient, which is a non-parametric and robust estimator for the correlation coefficient within the elliptical model. We consider a multivariate generalization, which we call the multivariate spatial sign correlation matrix.
Introduction
Let X_1, ..., X_n denote a sample of independent p-dimensional random variables from a distribution F, and let s: R^p → R^p with s(x) = x/|x| for x ≠ 0 and s(0) = 0 denote the spatial sign. Then S_n(t_n; X_1, ..., X_n) = (1/n) Σ_{i=1}^n s(X_i − t_n) s(X_i − t_n)^T denotes the empirical spatial sign covariance matrix (SSCM) with location t_n. The canonical choice for the location estimator t_n is the spatial median μ_n = argmin_{μ ∈ R^p} Σ_{i=1}^n |X_i − μ|. Besides its nice robustness properties, like an asymptotic breakdown point of 1/2, it has (under regularity conditions, see [12]) the advantageous feature that it centres the spatial signs, i.e., (1/n) Σ_{i=1}^n s(X_i − μ_n) = 0, so that S_n(μ_n; X_1, ..., X_n) is indeed the empirical covariance matrix of the spatial signs of the data. If t_n is (strongly) consistent for a location t ∈ R^p, it was shown in [5] that under mild conditions on F the empirical SSCM is a (strongly) consistent estimator of its population counterpart S(X) = E(s(X − t) s(X − t)^T).
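To make the definitions concrete, here is a small NumPy sketch of the empirical SSCM, with the spatial median computed by a Weiszfeld-type iteration (a standard algorithm for this minimization, not necessarily the one used in the references); the sample size, shape matrix, and tolerances are arbitrary choices for the example.

```python
import numpy as np

def spatial_sign(x):
    """s(x) = x/|x| for x != 0 and s(0) = 0, applied to each row."""
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    return np.divide(x, norms, out=np.zeros_like(x), where=norms > 0)

def spatial_median(X, n_iter=500, tol=1e-10):
    """Weiszfeld iteration for argmin_mu sum_i |X_i - mu|."""
    mu = X.mean(axis=0)
    for _ in range(n_iter):
        d = np.maximum(np.linalg.norm(X - mu, axis=1), tol)  # avoid /0
        w = 1.0 / d
        mu_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

def sscm(X):
    """Empirical SSCM S_n(mu_n; X_1, ..., X_n); its trace is 1."""
    S = spatial_sign(X - spatial_median(X))
    return S.T @ S / len(X)

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.2], [1.2, 1.0]], size=2000)
print(sscm(X))
```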
There are some nice results if F is within the class of continuous elliptical distributions, which means that F possesses a density of the form f(x) = det(V)^(−1/2) g((x − μ)^T V^(−1) (x − μ)) for a location μ ∈ R^p, a symmetric and positive definite shape matrix V ∈ R^(p×p), and a function g: R → R, which is often called the elliptical generator. Prominent members of the elliptical family are the multivariate normal distribution and elliptical t-distributions (e.g. [2], p. 208). If second moments exist, then μ is the expectation of X ∼ F, and V is a multiple of the covariance matrix. The shape matrix V is unique only up to a multiplicative constant. In the following, we consider the trace-normalized shape matrix V_0 = V/tr(V), which is convenient since S(X) also has trace 1. If F is elliptical, then S(X) and V share the same eigenvectors, and the respective eigenvalues have the same ordering. For this reason, the SSCM has been proposed for robust principal component analysis (e.g. [13,15]). In the present article, we study the eigenvalues of the SSCM.
Eigenvalues of the SSCM
Let λ_1 ≥ ... ≥ λ_p ≥ 0 denote the eigenvalues of V_0 and δ_1 ≥ ... ≥ δ_p ≥ 0 those of S(X). Explicit formulae that relate the δ_i to the λ_i are only known for p = 2 (see [19,3]), namely δ_i = √λ_i/(√λ_1 + √λ_2), i = 1, 2 (1). Assuming λ_2 > 0, we have δ_1/δ_2 = √(λ_1/λ_2) ≤ λ_1/λ_2 (2); thus the eigenvalues of the SSCM are closer together than those of the corresponding shape matrix. It is shown in [8] that this holds true for arbitrary p > 2, as long as λ_j > 0. There is no explicit map between the eigenvalues known for p > 2. Dürre et al. [8] give a representation of δ_i as a one-dimensional integral, which permits fast and accurate numerical evaluation for arbitrary p: δ_i = (λ_i/2) ∫_0^∞ (1 + λ_i x)^(−1) Π_{j=1}^p (1 + λ_j x)^(−1/2) dx (3). We use this formula (implemented in R [17] in the package sscor [9]) to get an impression of how the eigenvalues of S(X) compare to those of V_0. We first look at equidistantly spaced eigenvalues λ_i = 2i/(p(p + 1)), i = 1, ..., p, for different p = 3, 11, 101. The magnitude of the eigenvalues necessarily decreases as p increases, since Σ_{i=1}^p λ_i = Σ_{i=1}^p δ_i = 1 per definition of V_0 and S(X). As one can see in Figure 1, the eigenvalues of S(X) and V_0 approach each other for increasing p. In fact, the maximal absolute difference for p = 101 is roughly 2 · 10⁻⁴. In the second scenario, we take p − 1 equidistantly spaced eigenvalues and one eigenvalue 5 times larger than the rest. This models the case where the dependence is mainly driven by one principal component. As one can see in Figure 2, the distance between the two largest eigenvalues is smaller for S(X) than for V_0. This is not surprising in light of (2). Thus in general, the eigenvalues of the SSCM are less separated than those of V_0, which is one reason why the use of the SSCM for robust principal component analysis has been questioned (e.g. [1,14]). However, the differences appear to be generally small in higher dimensions.
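These numerical observations are easy to reproduce without the integral formula: under ellipticity, s(X − t) has the same distribution as the spatial sign of a Gaussian vector with the same shape, so δ_i = E(λ_i Z_i² / Σ_j λ_j Z_j²) for standard normal Z. The sketch below checks the first scenario by plain Monte Carlo; the number of draws and the seed are arbitrary.

```python
import numpy as np

def sscm_eigenvalues_mc(lam, n_draws=200_000, seed=0):
    """Monte Carlo approximation of delta_i = E[lam_i Z_i^2 / sum_j lam_j Z_j^2],
    valid under any elliptical generator (Z standard normal)."""
    lam = np.asarray(lam, float)
    Z = np.random.default_rng(seed).standard_normal((n_draws, lam.size))
    num = lam * Z**2
    return (num / num.sum(axis=1, keepdims=True)).mean(axis=0)

for p in (3, 11):
    lam = 2.0 * np.arange(1, p + 1) / (p * (p + 1))   # first scenario above
    delta = sscm_eigenvalues_mc(lam)
    # The gap between delta and lambda shrinks as p grows, as in Figure 1.
    print(f"p = {p:3d}: max |delta - lambda| = {np.abs(delta - lam).max():.4f}")
```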
Estimation of the correlation matrix
Equation (1) can be used to derive an estimator for the correlation coefficient based on the empirical SSCM: the spatial sign correlation coefficient ρ_n [6]. Under mild regularity assumptions, this estimator is consistent under elliptical distributions and asymptotically normal with a variance, given in equation (4), that depends on a = v_11/v_22, the ratio of the marginal scales, and ρ = v_12/√(v_11 v_22), the generalized correlation coefficient, which coincides with the usual moment correlation coefficient if second moments exist. Equation (4) indicates that the variance of ρ_n is minimal for a = 1, but can get arbitrarily large if a tends to infinity or 0. Therefore a two-step procedure has been proposed, the two-stage spatial sign correlation ρ_{σ,n}, which first normalizes the data by a robust scale estimator, e.g., the median absolute deviation (mad), and then computes the spatial sign correlation of the transformed data. Under mild conditions (see [7]), this two-step procedure yields an asymptotic variance, given in equation (5), which equals that of ρ_n in the favourable case a = 1. Since (5) only depends on the parameter ρ, the two-stage spatial sign correlation coefficient is very suitable for constructing robust and non-parametric confidence intervals for the correlation coefficient under ellipticity. It turns out that these intervals are quite accurate even for rather small sample sizes of n = 10, and in fact more accurate than those based on the sample moment correlation coefficient [7]. One can construct an estimator of the correlation matrix R by filling the off-diagonal positions of the matrix estimate with the bivariate spatial sign correlation coefficients of all pairs of variables. This was proposed in [6]. Equation (3) allows an alternative approach: First standardize the data by a robust scale estimator and compute the SSCM of the transformed data. Then apply a singular value decomposition S_n(t_n; X_1, ..., X_n) = Û Δ̂ Û^T, where Δ̂ contains the ordered eigenvalues δ̂_1 ≥ ... ≥ δ̂_p. One obtains estimates λ̂_1, ..., λ̂_p by inverting (3). Although theoretical results are yet to be established, we found in our simulations that the following fixed-point algorithm works reliably and converges fast. Let Λ̂ denote the diagonal matrix containing λ̂_1, ..., λ̂_p; then V̂ = Û Λ̂ Û^T is a suitable estimator for the shape of the standardized data, and R̂ with r̂_ij = v̂_ij/√(v̂_ii v̂_jj) is an estimator for the correlation matrix, which we call the multivariate spatial sign correlation matrix. Contrary to the pairwise approach, the multivariate spatial sign correlation matrix is positive semi-definite by construction. Theoretical properties of the new estimator are not straightforward to establish. By a small simulation study, we want to get an impression of its efficiency. We compare the variances of the moment correlation and of the pairwise as well as the multivariate spatial sign correlation under several elliptical distributions: normal, Laplace, and t distributions with 5 and 10 degrees of freedom. The latter three generate heavier tails than the normal distribution. The Laplace distribution is obtained by the elliptical generator g(x) = c_p exp(−√x/2), where c_p is the appropriate integration constant depending on p (e.g. [2], p. 209).
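The excerpt does not spell out the fixed-point algorithm, so the inversion sketched below (rescale λ by the ratio of the target eigenvalues to those implied by the integral representation (3), then renormalize to trace 1) is only one plausible scheme and not necessarily the authors'. The helper delta_from_lambda evaluates (3) by SciPy quadrature.

```python
import numpy as np
from scipy.integrate import quad

def delta_from_lambda(lam):
    """SSCM eigenvalues from shape eigenvalues via the integral in (3)."""
    lam = np.asarray(lam, float)
    def integrand(x, i):
        return 1.0 / ((1.0 + lam[i] * x) * np.sqrt(np.prod(1.0 + lam * x)))
    return np.array([0.5 * lam[i] * quad(integrand, 0.0, np.inf, args=(i,))[0]
                     for i in range(lam.size)])

def lambda_from_delta(delta, n_iter=100, tol=1e-10):
    """Plausible fixed-point inversion (an assumption, not the paper's code):
    rescale lambda by target/current deltas and renormalize to trace 1."""
    delta = np.asarray(delta, float)
    lam = delta.copy()                  # the deltas are a sensible start
    for _ in range(n_iter):
        lam_new = lam * delta / delta_from_lambda(lam)
        lam_new /= lam_new.sum()
        if np.max(np.abs(lam_new - lam)) < tol:
            break
        lam = lam_new
    return lam

lam_true = np.array([0.5, 0.3, 0.2])
print(lambda_from_delta(delta_from_lambda(lam_true)))  # recovers lam_true
```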
We take the identity matrix as the shape matrix and compare the variances of an off-diagonal element of the matrix estimates for different dimensions p = 2, 3, 5, 10, 50 and sample sizes n = 100, 1000. We use the R packages mvtnorm [10] and MNM [16] for the data generation. The results, based on 10000 runs, are summarized in Table 1.
Except for the moment correlation at the t_5 distribution, the results for n = 100 and n = 1000 are very similar. Note that the variance of the moment correlation decreases at the Laplace distribution as the dimension p increases, but not at the other distributions considered. The lower-dimensional marginals of the Laplace distribution are, contrary to those of the normal and the t-distributions, not Laplace distributed (see [11]), and the kurtosis of the one-dimensional marginals of the Laplace distribution in fact decreases as p increases.
Equation (5) yields an asymptotic variance of 2 for the pairwise spatial sign correlation matrix elements regardless of the specific elliptical generator, which can also be observed in the simulation results. The moment correlation is twice as efficient under normality, but has a higher variance at heavy-tailed distributions. For uncorrelated t_5-distributed random variables, the spatial sign correlation outperforms the moment correlation. Looking at the multivariate spatial sign correlation, we see a strong increase of efficiency for larger p. For p = 50, the variance is comparable to that of the moment correlation. Since the asymptotic variance of the SSCM does not depend on the elliptical generator, this is expected to also hold for the multivariate spatial sign correlation, and we find this confirmed by the simulations. The multivariate spatial sign correlation is more efficient than the moment correlation even under slightly heavier tails for moderately large p. Table 1: Simulated variances (multiplied by n) of one off-diagonal element of the correlation matrix estimate based on the moment correlation (cor), the pairwise spatial sign correlation (sscor pairwise), and the multivariate spatial sign correlation matrix (sscor multivariate) for spherical normal (N), t_5, t_10, and Laplace (L) distributions, several dimensions p, and sample sizes n = 100, 1000.
An increase of efficiency for larger p is not uncommon for robust scatter estimators. It can be observed amongst others for M-estimators, the Tyler shape matrix, the MCD, and S-estimators (e.g. [4,18]). All of these are affine equivariant estimators, requiring n > p. This is not necessary for the spatial sign correlation matrix. One may expect that the efficiency gain for large p is at the expense of robustness, in particular a larger maximum bias curve. Further research will be necessary to thoroughly explore the robustness properties and efficiency of the multivariate spatial sign correlation estimator. | 2016-06-07T19:37:30.000Z | 2016-06-07T00:00:00.000 | {
"year": 2017,
"sha1": "edf6b4a038da99f72fba08f300e2e804231e3249",
"oa_license": "CCBY",
"oa_url": "https://www.ajs.or.at/index.php/ajs/article/download/vol46-3-4-2/544",
"oa_status": "GOLD",
"pdf_src": "Arxiv",
"pdf_hash": "1e626a60cba465efa005c6058fcc6f4ac8a0d771",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
245583393 | pes2o/s2orc | v3-fos-license | Development of Online Exam Questions for Javanese Language Subjects with a Cultural Responsive Assessment Approach
This study aimed to develop web-based Javanese school exam questions using the East Javanese language and emphasizing cultural questions as a form of culturally responsive assessment. The study used the Borg & Gall development model, which included three steps: the conceptual model, the hypothetical model, and the empirical model. Good scores were obtained from the questionnaire instrument for practitioners. The results of the small group test showed that the mean score of SMKN 3 Malang students in the post-test was 12.11 points (19.86%) higher than in the pre-test. The field test of students resulted in a mean grade of B (good). The form and content of the questions developed in this study can be used as a guideline for other schools in the area, or for other environments that have a similar language and culture.
Introduction
Language skills have many uses in life; nearly every social activity requires them. People who lack good language skills face difficulties when interacting in the community.
In Java, language skill is regarded as a competency expected of every individual. Javanese is a broad language with speech levels: ngoko, used for low-level (informal) utterances, and krama, used for high-level (refined) utterances, which serve as a medium for expressing brotherhood, respect, and hierarchy among speakers [1].
Javanese language instruction in East Java is treated as homogeneous; the learning reference used is the Yogyakarta or Surakarta dialect, and students who learn Javanese are assumed to have standard Javanese as their first language. In fact, many students living in East Java mainly use Indonesian, and some students use the East Javanese dialect. Students who study Javanese in East Java can be differentiated in three respects: language, ethnicity, and Javanese learning needs [5].
Apart from these challenges, Javanese language learning also faces the Javanese Language School Examination, which is considered one of the prerequisites for student graduation as an implementation of Governor Regulation Number 19 of 2014. Not all students find Javanese lessons easy to follow; some students think that Javanese is a complex and challenging subject.
One approach that can be used in the Javanese Language Exam is the culturally responsive assessment approach. Luykx [2], in his book General Evaluation, describes two ways to reduce cultural bias in academic assessment: (1) conducting evaluations based on the cultural knowledge of particular groups of students, or (2) conducting neutral evaluations, in which the reliance of evaluation tools on cultural knowledge is minimized. The culturally responsive assessment approach in this study follows the first method, conducting the assessment based on the cultural knowledge of a particular group of students.
Beyond the content of the Javanese language exam, other approaches were also taken to keep pace with the times. Witherington [6] argues that technological progress gives rise to different assessment practices. Earlier forms of evaluation that used pen and paper have now switched to computers. One advantage of a computer scoring system is that student work can be processed faster.
From the description above, two considerations guide the development of the Javanese Language School Examination at SMKN 3 Malang: 1) the questions must be adjusted to the cultural characteristics of the students, and 2) the Javanese school exam questions take the form of a website-based assessment. This paper is structured as follows. Section 2 explains the theory that underlies this research. Section 3 describes the methodology used in this study. Section 4 describes the results of the research and development. Finally, Section 5 concludes the study with conclusions and suggestions for further research.
Related Works/Literature Review
To avoid plagiarism and to clarify how this study differs from others, the researcher presents several existing studies with a research focus similar to this research, namely: (1) Application of CBT (Computer-Based Test) in Network Service Technology Learning at SMK Negeri 1 Tuban. This study discusses the application of computer-based assessment using the Beesmart application. Its purpose was to determine student learning outcomes after using CBT.
In that application, the questions are packaged more interactively, and paper costs are reduced. Based on a posttest given to 35 students of SMK Negeri 1 Tuban, student learning outcomes were good because most students received posttest scores > 80. Student responses to the application of CBT were categorized as good, with a percentage score of 76.17%. With good student learning outcomes and student responses to CBT, it can be concluded that CBT is more effective in helping test implementation. (2) Designing a Web-Based Online Exam Application System at SMA Negeri 1 Kalirejo. This research was conducted at SMA Negeri 1 Kalirejo, where the authors created a web-based online exam system. The system was developed using a prototyping method, in which a partial version of the product expresses the logical and physical design of the external interfaces it displays. The online exam system offers the advantage that exam papers no longer need to be procured, and it saves time on exam correction, so efficiency and effectiveness are the goals of the online exam system. (3) When western epistemology and indigenous worldviews converge: A responsive assessment of culture in practice. This study examines standards-based and multicultural research perspectives and looks at how culturally responsive assessment can bridge diversity in society.
From the three studies, it can be seen that two focus only on creating CBT questions without attending to content, while one focuses on how culture-sensitive assessment can bridge diversity in society. Based on the three studies above, the researcher tries to accommodate all three in one study. The hope is
Material & Methodology
The subjects involved in this study were students of class XII at SMKN 3 Malang.
The research method used is the development research method, following the model developed by Borg & Gall [7], with research procedures including (1) a preliminary survey; (2) planning the model to be developed; (3) conducting product trials; (4) product development; (5) validation testing; and (6) socialization and implementation of the research results (products). Based on these steps, the research procedure is divided into three main stages. In the first stage, a needs assessment was carried out with the following objectives.
1. Identifying the implementation of the learning process carried out at SMKN 3 Malang.
2. Identifying the infrastructure (media) used in the teaching and learning process and in Javanese language assessment at SMKN 3 Malang. 3. Identifying the assessment model used in the Javanese language learning process. 4. Identifying the impact of implementing the assessment model used.
The second stage is the development stage of the research; the steps are as follows.
1. Develop a Javanese language assessment website for SMKN 3 Malang in collaboration with Masterweb developers.
2. Develop a manual for the operating system of the Java language assessment website.
3. Compiling the content of the Javanese language assessment website with questions that use the East Javanese language and the arek culture.
Validation of the three products mentioned above involved experts, such as R&D experts, learning experts, and design experts, so that the products have content validity that can be accounted for. The third stage is conducting trials (experiments) on (1) the use of the Javanese language assessment website and (2) question content that contains Javanese culture and language. The data source in this study is primary data (obtained from observations and interviews with Javanese language teachers and class XII students of SMKN 3 Malang), while the data collection techniques used are observation, tests, and interviews.
To obtain data to reinforce the third stage of this study, the researcher used a purposive sampling method when running the test. At first, the researcher tried out the website using questions without adjusting the language and content. The researcher then eliminated students who had filled out the test and selected 26 students (10% of the total number of students who took the test) to undergo the test at a later stage. Elimination was based on a specific purpose. Student characteristics that were considered include: 1. Students who are immigrants in the city of Malang.
2. Students who are native Malang citizens (born and raised in Malang).
3. Students who had not received Javanese lessons in elementary school (SD) and junior high school (SMP).
Examination website views
The following is a web-based online exam view that has been designed at SMK Negeri 3 Malang:
Fill in the exam question
The resulting questions are computer-based school exam questions whose source material is local knowledge of the Malang region, and the language used in the questions is the East Javanese dialect.
Prototype Test Results
A prototype test was carried out, namely an assessment by level I users consisting of 236 students, all of them class XII students at SMKN 3 Malang. From the test results, several questions were often missed. Frequently missed questions are those for which the number of correct answers is less than 50%, that is, less than half the number of students who answered the question.
Table 1: Web-based online exam views
1. The application starts by displaying the login page before users (students, teachers, admin) access the dashboard with their different access rights. After the user logs in by entering a username and password, they can enter the dashboard menu.
2. After the admin user enters the dashboard page, a menu is selected according to access rights. The teacher dashboard page has access to question data: teachers can create questions and exams, determine the exams to be carried out, and view exam results.
3. After the teacher enters the dashboard, the teacher can create questions by selecting the question data menu and choosing "add questions". The view shows the fields that must be filled in by the teacher when creating questions, namely subject, teacher's name, question image (if any), question weight, question, and answers. Each teacher has the right to create, edit, and delete the questions he or she has made.
4. After the teacher makes the exam questions, a list of the created question data is shown. Question data are displayed based on basic competencies.
5. The student dashboard shows only two menus: the dashboard, a home page displaying the student's name, username, and the subject being taken, and the exam menu, through which each student takes the exams held by the subject teacher.
6. After students log in and start the exam created by the subject teacher, they work on randomly ordered multiple-choice questions within the time allotted by the teacher when creating the exam.
7. After all students have taken the exam, the teacher can see the results of previously held exams in the exam results menu (Figure 8 shows the display of the Javanese test results). Teachers can also print exam results by clicking "print" at the top right of the exam results menu.
For the items that were often missed, the data are divided into four parts, namely question number, question description, correct answer, and the number of students who answered correctly. The recoverable entries are as follows.
One item presents a wayang (puppet) story text, and the learner is asked to determine the moral message (mandate) within the text presented. The chosen story tells of Karna's death, because the Mahabharata series broadcast on ANTV was popular when the class XII students were in class X, and that episode was among the most talked about.
14. Presented with a listening question, learners identify the setting of a conversation. From the audio-visual excerpt presented, the learner identifies the place where the conversation takes place. The conversations use varieties of East Javanese that are commonly heard.
16. Presented with a listening question, learners identify the type of sekar macapat. From the audio of one of the macapat tunes, the learners determine the type of sekar macapat mentioned; the type heard is a sekar macapat often taught at previous levels of education.
39. Given a type of performing art, learners choose among a limited set of images the one reflecting the distinctive characteristics of that performing art.
Another item, with the stem "The following words are true, except ...", was answered correctly by 92 of 236 students: presented with a drama text, students must detect errors in the use of linguistic rules in the text.
The distribution of students' answer choices indicates the exact proportion selecting each option in every question. This makes it possible to see the extent to which the familiar language and culture used make it easier for students to work on the questions.
Revised Version Test Results
The pretest and posttest results were evaluated for the 26 students who were the object of the revised test questions. The 26 students were selected based on the criteria previously described in the research method; the researcher took the 26 students as 10% of the total students who had taken the prototype test. The questions contained in the questionnaire were worded positively or negatively. A positive question, for example: "I often visit body shop outlets." A negative question: "The repair shop can't recommend a good product for my body care needs." This is done so that respondents answer more carefully and do not merely answer in a uniform pattern [8].
Words with negative connotations were identified and avoided, in line with what we find in the book 1-2-3 Magic: Effective Discipline for Children 2-12, which states that thinking, saying, or acting negatively causes the brain to change and to convert negative information into a type of adrenaline hormone. In tiny amounts, the hormone adrenaline is needed, but in large quantities it can hurt the body. By contrast, when someone thinks, says, or acts well, the hypothalamus gives orders to produce the hormone noradrenaline, a hormone that produces calm and inner comfort, which is a good indicator of accuracy [9].
The improvement from pre-test to post-test reflects the success of the exam questions as an assessment tool for students whose cultural backgrounds differ from the content of standard questions. Assessment is often carried out in a language different from the language commonly used by test takers, under a multicultural language approach. Ideally, an evaluation should be mono-language and multicultural; however, this is difficult to achieve for regional languages, given the diversity of cultures and limited literature. It should also be noted that most ethnic groups try to use the same language but still come from different groups. Therefore, assessment concerning culture should accommodate the uniqueness of these ethnic groups by creating a monolingual assessment that adapts to multiculturalism even though only one language is in circulation, as described by Tanzer [3].
Student Reaction
After carrying out the user trial phase, the researcher also administered a questionnaire to see how students responded to the research that had been done.
The researcher wanted to see how students react to a new habit, namely working on questions that use the language and culture they encounter every day. The student experience is essential for providing a complete picture of the assessment model applied. As a result, the researcher obtained the outcome expected in this study.
Some students responded positively to the researcher's efforts; the students were quite attentive in filling out the questionnaires that were distributed.
They also gave some constructive, positive suggestions. Based on the figure, for the indicator of whether the exam questions attracted students, 25% of students gave excellent ratings, 64% good ratings, and 11% adequate ratings. It can be argued that the computer-based school exam questions attract students' attention. The frequency distribution of the assessment can be seen in Figure 2.
The figure shows that students' assessment of the easy-to-understand indicator is 19% excellent, 66% good, 11% adequate, 2% poor, and 2% very poor. From this information, it can be concluded that, among the opinions of the 26 respondents, the highest frequency is in the "good" category, which means that the computer-based school exam questions are easy to understand.
[Figure: distribution of student ratings for the indicator "Questions related to student life" (Excellent / Good / Enough / Less).]
The choice of questions is based on the social life of students of SMK Negeri 3 Malang. One of the questions raising this matter involves drama material, where the researcher deliberately used dialogues that are common at SMKN 3 Malang. The figure shows that students' scores on the question selection indicator are 21% excellent, 45% good, 32% adequate, and 2% poor.
Each question is designed to have a different difficulty level, from C1 to C6. This is in line with the cognitive domain of Anderson's revised Bloom's taxonomy. Based on the figure, students' assessment of the indicator that questions have various levels of difficulty is as follows: 28% of students stated that the setting of the difficulty levels was very good, 59% said that it was good, 11% said that it was lacking, and 2% said that it was very lacking.
Discussion
The research findings show that students perceive language and cultural adjustments in questions as applicable in an assessment process. We expected this before we conducted this research.
Students' attitudes towards assessment models that accommodate diverse cultures varied; some students stated that the adjustment of cultural content was exciting and fun. Several students gave positive responses, as shown in the student reaction section above.
Working on questions that accommodate the culture around students is a new experience for many of them, and a central aspect of this assessment is to introduce students to a new idea: that their culture is valuable. Most students reported that they felt the questions were like talking to themselves, but some expressed discomfort. After the researcher investigated, it turned out that although these students were born and raised in Malang, their parents were immigrants from the Yogyakarta area, whose language certainly differs from the question content. One student reported that "the language of the questions was a little rough, and should be refined a little bit." The research data also show that students feel the questions are very close to what they do daily. The researchers then recalled Beaulieu's [4] opinion that 66% of all programs designed by teachers have nothing to do with the culture around students, and that programs which do relate to culture are treated exclusively as a different approach to teaching. Most school programs have nothing to do with culture.
The question then becomes how teachers can increase student attendance and participation in activities held in the school environment if students themselves sometimes think that school is a foreign place to them. As a critical reflection, the research findings provide a positive picture of how implementing culturally responsive pedagogy has a positive impact on students.
The questions were deliberately lifted from students' daily lives in order to reach these valuable findings. This is the same as what Kacerja [11] found in learning mathematics using a culturally responsive pedagogy approach. Here is what he conveys: "I realize that using real-life situations can be a way to prevent students from perceiving that maths lessons are boring and far from the problems they will encounter every day. There are several ways to take this approach: creating a large project with students that will be worked on throughout the school year, or perhaps a more straightforward way is to develop a topic that is in the textbook, reinforced by a few additional issues. Inviting students to engage and formulate learning that is close to their real-life situations can present an exciting way for students, without the need to fear resistance from students because students are not familiar with the topic." Although there are a few negative comments about the research results, as seen in the student reactions section, it should be noted that the students who made them were also among those who got the best scores in the post-test. This can be understood to mean that they unconsciously support the theory that language and cultural adjustments in questions make the questions easier for them; they simply need help to see the value of what they are doing on their own. However, the research conducted here is not without significant challenges.
One of the main challenges of applying an assessment that uses the surrounding language and culture is the limited literature on the language and culture that developed in the Malang region. Fortunately, the researchers received valuable assistance from artists and cultural observers in Malang, whose role was to enrich the material to be included in the questions to be tested. We did this so that what we put into the questions would not conflict with what students encounter in their lives. This builds on the opinion of Pelkowski [12], which states that teachers need to understand the cultural frame of reference of each student, including but not limited to their ways of communicating, understanding, and learning; this is useful for planning how they will differentiate content, process, or assessment products.
The experience of using exam questions containing culture-responsive content is also intended to provide students with an authentic assessment experience. Moreover, this assessment was carried out in Javanese language lessons, which are positioned as regional local content that can hopefully teach and provide valuable experiences for students about what they need to know and master in relation to the culture that develops in their environment. The research also shows that students need learning and assessment in school to be adjusted to what they encounter and face in their daily lives when they become part of the community. It should also be noted that the computer-based school exam designed by the researchers can only be used on a local server (only accessible at SMKN 3 Malang), although the "raw" question data can be used elsewhere.
Conclusion
This research provides valuable insights into students' perceptions and the impact of the research and development undertaken. These findings are very valuable for research and development, since it can be seen that the research carried out benefits its intended target. The findings support the conclusion of [10] that when a teacher uses information on students' culture, life experiences, interests, and preferred learning strategies, the teacher makes learning relevant to the students. Teachers can also help students move from what they know to what they need in order to obtain something meaningful for themselves. The data analysis also shows that the use of East Javanese culture and language can support authentic student-centered pedagogy. Further research is needed to explain how best to articulate the benefits of using East Javanese culture and language for developing students' professional skills in this developing cultural area.
It is interesting to note that students appreciated the innovation made by the researchers, namely bringing what is in their environment into the exam questions. Although the generalizability of this study is limited, the findings identify several benefits of using East Javanese culture and language to initiate a culturally responsive school climate. Further research will aim to explore ways to maximize the use of East Javanese culture and language in parts of the school curriculum and in exam questions.
There is also the potential to use the model applied in this study to support learning in disciplines outside of language education. It can become a new tool for helping educators exploit the full potential of the culture around students as a culturally responsive learning practice. | 2021-12-31T16:13:20.821Z | 2021-12-29T00:00:00.000 | {
"year": 2021,
"sha1": "daeb4f12bd86df568482dba63364980e13c0b6b0",
"oa_license": null,
"oa_url": "https://knepublishing.com/index.php/KnE-Social/article/download/9995/16409",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "4dae6cab69053531b559a4daf4e7a3f3a00b619b",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
} |
110033494 | pes2o/s2orc | v3-fos-license | Selection of Microstrip Patch Antenna Substrate for WLAN Application Using Multiple Attribute Decision Making Approach
This paper presents a material selection approach for selecting the microstrip patch antenna substrate for WLAN applications using a multiple attribute decision-making (MADM) approach. Different microwave dielectric materials for the substrate and their properties, such as relative permittivity, quality factor, and temperature coefficient of the resonant frequency, are taken into consideration, and the MADM approach is applied to select the best material for the microstrip patch antenna. It is observed that Pb0.6Ca0.4ZrO3 is the best material for the antenna substrate in an MPA for WLAN applications. The proposed result is in accordance with the experimental findings, thus justifying the validity of the proposed study.
Introduction
Due to the increasing variety of mobile communication equipment, such as laptops, personal digital assistants, smartphones, and wireless modems, the demand for adaptable and functional miniature antennas is growing very fast [1]. A wireless local area network (WLAN) links two or more devices using some wireless distribution method (typically spread-spectrum or OFDM radio), usually providing a connection through an access point to the wider internet. This gives users the mobility to move around within a local coverage area and still be connected to the network [2,3]. To achieve this, microstrip patch antennas (MPAs) play an important role and are good candidates for wide-band applications.
Material selection for MPAs in general, and of dielectric materials for the substrate in particular, is complicated, as a number of materials have been proposed; however, each of these materials has certain merits and limitations.
The key performance indices for an MPA substrate are relative permittivity, quality factor, and temperature coefficient of the resonant frequency. This paper discusses a strategy for selecting a suitable substrate material based on the multiple attribute decision making (MADM) approach [4] in order to improve antenna performance. The paper is organized as follows. Section 2 explains the microstrip patch antenna design. Section 3 explains the material selection criteria; this section describes the decision-making approach and details the technique for order preference by similarity to ideal solution (TOPSIS). Section 4 covers the application of the TOPSIS method to substrate material selection in MPAs. Section 5 presents the conclusions drawn from this study.
Microstrip Patch Antenna Design
A schematic view of the microstrip patch antenna (MPA) is shown in Figure 1. The patch and the ground plane are separated by a dielectric substrate. For WLAN applications, the size of the antenna should be small. To reduce the size of the antenna, substrates with high dielectric constants must be used. However, this may increase the cost of the material, and the frequency shift due to material tolerance may narrow the operating bandwidth. So the design of the antenna patch also plays an important role.
Dielectric substrate materials used in patch antennas include ceramic, semiconductor, ferromagnetic, synthetic, composite, and foam materials. The properties of substrate materials commonly studied while designing patch antennas are the relative permittivity (εr), the quality factor (Q-factor), and the temperature coefficient of the resonant frequency (τf). Certain other properties, such as sintering temperature, thermal properties, and chemical compatibility, are also considered when dealing with fabrication aspects.
For antennas in WLAN routers, the desirable properties are low permittivity, a high quality factor, a low temperature coefficient of the resonant frequency, and low cost. Low permittivity allows a larger bandwidth, while a high quality factor provides better selectivity and is also responsible for strong resonance and high frequency stability. Variation of the resonant frequency with temperature is undesirable, so the absolute value of this parameter should be as low as possible. For mass production of such antennas, the cost should be as low as possible.
Multiple Attribute Decision Making Approach
MCDM (multiple criteria decision making) methods can be used to solve uncertainty problems. MCDM can be broadly divided into (i) multiobjective decision making (MODM); (ii) multiattribute decision making (MADM).
MODM is optimization of an alternative or alternatives on the bases of prioritized objectives.
MADM is selection of an alternative from a set of alternatives based on prioritized attributes of the alternatives.
MODM studies decision problems in which the decision space is continuous and design alternatives are defined implicitly by a mathematical programming structure (a typical example is mathematical programming problems with multiple objective functions).
On the other hand, MADM concentrates on problems with discrete decision spaces, where alternatives are defined explicitly by a finite list of attributes. In these problems, the set of decision alternatives has been predetermined. Moreover, an attribute with a direction may be an objective. Different types of methods have been developed based on the MADM approach. For the present work, we adopted the TOPSIS method to find the best alternative [4]. TOPSIS stands for technique for order preference by similarity to ideal solution. It is based on the idea that the chosen alternative should be nearest to the ideal solution and farthest from the negative-ideal solution in some geometrical sense.
The TOPSIS method takes the following steps.
Step 1 (construct the normalized decision matrix). The value of each criterion is normalized to 1. The characteristics Q×f (GHz), τf (ppm/°C), and base material cost ($/g) are considered. For a criterion whose benefit lies in higher values, each value is divided by the highest value. For a criterion whose benefit lies in lower values, each value is divided by the highest value, giving the highest value a value of 1, and these values are then subtracted from 1. In this way we obtain the normalized decision matrix R.
Step 2 (construct the weighted normalized decision matrix). With the set of weights W = (w1, w2, w3, ..., wn), the weighted normalized matrix V = [vij] can be generated as vij = wj rij, i = 1, ..., m, j = 1, ..., n, where m is the number of alternatives and n is the number of criteria.
Step 3 (determine the ideal and negative-ideal solutions). The ideal solution A* collects the best value vj* of each criterion over all alternatives, and the negative-ideal solution A− collects the worst value vj− of each criterion.
Step 4 (calculate the separation measures). The separation distances of each alternative from the ideal solution and from the negative-ideal solution are obtained by the n-dimensional Euclidean distance method. The distance from the ideal solution is Si* = sqrt(Σj (vij − vj*)²), and the distance from the negative-ideal solution is Si− = sqrt(Σj (vij − vj−)²). Here, vj* and vj− are the values of the best and the worst alternative in a particular criterion j.
Step 5 (calculate the relative closeness to the ideal solution). The relative closeness of an alternative with respect to the ideal solution A* is represented by Ci* = Si−/(Si* + Si−), where 0 < Ci* < 1 and i = 1, 2, 3, ..., m.
An alternative is closer to the ideal solution as C*_i approaches 1.
Step 6 (rank the preference order). A preference order can now be ranked according to descending C*_i. Therefore, the best alternative is the one nearest to the ideal solution and farthest from the negative-ideal solution.
The labels proposed for the weighting are the following: W = {Essential, Very High, Fairly High, High, Moderate, Low, Fairly Low, Very Low, Unnecessary}.
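Since the six steps above are purely mechanical, they translate directly into a short script. The following is a minimal sketch in Python, assuming NumPy; the function name topsis, the example decision matrix, and the weights are illustrative placeholders rather than the actual values of Table 1, and the normalization follows the max-based scheme of Step 1 (not the more common vector normalization).

```python
import numpy as np

def topsis(X, weights, benefit):
    """Rank alternatives with TOPSIS.

    X       : (m, n) decision matrix, m alternatives x n criteria.
    weights : length-n criterion weights.
    benefit : length-n booleans, True where a higher value is better.
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(weights, dtype=float)
    cost = ~np.asarray(benefit)

    # Step 1: divide by the column maximum; flip cost criteria so 1 is best.
    R = X / X.max(axis=0)
    R[:, cost] = 1.0 - R[:, cost]

    # Step 2: weighted normalized matrix v_ij = w_j * r_ij.
    V = R * w

    # Step 3: ideal and negative-ideal solutions per criterion.
    v_best, v_worst = V.max(axis=0), V.min(axis=0)

    # Step 4: Euclidean separation measures S* and S-.
    s_best = np.sqrt(((V - v_best) ** 2).sum(axis=1))
    s_worst = np.sqrt(((V - v_worst) ** 2).sum(axis=1))

    # Step 5: relative closeness C* in (0, 1); larger is better.
    closeness = s_worst / (s_best + s_worst)

    # Step 6: preference order, best alternative first.
    return closeness, np.argsort(-closeness)

# Hypothetical data: columns Q*f (GHz), tau_f (ppm/C), cost ($/g).
X = [[30000.0, 1.0, 0.5],
     [48000.0, 10.0, 0.2],
     [12000.0, 0.5, 0.1]]
closeness, order = topsis(X, weights=[0.5, 0.3, 0.2],
                          benefit=[True, False, False])
print(closeness, order)
```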
Results and Discussion
Various properties of different candidate materials for a patch antenna substrate suitable for WLAN applications are tabulated and shown in Table 1. Following the steps above, the normalized decision matrix was constructed, the highest weight was assigned to the most essential parameter and the lowest weight to the least important one, and the weighted normalized matrix, the ideal and negative-ideal solutions, and the separation measures were then computed (the intermediate matrices are omitted here). The resulting ideal solutions are given in Table 2. Ranks were assigned based on the C*_i values, with the material of highest value given the best rank. From these ranks we can say that, out of the selected materials, Pb0.6Ca0.4ZrO3 was found to be the best material for the substrate of a microstrip patch antenna for WLAN applications, followed by the next-ranked candidates listed in Table 2. The outcome of this study is compared with the patent filed by Killen and his coinventors [14], and it is observed that using this material the best performance of the antenna with miniaturized size can be obtained, which is required for WLAN applications. This validates our study of dielectric ceramic substrate materials for microstrip patch antennas.
Conclusion
This paper presented the material selection for the microstrip patch antenna substrate for WLAN applications using a multiple attribute decision making (MADM) approach, namely the technique for order preference by similarity to ideal solution (TOPSIS). It was observed that Pb0.6Ca0.4ZrO3 is the best material for the antenna substrate for WLAN applications.
Onychomycosis: Old and New
Onychomycosis is a common chronic fungal infection of the nail that causes discoloration and/or thickening of the nail plate. Oral agents are generally preferred, except in the case of mild toenail infection limited to the distal nail plate. Terbinafine and itraconazole are the only approved oral therapies, and fluconazole is commonly utilized off-label. Cure rates with these therapies are limited, and resistance to terbinafine is starting to develop worldwide. In this review, we aim to review current oral treatment options for onychomycosis, as well as novel oral drugs that may have promising results in the treatment of onychomycosis.
Introduction
Onychomycosis is a chronic fungal infection of the nail that results in discoloration, onycholysis, and nail plate thickening. The infection most commonly occurs in the toenails and can involve any component of the nail unit, including the nail bed, nail matrix, and nail plate. Onychomycosis affects patients of all ages. However, several studies have established higher prevalence with older age [1][2][3][4]. Other risk factors include diabetes, tinea pedis, poor circulation, immunosuppression, psoriasis, Down syndrome, occlusive footwear, and obesity [5][6][7][8][9].
The worldwide prevalence of onychomycosis is estimated at 10% and accounts for up to 50% of nail diseases [10,11]. Dermatophytes are a common culprit of onychomycosis, with the species Trichophyton rubrum and Trichophyton mentagrophytes responsible for 60-70% of infections [12]. Yeasts are responsible for approximately 20% of onychomycosis, and non-dermatophytes account for the remaining 10% [13,14]. Studies have demonstrated that mixed infections, non-dermatophytes, and yeasts are more prevalent than previously thought, especially in warmer climates.
Onychomycosis is challenging to treat and is associated with high recurrence rates and treatment failure. Given the limited cure rates with topical antifungals, oral antifungals may be needed in most cases. Oral treatments require a lengthy duration of treatment, which poses a risk of adverse effects and drug interactions. The relapse rate can be as high as 25%, and recurrence rates can vary from 6.5% to 53% [15]. Terbinafine and itraconazole are the only approved oral therapies, but fluconazole is commonly utilized off-label. However, complete cure rates (both mycologic clearance and visually clear nails) are limited, and they can range from 35-55% for terbinafine [16][17][18], 14-43% for itraconazole [16][17][18][19][20], and 21-48% for fluconazole [21,22]. Recently, there have been concerning reports of terbinafine resistance in superficial mycoses in India and Europe, and novel agents may play an important role in achieving a cure [23][24][25]. In this review, we aim to describe current treatment options, as well as novel drugs, that may have promising outcomes in onychomycosis.
Methods
The literature was identified by performing Medline, SCOPUS, Cochrane library, and Google Scholar database searches.
Clinical Presentation
Patients typically present with white-yellow nail discoloration, hyperkeratosis, onycholysis, and subungual debris. The toenails are the most frequently affected, predominantly the great toenail, with rare involvement of the fingernails [26]. Concomitant tinea pedis or plantar hyperhidrosis is often found [27,28].
Efforts have been made to establish a clinical classification of onychomycosis. Since the first classification by Zaias [29], additional updates have been proposed [30][31][32]. The clinical classification by Hay and Baran is commonly used among clinicians due to its description of a wide range of subtypes of nail invasion (Table 1) [30].
Table 1. Onychomycosis subtypes and their associated clinical features and etiologic agents.
Subtypes of Onychomycosis
Distal lateral subungual onychomycosis (DLSO) is the most frequent subtype of onychomycosis and occurs when fungal invasion originates from the distal or lateral undersurface of the nail plate. Clinical features include hyperkeratosis, onycholysis, and white-yellow nail discoloration [26]. However, other color changes, such as black, orange, and brown, have been reported. Additionally, dermatophytomas, which present as white, yellow, orange, or brown longitudinal streaks or patches, may also be seen as complications of dermatophyte infection [33,34].
Superficial onychomycosis (SO) is due to the fungal invasion of the upper surface of the nail plate and typically affects toenails. Clinically, it is characterized either by superficial white or black patches or transverse striae that can be scraped off. The most common etiologic agents are T. rubrum [35] and T. mentagrophytes [36]. It is often associated with interdigital tinea pedis.
Proximal subungual onychomycosis (PSO) is the result of the invasion of the inner nail plate from fungi penetrating from the proximal nail fold, predominantly by T. rubrum. Other implicated organisms include Candida [37], Aspergillus [38], and Fusarium [39]. Clinically, it presents as a white patch of the proximal nail plate, starting from the proximal nail fold or multiple transverse white bands. The superficial nail plate is normal. When paronychia is associated, Candida species or molds are commonly implicated [40]. This subtype of onychomycosis is also common among immunosuppressed individuals.
Endonyx onychomycosis (EO) is the consequence of direct invasion of the distal nail plate. It presents as a lamellar nail splitting with milky patches. The main etiologic agents are T. soudanense and T. violaceum [30,31].
Mixed pattern onychomycosis (MPO) occurs when more than one pattern of nail plate fungal infection is found within the same nail. The most common combination is DLSO with SO, or PSO with SO. A wide range of etiologic agents may be implicated (Table 1) [30].
Total dystrophic onychomycosis (TDO) is the end stage of chronic onychomycosis, mainly DLSO or PSO. Clinically, it is characterized by nail plate crumbling and thickening of the nail bed with debris [30]. The most common agents are dermatophytes (T. rubrum), as well as some molds [30].
Secondary Onychomycosis
This is a clinical scenario when fungal invasion occurs secondary to a non-infectious nail condition, such as trauma, psoriasis, lichen planus, etc. Clinical features of the underlying nail condition are associated with hyperkeratosis, nail discoloration, onycholysis, patches, and striae. This subtype is a common presentation, and diagnosis may be challenging.
Diagnosis
Clinical signs of nail thickening, yellow, white, or brown nail discoloration, onycholysis, subungual debris, and multiple nail involvement, should prompt clinicians to assess for onychomycosis. Dermoscopy findings in onychomycosis include jagged proximal edges with spikes, longitudinal striae, a ruined appearance, and longitudinal ridges along the nail bed [41]. Dermoscopy can also be used to assess for trauma or other nail disorders, such as nail psoriasis, subungual warts, and lichen planus.
Diagnosis by clinical features is not always reliable, and laboratory isolation of fungi continues to be the diagnostic gold standard. A sensitive test for onychomycosis includes nail clippings sent in formalin for histopathology and periodic acid-Schiff (PAS) stains and other fungal stains. Alternatively, potassium hydroxide (KOH) preparation of nail scrapings can also be performed to visualize hyphae. While both the PAS stain and KOH preparation are sensitive tests, neither method can identify specific pathogens. Fungal culture can be used for identification of pathogens; however, its sensitivity is low, and there is a risk of false-positive results due to contamination.
Polymerase chain reaction (PCR) is an excellent diagnostic method that has high sensitivity, but false positives are common. Given that the number of mixed onychomycosis and non-dermatophyte infections are becoming more prevalent, molecular diagnosis can aid in the selection of appropriate antifungal treatment.
The presentation of onychomycosis can also help determine areas of the nails needed for the sample collection. Nail clippings should be at least 4 mm to maximize diagnostic accuracy [42]. For DLSO, samples should be taken from the nail bed in the proximal area of involvement where the concentration of hyphae will be the highest [43]. In white superficial onychomycosis (WSO), a specimen should be collected by scraping the superficial aspect of the nail with a No. 15 scalpel [8]. In PSO, the proximal nail can be obtained with a 3 mm punch of the proximal nail plate [43,44].
Defining a Cure
Onychomycosis studies often report the mycologic cure, clinical cure, and complete cure rates. A mycologic cure is achieved when both the culture and direct microscopy are negative after medical treatment. A clinical cure is defined as a normal appearance of the affected nail. A complete cure is defined as both negative mycology and absence of clinical signs in the nail. The goal of treatment is complete cure; however, patients often have nail abnormalities before the development of the fungal infections and will not achieve fully normal nails after treatment. In this review, we will focus primarily on complete cure rates.
Treatment Approach
When approaching onychomycosis treatment, the three main pharmacologic strategies include oral treatment, topical treatment, or combination therapy.
Topical treatment of onychomycosis is complicated by several factors, including limited access to the nail bed and complex pharmacologic properties needed to allow for nail plate penetration. Patients who have nail plate thickening and onycholysis particularly add to the suboptimal nature of topical drug transportation. Nail lacquers represent a unique pharmacologic property that allows for higher levels of diffusion across the nail plate as its drug concentration increases, while the solvent evaporates [45]. Leading nail lacquer products, ciclopirox 8%, amorolfine 5%, and efinaconazole 10%, have reported complete cure rates of 5.5-8.5%, 15.2-17.8%, and 15.2-25.6%, respectively [46][47][48][49][50]. Despite lower cure rates, topical monotherapy is recommended in mild toenail onychomycosis limited to the distal nail, as well as WSO. Recent data, however, show that efinaconazole can be very effective in treating onychomycosis complicated by dermatophytomas and longitudinal spikes [51,52].
While higher complete cure rates are seen in oral antifungal monotherapy than topical monotherapy, topical treatment can play a role in maintenance therapy after the completion of oral antifungal therapy. Oral therapy is generally recommended for patients with DLSO involving more than 50% of the nail, DLSO affecting more than two nails, PSO, and deep WSO [53].
Oral antifungal agents available for the treatment of toe and fingernail onychomycosis include terbinafine, itraconazole, and fluconazole. Terbinafine monotherapy with continuous 250 mg dosing remains the current first-line treatment recommendation. Terbinafine pulse dosing, itraconazole, and fluconazole therapy are second-line treatments that are useful when there are contraindications with terbinafine use.
Oral Treatments
Several options exist for the treatment of onychomycosis (Table 2). Oral terbinafine is the most effective Food and Drug Administration (FDA)-approved treatment for onychomycosis [54]. Terbinafine treatment duration is typically a minimum of six weeks for fingernail and twelve weeks for toenail onychomycosis. Terbinafine complete cure rates vary between 35% and 78% in patients with toenail onychomycosis [16][17][18][55]. Terbinafine sensitivity is not well studied in onychomycosis involving non-dermatophyte molds or yeasts [56].
Terbinafine pulse dosing regimens can vary in dose, duration, and frequency. A common regimen is two pulse regimens of terbinafine 250 mg daily for four weeks, followed by four weeks off. A meta-analysis by Gupta et al. determined that continuous terbinafine regimen is generally superior to pulse dosing for mycologic cure. However, both continuous and pulse dosing had similar complete cure rates [57,58]. Other pulse regimens for oral terbinafine include 500 mg daily for one week, followed by three weeks of no treatment, repeated every month for three months [59]. Studies assessing cost [60] and compliance [61] did not identify a difference between continuous and pulse regimens. However, pulse dosing may be preferred due to patient preference, side effects, comorbidities, and risk of potential drug-drug interactions [57,60].
Contraindications to terbinafine use include its interaction with other pharmaceuticals that are metabolized by cytochrome P-450 enzyme 2D6. Most notably, metoprolol is among this category of cP450 2D6 metabolized drugs, making terbinafine treatment discouraged in patients taking metoprolol. In these cases, oral treatment options for onychomycosis include itraconazole, an alternative broad-spectrum antifungal. Itraconazole is FDA-approved for the treatment of onychomycosis and is effective against dermatophytes, yeasts, and nondermatophyte molds. Dosing options include a continuous treatment regimen of 200 mg daily for three months or a four-pulse treatment regimen of 400 mg daily for one week, followed by a three-week pause in drug treatment. Complete cure rate with itraconazole treatment ranges between 14-43% [16][17][18][19][20].
An additional second-line treatment option includes the oral antifungal fluconazole. Fluconazole is not FDA approved for the treatment of onychomycosis. However, clinical trials have established its efficacy in dermatophyte nail infections. Typical dosing regimen of fluconazole is 150-450 mg once weekly for a duration of six months for fingernails and 12 months or more for toenails [22,62]. Complete cure rates range from 21% to 48%, depending on dosage and treatment duration [21,22]. Antifungal resistance is a variable that also should be considered when choosing treatment options. As dermatophytic infections evolve over time, the efficacy of oral antifungals may change, as well. In a recent clinical trial, commonly prescribed antifungals, including terbinafine, itraconazole, and fluconazole, were tested against chronic and chronic relapsing tinea corporis, cruris, and faciei [25]. After four weeks of treatment, all drugs demonstrated a cure rate around 8% or less. After eight weeks of treatment, cure rates with terbinafine, itraconazole, and fluconazole were reported as 28%, 66%, and 42%, respectively. This study indicates the reality of antifungal resistance, as well as evidence suggesting the effectiveness of itraconazole in terbinafine-resistant dermatophytosis. Similar reports of increasing terbinafine resistance in Trichophyton species have been reported in Europe [23,24]. More studies are needed to further assess treatment regimens in multi-drug resistant cases.
Due to the systemic nature of oral antifungal medications, side effects and drug-drug interactions should be considered when prescribing each of these medications. Terbinafine has been associated with hepatic injury; therefore, liver function tests are recommended prior to starting treatment, and some clinicians recommend evaluating liver function throughout the course of terbinafine treatment [20]. Itraconazole therapy carries a high risk of drug interactions and should be used cautiously in patients with cardiac conditions due to its risk of heart failure and arrhythmias [63]. Fluconazole is also associated with a high risk of drug interactions (less so when used as pulse treatment) and with cardiovascular risk, as it has been reported to prolong the QT interval [64]. Additional concerns with fluconazole include significant congenital defects, and avoidance of this medication during pregnancy is strongly advised [65].
Interval monitoring in oral antifungal treatment is unnecessary in adults and children without preexisting hematologic or hepatic abnormalities [66]. The recommendation for continued hepatic function monitoring while receiving treatment with an oral antifungal has since been removed by the FDA. Due to higher incidence rates of drug-induced liver injury seen in patients with preexisting hepatic conditions, dosing adjustments and continued interval monitoring is beneficial in these cases.
Poor response to treatment is seen at higher rates in individuals with subungual keratosis measuring more than 2 mm and fungal infection involving the lateral nail or over 50% of the entire nail unit [15]. Recently, a few studies have shown that topical treatment with efinaconazole can be useful in this setting [51,52]. Given that terbinafine's activity is largely limited to dermatophytes, onychomycosis caused by mixed infections or resistant organisms is also associated with poor prognostic outcomes. In the setting of decreased peripheral circulation, as in elderly or diabetic patients, poor treatment response to first-line oral antifungals has been reported.
Special Populations
Onychomycosis is more common in adults than in children. Prevalence-based studies are conflicting, with some reports ranging from 0.2% to as high as 7.66% [67,68]. While previously considered rare, several studies suggest that onychomycosis in children is on the rise [69][70][71][72]. While there are no FDA-approved medications for pediatric patients, evidence suggests that current treatments are safe in children, and dosing should be readjusted according to weight.
Onychomycosis has a higher incidence among elderly patients, with a reported prevalence of nearly 20% in patients over 60 years old [73]. Few studies have evaluated the efficacy of oral antifungals in the elderly population [74][75][76]. Cure rates with the approved therapies terbinafine and itraconazole have been reported to be lower when compared with the general adult population [16,76,77]; however, they are considered a safe option when possible, and drug interactions have been evaluated [75].
Immunocompromised patients, such as HIV positive individuals with CD4 counts less than 500, have a higher prevalence of tinea pedis and onychomycosis [78,79]. There are scarce studies that have evaluated oral antifungals in the HIV population [74,75]; however, terbinafine has shown to be efficacious and well tolerated [75]. Of note, HIV positive individuals who are on antiretroviral therapy with normal CD4 counts can eradicate the infection. Additionally, combined antiretroviral therapy itself has demonstrated to clinically improve onychomycosis [80].
Terbinafine Resistance: A Public Health Concern
Terbinafine resistance is an emerging problem globally, and isolates have been documented in India and Europe with increasing frequency [23,24]. The Trichophyton species is commonly identified, specifically T. rubrum, T. interdigitale/mentagrophytes, and T. indotineae [23,[81][82][83]. Terbinafine inhibits squalene oxidase and interferes with ergosterol production, a compound necessary for fungal plasma membrane structure. Trichophyton resistant cases arise when point mutations develop in the squalene oxidase gene [81][82][83][84][85]. While cases of antifungal resistance have been largely reported in dermatophytic infections of the skin, the emergence and spread of these organisms is an important public health concern that can have significant consequences in onychomycosis cases.
As with antimicrobial resistance, dermatophyte-resistant species can be secondary to natural microbial changes over time, increased exposure to antifungals, and/or noncompliance. Mycological identification in suspected onychomycosis cases is crucial to avoid unnecessary antifungal and steroid use, and compliance should be emphasized in patients who begin onychomycosis therapy. In cases of recalcitrant infections, antifungal susceptibility testing (AFST) should be explored to guide management. Additionally, azole-based treatments with itraconazole, longer treatment duration, and/or combination therapies may be necessary to eradicate terbinafine-resistant infections.
Novel Onychomycosis Therapies
New oral antifungal agents show promising results in the treatment of onychomycosis. Many of these are azole-based treatments. However, these medications are not approved by the FDA for the treatment of onychomycosis, and some are not available in the United States. As challenges continue to rise with terbinafine resistance, these new therapies may be promising in the treatment of superficial mycoses, such as onychomycosis (Table 3). Fosravuconazole L-lysine ethanolate is a ravuconazole prodrug approved in Japan for the use of onychomycosis. A phase III trial demonstrated a complete cure rate of 59.4% with fosravuconazole 100 mg daily for 12 weeks [86]. Fosravuconazole also has a preferable safety profile in the elderly due to less potent inhibition of cytochrome P450 [87].
Posaconazole is a broad-spectrum azole that is FDA-approved for invasive Aspergillus and Candida infections and oral pharyngeal candidiasis. A phase IIB trial by Elewski et al. evaluated the use of posaconazole in onychomycosis and demonstrated a complete cure rate of 51.4% with posaconazole 200 mg daily for 24 weeks [88]. The efficacy and safety profile are favorable. However, the cost of posaconazole and lack of FDA approval for onychomycosis limits its use.
Oteseconazole, also known as VT-1161, is a tetrazole antifungal that is FDA-approved for recurrent vulvovaginal candidiasis. A phase II trial assessed oteseconazole for distal and lateral subungual dermatophyte onychomycosis and reported mycologic cure rates between 41-45% at 60 weeks. Patients received either oteseconazole 300 mg once daily for two weeks, followed by 300 mg once weekly for 10 or 22 weeks, or 600 mg once daily for two weeks, followed by 600 mg once weekly for 10 or 22 weeks [89].
Voriconazole is a broad spectrum, triazole antifungal FDA approved for invasive Aspergillosis, candidemia in non-neutropenics, other deep tissue Candida infections, esophageal candidiasis, and Scedosporiosis and Fusariosis. A prospective clinical trial assessed the use of voriconazole for onychomycosis and reported a complete cure rate of 67.9% with eight weeks of therapy (200 mg twice daily for four weeks, followed by 200 mg once daily for four weeks) [90]. Dose adjustments may be needed in patients with hepatic or renal impairment [91].
Albaconazole is a new broad-spectrum antifungal with activity against dermatophytes and yeasts. Its efficacy in onychomycosis has been described in randomized clinical trials, as well as recently in a systematic review. Doses range from 100 to 400 mg once weekly over a 24- to 36-week treatment period. Its efficacy is dose-dependent, with the highest complete cure rate at 400 mg once weekly for 36 weeks (33%). However, further studies are needed to evaluate its safety profile [92,93]. Albaconazole is not currently FDA-approved for any indication.
Other Treatments
Oral therapies for the treatment of onychomycosis can be augmented with the use of adjunct treatments, such as lasers, photodynamic therapy (PDT), or nail trimming/debridement. Antifungal combination therapy has also shown efficacy in treatment advancement.
Several studies have analyzed the use of the long pulsed neodymium-doped yttrium aluminum garnet (Nd:YAG) laser, diode laser, and fractional carbon dioxide (CO 2 ) laser in onychomycosis. Lasers are currently FDA-approved for mild temporary clearance of the nail. It is theorized that lasers can be fungicidal through photothermolysis, with rapid elevation of temperature leading to fungal cell death. However, randomized trials yielded poor results with no statistical difference in patients who had laser therapy compared to placebo [94,95]. A study by Lim et al. did show improvement in onychomycosis when lasers were used as combination therapy with topical amorolfine for 12 weeks. The authors concluded that the beneficial effects may be through a combination of direct fungicidal effects and nail changes by the lasers, allowing deeper penetration of the topical drug [96]. Therefore, lasers can be considered as an adjunct therapy in elderly patients, patients with hepatic or renal disease, or other contraindications, but patients should be cautioned of the limited clinical improvement and high costs associated with lasers [97].
PDT is a non-invasive therapy that uses photosensitizing agents and a light source to generate reactive oxygen species and subsequently destroy cells in a given area. Many dermatophytes responsible for onychomycosis can absorb photosensitizing agents, making them susceptible to apoptosis by PDT [98]. PDT itself presents an optimal treatment option for patients with contraindications to oral antifungals. PDT used in combination with oral antifungals has also shown higher cure rates and a shorter treatment duration [99]. The most common photosensitizing agents used in PDT for onychomycosis are aminolevulinic acid (ALA), methyl-aminolevulinic acid (MAL), and methylene blue (MB). The number of treatment sessions varies from three to twelve sessions with incubation times of 1-5 h. The efficacy of MAL and MB use in PDT in conjunction with oral terbinafine demonstrated a 70% complete cure rate in both modalities [100].
As an additional treatment option, nail debridement can be used in combination with oral antifungals. Patients who underwent aggressive nail debridement with oral terbinafine therapy demonstrated higher clinical cure rates (59.8% vs. 51.4%) and complete cure rates than patients who received terbinafine therapy alone [101]. Data published on this treatment regimen, however, are relatively limited.
Combination therapy of antifungals has been shown to improve treatment response compared to monotherapy. In addition to improved efficacy, combination therapy may also help to combat antifungal resistance, which is being encountered at an increasing frequency [102]. However, more studies are needed to assess oral antifungal combination therapy [103].
Conclusions
Onychomycosis is a common nail disease caused by dermatophytes, yeasts, and non-dermatophyte molds (NDMs). Oral treatment is indicated in moderate to severe cases, multiple digit involvement, and/or failure of topical therapies. Although several oral antifungal therapies are available, terbinafine-resistant isolates are emerging and can have a significant impact on future management. Novel drugs are needed to combat this challenge, and several studies have demonstrated promising preliminary data with newer antifungal therapies. Additional studies are needed to assess and analyze safety profiles and dosages and to establish guidelines for these new drugs.
Propagation of light in cold emitter ensembles with quantum position correlations due to static long-range dipolar interactions
We analyze the scattering of light from dipolar emitters whose disordered positions exhibit correlations induced by static, long-range dipole-dipole interactions. The quantum-mechanical position correlations are calculated for zero temperature bosonic atoms or molecules using variational and diffusion quantum Monte Carlo methods. For stationary atoms in dense ensembles in the limit of low light intensity, the simulations yield solutions for the optical responses to all orders of position correlation functions that involve electronic ground and excited states. We calculate how coherent and incoherent scattering, collective linewidths, line shifts, and eigenmodes, and disorder-induced excitation localization are influenced by the static interactions and the density. We find that dominantly repulsive static interactions in strongly confined oblate and prolate traps introduce short-range ordering among the dipoles which curtails large fluctuations in the light-mediated resonant dipole-dipole interactions. This typically results in an increase in coherent reflection and optical depth, accompanied by reduced incoherent scattering. The presence of static dipolar interactions permits the highly selective excitation of subradiant eigenmodes in dense clouds. This effect becomes even more pronounced in a prolate trap, where the resonances narrow below the natural linewidth. When the static dipolar interactions affect the optical transition frequencies, the ensemble exhibits inhomogeneous broadening due to the nonuniformly experienced static dipolar interactions that suppress cooperative effects, but we argue that, e.g., for Dy atoms such inhomogeneous broadening is negligible.
Disordered media of resonant light emitters provide rich mesoscopic systems [24-26] where light can mediate strong interactions. Cold and dense atomic ensembles in the limit of low light intensity (LLI) represent systems of dipolar emitters where light can induce strong position-dependent correlations, even though each individual atom's response to a coherent light field behaves like a linear classical oscillator [27-29]. The light-mediated resonant DD interactions are highly sensitive to atomic positions, leading to radiative excitations in each atom that are correlated with the positions of every other atom in the sample. This granularity of atoms, which determines their optical response, cannot be captured by standard electrodynamics of continuous media, where light-induced interactions between atoms are treated in an averaged sense [29-32]. In the limit where the atoms are considered stationary during imaging, the collective optical response of N atoms depends on the position correlation functions ρ_j(r_1, ..., r_j), for j = 1, ..., N, that are determined before the light interacts with the sample [28]. When atomic positions are independent and uncorrelated, all these initial correlations factorize, ρ_j(r_1, ..., r_j) = ρ_1(r_1) ... ρ_1(r_j), and solving for the optical response involves calculating the radiative excitations of the atoms, which can, nevertheless, still be correlated in a disordered system [33,34]. However, highly nontrivial position correlation functions ρ_j(r_1, ..., r_j) between electronic ground-state atoms can exist due to quantum statistics and atomic interactions. Aside from the fundamental interest in cooperative responses in such systems, the scattered light then also directly conveys information about these correlations, serving as a diagnostic method.
Here we solve the collective optical responses of stationary dipolar emitters experiencing static long-range DD interparticle interactions and discuss experimental consequences of the findings. The emitters can represent atoms or molecules, provided that radiative coupling between molecular vibrational states can be neglected [35]. The positions of dipoles, determined in the absence of light at zero temperature, are sampled using quantum Monte Carlo methods [36,37], which approximately generate the position correlation functions ρ_j(r_1, ..., r_j), encompassing all orders j = 1, ..., N. We consider strongly confined traps that take the shape of oblate (pancake-shaped) and prolate (cigar-shaped) geometries, where the static DD interaction is dominantly repulsive. We focus on interaction regimes that are sufficiently weak, such that the density distributions are not crystallized. The configurations of dipoles where the positions do not fluctuate independently dramatically influence the optical response in situations where the system is homogeneously broadened, as previously also demonstrated in simulations of light propagation using the Metropolis algorithm in the presence of nontrivial correlations ρ_j(r_1, ..., r_j) for quantum degenerate ideal fermionic atoms in a one-dimensional optical waveguide [38]. The effect of static DD interactions on light scattering has recently been studied in Ref. [39] without including position correlations ρ_j(r_1, ..., r_j) of the atoms in the analysis.
The strength of recurrent scattering in cooperative optical responses depends on the interparticle separation in units of the resonance wavenumber of light. Increased density leads to more pronounced deviations from the single-atom Lorentzian lineshape. To characterize the impact of the static DD interactions, we systematically vary the strength of these interactions while approximately maintaining a constant peak density. Our observations reveal that, in both prolate and oblate traps, repulsive static interactions lead to short-range ordering among the dipoles, which, in turn, curtails fluctuations in the light-induced resonant DD interactions, particularly as these interactions become very large at short interparticle separations. This phenomenon is identified in increased coherent reflection and optical depth that are accompanied by reduced incoherent scattering. In an oblate trap, coherent transmission and reflection resonances narrow at low density but broaden at high density. In a prolate trap, the scattered light resonance can narrow even below the natural linewidth. The collective resonance shifts are substantially smaller than those predicted by the collective Lamb shift in continuous-medium electrodynamics, underscoring the violation of standard optics in the system [29,30,40,41]. Intriguingly, especially in a prolate trap, the presence of the static DD interactions enables better-targeted excitation of subradiant eigenmodes at high densities, although the linewidth of the eigenmodes can exhibit significant variation between different realizations due to the position fluctuations in the ensemble. Additionally, we find that the main difference between the two-level and the isotropic J = 0 → J′ = 1 transitions is the emergence of resonances where the orientations of the dipoles in an oblate trap, in the latter case, are parallel to the normal of the trap plane.
In disordered dense samples, the excitations can become very localized, leading to high concentrations of energy. We analyze the excitation peaks and their widths in individual realizations and find that the static interactions enhance the localization peak strengths, providing control and manipulation of optical fields on a subwavelength scale. We also study the effect of the static DD interactions on optical transition frequencies. When these interactions start substantially influencing the resonances, atoms become inhomogeneously broadened due to the atoms experiencing the static DD interactions nonuniformly. This broadens the resonances and reduces cooperativity. However, the simulations indicate that such effects for Dy atoms are negligible.
The article is organized as follows: Section II provides the theoretical background for describing static interactions and light-matter coupling, while Sec. III gives an overview of stochastic electrodynamics and the quantum Monte Carlo methods. The optical responses in an oblate trap, when the strength of the static interactions is varied for a constant peak density, are in Sec. IV A, the differences between two-level and isotropic transitions in Sec. IV B, inhomogeneous broadening in Sec. IV C, and the results in a prolate trap in Sec. IV D. The localization of excitations is presented in Sec. IV E and the optical responses for variable densities in Sec. IV F before concluding remarks in Sec. V.
A. Static long-range dipole-dipole interactions
We assume particles, henceforth referred to as atoms, with the mass M that are harmonically trapped and experience long-range DD interactions,

H = Σ_i [ −ℏ²∇_i²/(2M) + V_trap(r_i) ] + Σ_{i<j} [ V_dd(r_i − r_j) + V_sr(|r_i − r_j|) ],

where the trapping potential V_trap(r) = (M/2)(ω_x² x² + ω_y² y² + ω_z² z²), with the frequency ω_j, is associated with the characteristic length scale ℓ_j = [ℏ/(Mω_j)]^{1/2}. We consider the atoms confined in an oblate or prolate trap (Fig. 1) with all the static (e.g., magnetic) dipoles µ_j oriented in the same direction. The static DD interaction potential V_dd between the atoms, which is independent of the light-induced radiative optical DD coupling between the atoms, is then given by

V_dd(r_{ℓj}) = (C/4π) [ µ_j · µ_ℓ − 3 (µ_j · r̂_{ℓj})(µ_ℓ · r̂_{ℓj}) ] / |r_{ℓj}|³,

where r_{ℓj} = r_j − r_ℓ is the vector joining the atoms j and ℓ and r̂_{ℓj} = r_{ℓj}/|r_{ℓj}|. For magnetic dipoles, we have C = µ₀. The DD interaction can be characterized by the interaction length

R_dip = MCµ²/(4πℏ²),   (4)

with the associated energy scale E_dip = ℏ²/(MR_dip²). The DD interaction diverges at small interatomic separations and, following Ref. [12], we introduce an additional repulsive short-range interaction potential, similar to the Lennard-Jones potential,

V_sr(r) = c₁₂/r¹² − c₆/r⁶.

We maintain in the simulations the fixed ratios c₆ = 0.0271 R_dip⁶ E_dip and c₁₂ = 4.47 R_dip¹² E_dip in the regime where V_dd is dominantly repulsive.
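To make the geometry dependence of the two potentials explicit, here is a minimal numerical sketch, assuming parallel dipoles and arbitrary units; the function names and the unit prefactor are illustrative choices of ours, not from Ref. [12].

```python
import numpy as np

def v_dd(r_vec, mu_hat, strength):
    """Static DD energy of two parallel point dipoles separated by r_vec.
    strength stands in for C*mu^2/(4*pi); all units here are arbitrary."""
    r = np.linalg.norm(r_vec)
    cos_theta = np.dot(r_vec, mu_hat) / r
    return strength * (1.0 - 3.0 * cos_theta**2) / r**3

def v_sr(r, c6, c12):
    """Repulsive short-range regularization c12/r^12 - c6/r^6."""
    return c12 / r**12 - c6 / r**6

z_hat = np.array([0.0, 0.0, 1.0])
# In-plane separation (theta = 90 deg): repulsive, as in a tight oblate trap.
print(v_dd(np.array([1.0, 0.0, 0.0]), z_hat, strength=1.0))   # +1.0
# Head-to-tail separation (theta = 0): attractive.
print(v_dd(np.array([0.0, 0.0, 1.0]), z_hat, strength=1.0))   # -2.0
```

The sign flip between the two printed values is exactly why tight confinement along the dipole orientation makes the interaction dominantly repulsive.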
B. Collective atom-light interactions
We solve the optical response of the atoms in the limit of LLI, where the individual atoms respond to light linearly and behave as a set of coupled, driven harmonic oscillators. In this section, we first consider the commonly employed basic formalism [42] for the coupled dynamics between the atoms and light for a fixed set of atomic positions {r_1, r_2, ..., r_N}. When the positions fluctuate, these equations are then utilized in stochastic electrodynamics simulations for a disordered system in Sec. III A. The atoms are illuminated by a near-monochromatic Gaussian incident light field with the positive frequency component of the amplitude E(r), the wavevector k, and frequency ω = kc = 2πc/λ. The incident field drives a two-level or a |J = 0, m_J = 0⟩ → |J′ = 1, m_J′ = µ⟩ atomic transition and is detuned by Δ^(j)_µ = ω − ω^(j)_µ from the resonance ω^(j)_µ of the level µ in the atom j, which may vary between the atoms due to the interactions. The system dynamics is described by the positive frequency components of the atomic polarization amplitudes P^(j), related to the dipole matrix elements by d_j = D P^(j), where D denotes the reduced dipole matrix element. All observables are expressed in terms of slowly varying field amplitudes and atomic variables by factoring out the rapidly oscillating dominant frequency of the driving light, E e^{−iωt} → E, P^(j) e^{−iωt} → P^(j) [28].
The positive frequency component of the scattered field E_s^(ℓ) from the atom ℓ at r_ℓ is given by

ε₀ E_s^(ℓ)(r) = G(r − r_ℓ) d_ℓ,

where G(r) is the monochromatic dipole radiation kernel [43] describing the radiated field at point r due to an oscillating dipole d at the origin,

G(r) d = (k³/4π) { (r̂ × d) × r̂ e^{ikr}/(kr) + [3 r̂ (r̂ · d) − d] (1/(kr)³ − i/(kr)²) e^{ikr} } − d δ(r)/3,

where r = |r| and r̂ = r/r. The positive frequency component of the total external field driving the atom j is given by the incident field plus the scattered fields from all the other atoms,

ε₀ E_ext(r_j) = ε₀ E(r_j) + Σ_{ℓ≠j} G(r_j − r_ℓ) d_ℓ.

Thus, the dipole amplitude of each atom depends on the fields from all other atoms in the system, leading to a set of coupled equations for N atoms with a configuration of positions {r_1, r_2, ..., r_N}. In the LLI limit of the coherently driven system, these equations determine the polarization of each atom, and the population of the excited state can be neglected [33].
The dynamics then corresponds to a damped classical linear oscillator amplitude driven by E_ext,

dP^(j)_µ/dt = (iΔ^(j)_µ − γ) P^(j)_µ + iξ Σ_{ℓ≠j} Σ_ν G^(jℓ)_{µν} P^(ℓ)_ν + i(D/ℏ) ê*_µ · E(r_j),   ξ = D²/(ℏε₀),

where G^(jℓ)_{µν} = ê*_µ · G(r_j − r_ℓ) ê_ν and the single-atom linewidth γ = D²k³/(6πℏε₀). We solve Eq. (9) in the steady state, assuming that the time scale 1/γ is short compared with the time scale of any center-of-mass motion of the atoms. The dynamics of the atomic polarizations can be compactly represented in the matrix form [34]

ḃ = i(H + δH) b + f,

where b_{3j−1+ν} = P^(j)_ν and f_{3j−1+ν} = iDê*_ν · E(r_j)/ℏ. The off-diagonal elements of the matrix H describe the light-mediated interactions between the atoms and are given by H_{(3j−1+µ),(3ℓ−1+ν)} = ξ G^(jℓ)_{µν} for j ≠ ℓ. The diagonal elements are iγ, while the diagonal matrix δH contains the detunings Δ^(j)_µ. Finally, the total field can be calculated from the sum of the incident light field and the scattered fields from all the atoms,

ε₀ E_tot(r) = ε₀ E(r) + Σ_ℓ G(r − r_ℓ) d_ℓ.

We describe the response in terms of the collective excitation eigenmodes v_j, which correspond to the eigenvectors of H [44-46], with eigenvalues δ_j + iν_j. These represent the collective line shift δ_j = ω_0 − ω_j from the single-atom resonance ω_0 (here assumed degenerate) and the collective linewidth ν_j. When ν_j < γ, excitations are called subradiant, whereas for ν_j > γ they are super-radiant [47]. We determine the occupation of the eigenmode v_j in the state b from [42,48]

L_j = |v_j^T b|² / Σ_ℓ |v_ℓ^T b|².
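The steady state of the coupled-dipole equations and the eigenmode analysis reduce to linear algebra. Below is a minimal sketch, assuming NumPy, for the simpler case of two-level atoms with a common polarization direction ê, with the couplings ξ ê*·G·ê prescaled into units of γ so no dipole matrix elements appear explicitly; the function names, the omission of the delta-function contact term (atoms never coincide), and the arbitrary-unit drive are our assumptions, not a prescription from the references.

```python
import numpy as np

def xi_g(r_vec, e_hat, k, gamma):
    """Coupling xi * e*.G(r).e between two dipoles with common orientation
    e_hat, prescaled so it carries units of gamma (xi = 6*pi*gamma/k^3).
    The delta-function contact term of G is omitted since r != 0."""
    r = np.linalg.norm(r_vec)
    kr = k * r
    cos2 = (np.dot(r_vec, e_hat) / r) ** 2
    return 1.5 * gamma * np.exp(1j * kr) * (
        (1.0 - cos2) / kr
        + (3.0 * cos2 - 1.0) * (1.0 / kr**3 - 1j / kr**2)
    )

def coupled_dipoles(positions, e_hat, k, gamma, detuning, drive):
    """Steady state of db/dt = i(H + dH) b + f and the eigenmodes of H."""
    n = len(positions)
    H = 1j * gamma * np.eye(n, dtype=complex)        # diagonal i*gamma
    for j in range(n):
        for l in range(n):
            if j != l:
                H[j, l] = xi_g(positions[j] - positions[l], e_hat, k, gamma)
    dH = detuning * np.eye(n)                        # uniform detuning here
    b = np.linalg.solve(1j * (H + dH), -np.asarray(drive, dtype=complex))
    evals, evecs = np.linalg.eig(H)                  # delta_j + i*nu_j
    # Occupation measure L_j = |v_j^T b|^2 / sum_l |v_l^T b|^2.
    amps = np.abs(evecs.T @ b) ** 2
    return b, evals, amps / amps.sum()

# Toy usage: 50 atoms randomly placed in a (k L = 10) cube, driven uniformly.
rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 10.0, size=(50, 3))           # positions in units of 1/k
b, evals, occ = coupled_dipoles(pos, np.array([1.0, 0.0, 0.0]),
                                k=1.0, gamma=1.0, detuning=0.0,
                                drive=np.ones(50, dtype=complex))
```

Scanning the detuning and recording the scattered field from b then produces the lineshapes discussed below; the eigenvalues directly expose subradiant (Im < γ) and super-radiant (Im > γ) modes.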
III. STOCHASTIC SIMULATIONS
A. Stochastic electrodynamics
We have discussed how the coupled dynamics between atoms at a fixed set of positions {r_1, r_2, ..., r_N} and coherent light in the limit of LLI can be solved using Eqs. (9), (13), and (6). When atomic positions fluctuate, the system becomes a disordered medium where light can establish correlations among the atoms in cold and dense ensembles that significantly modify the optical response, even within the classical regime [29]. Here we briefly review how position fluctuations are conveniently described by employing second-quantized atomic field operators for the ground and excited states ψ̂_{g,e}(r) [28], where the label e implicitly incorporates the Zeeman levels of the J = 0 → J′ = 1 transition. The positive frequency component of the atomic polarization density operator then takes the form P̂(r) = ψ̂†_g(r) d_{ge} ψ̂_e(r), with d_{ge} representing the dipole matrix element between the electronic ground and excited levels. We adopt the convention that the repeated level index e is summed over. The expectation value of P̂(r) with respect to the fixed atomic positions {r_1, r_2, ..., r_N} is related to P^(j) in Eq. (9) via [34]

⟨P̂(r)⟩ = Σ_j δ(r − r_j) d_j,   d_j = D P^(j).

Solving for the light field from Eqs. (13) and (6) in the general case of fluctuating positions requires an integral of the radiation kernel and the expectation value ⟨P̂(r)⟩. Multiply scattered light establishes correlations between the atoms, leading to a hierarchy of equations of motion for the correlation functions of atomic density and polarization [27,28]. Specifically, in the limit of LLI, we introduce the normally ordered correlation functions

ρ_k(r_1, ..., r_k) = ⟨ψ̂†_g(r_1) ... ψ̂†_g(r_k) ψ̂_g(r_k) ... ψ̂_g(r_1)⟩,
P_k(r_1, ..., r_{k−1}; r_k) = ⟨ψ̂†_g(r_1) ... ψ̂†_g(r_{k−1}) P̂(r_k) ψ̂_g(r_{k−1}) ... ψ̂_g(r_1)⟩.

The LLI is indicated by including at most one P̂ factor in each expression and by the fact that the ground-state correlations ρ_k are not perturbed by light. The functions P_k (k = 1, ..., N) then satisfy a coupled hierarchy of equations of motion for degenerate, uniform transition frequencies [Eq. (18)]. The ground-state correlations ρ_p exist before the light enters the sample. P_p(r_1, ..., r_{p−1}; r_p) represents the correlation function for ground-state atoms at r_1, ..., r_{p−1}, given that there is polarization at r_p. The strong coupling emerges from the recurrent-scattering term of the hierarchy, representing repeated exchanges of a photon between the same atoms. The challenge of solving Eq. (18) arises from the complex hierarchy of N equations for N atoms. Standard optics of continuous media is recovered by factorizing all correlations [28-31], i.e., disregarding any light-established correlations between the atoms, P_k(r_1, ..., r_{k−1}; r_k) = ρ_{k−1}(r_1, ..., r_{k−1}) P_1(r_k), and considering an initial classical uncorrelated distribution of atoms ρ_n = ρ_1^n. At very low densities, ρ/k³ ≪ 1, the hierarchy can be truncated, e.g., by including only correlations between pairs of atoms, resulting in a closed equation for P_2. In this case, the optical response also depends on the ground-state pair correlation function ρ_2(r_1, r_2). The behavior of a resonance linewidth is strongly geometry-dependent but can be integrated for semi-infinite quantum degenerate ensembles, already exhibiting through ρ_2(r_1, r_2) linewidth broadening for bosonic atoms [27], narrowing for fermionic atoms [49], and the effects of quasiparticle pairing [50,51].
An alternative approach to solving the LLI response of Eq. (18) is through a stochastic Langevin-type method. This method considers the dynamics of the excitations for a given set of fixed atomic positions {r_1, r_2, ..., r_N} while treating the atomic positions as stochastic variables. In each stochastic realization, the positions of the atoms are sampled to match the proper correlation functions ρ_p(r_1, ..., r_p), for p = 1, ..., N, in the absence of light. Subsequently, the excitations and the scattered light for the specific set of atomic positions {r_1, r_2, ..., r_N} are solved using Eqs. (9) and (13). The expectation values are obtained by ensemble-averaging over many realizations. This stochastic classical-electrodynamics approach to coherent scattering can be formally shown to converge to an exact solution for stationary atoms at arbitrary densities by reproducing the correct hierarchy of correlation functions for both the 1D scalar theory [33] and 3D vector electrodynamics with a single electronic ground level [34]. The probability distribution required to sample the normally ordered ground-state atom correlation functions ρ_p(r_1, ..., r_p) is determined by the many-body wave function, P(r_1, r_2, ..., r_N) = |Ψ(r_1, r_2, ..., r_N)|² [33].
For uncorrelated atoms, where ρ_n = ρ_1^n, each atom can be sampled independently. Such scenarios include an ideal Bose-Einstein condensate, a Mott-insulator ground state in an optical lattice, and classical atoms. In the case of atoms obeying Fermi-Dirac statistics in a 1D waveguide, the optical response has been solved by stochastically sampling the atomic positions [38]. In Ref. [38], the atomic positions are defined by the Slater determinant of the many-body wave function and sampled using the Metropolis algorithm. Additionally, to account for the influence of a hard-core radius of the classical atom distribution, light-scattering simulations have been conducted by excluding a hard-sphere volume around the already existing atoms in the sampling [52].
Here we calculate the optical response of atoms that exhibit quantum-mechanical position correlations due to static long-range DD interactions that are present before the light enters the sample. We consider an atomic gas in a strongly confined oblate or prolate trap at zero temperature where the DD interactions are dominantly repulsive. These repulsive static DD interactions inhibit small interatomic separations, therefore suppressing light-mediated resonant DD interactions through increased interatomic spacing. While this effect draws some parallels to Fermi-Dirac statistics [38,49], the DD repulsion between the atoms quickly becomes more substantial and is more long-range, which obscures quantum-statistical characteristics of the atoms in the optical response. Although the simulations are performed with bosonic atoms, any signatures of Bose-Einstein statistics in the scattering are rapidly washed out by the DD interactions. The ensemble can represent a strongly correlated quantum many-body system with long-range static DD interactions, where the position of every atom affects the positions of every other atom. The correlated atomic positions in the calculations are therefore sampled using quantum Monte Carlo methods, as described in the following section.
B. Quantum Monte Carlo sampling
Solving the stochastic electrodynamics simulations of the optical response, described in Sec. III A, requires sampling the atomic positions in a dipolar gas that are distributed according to the square modulus of the ground-state wave function at zero temperature. We have used the variational and diffusion quantum Monte Carlo (VMC and DMC) methods to find approximate ground-state wave functions and stochastic realizations of atomic positions to solve for the radiative excitations in Eq. (9) and the scattered light in Eqs. (13) and (6) for each given set of fixed positions.
In the VMC method, quantum mechanical expectation values are evaluated using a trial many-body wave function that is an explicit function of the interparticle distances. The Metropolis algorithm is used to sample position realizations from the square modulus of the wave function, which enter Eq. (9) and the estimators of the energy and other expectation values. Free parameters in the trial wave function are obtained by minimizing the energy expectation value [53].
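To make the sampling step concrete, the toy sketch below draws configurations from |Ψ|² for a simplified Jastrow-type trial wave function using single-atom Metropolis moves. This is only a minimal illustration, not the actual casino workflow: the isotropic Gaussian orbital, the single effective length scale in the pair term, and the step size are assumed placeholder choices (ℏ = M = 1 units).

```python
import numpy as np

rng = np.random.default_rng(1)

def log_psi(R, width, r_dip):
    """Log of a toy trial wave function: Gaussian orbitals times a pair
    Jastrow factor exp(-sqrt(r_dip/r)), with r_dip standing in for the
    combination 8*mu*d^2 fixing the short-range behavior."""
    gauss = -0.5 * np.sum(R**2) / width**2
    jastrow = 0.0
    n = len(R)
    for i in range(n):
        for j in range(i + 1, n):
            r = np.linalg.norm(R[i] - R[j])
            jastrow -= np.sqrt(r_dip / r)   # suppresses coalescing pairs
    return gauss + jastrow

def metropolis(n_atoms, n_steps, step, width=3.0, r_dip=1.0):
    """Sample atomic configurations from |Psi|^2 with single-atom moves."""
    R = rng.normal(scale=width, size=(n_atoms, 3))
    lp = log_psi(R, width, r_dip)
    samples = []
    for _ in range(n_steps):
        i = rng.integers(n_atoms)
        trial = R.copy()
        trial[i] += rng.normal(scale=step, size=3)
        lp_new = log_psi(trial, width, r_dip)
        if np.log(rng.random()) < 2.0 * (lp_new - lp):   # |Psi|^2 ratio
            R, lp = trial, lp_new
        samples.append(R.copy())
    return samples

configs = metropolis(n_atoms=20, n_steps=5000, step=0.5)
```

Each accepted configuration can be fed directly into the coupled-dipole solver above as one stochastic realization of the atomic positions.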
In the DMC method, drift, diffusion, and branching/dying processes governed by the many-body Schrödinger equation in imaginary time are simulated in order to project out the ground-state component of a trial wave function [36]. The product of the trial wave function and the solution of the imaginary-time Schrödinger equation is represented as the ensemble average of a discrete population of walkers (weighted delta functions), and the Green's function for the imaginary-time Schrödinger equation is treated as a transition-probability density for walkers over discrete time steps. After equilibration, the walkers are distributed as the product of the trial wave function and its ground-state component, and the walkers provide atomic configurations for Eq. (9) and for ensemble-averaging expectation values. For a bosonic gas the ground-state wave function is nodeless, and hence there are in principle no uncontrolled approximations in the DMC method. In practice, the trial wave function must have a sufficiently large overlap with the ground-state wave function that the algorithm can be equilibrated on a tractable timescale.
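The projection idea can be illustrated with a deliberately simple toy: a single particle in a 1D harmonic trap with a constant trial wave function, so there is no drift and the walkers undergo pure diffusion plus branching. This sketch only shows the diffusion/branching/population-control bookkeeping; the gain factor of 0.1 and all other parameters are arbitrary assumptions (ℏ = M = ω = 1, exact ground-state energy 0.5), and it is in no way the many-body machinery of casino.

```python
import numpy as np

rng = np.random.default_rng(2)

def dmc_harmonic(n_walkers=2000, n_steps=2000, dt=0.01):
    """Toy DMC: 1D harmonic oscillator, V(x) = x^2/2, constant trial wave
    function (no drift). The reference energy e_ref converges towards the
    ground-state energy E0 = 0.5."""
    x = rng.normal(size=n_walkers)
    e_ref = 0.5
    for _ in range(n_steps):
        # Diffusion over one imaginary-time step.
        x = x + rng.normal(scale=np.sqrt(dt), size=x.size)
        # Branching weight from the local (here purely potential) energy.
        w = np.exp(-dt * (0.5 * x**2 - e_ref))
        # Stochastic rounding of weights into integer walker copies.
        copies = (w + rng.random(x.size)).astype(int)
        x = np.repeat(x, copies)
        # Population control: nudge e_ref to stabilize the walker number.
        e_ref += 0.1 * np.log(n_walkers / max(x.size, 1))
    return e_ref, x

e0_estimate, walkers = dmc_harmonic()
print(e0_estimate)  # close to 0.5
```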
The DMC algorithm generates atomic configurations distributed as the product of the trial wave function and its ground-state component. The error in the distribution of atomic configurations is therefore first order in the error in the trial wave function. Our trial wave functions were of the form

Ψ(r_1, ..., r_N) = e^J Π_{i=1}^{N} ϕ(r_i),

where the orbitals ϕ(r_i) were Gaussian functions that could be (initially) centered in the middle of the trap obtained by a brute-force minimization of the potential energy. The position and the width of each Gaussian orbital in each Cartesian direction were treated as optimizable parameters. The Jastrow exponent J contained polynomial two- and three-body terms [54]. In addition, we used two-body Jastrow terms designed to impose physically appropriate behavior on the wave function at short range. If the interaction between atoms were of the isotropic form d²/r³, where d is a constant, then the Jastrow exponent would have to contain a pairwise term u_d(r) = −(8d²µ/r)^{1/2} to ensure that the local-energy contribution from a coalescing pair of atoms, E_{L2} = −∇²Ψ/(2µΨ) + d²/r³, diverges more slowly than 1/r³ at short range, where µ is the reduced mass of a pair of atoms. This negative, divergent two-body Jastrow term u_d(r) has the effect of making the wave function go rapidly to zero at coalescence points, and we have continued to use this term even in the presence of anisotropic dipolar interactions. Similarly, two-body Jastrow terms going as −1/r⁵ were used to impose the exact short-range behavior on the wave function in the presence of a repulsive r⁻¹² interaction in our calculations using Lennard-Jones potentials. All our VMC and DMC calculations were performed using the casino software [37].
C. Scattered light
Since we consider here the LLI limit where each atom responds to light classically, the scattered light from the atomic ensemble is directly evaluated from the stochastic simulation data by ensemble-averaging over many realizations [55]. For the transmission T and reflection R we then have the power ratios

T = ∫_A dS ⟨E* · E⟩ / ∫_A dS ⟨E*_in · E_in⟩,   R = ∫_{A′} dS ⟨E*_s · E_s⟩ / ∫_A dS ⟨E*_in · E_in⟩,

where E and E* denote the positive and negative frequency components, respectively, and the intensity I = 2cϵ₀⟨E* · E⟩.
The lens regions centered on the optical axis in the forward and backward directions are A and A′, respectively. The scattered field consists of a mean field ⟨E_s⟩ and fluctuations δE_s = E_s − ⟨E_s⟩ [56]. The coherent scattering originates from the mean-field part, while the incoherent contribution is due to the position fluctuations of the atoms. Then

⟨E*_s · E_s⟩ = ⟨E*_s⟩ · ⟨E_s⟩ + ⟨δE*_s · δE_s⟩,

where the right-hand side separates into the coherent (the first term) and incoherent (the second term) contributions. The coherent scattering is mostly directed in a narrow cone along the incident field direction, while the incoherently scattered light is less directed, and we vary the numerical aperture (NA) of the lenses in these two cases. The coherent transmission is frequently expressed as the optical depth OD_coh = −ln T_coh.
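Numerically, this decomposition is just a sample mean versus a mean of squared moduli over the stochastic realizations. A minimal sketch, assuming NumPy; the toy input data are arbitrary.

```python
import numpy as np

def scattering_split(field_realizations):
    """Split ensemble-averaged scattered intensity into coherent and
    incoherent parts: <E* E> = <E*>.<E> + <dE* dE>.

    field_realizations : complex array (n_realizations, n_detectors),
        scattered field amplitude at each detection point per sample.
    """
    E = np.asarray(field_realizations)
    mean_field = E.mean(axis=0)
    total = np.mean(np.abs(E) ** 2, axis=0)        # <E* E>
    coherent = np.abs(mean_field) ** 2             # <E*>.<E>
    incoherent = total - coherent                  # position fluctuations
    return coherent, incoherent

# Toy usage: fields with weak random phase disorder stay mostly coherent.
rng = np.random.default_rng(0)
E = np.exp(1j * rng.uniform(0.0, 0.3, size=(500, 4)))
coh, incoh = scattering_split(E)
```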
IV. OPTICAL RESPONSE
We first consider the optical response of weakly excited, independently sampled trapped atoms. This corresponds to a cold disordered atomic ensemble where the atomic positions are not correlated, but light can establish correlations between the excitations of different atoms at small interatomic separations due to random position fluctuations. Variations of related scenarios have been investigated theoretically, and several experiments have reached at least close to the regime where a random ensemble of atoms could show such correlations [29-32, 40, 41, 46, 57-78]. The optical response in an oblate trap is illustrated in Fig. 2, where the atom number is varied for the J = 0 → J′ = 1 transition. Even though the atoms are noninteracting before the light enters the sample, at increasing densities the light-mediated interactions broaden the resonances, and the lineshapes increasingly deviate from the independent-atom Lorentzian profile, exhibiting multiple-peak resonances and asymmetric profiles. Coherent transmission and reflection also show a pronounced density-dependent resonance shift. The dependence of the correlated response on the peak 2D density per wavenumber squared, ρ2D/k², is clearly visible in Fig. 2. For ρ2D/k² ≪ 1, the response is close to the independent-atom Lorentzian, while cooperative recurrent scattering dominates at ρ2D/k² ∼ 1.
The effect of the static DD interactions on the atomic density profile is immediately obvious in Fig. 3, where we show the numerically calculated (using Monte Carlo simulations, Sec. III B) ground-state density profile of N = 100 atoms at zero temperature in a symmetric oblate trap. Due to the tight confinement of the atoms along the orientation of the dipoles in the z direction, the lines joining the atoms are almost perpendicular to the dipoles and the interactions are repulsive, as shown in the interatomic dipolar potential in Fig. 3(a). This results in the well-known effect of increasing cloud radii and flattening density profiles. In the simulations of the optical response, we accommodate these effects by varying the beam focusing (also to adjust the overlap between the incident and coherently scattered light) and by using a sufficiently large lens to collect the light. Aside from the density profiles, the static DD interactions have a much more substantial effect on the optical response due to the change in atomic correlations that alter the light-mediated interactions between the atoms, as we will show in the next sections.
FIG. 3. Dipolar potential between two dipoles perpendicular to the separation, and the 2D density profile as a function of the radius for 100 atoms for different interaction lengths R_dip [Eq. (4)], for ℓ_x/ℓ_z = 100.
A. Varying static dipolar interaction in an oblate trap
The crucial parameter in the cooperative response of independent atoms is the atom separation in units of 1/k, which determines the strength of recurrent scattering and the emergence of light-induced correlations (Sec. III A) [27,30]. Increasing densities lead to deviations from continuous-media electrodynamics due to these correlations. To isolate the effect of static DD interactions and the resulting position distributions on the optical response, we therefore consider atomic ensembles with constant ρ_2D/k² while varying the dipolar length R_dip [Eq. (4)]. We express the interaction strength as the ratio between R_dip and the average separation between atoms in the xy plane at the trap center, R_dip √ρ_2D (in a prolate trap we use R_dip ρ_1D). We take N = 200 atoms and adjust the peak 2D density ρ_2D by changing the trapping frequencies. The transmission and reflection are shown in Fig. 4 for two-level atoms with linear polarization in the trap plane at the densities ρ_2D/k² ≃ 0.1 and 1, respectively, varying R_dip √ρ_2D ≃ 0, 0.017, 0.15, 1.6. We find an increase in the resonant coherent reflection and optical depth for increasing R_dip, as also shown in Fig. 5. This is associated with reduced incoherent scattering (with the exception of the incoherent reflection in the strongest interaction case at ρ_2D/k² ≃ 0.1, which has particularly high coherent reflection). At high densities, the coherent scattering is already strong for independent atoms, R_dip = 0. This is not true for the low-density case ρ_2D/k² ≃ 0.1, for which the relative increase in the coherent resonant scattering as a function of R_dip is more noticeable at weak interaction strengths. At ρ_2D/k² ≃ 0.1, the transmission and reflection resonance linewidths narrow with increasing R_dip, but in the higher-density case (ρ_2D/k² ≃ 1) there is broadening (Fig. 5). At ρ_2D/k² ≃ 1, the resonance is shifted as a function of R_dip, whereas an interaction-dependent shift is absent in the low-density case. At ρ_2D/k² ≃ 1, the lineshape deviates substantially from the independent-atom Lorentzian, and this effect is magnified by the static interactions. The incoherent scattering exhibits a very broad resonance.
The effect of the static DD interactions on the light-mediated interactions between the atoms at a given density is due to position correlations. Increasing the repulsive interactions modifies the distribution of interatomic separations, introducing short-range ordering of the atoms, as shown in the probability distributions of the nearest-neighbor and all-atom separations in Figs. 6 and 7. As R_dip and the repulsion increase, the atoms are prevented from coming close to each other. This suppression of small interatomic separations prevents the light-mediated DD interactions, which scale as ∝ 1/r³ at small separations, from becoming very large. In disordered ensembles this reduces the fluctuations of light-induced resonance shifts between atom pairs, resulting in reduced inhomogeneous broadening of resonance frequencies and reduced incoherent scattering. In Fig. 7, the most pronounced short-range ordering occurs for R_dip √ρ_2D ≃ 1.6, when the atoms are unable to approach each other, corresponding to the most dramatic lineshape deformation in Fig. 4. We additionally find in Figs. 6 and 7 that large separations are suppressed due to the increased trap frequencies that maintain the constant peak density value.
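The nearest-neighbor and all-pair separation distributions used in this comparison are straightforward to accumulate from the sampled positions; the sketch below (with illustrative array conventions) shows the bookkeeping.

    import numpy as np

    def separation_histograms(samples, bins):
        """samples: list of (N, 3) position arrays, one per realization."""
        nn, allpair = [], []
        for pos in samples:
            d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
            iu = np.triu_indices(len(pos), k=1)
            allpair.append(d[iu])            # every distinct pair once
            np.fill_diagonal(d, np.inf)      # exclude self-distances
            nn.append(d.min(axis=1))         # nearest neighbor of each atom
        h_nn, _ = np.histogram(np.concatenate(nn), bins=bins, density=True)
        h_all, _ = np.histogram(np.concatenate(allpair), bins=bins,
                                density=True)
        return h_nn, h_all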
1. Eigenmodes
The optical response of the atoms can be analyzed in terms of the collective excitation eigenmodes. In Fig. 8, we show the normalized histogram distribution of the eigenmodes as a function of their collective resonance linewidths ν and line shifts −δ for ρ_2D/k² ≃ 0.1 and 1 (the minus sign is used to align the plots with the laser detuning in the lineshapes). This corresponds to the optical responses shown in Fig. 4 for the independent-atom case R_dip = 0 and for the most strongly interacting case R_dip √ρ_2D ≃ 1.6. Owing to the relatively low density of the ρ_2D/k² ≃ 0.1 case, the eigenmodes for R_dip = 0 are strongly peaked around the single-atom resonance and linewidth. The static DD interactions cause the highly occupied region to spread, extending towards blue-detuned superradiance and red-detuned subradiance. This peak region is also shifted, consistently with the collective resonance shift in Fig. 4.
With increasing density, the size of the central region generally increases both in resonance shift and in linewidth. The distribution also forms long arms that extend into regions of superradiant and subradiant modes. At ρ_2D/k² ≃ 1 in Fig. 8, the mode density is high far from the single-atom resonance even for independent atoms. The distribution becomes highly asymmetric at high density, with pronounced blue-detuned subradiant eigenmodes. The static interactions further magnify these modes and also produce very superradiant red-detuned modes. These changes in the eigenmode distributions correspond to the resonance broadening of the lineshapes with increasing density in Fig. 4.
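Numerically, such distributions follow from diagonalizing the non-Hermitian coupling matrix of each stochastic realization and histogramming the eigenvalues; in the sketch below the collective shifts and linewidths are read off the real and imaginary parts, with signs and normalizations assumed to follow the coupled-dipole sketch above rather than the paper's exact conventions.

    import numpy as np

    def eigenmode_histogram(matrices, shift_bins, width_bins):
        """matrices: list of 3N x 3N coupling matrices, one per realization."""
        shifts, widths = [], []
        for A in matrices:
            lam = np.linalg.eigvals(A)
            shifts.append(-lam.real)       # line shifts -delta (sign assumed)
            widths.append(2 * lam.imag)    # linewidths nu (convention assumed)
        H, xe, ye = np.histogram2d(np.concatenate(shifts),
                                   np.concatenate(widths),
                                   bins=[shift_bins, width_bins], density=True)
        return H, xe, ye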
We also calculate which eigenmodes are occupied at the specific laser frequencies of Fig. 4. Here Eq. (14) is used to produce a histogram plot of the occupied-mode probability distribution in Fig. 9. Intriguingly, the presence of the static DD interactions allows for better targeted excitation of subradiant eigenmodes at high atom densities, as shown by the increased and more localized occupations in Fig. 9(c), (d) for a subradiant eigenmode resonance at the detuning ∆ ≃ −0.6γ. Although the incident light strongly couples to a single eigenmode already for the case of independently distributed atoms, the coupling is even more selective in the presence of static DD interactions. Similarly, due to the static coupling, it is more difficult to excite the modes off resonance. Although the modes are only excited over a narrow range of frequencies, they still extend over a wide range of linewidths due to the position fluctuations, even with strong static interactions.
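A minimal version of such an occupation measure, analogous to (though not necessarily identical with) the quantity the paper computes via Eq. (14), expands the steady-state dipole amplitudes in the eigenvectors of the coupling matrix and normalizes the squared expansion coefficients:

    import numpy as np

    def mode_occupations(A, b):
        """A: coupling matrix of one realization; b: steady-state dipoles."""
        lam, V = np.linalg.eig(A)
        c = np.linalg.solve(V, b)      # expansion coefficients in eigenbasis
        occ = np.abs(c) ** 2
        return lam, occ / occ.sum()    # normalized occupation of each mode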
Selectively exciting subradiant eigenmodes of atoms in oblate trapping geometries, as well as storing photons in such modes, has been actively investigated when the atoms form periodic arrays and individual atoms are trapped at specific spatial locations; see, e.g., Refs. [42,48,79-82]. Here the driving of subradiant modes is studied instead in disordered ensembles with short-range ordering due to the static DD interactions. In Sec. IV D, we show how these effects are even more pronounced in a prolate trap, where small atom numbers also permit better targeting of individual eigenmodes.
B. Isotropic vs two-level transition
The optical response of two-level atoms in Fig. 4 is qualitatively similar to the response of atoms with an isotropic J = 0 → J′ = 1 transition to positive circular polarization in the trap plane. At low densities, the transmission and reflection lineshapes are very similar (Fig. 10). In the high-density case with ρ_2D/k² ≃ 1, the coherent scattering is still only slightly modified, but more notable deviations in the lineshape appear in the incoherent transmission. In the case of the J = 0 → J′ = 1 transition, the incoherent scattering has a dominant peak at ∆ ≃ 0 and a smaller peak at ∆ ≃ −2γ, while in the two-level case the ∆ ≃ 0 peak is less pronounced. In the isotropic case, the dipoles can be excited in the direction normal to the trap plane, despite these components not being directly driven by the incident light. This can result in scattering that is not captured by the lenses. To calculate this out-of-plane excitation, we define the expectation values of the magnitudes of the atomic polarization components for the sampled ground-state atom distributions and correlations.

C. Inhomogeneous broadening of resonance frequencies

Depending on the physical system considered, the static DD coupling can also influence the optical transition frequencies. Since the static DD field experienced by each atom depends on the relative positions of the other atoms, the effect is not uniform throughout the sample. For example, in the case of static magnetic DD interactions where the electronic ground and excited levels of each atom exhibit a magnetic dipole, the atoms can experience level shifts due to interactions between two ground states, between a ground and an excited state, and between two excited states, which depend on the specific level structure and atom. For simplicity, we consider a general model to demonstrate the effect of inhomogeneous broadening of resonance frequencies that results from the nonuniformly experienced static DD interactions between the atoms. We introduce position-dependent transition frequency shifts in individual atoms, where the shift in atom j at r_j is caused by the static DD interaction of all the other atoms ℓ at positions r_ℓ, and D denotes the effect of the static field on the transition frequency. Introducing Eq. (25) in the simulations broadens the distribution of the atomic transition frequencies in the ensemble and shifts the peak value of the atomic transition frequency as a function of D. In the optical responses, the broadening is analogous to inhomogeneous broadening in thermal and other ensembles [29,40,84]. The transition frequencies are most strongly shifted for atom pairs that are very close to each other [see the ground-state coupling in Fig. 3(a)]. However, the repulsive force between the atoms reduces the likelihood of very strong shifts; the DD interaction between the electronic ground-level atoms inhibits atomic distributions with short interatomic separations.
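Since Eq. (25) is not reproduced above, the following sketch only illustrates the generic structure of such a level-shift model: each atom's resonance is shifted by a sum of static dipolar contributions from all the other atoms. The 1/r³ scaling with the standard dipolar angular factor, and the role of D as an overall strength, are assumptions for illustration; the paper's Eq. (25) defines the actual form.

    import numpy as np

    def transition_shifts(pos, dipole_dir, D):
        """Shift of each atom's resonance from the static fields of the rest.
        dipole_dir: unit vector along the common static dipole orientation."""
        N = len(pos)
        shifts = np.zeros(N)
        for j in range(N):
            r = pos[j] - np.delete(pos, j, axis=0)   # vectors to other atoms
            d = np.linalg.norm(r, axis=1)
            cos2 = (r @ dipole_dir) ** 2 / d ** 2    # angle to dipole axis
            shifts[j] = D * np.sum((1 - 3 * cos2) / d ** 3)  # assumed form
        return shifts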
The nonuniform change in the resonance frequencies broadens the transmission and reflection resonances at different coupling strengths in Fig. 11. To make the different cases comparable, the resonances are shown with respect to the most likely shift in the transition frequency, δ, in each case. The consequences of these level shifts for Dy atoms are discussed in Sec. V, where we argue that their effect on the optical response is negligible.
D. Varying static dipolar interaction in a prolate trap
We consider N = 10 atoms trapped in a prolate trap at peak densities ρ_1D/k ≃ 0.1 and 1. The static dipoles are again all parallel and repulsive, oriented perpendicular to the long trap axis. The atoms, with the J = 0 → J′ = 1 transition, are illuminated by a Gaussian beam with positive circular polarization, propagating either along the long axis of the trap ('on-axis') or perpendicular to it ('off-axis'). The on-axis scattered light power and the atomic pair distributions are shown in Fig. 12. At low atom densities, the lineshape is close to a Lorentzian with the resonance near the single-atom resonance, while at high densities it is asymmetric. The coherent scattering resonance is shifted towards negative detuning. Similarly to the oblate case in Sec. IV A, we find resonance narrowing with increasing R_dip (Fig. 13), increasing coherent scattering, and decreasing incoherent scattering, but now the resonance HWHMs are very narrow, below the linewidth of an isolated atom. These changes with R_dip originate from the short-range ordering of the atoms that suppresses the fluctuations of the light-mediated DD interactions.
Due to the small atom number, and hence the small number of eigenmodes, it is easier to identify individual modes at high densities in a prolate trap than in an oblate one. The possibility of highly selective targeting of the eigenmodes as a result of static DD interactions, highlighted in Sec. IV A, becomes even more pronounced in a prolate trap, as shown in Fig. 14(e), (f) for ∆ = −0.9γ. We also find that the most occupied eigenmode in most stochastic realizations for the system of Fig. 14(f) has close to 0.9 normalized occupation. The average occupation of the most occupied eigenmode over many realizations is about 0.66 (with a standard deviation of 0.16), as a result of some realizations having two more highly occupied eigenmodes. The linewidth of the excited subradiant mode for N = 10 atoms in this case varies considerably between stochastic realizations, but some subradiant modes with a broader resonance in Fig. 14(f) have a smaller wavenumber than those with a narrower resonance.
The scattering in the off-axis case is markedly different from the on-axis case (Fig. 15). While the coherent scattering is again strengthened by the DD interactions, the lineshape for the off-axis scattering becomes notably deformed, with double and triple peaks appearing in the coherent and incoherent scattering, respectively. The extra incoherent peak indicates the effect of the multilevel structure of the atoms, as the resonance is not captured by the lens for coherent scattering.
At the single-atom resonance, the static DD interactions induce subradiant excitations in Fig. 16, but the dominant excitations still exhibit the single-atom linewidth, as in the independent-atom case. This unchanged dominant occupation across the interaction strengths explains the unshifted central peak in Fig. 15. The potential for targeted excitation of subradiant eigenmodes due to the static DD interactions is again shown in Fig. 16(c), (d) at large detunings. However, the subradiant modes in both cases coexist with superradiant eigenmodes. These superradiant modes are visible in Fig. 16(c), (d) as distributions for ν > γ that extend over a wide range of resonance frequencies and can be excited even off resonance. It is these superradiant eigenmodes that are also responsible for the additional resonances in the incoherent lineshape in Fig. 15. Owing to their broad resonance linewidths, these modes show up in the lineshape profile.
E. Localization of atomic polarization
One of the immediate consequences of cooperative emitter responses is the resulting intricate interplay between the collective excitation eigenmodes and disorder in the emitter positions. This can dramatically affect the near-field landscape of the optical response, resulting in the localization of eigenmodes and highly concentrated excitation energies [85,86]. In optics, this can be exploited, e.g., in achieving strong coupling between the excitation modes and a quantum emitter, thereby modifying its decay rate [87]. A strongly localized excitation can effectively drive quantum emitters by acting as a cavity, with its quality factor determined by the collective linewidth of the mode. We find that the near-field excitation landscape of the atoms exhibits strong localization of excitations on a subwavelength scale that depends on the atom density and the static interaction strength. The localization rapidly increases with the atom density, independently of the static interaction. Examples of single stochastic runs of atomic polarization density profiles are shown in Fig. 17. In the low-density case, the profile is closer to a Gaussian, while localized excitations are visible at high densities. As the static interaction is increased, the value of the peak excitation increases (Fig. 18). This is more pronounced for the 2% most localized cases.
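A profile like that of Fig. 17 can be accumulated by binning the polarization amplitude of every atom by its distance from the most excited atom of each realization; the sketch below (with illustrative data conventions) shows the bookkeeping.

    import numpy as np

    def localization_profile(runs, bins):
        """runs: list of (positions, |P| amplitudes), one per realization."""
        dist, amp = [], []
        for pos, P in runs:
            jmax = np.argmax(P)                       # most excited atom
            dist.append(np.linalg.norm(pos - pos[jmax], axis=1))
            amp.append(P)
        dist, amp = np.concatenate(dist), np.concatenate(amp)
        total, edges = np.histogram(dist, bins=bins, weights=amp)
        counts, _ = np.histogram(dist, bins=bins)
        return edges, total / np.maximum(counts, 1)   # mean |P| per bin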
F. Varying the atom density in an oblate trap
Instead of fixing the peak atom density, we now study the optical response for a fixed static DD interaction length in terms of the resonance wavenumber of light, R_dip k [Eq. (4)]. This allows us to investigate how light transmission and reflection vary with the density by changing the trap frequencies, parameterized by ℓ_x k, for constant ℓ_x/ℓ_z = 25 (Fig. 19). The setup is similar to the one studied in Sec. IV D (see Fig. 1), and we consider independent atoms, R_dip = 0, and static dipoles with R_dip k ≃ 0.1. The independent-atom case behaves qualitatively similarly to the previous example where we only varied the atom number. The lineshapes at the peak densities ρ_2D/k² ≃ 1.4 and 3.3 in Fig. 19 are asymmetric in all the cases, displaying an increased optical depth for blue detuning and enhanced incoherent scattering for red detuning. The static interactions significantly increase the red-detuned incoherent scattering. The density-dependent resonance broadening in Fig. 20 becomes more dramatic with the static DD interactions. The peak value of coherent reflection first significantly increases with density and then, at high densities, starts decreasing again. This may be due to the eigenmode resonances becoming spectrally more distinguishable, as illustrated for the eigenmode occupations in Fig. 20. At the high density ρ_2D/k² ≃ 3.3, the occupation is prominent only around modes for which the laser frequency is resonant. There is little occupation of modes away from this resonance. This selectivity is responsible for the broad transmission resonances at high densities, as different eigenmodes can be excited at different frequencies and the mode resonances extend over a wide range of frequencies.
V. CONCLUDING REMARKS
We successfully solved a challenging hierarchy of equations, Eq. (18), for the correlation functions of N atoms. These equations encompass correlations among atoms in their electronic ground states as well as those involving both ground and excited states. This was made possible using stochastic electrodynamics simulations of coupled radiative dipole excitations, where the positions of the dipoles are correlated by the static repulsive DD interactions and sampled using quantum Monte Carlo methods. For the bosonic fluid-like states considered in this paper, ergodicity is not expected to pose a challenge when seeking the ground state using diffusion quantum Monte Carlo simulation, with the accuracy of the sampled distributions scaling linearly with the error in the trial wave function. As long as the sampling of positions is precise, the stochastic electrodynamics simulations of coherently scattered light converge to an exact solution for stationary, laser-driven atoms at arbitrary densities in the limit of LLI [33,34]. The methodology can be extended to include also stronger static interactions and other strongly correlated ensembles.
Our main findings showed how the repulsive static interactions lead to short-range ordering among the dipoles, which, in turn, curtails fluctuations in the light-induced resonant DD interactions (for an oblate trap in Sec. IV A and a prolate trap in Sec. IV D). This phenomenon affects the resonance widths and shifts (Secs. IV A and IV D). Furthermore, it is identified in the increased coherent reflection and optical depth that are accompanied by reduced incoherent scattering. The effects of the static interactions on the optical response can be analyzed in terms of the collective excitation eigenmodes (Secs. IV A 1 and IV D), and we found, e.g., that the presence of the static DD interactions enables much better targeted excitation of subradiant eigenmodes at high densities, especially in a prolate trap, despite the disordered atom distributions. For an isotropic level structure, the excited dipoles can exhibit non-negligible orientation even along the normal of an oblate trap (Sec. IV B). Intriguingly, the static interactions can also enhance the peak strengths of the disorder-induced excitation energy localization, providing control and manipulation of optical fields on a subwavelength scale (Sec. IV E).
Optical responses in the presence of static DD interactions could be investigated, e.g., using atoms or polar molecules. The repulsive static DD interactions in prolate and oblate traps can suppress inelastic losses in both systems even at high densities [1]. Dy atoms have a large magnetic moment µ ≃ 10µ_B, where µ_B is the Bohr magneton [4]. For example, the 626 nm transition has been used in magneto-optical trapping of ¹⁶²Dy [88]. For ρ_2D/k² ≃ 3.3, this gives an average interatomic separation of 0.09λ at the center of the trap and R_dip √ρ_2D ≃ 0.4, with R_dip ≃ 21 nm. If the static DD interactions strongly affect the optical transition frequencies, resonant emitters can experience them nonuniformly and become inhomogeneously broadened (Sec. IV C). The strength of the level shifts in Eq. (25) generally depends on the level structure, but similar estimates for Dy indicate a much weaker effect than any of the cases shown in Fig. 11, and therefore a negligible influence on the optical response.
For alkali-metal atoms, the magnitude of the magnetic dipole is |µ| ≲ µ_B, and R_dip is more than two orders of magnitude smaller than for Dy. The simulation results in Fig. 4 then indicate that the static magnetic DD interactions between the atoms would not contribute observable effects, e.g., in the coherent light-scattering experiments of Ref. [71].
Heteronuclear polar molecules possess large electric dipole moments whose strength can be controlled by orienting the molecule with an external electric field [16]. Also, atomic Rydberg excitations under conditions of electromagnetically induced transparency can be used to manipulate collective optical interactions [89,90], and the DD interaction between atoms in Rydberg states can be tuned to a weak-interaction regime [91]. Rydberg transitions induce DD interactions resulting in level shifts that determine which atoms engage in resonant scattering of light. Consequently, the positions of the atoms that are resonant with the incoming light can become correlated due to the Rydberg DD interactions. Furthermore, in recent experiments it has been demonstrated that polar molecules can be utilized in effectively controlling atomic resonances [9]. Although the control in Ref. [9] was obtained through Rydberg states, one could envisage a scenario in which an oblate trap of polar molecules exhibiting repulsive static DD interactions is placed on top of a prolate atom trap. The interactions between these molecules and the atoms could potentially be harnessed to determine, through induced atomic level shifts, which atoms can engage in resonant interactions with light. This selective process might ensure that only the atoms located atop the molecules are in resonance. Consequently, the position correlations initially associated with the molecules would be transferred to the optically interacting atoms.
FIG. 1. Schematic illustration of the trapping geometries illuminated by coherent light. (a) An oblate (pancake-shaped) trap in the xy plane (ℓ_z ≪ ℓ_x = ℓ_y), with the static atomic dipoles aligned perpendicular to the plane, along the light propagation direction. (b) An elongated prolate (cigar-shaped) trap (ℓ_z ≫ ℓ_x = ℓ_y) with the static dipoles aligned perpendicular to the long axis. The incident light in (b) propagates along either the long or the short axis of the trap.
FIG. 12. Coherently and incoherently forward-scattered power (in units of incident light intensity/k²) from 10 atoms in a prolate trap (ℓ_z/ℓ_x = 25) for different static dipole interaction strengths at the peak densities (a), (b) ρ_1D/k ≃ 0.1 and (c), (d) ρ_1D/k ≃ 1, and (e), (f) the atomic pair distributions: normalized (e) nearest-neighbor and (f) all-pair distributions between the atoms. The light propagates parallel to the long trap axis. The lens NA is 0.8.
FIG. 17. Localization of excitations |P| [in units of DE/(ℏγ)] as a function of the distance from the most excited atom |ϱ − ϱ_max| for 200 atoms in an oblate trap at the peak density ρ_2D/k² ≃ 1, for (a) all data and (b) the 2% most localized cases (1000 realizations). The corresponding profile in an individual stochastic realization with R_dip √ρ_2D ≃ 0.15 for (c) ρ_2D/k² ≃ 0.04 and (d) 1.
| 2023-10-26T18:40:58.627Z | 2023-10-24T00:00:00.000 | {"year": 2023, "sha1": "29e5b7d9b66d2c4a312a7dfa73fc8ae4a717e9ad", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevResearch.6.013078", "oa_status": "GOLD", "pdf_src": "ArXiv", "pdf_hash": "e5e3da60fccc6b75b395a281179c9cd807c47098", "s2fieldsofstudy": ["Physics"], "extfieldsofstudy": ["Physics"]} |
232099716 | pes2o/s2orc | v3-fos-license | Key Physical Factors in the Serve Velocity of Male Professional Wheelchair Tennis Players
The aim of this study was to identify the physical factors related to serve speed in male professional wheelchair tennis (WT) players. The nine best nationally ranked Spanish male wheelchair tennis players (38.35 ± 11.28 years, 63.77 ± 7.01 kg) completed a neuromuscular test battery consisting of: isometric handgrip strength; serve velocity; 5, 10 and 20 m sprint (with and without racket); agility (with and without racket); medicine ball throw (serve, forehand and backhand movements); and an incremental endurance test specific to WT. The highest correlations with serve velocity were observed for the serve (r = 0.921), forehand (r = 0.810) and backhand (r = 0.791) medicine ball throws, all positive. A regression analysis identified a single model with the serve medicine ball throw as the main predictor of serve velocity (r² = 0.847, p < 0.001). In conclusion, it is recommended that coaches and physical trainers include medicine ball throw workouts in the training programs of WT players due to the transfer benefits to serve speed.
Introduction
Wheelchair tennis (WT) is the adapted modality of conventional tennis (CT) [1] and is one of the most popular Paralympic sports [2]. This growth has increased the competitive level of the players and has driven the professionalization of the best-ranked ones [3]. To assist WT players in their quest for performance improvement and professionalization, creating specific training situations that simulate the reality of competition has been indicated as a key factor in the design of sessions [4]. For this, it is vitally important for coaches and physical trainers to better understand the factors that specifically affect WT performance in order to achieve the best results.
WT competition is divided into two categories: Open and Quad. In the Open category, there are two draws: women and men [1]. Players in this category have a wide range of disabilities, including spinal cord injury, single or double amputation, or spina bifida. In the Quad category, men and women play together, and players additionally have a disability in their upper limbs [5].
Most of the studies conducted on WT match analysis have focused on the Open category. This research has concluded that WT rallies last between six and ten seconds, with three to four shots per rally [6,7]. The serve and the return of serve seem to be the most important strokes in a WT match. In CT, the serve has been described as the most potentially dominant stroke in the modern game [8,9], although in WT it does not seem to have the same positive influence as in CT [10,11]. Serve velocity (SV) is undoubtedly one of the determining factors for standing players [12], and its relationships with other factors related to the physical condition of the players have been widely studied [13]. Some research has used different isometric tests, such as wrist, elbow or shoulder flexion-extension [14], whereas other studies have used dynamic strength tests, such as the isokinetic shoulder test [15], to find relationships between physical condition and serve speed. On the other hand, studies have also used functional field tests based on medicine ball throwing as possible predictors of serve speed [13,16,17]. In general, it seems that knee flexion before extension is a prerequisite for an efficient execution of the serve as well as for achieving a higher jump [18,19], aspects that cannot be considered in adapted tennis because it is played sitting in the wheelchair (Figure 1). Some studies conducted by national tennis associations have used a battery of physical tests to monitor the development of their athletes [20,21], as well as to establish relationships between the measurements [13,22,23]. In general, anthropometric measurements, strength, speed, agility, endurance and flexibility are usually included in these tests.
From a biomechanics perspective, the serve movement is commonly divided into three phases (preparation, acceleration and follow-through) comprising eight stages (start, release, loading, cocking, acceleration, contact, deceleration and finish) [24]. The loading stage of the lower body has been described as the 'loaded position', in which the dominant elbow adopts its lowest vertical position; it coincides with maximal knee flexion [24] and occurs at the end of the eccentric phase of the movement. In the case of WT, the players have a lower hitting plane compared to standing players, as well as lower force generation due to the deficit in force production of the lower body [25]. In addition, functional limitation implies that Quad category players, who have a greater functional limitation, impact the ball closer to the body and generate lower hitting power than those of the Open category [26]. An increased serve speed reduces the time for the opponent to successfully return the ball and increases the probability of the server dominating the rally or winning a direct point [27]. Because studies in CT show a relationship between physical parameters and serve speed [13-16], and because the serve technique is similar in both disciplines, our hypothesis was that there would be a relationship between some physical parameters and serve speed in WT players. Despite this, to the authors' knowledge there is no research on how serve speed is related to the physical parameters of WT athletes, for whom field tests have become a reliable option to establish performance level [28]. Therefore, the objective of this research was to identify the physical factors related to serve velocity in WT players using different reliable and valid field tests previously used in research.
Participants and Procedures
Nine of the top ten male wheelchair tennis players in the Spanish national ranking participated in this research (mean ± SD age: 38.35 ± 11.28 years, weight: 63.77 ± 7.01 kg). All of them played national and international competitions and were among the top 150 of the international ITF WT ranking (Open category). Eight of the nine players were right-handed, and one was left-handed. Players had 10.2 ± 6.2 years of playing experience and practised an average of 9.3 ± 4.8 hours of tennis per week. The characteristics of the participants are shown in Table 1. The players were informed of the characteristics of the study and signed an informed consent form to participate in it. All the procedures followed in this study were in accordance with the ethical standards of the Declaration of Helsinki of 1975, revised in 2008, and were approved by the ethics committee of the Royal Spanish Tennis Federation (RFET_CE17.3). The players were summoned at the same time of day to perform the tests [29]. First, a standardized 10-min directed warm-up was performed, consisting of joint mobility, linear movements with the chair, circular movements and turns simulating hitting, and low-intensity accelerations and decelerations [30]. The tests were carried out on two consecutive days in the following order: Day 1: sprint test (5, 10 and 20 m), agility test (T-test), serve velocity test, and medicine ball throw test (forehand, backhand and serve); Day 2: incremental endurance test (Hit and Turn Tennis Test) and manual dynamometry test. The scores of the different tests were collected during their development by the researchers themselves. All tests were conducted on an indoor hard tennis court.
Measurements Collected
Different reliable and valid field tests used previously in research were selected. The characteristics of each of the tests were the following [28,30-32]:
• Sprint test: Four gates at 0, 5, 10 and 20 m were used to measure the speed of the WT players. Subjects started from a line 0.5 m behind the first gate. Each participant performed the test three times without a racket and three times with a racket, with a 2 min rest between repetitions. The best value of the three attempts was recorded. The time was measured in seconds (s) and thousandths of a second (ms), with an error of ±0.001 s, using Chronojump photocells® (Chronojump, Barcelona, Spain) and Chronojump software version 1.7.1.8 (Chronojump, Barcelona, Spain) for MAC.
• Agility test (T-test): This agility test is adapted for wheelchair sports [31] and has previously been used with WT players [33]. The test includes accelerations and decelerations, as well as turns to both sides. The participant started in the centre of the court behind the baseline and had to move to the intersections of the singles line with the service line, always passing through the central area of the court (T), until returning to the starting area (Figure 2). Each participant performed the test three times without a racket and three times with a racket, with a 2 min rest between repetitions. The best value of the three attempts was recorded. Time was measured using the Chronojump photocell® (Chronojump, Barcelona, Spain) and the Chronojump software version 1.7.1.8 for MAC, with a gate located on the baseline to record the start and the end of the test.
• Serve velocity test: A radar gun (Stalker Pro Inc., Plano, Texas, USA) was used to measure serve velocity. The player performed 10 serves at maximum velocity directed to the wide area of the service box, from the advantage side for the right-handed players and from the deuce side for the left-handed player [34]. The radar was positioned behind the player at the same hitting height and oriented in the same direction as the ball. The average value of the 10 serves in km·h⁻¹ was recorded.
• Isometric handgrip strength: The hand dynamometry test was carried out to assess the maximum isometric force of the finger flexors with a Smedley III T-18A dynamometer (Takei, Tokyo, Japan), with a range between 0 and 100 kg in 0.5 kg increments and an accuracy of ±2 kg. The test was carried out in the wheelchair sitting position with the arm extended alongside the wheel without actually contacting it [30]. Each subject made three maximum attempts with each hand after a familiarization phase with the instrument using sub-maximal repetitions. The rest time between attempts was 2 min. The best value of the three attempts was recorded in N·kg⁻¹.
• Anaerobic endurance test (Hit and Turn Tennis Test): This test is an adaptation of the one developed for conventional tennis to evaluate the specific anaerobic endurance of the player through the level reached [32]. The test consists of simulating a hit on top of a cone located at the intersection of the doubles line with the baseline, coinciding with the sound signals emitted by the test audio. After that hit, the player must simulate another hit on the opposite side, and so on until the end of the series. In this adaptation, the hit had to be made close to a cone located at the intersection of the singles line with the doubles line, thus reducing the displacement distance for the WT players.
As an incremental test, it ended when the player was unable to reach the cone at the rate set by the sound signals. The stage reached by each player was recorded when he was no longer able to simulate the hit in the designated area at the same time as the acoustic signal sounded. Table 2 shows the physical variables measured as well as the different tests used.
Data Analysis
Due to the small sample size, the Shapiro-Wilk and Levene tests were used to check the normality and homogeneity of variances for each variable (sprint, agility, strength and anaerobic endurance). All the variables obtained p-values > 0.05 except dynamometry with the dominant arm. A Pearson correlation analysis (Kendall's tau-b for dominant-arm dynamometry) was performed to identify the variables related to serve speed. Values were classified as trivial (0-0.1), small (0.1-0.3), moderate (0.3-0.5), large (0.5-0.7), very large (0.7-0.9), almost perfect (0.9-1.0) and perfect (1.0) [37]. Subsequently, a multiple linear regression analysis (stepwise) was performed to identify the parameters with the greatest influence on SV. SV was used as the dependent variable, while the variables that had previously shown significance served as independent variables. Significance was established at p < 0.05. All data were analyzed with the IBM SPSS 25.0 statistical package for Macintosh (IBM Corp, Armonk, NY, USA).

Results

Table 3 shows the descriptive analysis of the test measurements in the wheelchair tennis players. Table 4 shows the correlation coefficients of the different physical tests with serve speed. Figure 4 shows the relationship between the statistically significant variables and serve velocity. The highest correlations were observed for the serve (r = 0.921), forehand (r = 0.810) and backhand (r = 0.791) medicine ball throws, all positive. The 20 m sprint test with racket showed significance (p = 0.012) and correlated negatively with serve velocity (r = −0.788). Table 5 shows the results of the multiple regression analysis. The medicine ball throw simulating a serve was the main and only predictor of serve speed (r² = 0.847, p < 0.001), with a positive relationship.
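For illustration, the reported correlation-plus-stepwise pipeline can be reproduced along the following lines in Python (the study itself used SPSS); the data file and column names are hypothetical, and the final model simply refits the single predictor retained by the stepwise procedure.

    import pandas as pd
    from scipy import stats
    import statsmodels.api as sm

    df = pd.read_csv("wt_tests.csv")   # hypothetical data file
    predictors = ["mbt_serve", "mbt_forehand", "mbt_backhand",
                  "sprint20_racket"]   # hypothetical column names

    # Pearson correlation of each physical test with serve velocity
    for col in predictors:
        r, p = stats.pearsonr(df[col], df["serve_velocity"])
        print(f"{col}: r = {r:.3f}, p = {p:.3f}")

    # Linear model with the predictor retained by the stepwise procedure
    X = sm.add_constant(df[["mbt_serve"]])
    model = sm.OLS(df["serve_velocity"], X).fit()
    print(model.summary())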
Discussion
Knowing how the physical demands of WT are related, as well as identifying the variables that determine them, can help coaches and physical trainers design exercises adapted to the specific needs of the game. The aim of this research was to determine the relationship between different physical demands, evaluated through a field-test battery, and serve velocity in professional WT players. In general, it was observed that the medicine ball throws simulating the forehand, the backhand and the serve strokes showed the highest correlations with SV, while the serve medicine ball throw was the test that best predicted SV.
Coaches and physical trainers often use batteries of tests (related to speed, agility, maximum strength or functional movements) to monitor the development of their athletes and to prescribe training, among other goals [38]. Medicine ball throws for both the forehand and the backhand showed a positive and statistically significant correlation with serve speed (Table 4). These throws have been useful for examining the rotational power of the trunk in athletes in general [39] and in tennis players in particular [38].
The tennis serve includes the activation of the abdominal muscles (rectus abdominis and obliques) to perform a trunk flexion with rotation [25]. Furthermore, the forehand MBT had a higher correlation with serve velocity than the backhand MBT (r = 0.810 vs. r = 0.791). It is worth noting that the forehand MBT includes a rotation of the trunk towards the player's dominant side, as does the serve, which could explain the greater correlation between the two. In addition, the 20 m sprint showed a negative correlation with serve speed (r = −0.788). In this sense, a greater throw distance is related to a higher displacement speed (less time) over long distances (20 m) (Figure 4d). The abdominal muscles play a major role in force generation during the serve [25], and also in trunk stabilization for propulsion tasks [40], which could explain the observed correlation.
The biomechanical requirements of the serve have been specifically analysed using an 8-stage model (start, release, loading, cocking, acceleration, contact, deceleration and finish) [24]. Given the considerably low contribution of the lower body to force generation in the kinetic chain of the serve movement in WT players, it could be argued that, from the loading phase onwards (semi-side position, elbow of the racket arm at its lowest position, free arm stretched up, etc.), the upper-body movement in the serve is similar to the serve MBT in both standing and chair players. This could explain why the medicine ball throw simulating a serve is the main variable predicting serve speed (Table 5), showing a high correlation (Figure 4a). In addition, this loading position in both actions has specific biomechanical implications that do not occur in the forehand and backhand MBT (shoulder-over-shoulder work, action-reaction of the free arm, line of force in the direction of the throw/impact, asynchronous movements between the two arms, etc.) [41,42].
In fact, it is known that one of the movements generating greater power in the hitting action is the torque or rotation component of the trunk, which increases the acceleration distance with respect to the point of impact and adds a greater number of kinetic-chain elements to the corresponding stroke action [43]. This action is very clear in the forehand and backhand strokes, but it also plays a very relevant role in the serve movement. In this study, it has been observed that, from a kinematic point of view, actions that resemble the forehand, the backhand and the serve strokes have a high correlation with serve speed (Figure 4a-c). Specifically, the serve MBT reproduces the serve movement in its setup and advance-impact phases (Figure 1). Even wheelchair tennis players who have a spinal cord injury and a functional deficit in the trunk musculature can use their non-dominant hand as support in the hitting action, in a similar way to the serve movement of standing players. In fact, the serve requires multiarticular recruitment and a high rotation speed during the shot [41,42].
Because the serve and, specifically, the SV have been reported as the most powerful and dominant stroke feature in tennis, involving factors such as upper-body strength and power and shoulder range of motion [24], and even without considering other elements of the kinetic chain such as the lower body (which also has its influence), this stroke can be considered decisive in WT as well. Therefore, based on the results obtained, it seems reasonable to think that specific strength training simulating the kinematics of the serve gesture with medicine balls can help to increase speed, especially in the serve, and also in the forehand and backhand strokes. It is therefore recommended that coaches and physical trainers incorporate this type of specific work into their training plans to help players improve these strokes. These workouts should be done under neuromuscular working conditions, in the absence of fatigue and at the beginning of the training sessions, avoiding the usual practice of serve workouts in the final parts of the sessions.
The results obtained in this study present a series of limitations that should be considered. On the one hand, although the sample includes the top nine national players in the Open category, its small size does not allow divisions according to functional limitation, since the strength of the relationships between the variables would have been very weak. Therefore, future research should include a larger sample of WT players, players from different WT categories (Quad and Open), as well as female and male players to compare this relationship (SV with physical tests) both by functional limitation and between genders. In addition, other anthropometric variables, such as height or body segments, which have shown a correlation with serve speed in CT players, were not measured. On the other hand, the SV test was carried out in a training environment, and taking measurements in a competition setting using a radar gun could provide different data.
Conclusions
Medicine ball throws simulating the forehand, the backhand and the serve strokes showed a high correlation with SV in WT. The serve MBT is the test that best predicts SV in WT. Therefore, given the similarity of the movements between the two gestures, coaches and physical trainers are encouraged to include medicine ball throw workouts as serve transfer exercises within the training programs of WT players. Likewise, it is advisable to work with medicine ball throws for the forehand and backhand, given the importance of trunk rotation for the serve. Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement:
The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy.
| 2021-03-04T05:42:26.642Z | 2021-02-01T00:00:00.000 | {"year": 2021, "sha1": "583b1fa999558db8a6282ad67bbd5894658ea81b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1660-4601/18/4/1944/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "583b1fa999558db8a6282ad67bbd5894658ea81b", "s2fieldsofstudy": ["Medicine"], "extfieldsofstudy": ["Medicine"]} |
210324471 | pes2o/s2orc | v3-fos-license | Giant congenital melanocytic nevus of scalp: A rare case with dermoscopic findings
Sir, An 18-year-old female presented with an asymptomatic raised pigmented swelling over the scalp since birth. On inquiry, the family revealed that the lesion had shown focal hair loss until 5 years ago, when the mass started rapidly enlarging and covered the whole back of the scalp, with hair appearing over it. There was no history of trauma to the head, spontaneous bleeding, or oozing from the lesion. None of the family members had any similar complaints.
Giant Congenital Melanocytic Nevus of Scalp: A Rare Case with Dermoscopic Findings
Cutaneous examination revealed a raised pigmented swelling over the occipital area, with thick folds separating it from uninvolved skin, and sparse terminal hair with a coarser consistency than that on uninvolved skin [Figure 1]. Multiple dark brown-to-blue hyperpigmented papules were dispersed all over the lesion as well as over the surrounding skin, covering the entire occiput posteriorly and the crown up to the mid-vertex transition point. The involved body surface area was around 2%-3%. The lesion was firm and nonpulsatile, with no transillumination (ruling out a neural tube defect). No developmental anomalies were detected. There was no local lymphadenopathy. Routine investigations were unremarkable. Computed tomography, magnetic resonance imaging, and ultrasonography of the scalp did not reveal any central nervous system or bony invasion. The differential diagnoses considered were congenital nevus, nevus lipomatosus superficialis, fibrolipomatous hamartoma, cutis verticis gyrata, and neurofibroma.
On trichoscopy, there was the presence of multifocal globules and dots along with central hyperpigmentation. Such patterned lesions were dispersed throughout the enlarged mass as well as the surrounding skin [Figure 2]. Histopathological examination revealed abundant nevus cells arranged in sheets extending from the reticular dermis to the subcutis, and a few melanophages. There were mature Type-B lymphocytoid nevus cells arranged in nests, with visible melanin pigment and pale nuclei. In the deeper dermis and subcutis, there were sheets of spindle-shaped Type-C neuroid nevus cells, which also surrounded the pilosebaceous structures [Figures 3 and 4]. No cellular atypia was detected. The lesion was, therefore, diagnosed as a giant congenital melanocytic nevus (GCMN). The patient was referred to the plastic surgery department and advised complete resection, which she declined.
Congenital melanocytic nevus (CMN) is a neural crest disorder with varied size and macroscopic and microscopic pictures. These lesions are usually present since birth, either flat or raised, and may present with satellite nevi (tardive satellites), nodularity, or hypertrichosis. According to Zaal et al., a giant congenital melanocytic nevus (GCMN) is defined as one covering 1% of the body surface area on the face and neck or 2% on the rest of the body. [1] Keeping in view the expected growth rate, another definition considers a CMN measuring at least 6 cm on the trunk or 9 cm on the head in a neonate as a GCMN. [2] Histologically, three types of dermal nevus cells can be seen: Type-A (epithelioid) nevus cells mature into Type-B (lymphocytoid) nevus cells, which in turn mature into Type-C (neuroid) dermal cells during progressive downward migration. Type-C cells are often found with adipocyte or neural metaplasia. [3] The differential diagnoses considered in our patient were turban tumor, connective tissue hamartoma, pachydermoperiostosis, ectopic meningoepithelial hamartoma, and amyloidosis, all ruled out by histopathology. Complications of giant CMN include malignant melanoma in 2%-31% of cases, while those on the head and neck may be complicated by neurocutaneous melanosis (i.e., abnormal pigmentation of the skin and meninges), which presents with seizures, developmental delay, or malignant melanoma of the meninges. [4,5] Indications for excision of a giant CMN include the age of the patient (old age), proximity to vital structures, and presence of neurocutaneous melanosis (seen in 2.5%-45% of CMN cases). Owing to the premalignant potential coupled with possible additional neurological involvement, all giant CMN on the scalp should undergo a thorough investigative workup, and surgical excision must be practiced in all cases. Although several cases of GCMN on the scalp have been described in the past, their dermoscopic evaluation has seldom been reported. The presence of an atypical or negative network, streaks, dots, and globules in three or more colors (i.e., brown, black, red, white, and/or blue-gray) on dermoscopy could supplement histology in the early diagnosis and treatment of malignant transformation in CMN. [6]

Declaration of patient consent

The authors certify that they have obtained all appropriate patient consent forms. In the form, the patient(s) has/have given his/her/their consent for his/her/their images and other clinical information to be reported in the journal. The patients understand that their names and initials will not be published, and due efforts will be made to conceal their identity, but anonymity cannot be guaranteed.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
| 2020-01-16T09:10:53.388Z | 2019-11-01T00:00:00.000 | {"year": 2019, "sha1": "1c1ebcc78de6d944ad68d93e76c48d428ddb5f13", "oa_license": "CCBYNCSA", "oa_url": "https://europepmc.org/articles/pmc6984042", "oa_status": "GREEN", "pdf_src": "WoltersKluwer", "pdf_hash": "ebc040d54703f00f7cb5fac9930b84280739c225", "s2fieldsofstudy": ["Medicine"], "extfieldsofstudy": ["Medicine"]} |
269431563 | pes2o/s2orc | v3-fos-license | NowIKnowMyABCD: A global resource hub for researchers using data from the ABCD Study
The Adolescent Brain Cognitive Development (ABCD) Study, involving over 11,000 youth and their families, is a groundbreaking project examining various factors impacting brain and cognitive development. Although the ABCD Study has yielded hundreds of publications to date, it has lacked a centralized help platform to assist researchers in navigating and analyzing the extensive ABCD dataset. To support the ABCD research community, we created NowIKnowMyABCD, the first centralized documentation and communication resource publicly available to researchers using ABCD Study data. It consists of two core elements: a user-focused website and a moderated discussion board. The website serves as a repository for ABCD-related resources and tutorials and a live feed of relevant updates and queries sourced from social media websites. The discussion board offers a platform for researchers to seek guidance, troubleshoot issues, and engage with peers. Our aim is for NowIKnowMyABCD to grow with participation from the ABCD research community, fostering transparency, collaboration, and adherence to open science principles.
Background
The Adolescent Brain Cognitive Development℠ Study is the largest and most comprehensive long-term study of brain development and child health in the United States (Volkow et al., 2018). The ABCD Study® was initiated by the National Institutes of Health (NIH) Collaborative Research on Addiction (CRAN), is funded by the National Institute on Drug Abuse (NIDA), and has expanded to include support from many other federal funding agencies. To date, 11,876 children (including both singletons and twins) ages 9-10 have been enrolled across 21 data collection sites with the goal of investigating brain and behavioral development between late childhood and young adulthood. Participants will complete comprehensive behavioral and neuroimaging assessments longitudinally, making the ABCD Study well positioned to answer questions about the developing brain and the many childhood experiences that shape social, emotional, intellectual, and physical growth and mental health.
Following open science practices, the ABCD Study provides regular curated data releases to the research community. However, there is no similarly centralized public platform to assist ABCD users with data analysis. The scope and diversity of data encompassed within the ABCD Study stand to benefit from a centralized resource through which researchers may access comprehensive documentation and support. First, the ABCD Study comprises data from an extensive array of modalities, some of which may fall outside any individual researcher's existing methodological competencies. A centralized resource would provide researchers with accessible reference materials to aid them in developing new method competencies. Next, the longitudinal nature of the ABCD Study introduces novel challenges and considerations for researchers. Researchers must make analytical decisions about how to account for the interplay of variables over time, participant dropout and data missingness, evolving measurement tools, and the influence of global events on the data landscape. Finally, the multisite structure of the ABCD Study poses additional challenges to researchers. Even with extensive and careful documentation of ABCD methods, procedures, and variables, researchers may have remaining questions about how a certain measure was collected, whether a variable of interest exists in the data, or what approach they should use when analyzing the data.
Currently available outlets for ABCD data and analysis queries, including the National Data Archive (NDA) Help Desk and small-group knowledge repositories, provide invaluable assistance to researchers but suffer from a lack of scalability that hinders them from assisting a larger group of data users. The first line of assistance for ABCD questions is the NDA Help Desk, where researchers email NIH staff for assistance with ABCD data usage. Under the current model, all communications stay within direct email exchanges. However, as more researchers use the data, the strain on NDA Help Desk staff to assist individual users increases. Many of these researchers may have similar questions, which creates an opportunity to leverage open forums that make common Help Desk answers available to a wider audience.
The other primary method of ABCD-related resource sharing is through small peer-to-peer knowledge hubs, such as informal communications between authors of published ABCD research and subsequent researchers attempting to build on their work. When faced with uncertainty about how to approach working with ABCD, the first place many researchers may turn is the existing literature. A researcher seeking to use a particular analysis method might find relevant papers and then check for publicly available analysis code on a study-by-study basis. This ad hoc approach to sourcing helpful material can make it onerous for researchers to compare techniques across studies, choose the most appropriate method for their own work, or synthesize a new analysis technique by pooling analyses from multiple papers.
Meanwhile, other valuable resources remain accessible only to workshop participants or members of certain labs and research groups. Over the past few years, a number of workshops have been hosted that are designed to support those working with ABCD data. These workshops have included the ABCD-ReproNim Course, which focused on reproducible neuroimaging, and Modeling Developmental Change, which provided training in longitudinal analyses and approaches. ABCD-ReproNim (https://www.abcd-repronim.org) aimed to provide trainees with accessible, hands-on experience using ABCD data and to encourage them to produce projects using rigorous neuroimaging and open-science methods. Therefore, in the specific context of ABCD data, much of the course material focused on an introduction to properly handling the data, appropriate research questions to ask when using the data, and specific tools to boost efficiency when working with the data. The Modeling Developmental Change workshop similarly targeted increasing trainees' familiarity with appropriate methods for assessing longitudinal associations, something the ABCD data are well suited for. This workshop included discussions on how to best work within the context of ABCD data, interactive coding tutorials, and in-depth lectures and resources on important theoretical and methodological considerations. As such, these workshops have produced valuable resources (e.g., coding tutorials, recorded lectures, reading lists) to facilitate proper and efficient use of the ABCD Study data; however, they are currently inaccessible to the larger community of ABCD Study researchers. The siloed nature of these resources ultimately leads to an inconsistent understanding within the ABCD researcher community of the complexities of working with the data. Additionally, the inaccessibility of these resources creates additional burdens for both those providing and those seeking help with ABCD data, including excess effort spent troubleshooting common errors on an individual basis, analytical disparities between research groups, and project delays.
Current project
In this paper, we introduce NowIKnowMyABCD, the first centralized documentation and communication resource publicly available to anyone using ABCD Study data. Our goals for this resource are to spotlight trainee-led research innovations and perspectives, foster collaborations, streamline troubleshooting across research groups, pioneer an inaugural open forum, and harness existing expertise by placing a strong emphasis on community input and contributions. We demonstrate a functioning prototype of the NowIKnowMyABCD resource, comprising two main components: 1) a moderated community discussion board to support users in analysis, troubleshooting, and proper data sharing (https://github.com/orgs/now-i-know-my-abcd/discussions) and 2) a website for ABCD resources and tutorials (https://now-i-know-my-abcd.github.io/docs/). Finally, we invite our fellow readers and researchers to support open developmental science by engaging with a knowledge-sharing and knowledge-building space that values transparency and collaboration.
Methods
As the foundation of this resource, our primary focus was to establish a platform conducive to code-sharing and tutorials while fostering an open forum for discussion. GitHub offers a centralized location to host all aspects of our resource, including both the website and discussion board. This choice was driven by its cost-effectiveness, widespread usage for code hosting, and its reputation as a sustainable software platform. Additionally, GitHub supports community contributions through pull requests, a feature that communicates potential changes made to a repository on GitHub. Pull requests are a familiar concept to many GitHub users, making it an ideal choice to help streamline the process for ABCD researchers to provide input on new resources to add, such as a coding tutorial. Furthermore, GitHub's intuitive message board format complemented our objectives.
We leveraged the version control, hosting, and collaboration features of GitHub to track our source code, host a website publicizing ABCD resources, and run a discussion board for community questions.
GitHub discussion board
We created a collaborative communication forum for the ABCD community (https://github.com/now-i-know-my-abcd/docs/discussions) to help researchers find answers to their questions about ABCD and connect with other researchers. We created this discussion board using GitHub Discussions, a tool for maintaining a forum associated with a specific GitHub repository. At the top of our discussion board, an announcement leads to our guidelines page. There, users will find information about obtaining a Data Use Certification (DUC), general comments and courtesy guidelines, information on how to find posts and use the discussion board, discussion categories and labels, and the Q&A format.
To make the discussion page easier to use, we have a two-part organizational system. First, users can create their post within a category on the page. The discussion board combines features of platforms such as Slack (https://slack.com/) and Stack Exchange (https://stackexchange.com/), offering discussion threads that are categorized under broader "channels". Some channels were designed for open and ongoing discussion threads, while other channels have a Q&A format that allows users to mark responses as the answer to their question. Current categories listed on our GitHub discussion board, with a brief description of each, can be found in Table 1.
Second, we have a tag system for labeling individual posts. Each post can have multiple tags, and tags can be used across categories. These features can be used to quickly and easily find relevant posts. For example, someone could look for posts in the stats help channel tagged with "fMRI" and "mental health", or for all posts tagged with "python" and "imaging" across all channels. A list of our current tags is included in Fig. 1.
Additionally, an important innovation in NowIKnowMyABCD is the ability to host and moderate a discussion board for a project with limited data access, which requires a DUC. By implementing a system where moderators receive immediate notifications of new posts, we are able to efficiently screen new posts for content that potentially violates the agreements of the DUC or the discussion board code of conduct.
While we have not encountered any violations of the DUC agreements or discussion board guidelines, GitHub Discussions does provide the functionality to hide, delete, or redact parts of a comment or post.
Once the GitHub discussion board design was complete, we wanted to find appropriate content to pilot our ABCD-related discussions. Authors SAA and LBW previously attended the Modeling Developmental Change (MDC) ABCD Workshop in 2021 (https://abcdworkshop.github.io/). This 5-week intensive workshop was aimed at teaching rigorous and reproducible analyses with ABCD data. The workshop used a private Slack workspace where many conversations were housed in channels related to resources, code help, Q&A, etc. Coinciding with the creation of NowIKnowMyABCD, Slack announced a new policy of deleting messages older than 90 days. Therefore, we asked the workshop organizers and attendees for permission to populate the discussion board with existing Slack content, enabling us to salvage that content and prepopulate a number of discussion topics. The MDC Workshop team graciously accepted our request and granted us administration rights to the Slack workspace, and our GitHub discussion board was launched with the content from the MDC Slack workspace. To do so, we exported all the data from public channels. Both data and metadata were extracted as JavaScript Object Notation (JSON; https://www.json.org/json-en.html) files that then underwent text analysis to filter out categories. All JSON files were consolidated and transcribed into our GitHub discussion board, allowing users to familiarize themselves with how to navigate and make use of this feature. A minimal sketch of this consolidation step follows below.
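This sketch assumes Slack's standard export layout (one folder per public channel containing per-day JSON files of message objects with "type", "user", "text", and "ts" fields); the directory name, output files, and filtering rules are illustrative, not the team's exact script.

```python
# Consolidate a Slack export's public channels into per-channel Markdown files
# that can be pasted into GitHub Discussions posts.
import json
from pathlib import Path

EXPORT_DIR = Path("mdc_slack_export")  # hypothetical unzipped Slack export

def channel_messages(channel_dir: Path):
    """Yield (timestamp, user, text) for ordinary messages in one channel."""
    for day_file in sorted(channel_dir.glob("*.json")):
        for msg in json.loads(day_file.read_text(encoding="utf-8")):
            # Keep plain user messages; skip joins, bot posts, and other subtypes.
            if msg.get("type") == "message" and "subtype" not in msg:
                yield msg.get("ts"), msg.get("user"), msg.get("text", "")

for channel_dir in (d for d in EXPORT_DIR.iterdir() if d.is_dir()):
    lines = [f"**{user}** ({ts}):\n{text}\n"
             for ts, user, text in channel_messages(channel_dir)]
    Path(f"{channel_dir.name}.md").write_text("\n".join(lines), encoding="utf-8")
```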
Resource website
We also built a resource website (https://now-i-know-my-abcd.github.io/docs) to collect and publicize ABCD resources from across the web, as well as in-house tutorials for working with ABCD data. We built the website using Jupyter Book, an open-source library for building publication-quality books, websites, and documents from computational content (Executable Books Community, 2020). See Fig. 2 for an image from the resource website.
Our website houses three types of information: guides and resources aggregated from across the web, our own code tutorials for analyzing ABCD data, and a feed of X (Twitter; https://twitter.com/home) posts from the ABCD community.
First, we collected and organized existing ABCD online resources into a central location on our website. For example, we assembled pages from the NIH, as well as groups like the ABCD-ReproNim Course, into a unified set of instructions for gaining permissions to and accessing ABCD data. We also provided links to past session materials from ReproNim (https://www.repronim.org/) and other ABCD-relevant workshops. Further, we collected assorted resources across the web on topics related to statistics, neuroimaging, and data science tools. These topics, while not necessarily ABCD-specific, are likely to be useful to ABCD researchers. Next, we created worked code tutorials to assist ABCD researchers in exploring their data. These tutorials make full use of Jupyter Book's ability to render code and output into stylish, readable web pages. Each page's code is fully reproducible, with parallel examples written in R (https://www.r-project.org/about.html) and Python (https://www.python.org/) to accommodate users who prefer either language. For these "bilingual" tutorials, we augmented Jupyter Book's website-rendering engine with the IRkernel package (https://github.com/IRkernel/IRkernel) to allow Jupyter Book to execute source code from Jupyter and R Markdown (https://rmarkdown.rstudio.com/) notebooks in the same website. We note that the tutorials currently visible on the website are merely a starting point to demonstrate tutorial functionality. We plan to continue adding new tutorials and incorporating community-submitted ABCD analysis tutorials through pull requests and other GitHub collaboration features.
X (Twitter) pipeline
Given the importance of social media platforms for science communication and disseminating relevant information regarding publications and resources, we wanted to create a mechanism for identifying and archiving posts relevant to ABCD.
Our website's portal for ABCD on X (Twitter) has two parts: an RSS feed of relevant ABCD tweets from the past few weeks, and a Google spreadsheet of tweets posted since the ABCD Study was originally launched in 2015. In the RSS feed, users can see what ABCD researchers have been up to in recent weeks. Users may scroll through the feed to find recent ABCD-related papers, opportunities, announcements, and troubleshooting. All tweets included in the Google spreadsheet (relevant ABCD tweets since 2015-01-01) are the result of an X (Twitter) pull that included the following keywords and hashtags: "ABCD Study", "Adolescent Brain Cognitive Development Study", "Adolescent Brain Cognitive Development (ABCD) Study", "ABCD sample", "ABCD Study Site", "#ABCDstudy", "#abcdstudy", "ABCD-BIDS Community Collection".
RSS feed: X (Twitter)
To maintain more frequent updates, we created an RSS feed to autopopulate the website with tweets from the past two to three months containing "#ABCDStudy". We used the website RSS.app (https://rss.app/) to generate the feed and then linked the feed to our website. The RSS feed is interactive on the website, meaning that if you want to reply, retweet, or like something, you can click on the specific tweet and the feed will take you to the respective tweet on X (Twitter).
Google spreadsheet: X (Twitter)
We created an X (Twitter) developer account to establish API tokens. By doing this, we were able to use a Python (v3.9) package (JustAnotherArchivist, 2018) to scrape X (Twitter) for any tweets from 2015-01-01 to the date of the scrape (most recent: December 2023). It is possible to pull tweets into a data frame format via Python using users, hashtags, searches, tweets, list posts, and trends. For example, we wanted to pull all relevant tweets that included common hashtags from the birth of the ABCD Study until the date we most recently ran the script; a sketch of this pull follows below.
[Call-out prompts displayed on the resource website: "Looking for a specific measure?" "Wondering how to score a measure?" "Need more information on how an analysis in an ABCD paper was done?"]
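The paper names only the package author (JustAnotherArchivist, 2018); we assume this refers to snscrape, that author's widely used social-media scraper, so the following is an illustrative sketch rather than the team's exact script. The query terms come from the keyword list above; the result cap and column choices are ours.

```python
# Pull ABCD-related tweets into a data frame, then save for manual review.
import pandas as pd
import snscrape.modules.twitter as sntwitter

KEYWORDS = ['"ABCD Study"', '"Adolescent Brain Cognitive Development Study"',
            '"ABCD sample"', '"ABCD Study Site"', '#ABCDstudy',
            '"ABCD-BIDS Community Collection"']
query = f'({" OR ".join(KEYWORDS)}) since:2015-01-01 until:2023-12-31'

rows = []
for i, tweet in enumerate(sntwitter.TwitterSearchScraper(query).get_items()):
    if i >= 10_000:  # cap for the sketch; the real pull was unbounded
        break
    rows.append({"date": tweet.date, "user": tweet.user.username,
                 "text": tweet.content, "url": tweet.url})

df = pd.DataFrame(rows)
df.to_csv("abcd_tweets.csv", index=False)  # later reviewed and categorised by hand
```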
We converted the output from our scrape to a single Google spreadsheet. We reviewed the spreadsheet's contents and separated them into relevant categories. The categories are:
• Announcements: any information shared concerning the study protocol, press releases, or reports.
• Questions: specific inquiries about the data protocol or working with the actual data.
• Issues: troubleshooting.
• Resources: any new methods or tools that can be used within the ABCD dataset.
• Shout-outs: kudos from colleagues and friends about work with the ABCD dataset.
• Opportunities: job and training opportunities.
• Papers: any pre-prints or publications using ABCD data.
• Posters: any posters presented at conferences.
• Presentations: any talks or presentations given.
While the categories above are similar to those in our Discussion Board (see Table 1), these are specifically purposed to categorize relevant information pulled from X (Twitter).
The Google spreadsheet is interactive within the website, meaning you can scroll through the different categories and interact with the tweets through their respective links. We will update this spreadsheet every 6 months.
Discussion board
Our long-term goal is for the NowIKnowMyABCD Discussion Board to become a well-utilized resource that helps provide timely and accurate information for researchers working with ABCD data. To meet this goal, we have two primary paths for continued development. First, we plan to continue to update and refine our tag system, as well as the channel organization. We anticipate that updates to the tag system will be particularly important around the time of new annual data releases. As new types of data become available, we will ensure that these data and topics of interest are included as discussion tags, simplifying the process of sorting questions during an especially active time.
Additionally, we plan to increase our ability to moderate and respond to questions. In order to meet the needs of what we hope will be a growing resource, we will train additional moderators to manage the discussion board, focusing on moderators who can provide long-term support. Additionally, we are planning to include more moderators who have specific areas of expertise, who can either be actively involved in answering questions or refer questions to those with the relevant knowledge, such as members of the ABCD Consortium.
Fig. 1. Current tags on GitHub Discussion Board. Tags are used to indicate the content of a post, and multiple tags can be used per post. Tags can also be used as a sorting feature, e.g. sorting for all posts tagged with "DTI". Each tag also has a short description indicating the focus or content of the tag.
Website
In the future, we are exploring the possibility of expanding our methods tutorials to address current areas of interest, which could potentially include conceptual topics (e.g., pubertal development), methodological topics (e.g., longitudinal modeling), and meta-science topics (e.g., promoting equity and diversity in analysis design). Moreover, we would like to promote collaborative science by increasing community contributions and involvement. Specifically, we hope to assist ABCD researchers who want to share their code, either by publishing their tutorials through NowIKnowMyABCD or by supporting them in self-publishing their tutorials and publicizing them through our external resource links.
Social media pipeline
In the future, we plan to include additional social media platforms in our pipelines. We are aware of the increase in usage of alternative microblogging platforms like Mastodon and Bluesky, both of which allow users to join decentralized posting communities specific to their interests (https://mastodon.social/explore; https://bsky.app). Therefore, we will use the same protocol as the X (Twitter) pipeline above to integrate Mastodon and Bluesky content into our website.
We will also create NowIKnowMyABCD accounts on both X (Twitter) and Mastodon. These accounts can be used to boost publicity of the resource while also spreading any relevant information that comes up for the ABCD Study.
Continuing outreach efforts
Additionally, we plan to increase our outreach and publicity efforts. NowIKnowMyABCD is intended to be a community-driven resource, which is supported by community moderation and contributions. To date, the team has presented the project in a number of formats, including to working groups and labs and on social media. We plan to continue these outreach methods by continuing to present project overviews to interested groups and by advertising the project through more traditional methods, such as via publications and poster presentations at conferences attended by ABCD-interested researchers. Through these avenues, we aim to both reach potential new users and solicit feedback from existing users.
Conclusions
In summary, NowIKnowMyABCD is a resource-sharing model for ABCD researchers that both centralizes information for users and decentralizes demands placed on contributors.
The community participation structure of NowIKnowMyABCD has the potential to support trainee development and encourage collaboration. Both its contributed tutorials and discussion forum allow ABCD researchers to pool their knowledge and expertise. Novice ABCD users can seek support from more experienced users across the ABCD community, not just in their own lab or department. Furthermore, the community-focused design of NowIKnowMyABCD offers opportunities for researchers with diverse backgrounds and overlapping interests to connect and explore potential collaborations. By uniting researchers working on shared research questions and encouraging cross-pollination of ideas, NowIKnowMyABCD will enrich and accelerate research progress.
NowIKnowMyABCD's informal, user-driven format complements the information available through formal ABCD Study channels. The official release notes that accompany each ABCD data release provide key information to researchers that is vetted top-down by ABCD Study leaders. At the same time, the rigorously curated nature of these releases means that official ABCD documentation is only released on an annual basis. NowIKnowMyABCD fills an important niche by allowing users to obtain quick, albeit unofficial, answers to data use and analysis questions that may be too new or too specific to be covered by the ABCD Wiki and other official sources of documentation. While NowIKnowMyABCD is not officially affiliated with the ABCD Study or the NIH, it is nonetheless designed to function in compliance with the ABCD DUC, ensuring that users may communicate about analysis questions while maintaining data safety and privacy. Critically, we note that the content and opinions expressed by individual users in NowIKnowMyABCD posts may not always be validated or endorsed by the ABCD Consortium or the NIH.
Additionally, the openly available nature of resources such as NowIKnowMyABCD has the potential to reduce the burden on existing help resources and improve methodological reproducibility among ABCD researchers. Having numerous people contact a single resource, like the NDA Help Desk or a workshop help group, about the same question or issue creates pressure on help providers and a bottleneck for answer seekers. NowIKnowMyABCD's publicly visible links, tutorials, and forum posts allow those answers to scale, relieving the burden on individual helpers to repeatedly send answers to different researchers asking similar questions. NowIKnowMyABCD also helps to increase reproducibility by building a robust network of permanent, public institutional knowledge about ABCD data use and analysis. This reduces the chance of important knowledge and protocols or pipelines being lost when individual researchers move to new projects or institutions. Additionally, NowIKnowMyABCD and its back-end structure are all available at no cost to users, making our pipeline/framework ideal for use by early career researchers and projects without major funding. However, as the website grows, we plan to work with members of the ABCD Consortium to identify funding opportunities for researchers interested in maintaining the website and to support costs as they may arise (e.g., X (Twitter) is now making developer accounts a paid resource).
NowIKnowMyABCD provides a model for community-based support for large, longitudinal datasets. The complications of working with these types of datasets are not unique to ABCD, and we hope that by making our code and processes available, researchers working on other large-scale datasets can use these same strategies to improve the documentation and usability of those datasets.
NowIKnowMyABCD is a trainee-led resource for sharing knowledge and resources related to the ABCD Study. Through a combination of web-hosted tutorials and an interactive discussion board, we hope to support a network of researchers dedicated to open science and interdisciplinary collaboration. The ABCD Study is constantly evolving, and NowIKnowMyABCD will evolve alongside it. As new issues and developments arise, we will continue to update the resource with new tutorials, resources, and tags on the discussion board. We encourage any researchers interested in the project to participate, either through the community discussion board or via pull requests to the website, which can be used to contribute tutorials and other resources.
Modeling Developmental Change Workshop
Additionally, many of the questions and resources shared on the discussion board originated in the Slack Workspace of the 2021 Modeling Developmental Change ABCD Workshop (R25MH125545; PI: Dr. Kathryn Mills). We thank the organizers and participants of the workshop for their engagement and permission to share the materials.
ABCD-ReproNim Course
The ABCD-ReproNim Course provided training for reproducible analyses of the Adolescent Brain Cognitive Development Study data. ReproNim, a Center for Reproducible Neuroimaging Computation, aims to help researchers achieve more reproducible data analysis workflows and outcomes. ReproNim has developed a curriculum that will give researchers the information, tools, and practices to perform repeatable and efficient research, and a map of where to find the resources for deeper practical training. For more information, visit: https://www.abcd-repronim.org. ABCD-ReproNim was supported by an award from the National Institute on Drug Abuse (R25-DA051675).
Stroke, and the NIH Office of Behavioral and Social Sciences Research. The National Institutes of Health (NIH) granted leading researchers in the fields of adolescent development and neuroscience $590 million in support to conduct this study across 21 research institutions.
Table 1
Current categories on GitHub Discussion Board.
The Riemann problem with additional singularities
The Riemann problem is studied in the case when the unknown function has non-isolated singularities concentrated on the real axis. The problem is used for the factorization of functions, holomorphic outside the unit circle and the real axis, in the form of the product of two functions which have singularities on a given set of the real axis.
The classical Riemann problem with zeroes is well known (see, e.g. [1]). It is required to construct two functions ψ₁(z), ψ₂(z), such that ψ₁(z) is analytic inside the closed contour Γ and has inside the contour n zeroes λ₁, …, λₙ, and 1/ψ₂(z) is analytic outside Γ and has outside the contour the zeroes μ₁, …, μₙ. In addition, on the contour Γ the following relation is required:
$$\psi_1(\xi) = G(\xi)\,\psi_2(\xi), \qquad \xi \in \Gamma,$$
where G(ξ) is a given complex-valued function on the contour. In this work the Riemann problem with additional singularities is proposed and solved in the case when the contour Γ is the unit circle and the zeroes and the singularities of the functions ψ₁(z) and ψ₂(z), including non-isolated singularities, are concentrated on the real axis. A particular case of such a problem is used in [2].
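For orientation, the textbook construction solving this classical problem can be written out explicitly; the following is standard material (cf. [1]), stated under the simplifying assumption that G is Hölder-continuous on Γ with zero index, so that ln G(τ) is single-valued on Γ:
$$X(z) = \exp\!\left(\frac{1}{2\pi i}\oint_{\Gamma}\frac{\ln G(\tau)}{\tau - z}\,d\tau\right), \qquad X^{+}(\xi) = G(\xi)\,X^{-}(\xi), \quad \xi \in \Gamma,$$
and one may then take
$$\psi_1(z) = X(z)\prod_{k=1}^{n}\frac{z-\lambda_k}{z-\mu_k} \quad (z \text{ inside } \Gamma), \qquad \psi_2(z) = X(z)\prod_{k=1}^{n}\frac{z-\lambda_k}{z-\mu_k} \quad (z \text{ outside } \Gamma),$$
since the rational factor places the prescribed zeroes λₖ of ψ₁ inside Γ and makes 1/ψ₂ analytic outside Γ with the prescribed zeroes μₖ.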
Now we formulate the Riemann problem with additional singularities. Let the function G(e^{iθ}) = μ(θ)e^{iφ̃(θ)}, −π < θ ≤ π, be given on the unit circle, where ln μ(θ) and φ̃(θ) are summable functions (μ(θ) > 0 and φ̃(θ) is real-valued), and φ̃(θ) is bounded. Let φ(t) be a bounded summable real-valued function on the real axis, which vanishes at least on one interval Δ′ ⊂ (−1, 1) and on one interval Δ″ ⊂ (−∞, −1) ∪ (1, ∞). It is required to construct a holomorphic function R(z), which does not have zeroes outside the unit circle and the real axis, and such that the boundary conditions (2) on the unit circle and (3) on the real axis hold, assuming that these limits exist. Here we require the exact equality of the arguments in (2) and (3), where arg R±(ξ), |ξ| = 1, and arg R±(t), −∞ < t < ∞, are defined in the following way. Let us fix points t′ ∈ Δ′ and t″ ∈ Δ″. According to the assumption, the function R(z) has four connected components of holomorphy in which it does not vanish: the parts of the upper and lower halfplanes cut by the unit circle. Connecting the point z (Im z ≠ 0, |z| ≠ 1) with the point t′ or t″ by a continuous curve lying in one of the four components, we observe the continuous change of the argument of the function R(z) along this curve. In this way the argument arg R(z) is defined uniquely. As z tends to the real axis (resp. to the unit circle), we find arg R±(t), −∞ < t < ∞ (resp. arg R±(ξ), |ξ| = 1).
It is easy to see that the Riemann problem with additional singularities is a generalization of the classical Riemann problem with zeroes, when the zeroes are concentrated on the real axis. In fact, if on some interval Δ ⊂ ℝ with ±1 ∉ Δ we have φ(t) = kπ, k ∈ ℤ, then this means that R(z) is holomorphic on Δ. If in the right and left half-neighborhoods of the point t₀ the function φ(t) is constant and divisible by π, and at the point the function has a jump of the form kπ, this means that R(z) has at the point t₀ a pole (k > 0) or a zero (k < 0) of order |k|. More complicated behavior of φ(t) implies a more complicated character of the singularities of R(z).
Notation 2. Introduce the functions P(z, γ) and P̃(z, γ̃),
where γ(t), −∞ < t < ∞, and γ̃(θ), −π < θ < π, are bounded measurable real-valued functions. The function P(z, γ) is defined and holomorphic at least for non-real z, and P̃(z, γ̃), respectively, is defined and holomorphic inside and outside the unit disk. The Plemelj–Sokhotsky formulae imply equalities (5) and (6), connecting the limit values of P(z, γ) and P̃(z, γ̃) on the real axis and the unit circle, respectively. Defining the corresponding auxiliary functions, it is easy to obtain from the Plemelj–Sokhotsky formulae the relations (8). Thus, the following theorem (Theorem 1) is a simple consequence of the equalities (5), (6) and (8). We observe that, the summability of the functions ln μ(θ), φ̃(θ), φ(t) being provided, we have the existence of the limits R±(ξ) and R±(t) almost everywhere on the unit circle and on the real axis, and the functions ln R((1 ∓ ε)ξ), |ξ| = 1, and ln R(t ± iε), t ∈ ℝ, converge respectively to ln R±(ξ) and ln R±(t) with respect to the metric of L¹ (see, e.g., [3]). Without paying attention to the problems of convergence, we will apply the Riemann problem to the factorization of functions with singularities on the unit circle and the real axis.
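The Plemelj–Sokhotsky formulae referred to here are the standard boundary-value relations for Cauchy-type integrals; in their generic form for a Hölder density γ on the real axis (the paper's relations (5)-(8) specialize this to P(z, γ) and P̃(z, γ̃)) they read:
$$\Phi(z) = \frac{1}{2\pi i}\int_{-\infty}^{\infty}\frac{\gamma(t)}{t-z}\,dt \quad\Longrightarrow\quad \Phi^{\pm}(t_0) = \pm\frac{1}{2}\,\gamma(t_0) + \frac{1}{2\pi i}\,\mathrm{v.p.}\!\int_{-\infty}^{\infty}\frac{\gamma(t)}{t-t_0}\,dt,$$
so that, in particular, $\Phi^{+}(t_0) - \Phi^{-}(t_0) = \gamma(t_0)$.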
Notation 3. We define on the real axis the map V of the symmetry with respect to the unit circle: V(t) = t⁻¹, t ∈ ℝ\{0}. For a set A ⊂ ℝ\{0} we denote V(A) = {t⁻¹ : t ∈ A}, and for a function ρ(t), defined on ℝ\{0}, the composition ρ(V(t)) = ρ(t⁻¹) is used below. The main result of the work is the following Theorem 2. Let the function N(z) be holomorphic and not vanishing outside the unit circle T and a certain closed set Σ ⊂ ℝ of the real axis, and satisfy the following conditions: 1) in the domain of holomorphy of the function N(z), condition (9) holds; 2) there exists a positive constant C > 0 such that the corresponding estimate holds in the upper semi-disk; 3) the sets Ω₁, V(Ω₁) and Ω₂ have mutual positive distances. Then there exists a function R(z), holomorphic outside the set Ω ≡ Ω₁ ∪ Ω₂ and outside the unit circle T, such that (10) and (11) hold, where μ(θ) = μ(−θ) > 0, −π < θ < π, is an arbitrary even function with summable logarithm.
We remark that this theorem reduces to the Riemann problem with additional singularities. Essentially, we construct the function R(z) so that the limit values of the arguments of the function R(z)R(z⁻¹) equal the limit values of the argument of N(z). We also require that the function R(z) have singularities only on the unit circle T and on the set Ω, and satisfy the additional condition (11) with practically arbitrary μ(θ). In order to prove the theorem, we will need two simple lemmas demonstrating the properties of the functions P(z, γ) and P̃(z, γ̃). Lemma 1. The functions P(z, γ) and P̃(z, γ̃) in their domain of holomorphy satisfy properties (12)-(16), where C₁ is a positive constant. Lemma 2. 1. Let f₁(z) be a holomorphic function in the upper halfplane with bounded argument that does not vanish there. Then it can be represented in the multiplicative form (17), where η₁(t) is the limit value of its argument from above on the real axis.
2. Let f₂(z) be a holomorphic function in the disk |z| < 1 with bounded argument. Then it can be represented in an analogous multiplicative form.
P r o o f of lemmas. We will only prove the first part of Lemma 2 (equalities (12)-(16) are obtained by direct calculation). The function f₁(z) is holomorphic and does not vanish in the connected domain {Im z > 0}. Hence, we can define uniquely a logarithm ln f₁(z), a function holomorphic in the upper halfplane with bounded imaginary part: |Im ln f₁(z)| < C₃. Hence, the function ln f₁(z) + C₃i is a Nevanlinna function (i.e. has positive imaginary part in the upper halfplane) and admits the standard integral representation (see, e.g., [4]), where the measure dρ(t) is defined by a nondecreasing function ρ(t), and where η₁(t) is defined by formula (18). This means that for the function f₁(z) the multiplicative representation (17) is obtained. The second part of the lemma is proven analogously.
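For reference, the Herglotz–Nevanlinna representation invoked here (see, e.g., [4]) states that any function F holomorphic in the upper halfplane with Im F ≥ 0 can be written as
$$F(z) = \alpha + \beta z + \int_{-\infty}^{\infty}\left(\frac{1}{t-z} - \frac{t}{1+t^2}\right)d\rho(t), \qquad \alpha \in \mathbb{R},\ \beta \ge 0,$$
with ρ nondecreasing and $\int_{-\infty}^{\infty} (1+t^2)^{-1}\,d\rho(t) < \infty$. When Im F is bounded, as here, β = 0 and $d\rho(t) = \eta(t)\,dt$ with a bounded density η, which is presumably how the density η₁(t) of formula (18) arises.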
P r o o f of Theorem 2. At first, we present N(z) in the form of the product P(z, γ)P̃(z, γ̃). It follows from (9) that arg N(z) is bounded in the part of the upper halfplane that lies outside the unit disk. The corresponding auxiliary function is holomorphic in the upper and lower halfplanes, and its argument is bounded. According to Lemma 2, it admits a multiplicative representation where, according to (5), the functions M₀(λ) and M₁(λ) are holomorphic on ℝ\[−2, 2] and (−2, 2), respectively. Let us define in the plane of the parameter z the functions N₀(z) and N₁(z). It is evident that the functions N₀(z) and N₁(z) are holomorphic and positive on the real line and the unit circle, respectively (except possibly at the points ±1). Taking into account (9), (19) and this representation, we have, at first outside, then inside the unit disk, the representation (20), where the functions N₀(z) and N₁(z), according to Lemma 2, can be represented in the multiplicative form (21): N₀(z) = P̃(z, ν̃₀) with ν̃₀ an odd function on (−π, π), and N₁(z) = P(z, ν). We remark that Lemma 2 guarantees representation (21) for N₀(z) inside the disk (for N₁(z) in the upper halfplane). However, the same representation is also true outside the disk (resp. in the lower halfplane), because of (16) and because N₀(z) = N₀(z⁻¹) (resp. because N₁ takes complex-conjugate values at complex-conjugate points). Thus, the problem of the factorization of the function N(z) = C N₀(z)N₁(z) is reduced to the problem of factorization of the two functions N₀(z) and N₁(z), represented in the form (21). At first we factorize N₁(z). Let Δ = ⋃ₖ Δₖ, where Δₖ = (αₖ, βₖ) are mutually disjoint intervals and |αₖ| ≥ 1, |βₖ| ≥ 1. It follows from the definition of Δ that the endpoints αₖ, βₖ of each interval belong to one of the three disjoint sets Ω₁, V(Ω₁), Ω₂. Moreover, the number of the intervals whose endpoints belong to different sets is finite. In fact, if the endpoints of the interval (αₖ, βₖ) belong to different sets, then its length is bounded below by the distance between those sets. If there were infinitely many such intervals, then (except in the case when the intervals are concentrated at infinity) some of the intervals would have arbitrarily small length, so that one of the distances dist(Ω₁, V(Ω₁)), dist(Ω₁, Ω₂) and dist(V(Ω₁), Ω₂) would vanish, which contradicts the conditions of the theorem. (If these intervals were concentrated at infinity, then, according to Ω₂ = V(Ω₂) and Ω₁ = V(Ω₁), we would have that the distance between the corresponding sets and the point zero vanishes, which contradicts the conditions of the theorem as well.) Let us choose among the intervals Δₖ those intervals for which one of the endpoints belongs to V(Ω₁) and the other belongs to Ω₁ or Ω₂, or equals ±1. Let us renumber the intervals Δₖ so that Δ₁, …, Δ_{k₀} are the chosen intervals (their number is finite). We divide the rest of the intervals into two groups: the intervals Δ′ₖ, both of whose endpoints belong to V(Ω₁), and Δ″ₖ (all the rest). Lemma 3 (factorization of N₁(z)). The function N₁(z) can be factored out as in (26), where the constant C > 0 and the numbers α*ₖ, β*ₖ, k = 1, …, k₀, are defined by equality (24). Here the function R₀₁₂(z) is holomorphic outside the set Ω and the points ±1. P r o o f. The holomorphy of R₀₁₂(z) outside the set Ω ∪ {1, −1} easily follows from its definition: the function R₀(z)R′₀(z)R″₀(z) may have singularities only on the boundary of the set Ω₁ ∪ Ω₂ ∪ {−1, 1}, the function R₁(z) may have singularities only on the set Ω₁, and R₂(z) only on Ω₂.
According to (21), (25) and (12), we obtain the corresponding chain of equalities. Here, by properties (14) and (12), the property ν(t) = −ν(t⁻¹) (see (23)), and definition (4), the exponential on the right-hand side is a constant. Further, it is evident that the right-hand side of the last equality will not change if we replace αₖ, φₖ by α*ₖ, φ*ₖ. That is why the resulting identity holds (with another constant C > 0). Thus, (26) follows from the equalities (30)-(35).
Having factorized N₁(z), we now factorize N₀(z). We define the function R₃(z). For |z| = 1, according to (15) and (16), taking into account the oddness of ν̃₀(θ) and representation (21), we obtain the corresponding boundary relation. Let us define R_μ(z) by formula (7). It satisfies property (8). So, from the definition of R₃(z), property (6), and the holomorphy of R₀₁₂(z) on the unit circle (possibly excepting the points ±1), we have the stated relation for the function under consideration. But this follows directly from the definition of R_μ(z), with the use of the evenness of μ(θ). Thus, for the function R₀₁₂₃μ(z), defined by (38), we have the required equalities from (20), (26), (39) and (40). Finally, taking R(z) accordingly, we see that R(z) is the solution of our problem.
Certainly, the factorization (10) is not unique. As will be seen from the next theorem, under some additional restrictions on the function N(z) we can impose additional conditions on the behavior of the function R(z) near its singularities; for example, we can ask for the existence of the limits, in the metric of Lᵖ, p ≥ 1, of the functions R(t ± iε) as ε → +0.
Definition. Let A be a certain set on the real axis, U_δ(A) its δ-neighborhood, and f(z) a holomorphic function in U_δ(A)\A. We say that the function f(z) locally belongs to the Hardy class Hᵖ in the neighborhood of the set A if for some δ > 0 the functions f(t ± iε), t ∈ ℝ ∩ U_δ(A), converge as ε → +0 in the metric of Lᵖ. Theorem 3. Let the function N(z) satisfy, in addition to the conditions of Theorem 2, the additional condition used in the proof below. Then the function N(z) can be factored out so that (10) and (11) hold, and the function R(z) locally belongs to the Hardy class H² in the neighborhood of the set Ω₂.
P r o o f. We will only show what changes should be made in the proof of Theorem 2 to apply it to Theorem 3. The set Δ = ⋃ₖ Δₖ is introduced, where Δₖ = (αₖ, βₖ) are mutually disjoint intervals. The set Ω₂ lies inside these intervals. It follows from the additional condition of Theorem 3 that on every set Δₖ\Ω₂ the function n(t) = n(k) = (1/π) arg N(t) ∈ ℤ is constant. We extend this function to the whole interval Δₖ (possibly, including Ω₂), and we also extend it to V(Δₖ) by the equality
$$\tilde n(t) = \begin{cases} n(k), & t \in \Delta_k, \\ -n(k), & t \in V(\Delta_k). \end{cases}$$
The rest of the proof does not change. We will explain how to prove that the function R(z) locally belongs to the Hardy space in the neighborhood of Ω₂. The function R₂(z) is represented in the multiplicative form R₂(z) = P(z, ½χ₂(ν − ñπ)). Here, the oscillation of the function ½χ₂(t)(ν(t) − ñ(t)π) on the interval δ_l is less than π/2. This implies (see, e.g., [5]) that R₂(z) locally belongs to the Hardy space in the neighborhood of any compact subset of the interval δ_l. (Such intervals δ_l cover Ω₂ according to the condition of the theorem.) Simultaneously, the functions R₀(z), R′₀(z), R″₀(z), R₁(z), R₃(z) and R_μ(z) are holomorphic on Ω₂, whence we obtain that R(z) locally belongs to the Hardy class H² in the neighborhood of the set Ω₂.
We remark that the functions R′₀(z), R″₀(z) can also be represented in the form of a product (27) (generally speaking, this product is infinite). However, if the intervals Δ″ₖ are not contained in a bounded subset of the real axis, then additional factors will occur in the expression for R″₀(z).
Remark. This paper is a translation of [6].
"year": 2001,
"sha1": "fad364b6c9a3f1f80e4d9ceaa553511769412726",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "01550be609716facd63ca9a98048023eb1e32826",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
People's perceptions and uses of invasive plant Psidium guajava in Vhembe Biosphere Reserve, Limpopo Province of South Africa
ABSTRACT Human perceptions and knowledge of invasive alien plant species are increasingly recognised as important in the management of biological invasions, but there is limited research focus on the social dimensions of plant invasion. Using household surveys, this study assessed the perceptions, knowledge, and uses of Psidium guajava Linn. to rural communities in Vhembe Biosphere Reserve, in the Limpopo Province of South Africa. Results showed that most respondents are aware of P. guajava and perceive it to be spreading in their locality but do not consider it an invasive alien plant species. Psidium guajava is perceived to have a dual purpose and most respondents are aware of its benefits including fruit consumption, medicinal purposes, shading and firewood provisioning and costs such as attraction of problematic animals, displacement of native plants, and reduction of grazing and agricultural space. The benefits associated with use of P. guajava are considered greater than the costs, therefore most participants do not implement any control measures. These results highlight the need to incorporate rural community perceptions, knowledge, and uses of P. guajava in developing effective management plans that avoid conflicts between stakeholders. To improve the efficacy of managing biological invasions more research is required to understand how communities relate to invasive alien plant species.
Introduction
Invasion of natural ecosystems by invasive alien plant species presents severe threats to global biodiversity (Rai and Singh 2020;Richardson et al. 2020). Invasive alien plant species are detrimental to the environment and considered a major driver of biodiversity loss due to their ability to alter natural ecosystem services through multiple mechanisms (Rai and Singh 2020). Recent studies have reported that approximately 17% of the global terrestrial area is vulnerable to invasion by invasive alien species, with current invasion threats being greater in economically developed than in less developed countries, owing to human movement (Early et al. 2016).
In South Africa, invasive alien plant species are estimated to have invaded millions of hectares, with Acacia, Eucalyptus, Lantana, Pinus, and Opuntia species being widespread. Invasive alien plant species were introduced to South Africa for several reasons, including for agricultural and forest production (e.g. animal fodder and plantation timber), and for ornamental purposes (Richardson et al. 1997). Threats of invasive alien plant species on South African terrestrial ecosystems include loss of ecosystem goods and services such as water, transformation of invaded ecosystems, which results in the loss of native species (Richardson and van Wilgen 2004;Richardson et al. 2007), and severe threats to human wellbeing and livelihoods (Potgieter et al. 2020;Richardson et al. 2020). For example, invasion by Australian Acacias in South Africa has resulted in a wide range of negative impacts on terrestrial ecosystems, resulting in transformed and altered ecosystems with reduced goods and services (Le Maitre et al. 2011). Similarly, invasion by A. mearnsii, E. camaldulensis (Hirsch et al. 2020) and P. radiata (Dzikiti et al. 2013) in South Africa has resulted in severe water loss with negative effects on human livelihoods and the economy.
However, despite the negative effects of invasive alien plant species, some plant species contribute positively to ecosystems and people through provisioning of goods and services that are important for human wellbeing (Vaz et al. 2017;Shackleton et al. 2019, 2020). For example, recent studies in rural South Africa have shown that A. dealbata is an important source of fuelwood and fodder, whilst harvesting and selling of poles for fencing provide employment and cash income for rural people (Ngorima and Shackleton 2019). Similarly, E. camaldulensis provides multiple benefits to local communities such as timber, firewood, shelter, and nectar (Hirsch et al. 2020). Shackleton et al. (2011) highlighted that Opuntia ficus-indica (prickly pear) plays a key role in generating cash income for poor families in the Eastern Cape Province of South Africa. Given the contribution of some invasive alien plant species to human wellbeing, there is a need to understand the social dimensions of invasive alien plant species.
Relative to ecological studies, there is a limited focus on perceptions, knowledge, and the role of invasive alien plant species in rural livelihoods. Human perceptions, defined as the process 'wherein people select, organise, interpret, retrieve, and respond to the information from the world around them' (Schermehorn et al. 2005), can produce mental expressions and constructions, which in turn, shape behavioral outcomes. Shackleton et al. (2019) suggest that knowledge and perceptions of invasive alien plant species are influenced by individual attributes, characteristics of the species, effects of the species, socio-cultural context, landscape context, and institutional and policy context. For example, the invasive alien species O. ficus-indica (Shackleton et al. 2011) and A. dealbata (Ngorima and Shackleton 2019) are often perceived as native and, in most cases, local communities fail to distinguish them because they are part of the landscape. The positive benefits accrued from an invasive alien plant might contribute to positive perceptions by users of the plant. Similarly, the appearance of an invasive alien plant, e.g. flower color or leaf shape, might influence people's perception and knowledge about the plant. In addition to perceptions and knowledge about invasive alien plant species, limited work has been done to understand benefits and costs of invasive alien plants to rural local users in South Africa (Ngorima and Shackleton 2019). Some invasive alien plant species such as Australian Acacias and Eucalyptus offer both benefits and costs, resulting in conflict of interest regarding how such species should be managed (Dickie et al. 2014;van Wilgen and Richardson 2014;Zengeya et al. 2017). Therefore, differences in the perceived benefits and costs of invasive species can result in conflicts over their management due to opposing views on methods of control (Zengeya et al. 2017). Thus, understanding both the benefits and costs of invasive alien plant species can inform management options that minimize conflicts.
Although several frameworks have been developed to assess the impact of invasive alien species on livelihoods, the sustainable livelihoods framework (Scoones 1998) has been used by Shackleton et al. (2007) to develop a framework that helps interpret the impacts of invasive alien species on livelihoods.
The livelihoods framework is useful in guiding research questions towards understanding how invasive alien species influence people's choices, activities, and resource use for livelihood benefits (Siges et al. 2005;Shackleton et al. 2011;Kannan et al. 2014;Ngorima and Shackleton 2019). The livelihoods framework suggested by Shackleton et al. (2007) categorises invasive alien species into four categories based on the benefits and costs they supply to livelihoods. The four categories are (i) desirable and weakly competitive (species with high benefits and low costs), (ii) desirable and strongly competitive (species with both high benefits and costs), (iii) undesirable and weakly competitive (species with both low benefits and costs), and (iv) undesirable and strongly competitive (species with low benefits and high costs). In South Africa, Shackleton et al. (2011) categorised O. ficus-indica as desired and weakly competitive because it provides several socio-economic benefits (e.g. food and fodder) for rural livelihood sustenance, yet its population densities remain stable with low environmental effects. However, various communities can categorise invasive alien species differently given that socio-ecological and cultural contexts can influence benefits and costs. For example, Acacias e.g. A. mearnsii and A. dealbata are categorised as desired and weakly competitive in Madagascar because they are viewed as highly beneficial with low spread and impacts (Kull et al. 2007), yet in South Africa, they are categorised as desirable and strongly competitive due to high benefits to communities and high environmental costs since they create monospecific stands that displace native plants (Ngorima and Shackleton 2019). In applying the livelihoods framing to understand the role of invasive alien species in rural communities, it is important to note that (i) the effects of invasive alien species are not uniform in space, time, and among various users, (ii) the level of invasive alien species use is varied among users, (iii) local livelihoods are adaptive in response to invasive alien species availability, and (iv) the duration and abundance of invasive alien species in an area can influence its integration into livelihoods (Shackleton et al. 2011;Ngorima and Shackleton 2019). Given the varied role that invasive alien species play in rural livelihoods, understanding local people's perceptions and knowledge of invasive alien plant species is important.
Psidium guajava Linn., commonly known as guava, is a shrub that is native to tropical America (Naseer et al. 2018). The plant species belongs to the Myrtaceae family (Naseer et al. 2018). Psidium guajava is commercially grown in many countries for fruit consumption and medicinal purposes (Okamoto et al. 2009;Naseer et al. 2018). The evergreen shrub grows on a wide range of soils and environments, and reaches a height ranging from 1.8 m to 7.6 m (Naseer et al. 2018). Psidium guajava has a wide-spread network of curved branches and the leaves are oval with prominent veins (Rouseff et al. 2008;Naseer et al. 2018). In South Africa, P. guajava is categorised as an invasive plant and tends to invade watercourses, forest margins, and disturbed areas such as old agricultural fields, roadsides, and pastures (Ruwanza and Dondofema 2020). A recent study in South Africa reported that P. guajava invasion alters soil physico-chemical properties (increases soil P and makes soil underneath it repellent), which likely explains its invasion proliferation (Ruwanza and Dondofema 2020). In the Galapagos Islands, the plant is considered a problematic shrub because of its ability to effectively spread through seed dispersal by birds and humans, and to displace native species through outcompeting them for resources such as nutrients (Urquía et al. 2019). In the New Zealand archipelago, P. guajava is regarded as a naturalised plant that significantly reduces native species diversity (Sheppard et al. 2014). In Kenya, the species has displaced native plant species, although its presence in forests targeted for ecological restoration has been shown to support growth of shade-tolerant native tree species (Otuoma et al. 2020). In East Africa, particularly in Tanzania and Kenya, P. guajava is regarded as one of the worst invaders that invades unused sites (Witt and Luke 2017). In South Africa, Zengeya et al. (2017) categorised P. guajava as a conflict-generating species due to its ability to provide both benefits and costs. Previous studies have shown that P. guajava has many uses (Morais-Braga et al. 2017;Naseer et al. 2018). For example, fruits are consumed raw or in processed form such as fruit jam (Omayio et al. 2019), whilst roots, bark, and leaves contain phytochemicals that are used to treat numerous diseases such as dysentery, hypertension, diarrhea, gastroenteritis, and pain relief (Naseer et al. 2018). Orwa et al. (2009) reported the use of P. guajava wood for making fencing poles, axe handles, and charcoal for firewood. By-products of P. guajava fruits and leaves are used as animal feeds and fodder (Orwa et al. 2009).
Given the limited research focus on knowledge, perceptions, and benefits of the invasive alien plant species P. guajava to rural livelihoods in South Africa, this study aimed to broaden our knowledge base regarding the importance of the plant to rural communities. The objective of the study was to assess perceptions, knowledge, and uses of P. guajava to rural communities in Vhembe Biosphere Reserve. The research questions were: (i) what are the perceptions, benefits, and costs of P. guajava to rural livelihoods, and (ii) what management interventions are being implemented by rural communities to control P. guajava.
Study area
The study was conducted in four villages in Vhembe Biosphere Reserve in Limpopo Province of South Africa. The biosphere reserve has a surface area of approximately 30,700 km² and a population of approximately 1.5 million people, of which more than 95% are rural residents. The four villages selected for this study were Duthuni, Ha Maelula, Matshavhawe, and Murunwa (Figure 1). The abovementioned villages were selected following a reconnaissance study that showed a high distribution of P. guajava in the area and a high number of homesteads with the plant. Most people in the villages are dependent on communal agriculture and social grants from the government.
The four villages are in the Soutpansberg Mountain Bushveld vegetation type which is in the savanna biome (Mucina and Rutherford 2006). Vegetation in the study area is dominated by native trees and shrubs such as Berchemia zeyheri, Grewia ocidentalis, Podocarpus falcatus, Dombeya rotundifolia, Cussonia spicata, and Vachellia karoo and poorly developed grasses such as Setaria sphacelata, Coleochloa setifera, Trachypogon spicatus, and Melinis nerviglumis. However, recent studies in Vhembe Biosphere Reserve have identified invasion by alien plants like L. camara, Australian Acacias, Caesalpinia decapetala, and Eucalyptus species as some of the main drivers of land cover change and biodiversity loss (Evans 2017;Ruwanza and Mhlongo 2020). The study area has a tropical climate, with hot austral summer temperatures (approximately 28°C) and mild austral winter temperatures (approximately 18°C). Mean annual rainfall is approximately 1,050 mm, and soils are acidic with high clay content and are derived from basalt and quartzitic sandstone of the Soutpansberg formation (Mucina and Rutherford 2006).
Data collection and analysis
The research approach adopted face-to-face interview surveys with the aim of gathering the firsthand involvement, knowledge, and experiences of the households. Within each of the four villages, 50 face-to-face interview surveys were conducted, representing about 3% of households in Duthuni, 13% in Ha Maelula, 12% in Matshavhawe, and 8% in Murunwa, based on existing total household records (Stats SA 2011). Households who participated in face-to-face interviews were purposively selected, and the inclusion criteria were (i) the presence of P. guajava in their homesteads and (ii) knowledge of the plant, which was assessed by showing households a picture panel of the plant that was also embedded in the questionnaire. Purposive sampling ensured that only households who have knowledge of the plant and are familiar with it were selected. We acknowledge that purposive sampling is prone to sample selection bias that minimizes the ability to generalize the findings. Despite this limitation, the value of the study lies in its transferability potential through provision of useful insights on the intersection of perceptions, uses, costs and management of P. guajava by rural communities.
At each household, the household head, and if absent a senior adult family member, was interviewed. Surveys were conducted between 08h00 and 16h00 during weekdays. With the assistance of a Venda-speaking research assistant, face-to-face household interviews were administered in the local language of Venda and took approximately 1 hour. The interviews gathered information on (i) presence and invasion extent of P. guajava in the area, (ii) knowledge and perceptions of P. guajava, (iii) benefits and costs of P. guajava to households, (iv) control and management interventions implemented by households, and (v) the socio-demographic information of the household (see Appendix A for interview guide). The perceived occurrence of P. guajava in the area was assessed on a 4-point Likert scale [(1) Common, (2) Moderate, (3) Scarce, and (4) Very scarce], based on previous work by Shackleton et al. (2015). Perceptions on the impact of P. guajava were assessed by asking the respondents to indicate P. guajava as either (1) beneficial, (2) harmful, (3) having no impact, or (4) both beneficial and harmful, following Shackleton et al. (2015).
The study was conducted following granting of ethics approval by the Rhodes University's Human Ethics Committee (Reference Number 2019-0823-898). Participation in the study was voluntary and all participants had the right not to answer any question and to withdraw at any given time without any negative repercussions. Informed consent was obtained from the participants following assurance of confidentiality and anonymity of responses. No personal information was collected, and all data were anonymised prior analysis.
Questionnaire responses were grouped into various categories describing villagers' knowledge, perceptions, uses, benefits, costs, and management interventions of P. guajava. Descriptive statistics were used to show the distribution of the data, including sociodemographics and proportions of responses and participants, in the text or using tables and figures. Differences in responses between the four villages were examined using chi-squared tests; a minimal sketch of this test follows below. All statistical analyses were performed using TIBCO STATISTICA version 14.0 software (TIBCO Software Inc 2019).
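For illustration (using SciPy rather than STATISTICA), the sketch below reconstructs approximate counts for one survey item from the percentages reported in the Results (18%, 18%, 26%, and 30% of 50 respondents per village); the published statistic may have been computed over additional response categories, so the value printed here need not match the reported one.

```python
# Chi-squared test of independence: village vs. yes/no response to
# "Do you know P. guajava is an invasive alien plant?" (counts reconstructed
# from the reported percentages; n = 50 respondents per village).
import numpy as np
from scipy.stats import chi2_contingency

villages = ["Duthuni", "Ha Maelula", "Matshavhawe", "Murunwa"]
observed = np.array([
    [9, 41],   # Duthuni      (18% yes)
    [9, 41],   # Ha Maelula   (18% yes)
    [13, 37],  # Matshavhawe  (26% yes)
    [15, 35],  # Murunwa      (30% yes)
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.3f}, dof = {dof}, p = {p:.3f}")
for village, row in zip(villages, observed):
    print(f"{village}: {row[0] / row.sum():.0%} answered yes")
```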
Demographics of the sample population
Across all villages, the most common respondent age group was above 70 years. Except for Murunwa, most respondents in Duthuni, Ha Maelula, and Matshavhawe were above 50 years (Table 1). There was a high representation of females (58%) across all villages. Most respondents in Ha Maelula, Matshavhawe, and Murunwa had primary education only (also dominant across all villages), while those in Duthuni had grade 10 (first-year secondary education level in South Africa) (Table 1). Across all villages the mean number of people per household was 8.3 ± 0.5, with the highest means in Duthuni and Matshavhawe (8.6 ± 0.5) and the lowest mean in Ha Maelula (7.7 ± 0.5). Most respondents across all villages were dependent on social grants (Table 1).
Knowledge and perceptions of P. guajava
All respondents were aware of P. guajava, and more than 70% of the respondents across all villages had P. guajava on their properties. Across all villages, less than a quarter of respondents (23%) knew P. guajava as an invasive alien plant. Comparisons by village showed no significant differences in the proportion of respondents who knew P. guajava as an invasive alien plant (Duthuni and Ha Maelula = 18%, Matshavhawe = 26%, Murunwa = 30%; χ² = 6.367; p > 0.05; Table 2).
Of all the respondents with the plant on their property, less than a quarter had planted P. guajava, with most of these being in Ha Maelula as compared to Murunwa, Duthuni, and Matshavhawe (χ² = 11.742; p < 0.01; Table 2). Those who did not plant P. guajava reported that it naturally occurred on their property, with some indicating that they found it on the property upon arrival. Across the sample, less than half of the respondents reported that P. guajava was spreading on their property, with no significant differences among the villages. However, a substantial proportion of the respondents across all villages noted that the plant was spreading in the local environment, with significantly (χ² = 11.742; p < 0.01) more respondents in Murunwa (80%) than in Duthuni (52%), Ha Maelula (50%), and Matshavhawe (64%) reporting so (Table 2). When asked to rank their views on P. guajava occurrence on their property, 38% of all the respondents categorised P. guajava occurrence as moderate, and there were no significant differences among the villages (Table 3).
Community management of P. guajava
Across the sample villages, just over a quarter of respondents would like to see a decrease in P. guajava population densities in their area, but significantly more respondents in Matshavhawe (36%) and Duthuni (32%) than in Ha Maelula (22%) and Murunwa (16%) felt so (χ² = 14.308; p < 0.05; Table 5). Across all the villages, the proportion of respondents managing P. guajava is significantly lower (χ² = 27.508; p < 0.001) than those who do not implement any management intervention (Figure 2a). Most of the respondents manage P. guajava through cutting it, although this provides little management intervention as it coppices, whereas a handful prefer digging and hand pulling the plant (Figure 2a). Significantly more respondents in Duthuni (26%) than in Ha Maelula, Matshavhawe, and Murunwa (all less than 5%) indicated that they receive assistance from government through the Working for Water programme to manage P. guajava on their property (χ² = 29.891; p < 0.001; Table 5). Furthermore, many respondents in all villages expressed interest in getting assistance from the government to manage P. guajava (Duthuni (64%), Ha Maelula (52%), Matshavhawe (86%), and Murunwa (52%)) (χ² = 16.632; p < 0.001; Table 5). The nature of management support preferred by respondents ranged from information on P. guajava clearing, management information, and provisioning of chemicals to kill the plant, with significant differences between the different villages (χ² = 35.644; p < 0.001; Figure 2b). Very few respondents in Ha Maelula (4%) received information or news about invasive alien plants in general (Table 5).
Discussion
Most respondents in all the villages had knowledge about P. guajava, but very few were aware that it is an invasive alien plant. Low levels of knowledge about the plant as an alien species can be attributed to the invasion extent, history, and the time the species has been present in the area (Shackleton et al. 2007; Kull et al. 2014). In this study, a sizeable number of respondents found P. guajava naturally occurring on their properties and used it for various purposes, meaning that the plant is already integrated into people's livelihood systems. This is consistent with findings by Ngorima and Shackleton (2019) in the Eastern Cape Province of South Africa, who found that A. dealbata was so integrated into local people's livelihoods that it was regarded as native. While a substantial proportion of households reported socio-economic benefits of P. guajava, very few households perceived the benefits to outweigh the costs. This might be explained by the fact that the benefits of P. guajava are considered within the context of benefits from other trees, including natives that the respondents considered important. Therefore, the mixed perceptions of P. guajava reported in this study are linked to benefits (e.g. fruit consumption and medicinal purposes) and negative impacts (e.g. attraction of problematic animals and health effects linked to fruit consumption). People's knowledge and perceptions are shaped by plant traits that are useful to people, such as the provision of firewood (A. dealbata), edible fruits (O. ficus-indica and P. guajava), and fodder for domesticated animals (A. dealbata) (Shackleton et al. 2015; Ngorima and Shackleton 2019). Even though few respondents in this study reported the negative impacts of P. guajava, negative impacts caused by invasive alien plants tend to increase with invasion expansion, which is likely to change people's perceptions of the plant in the future. Invasive alien plants that have both negative and positive effects, as is the case with P. guajava, tend to be viewed as conflict species (Zengeya et al. 2017). Indeed, recent studies have shown that local people tend to be conflicted about the role that some invasive alien plants, such as A. dealbata, E. camaldulensis, and P. guajava, play in their lives and communities (Woodford et al. 2016; Ngorima and Shackleton 2019). Most villagers use P. guajava for fruit consumption and medicinal purposes, whereas other provisioning services such as shade and firewood were identified by only a few participants. Our results corroborate findings by Daswani et al. (2017), which show use of P. guajava for food and medicinal purposes by rural communities in India owing to perceived medicinal properties (e.g. anticancer and antimicrobial) of the plant's fruits and leaves. Similarly, Msomi (2008) reported that rural communities in KwaZulu-Natal, South Africa, use P. guajava for firewood, medicine (leaves and bark), fruit consumption, and cash income generation through fruit and guava jam trade. In rural communities of Nhema, Zimbabwe, P. guajava is widely cultivated in home gardens for fruit consumption and medicinal purposes, including as a remedy for flu and fever, especially when its leaves are mixed with other alien plants, e.g. Eucalyptus leaves and Citrus limon fruits (Maroyi 2009). Indeed, several invasive alien plants, such as Australian Acacias, P. juliflora, O. ficus-indica, Eucalyptus species, and L. camara
have been shown to support people's livelihoods, quality of life, and rural economic growth (Shackleton et al. 2011; Kannan et al. 2016; Constant and Tshisikhawe 2018), although negative impacts have also been reported (Ngorima and Shackleton 2019; Shackleton et al. 2019). The importance of P. guajava as a food source across all villages except Murunwa could be attributed to the facts that (i) P. guajava fruits are nutritious, with high vitamin C and fibre content (McCook-Russell et al. 2012), and (ii) the plant is widely distributed and near households, thus making fruit collection easy.
Contrary to reports from other studies on invasive alien trees (Shackleton et al. 2015; Potgieter et al. 2019; Ngorima and Shackleton 2019), provisioning (firewood) and regulating (shade for temperature regulation) services provided by P. guajava were reported by very few participants across all villages. This could be because P. guajava is a shrub with a spreading network of curved branches, and thus its canopy cover and morphology are less well suited for firewood and shade provision than those of other invasive alien plants, e.g. A. dealbata (Ngorima and Shackleton 2019) and Eucalyptus (Hirsch et al. 2020). It is also possible that other commonly occurring native woody species in the study area, e.g. Brachylaena discolor, Prunus africana, and V. karroo, provide these provisioning/regulating services better than P. guajava (Dingaan and du Preez 2018; Constant and Tshisikhawe 2018; Ramarumo and Maroyi 2020).
This study showed that the costs associated with P. guajava are not inconsequential, because participants reported both direct and indirect negative impacts on villagers and the environment. First, P. guajava was reported to attract problematic animals that negatively affect human wellbeing. Also, rotten P. guajava fruits were reported to attract mosquitoes, well-known vectors of malaria (Müller et al. 2010). Recent studies have acknowledged that some invasive alien plants, e.g. Pontederia crassipes, provide breeding sites for mosquito larvae, thus enhancing their vector capacity (Stone et al. 2018; Rai and Singh 2020). In East Africa, the invasive alien plant L. camara has been reported to attract tsetse flies (Glossina spp.), which are associated with human sleeping sickness (Mazza et al. 2014; Rai and Singh 2020). Second, P. guajava fruit consumption was reported as causing constipation, which contradicts reports on the use of P. guajava in treating digestive system related disorders such as diarrhoea, dysentery, and constipation (Dakappa-Shruthi et al. 2013; Morais-Braga et al. 2017). Third, participants noted that P. guajava displaced and outcompeted native plants, possibly through its ability to alter some soil physico-chemical properties, e.g. soil P, total N, pH, and moisture, as found by Ruwanza and Dondofema (2020). Indeed, the ability of most invasive alien plants to alter soil properties and outcompete natives through nutrient resource acquisition and utilisation has been shown to favour the growth of invasive alien plants over native species. Lastly, it was perceived that P. guajava invasion reduces grazing and agricultural space, which can be linked to the displacement of palatable grazing grass following alien plant invasion (Yapi et al. 2018; Ngorima and Shackleton 2019). In general, invasive woody species such as A. dealbata and L. camara produce large numbers of seeds, grow fast, and outcompete other species for soil nutrients and moisture, giving them the ability to displace crops in agricultural areas (Bhagwat et al. 2012; Ngorima and Shackleton 2019; Richardson et al. 2020), resulting in reduced agricultural productivity.
Our results showed that very few households wanted P. guajava removed from the area, which could explain why most villagers were not managing the plant. The lack of community action with regard to managing invasive alien plants is very common and has been widely reported elsewhere, e.g. for Acacias (Shackleton et al. 2007; Ngorima and Shackleton 2019), O. ficus-indica (Shackleton et al. 2007), Prosopis (Shackleton et al. 2015), L. camara, and E. camaldulensis (Hirsch et al. 2020). The lack of desire by rural communities to manage invasive alien plants could be linked to several factors. First, some invasive alien plants are beneficial to communities, as shown in this study, and hence are viewed as an economic asset (Shackleton et al. 2007; Ngorima and Shackleton 2019). Second, most of those who may want to remove the plant lack the human and financial resources to manage invasive alien plants. In this study all households reported not receiving any financial or resource support from government through the WfW programme, although they wanted it. Another potential explanation is that the costs of P. guajava are not large enough to warrant investment in clearing. Third, some rural communities might lack general information regarding invasive alien plants and specific information on control methods. In this study, only villagers in Ha Maelula (4%) acknowledged receiving information or news about invasive alien plants. Lastly, most rural communities view the management of invasive alien plants as the responsibility of external agencies, e.g. government and non-governmental organisations (Shackleton et al. 2007; Ngorima and Shackleton 2019).
Overall, there were mixed and somewhat contradictory perceptions of the benefits, costs, and the need to clear the plant within and between the sampled villages, which points to the importance of carefully considering both benefits and costs to individual households and villages as a basis for avoiding conflict related to P. guajava management. A key point to note when using perceptions to design management interventions is that perceptions can change, especially when negative impacts start to outweigh benefits, which is likely to happen as the P. guajava invasion extends. Our observations extend those of Gaertner et al. (2017) and Potgieter et al. (2019), who concluded that mixed perceptions of invasive alien plants among respondents and stakeholders negatively affect locally based management interventions. Given that P. guajava is a potential conflict species, its management should be informed by both the benefits and costs associated with its presence in socio-ecological landscapes.
Conclusion
The results of this study show that P. guajava has both benefits and costs in the Vhembe Biosphere Reserve. Therefore, P. guajava is potentially a conflict-generating species, which should be a key consideration for management options. Given that P. guajava provides both benefits and costs, any management intervention for controlling or reducing invasion in the area should recognise the diversity of perceptions regarding the plant, noting that these perceptions can be dynamic depending on socio-economic conditions. The crafting of control/clearing interventions should be informed by a consultative process with local communities to avoid potential conflicts with and between local users. Perhaps a feasible control option for P. guajava would be to limit invasion expansion by promoting utilisation by communities, as suggested by others (Ngorima and Shackleton 2019; Potgieter et al. 2019). This approach can balance the needs of communities dependent on P. guajava while minimising the costs perceived or experienced by other community members. However, more research examining the efficacy of utilisation as a control method is required given the dynamic nature of perceptions. | 2022-01-28T16:58:27.729Z | 2022-01-23T00:00:00.000 | {
"year": 2022,
"sha1": "b583f37485e545f6f7e671d8862982b1e60d6f60",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/26395916.2021.2019834?needAccess=true",
"oa_status": "GOLD",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "dc9f6e95a6d11c418b6f8efdac3cb3ea13df91be",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
261590511 | pes2o/s2orc | v3-fos-license | FRANCISELLA TULARENSIS: A ZOONOTIC PATHOGEN AMONG WILD RODENTS AND ARTHROPODS - A POSSIBLE THREAT IN THE FUTURE
SUMMARY: Francisella tularensis is a Gram-negative coccobacillus and an aerobic bacterium. It causes a zoonotic disease called tularemia in humans. Four subspecies of F. tularensis have been described: F. tularensis subsp. tularensis (Type A strains), F. tularensis subsp. holarctica (Type B strains), F. tularensis subsp. mediasiatica, and F. tularensis subsp. novicida. As rearing rabbits and various kinds of rodents as pets is becoming popular in Sri Lanka, veterinarians need to be knowledgeable about emerging pathogens such as F. tularensis in order to diagnose the disease within a short time. Therefore, the objective of this paper is to update veterinarians on possible emerging infections, to improve the health of pets and to minimize possible zoonotic infections. The clinical outcome caused by Francisella is a debilitating febrile disease in humans. Francisella has been isolated from hundreds of animal species in the world. Given this diverse host range, the ecological factors governing transmission of Francisella in the environment are largely unknown. F. tularensis type A is reported to be common in North America and is occasionally found in Europe, whereas type B is found commonly across the Northern hemisphere and in Australia. Tularemia is a sporadic disease, and a small infectious dose is sufficient for infection in humans. The clinical signs and symptoms of tularemia depend on the route of infection. Six clinical forms have been identified in humans: ulceroglandular, glandular, oropharyngeal, oculoglandular, pneumonic and typhoidal. Diagnosis of tularemia in humans is based on epidemiology, clinical findings and laboratory confirmation. The microagglutination test, indirect immunofluorescence assay (IFA) and enzyme-linked immunosorbent assay (ELISA) are widely used as diagnostic tests. Several conventional and qPCR assays have been optimized to detect the organism in clinical samples. Antimicrobials such as aminoglycosides, tetracyclines, quinolones, and chloramphenicol have been used to minimize clinical complications. Use of treated water, wearing gloves when handling wild rabbits and rodents, thorough cooking of bush meat, use of insect repellents, protection of stored food from rodents, wearing masks and tick-free clothes, keeping away from weeds, and clearing pets of external parasites have been identified as the main preventive strategies against tularemia in humans. No commercial vaccine against F. tularensis is yet available. With ongoing changes in arthropod parasites in the ecosystem driven by climatic changes worldwide, tularemia could become an emerging and threatening disease in the future.
INTRODUCTION
Francisella tularensis is a Gram-negative coccobacillus and an aerobic bacterium that grows well at 35–37 °C (Ramakrishnan, 2017; Freudenberger Catanzaro and Inzana, 2020). It is a non-spore-forming, non-motile, encapsulated, facultative intracellular microorganism (Freudenberger Catanzaro and Inzana, 2020). The bacterium is fastidious, and cysteine is required for optimum growth under 5% CO2 in laboratory conditions (Caspar and Maurin, 2017; Telford and Goethert, 2020). Francisella requires supplementation with sulfhydryl compounds and cysteine-enriched media for optimum growth in the laboratory (Kinkead and Allen, 2016). Gray colonies of about 4 mm diameter on the green-coloured medium are the characteristic feature of growth on glucose cysteine blood agar (Kinkead and Allen, 2016). However, different strains may show different colony morphology on the same medium (Kinkead and Allen, 2016). The recommended incubation temperature is 35 °C, and colonies appear after 2–4 days of incubation (Kinkead and Allen, 2016).
It is a highly infectious agent that spreads through aerosols, requires a low infectious dose, and shows a high degree of virulence in humans (Hennebique et al., 2019). It is therefore categorized as a Category A potential biological agent by the Centers for Disease Control and Prevention (CDC), USA (Hennebique et al., 2019). F. tularensis subsp. tularensis causes severe pulmonary infection, especially in people who hunt frequently in North America (Hennebique et al., 2019). In addition, F. philomiragia is a closely related bacterial species which occasionally causes disease in immunocompromised patients (Celli and Zahrt, 2013). As rearing rabbits and rodents as pets has become a popular and emerging trend in Sri Lanka, veterinarians need to be aware of emerging potential pathogens such as F. tularensis, especially in imported rodents. Therefore, the objective of this review was to update the knowledge of local veterinarians on possible emerging infections in rodent pets in order to diagnose them accurately and minimise potential zoonotic transmission to the owners.
EPIDEMIOLOGY
Tularemia is a debilitating febrile disease in humans (Celli and Zahrt, 2013). F. tularensis was first reported in Tulare County, California, in 1911. The disease is common among hunters of rabbits and hares (Yeni et al., 2021). F. tularensis type A has been reported to be common in North America and is occasionally found in Europe (Zellner and Huntley, 2019). Type B is found commonly across the Northern hemisphere and in Australia. Type A is reported to cause more severe clinical disease in humans than type B (Gunnell et al., 2016). F. tularensis subsp. mediasiatica has caused a high degree of virulence in experimental mouse models while showing low virulence in humans (Gunnell et al., 2016). In Europe and North America, only the highly virulent strains were capable of fermenting glycerol and citrulline (Pilo, 2018). However, low-virulence isolates in Asia have shown the same ability to ferment glycerol and citrulline (Pilo, 2018). Therefore, the fermentation technique is not a good laboratory test for evaluating the degree of virulence of the organism. In addition, F. tularensis subsp. novicida has shown high virulence in mice but rarely causes disease in humans (Gunnell et al., 2016).
As mentioned previously, Type A infection of F. tularensis is found in North America. Type B of F. tularensis has been reported in Eurasia, North America, Scandinavia, Russia and Japan (Gunnell et al., 2016; Yeni et al., 2021). The reported prevalence of the disease varies widely with different factors, such as the environment, social habits, reservoirs, vectors, and pathogen- and host-associated factors. Tularemia is an endemic disease in France with compulsory notification to the health authorities; over 99 human cases were reported in the period 2006–2010 (Gaci et al., 2017). Mosquito-borne infection has been reported in Sweden and Finland in the Scandinavian region, and both Aedes and Culex mosquitoes were identified as vectors (Abdellahoum et al., 2020). Over 50% of total clinical cases in Sweden and Finland were reported in the month of August, which is the peak mosquito season in the region. Most patients (74%) presented with the ulceroglandular form of tularemia in the region (Abdellahoum et al., 2020). Furthermore, lesions on the lower limbs were accompanied by inguinal lymphadenopathy, while lesions on the arms, face, or neck were found with axillary or cervical lymphadenopathy (Abdellahoum et al., 2020). These findings imply that the lesions mainly occur on exposed parts of the body to which mosquitoes have ready access. However, the role of mosquitoes in the transmission of F. tularensis was reported to be minimal in other parts of Europe (Abdellahoum et al., 2020). Importantly, the disease has not been reported in some parts of Europe, such as Greece, Iceland, Ireland, Luxembourg, Malta, and the United Kingdom, other than in a few people who had visited endemic countries (Seiwald et al., 2020). The organism has been isolated from ticks in Japan, and ticks are thought to have a significant role in dissemination of the disease there (Suzuki et al., 2016). In addition, serological evidence has been observed among ranchers in Iran, although no sign of infection was reported (Ahangari Cohan et al., 2021). Livestock farming communities who rear small ruminants had a high seroprevalence of F. tularensis in Jordan; hence, livestock farmers have been described as a risk category for the disease (Obaidat et al., 2020). The same study highlighted age, region of residence and practising horticulture as risk factors for acquiring serological evidence of tularemia in humans (Obaidat et al., 2020). In addition, tularemia has been reported in tropical countries including India, although not a single case has been reported in Sri Lanka (Nirkhiwale et al., 2015). Furthermore, F. tularensis has been isolated from bed bugs in Madagascar, although the role of bed bugs in the transmission of the disease is not well known (Peta et al., 2022).
Francisella has been isolated from hundreds of animal species in the world (Pilo, 2018). Therefore, veterinarians must always be on the alert for tularemia as an emerging infection in livestock, pets (especially exotic pets), and wild animals.
The disease has also been reported in birds, fish, amphibians, arthropods, and protozoa, in addition to mammals (Seiwald et al., 2020; Yeni et al., 2021). Although both mice and guinea pigs have been shown to be susceptible to F. tularensis under experimental conditions, natural infection has not been investigated (Santic and Abu Kwaik, 2013; Kingry and Petersen, 2014). Furthermore, sudden death has been noticed in rodents with a severe degree of infection (Appelt et al., 2020; Seiwald et al., 2020). Interestingly, white rats appeared to be less susceptible to the disease under experimental conditions (Kingry and Petersen, 2014). The presence of the organism is not an indicator of disease in wild and domestic animals, and some strains or lineages are limited to unique mammalian hosts (Telford and Goethert, 2020). Therefore, diagnosis and interpretation of the clinical disease is a challenging task in veterinary medicine.
Two cycles of Francisella have been identified in North America: a terrestrial or sylvatic cycle and an aquatic cycle. Lagomorphs and ticks play a vital role in the sylvatic cycle (F. tularensis subsp. tularensis). In contrast, semi-aquatic rodents such as the American beaver (Castor canadensis) and the muskrat (Ondatra zibethicus) are involved in the aquatic cycle of F. tularensis subsp. holarctica (Pilo, 2018). In Eurasia, several rodents, ticks, and mosquitoes play a vital role in the life cycle of F. tularensis subsp. holarctica (Telford and Goethert, 2020). Rabbits (Sylvilagus spp.) and hares (Lepus spp.) are considered carriers/reservoirs of F. tularensis subsp. tularensis (Yeni et al., 2021). F. tularensis has been identified with multiple reservoirs, including lagomorphs and small rodents, whereas Ixodidae ticks are the main vector for the bacterium (Hennebique et al., 2019). In addition, mosquitoes and deer flies are considered vectors for F. tularensis in this region (Zellner and Huntley, 2019). Furthermore, waterborne infection with F. tularensis has also been reported (Hennebique et al., 2019). The pathogenic bacterium can survive in water and the environment for several months, and the organism has been shown to multiply in aquatic protozoa such as amoebae (Caspar and Maurin, 2017). Humans acquire the infection through multiple routes, directly or indirectly, through infected animals, carcasses, ticks, mosquitoes, and contaminated water, soil and food (Appelt et al., 2020). The majority of human infections are associated with direct or indirect contact with infected animals (Seiwald et al., 2020). Lung infection is mostly associated with farming activities (Seiwald et al., 2020). Periprosthetic joint infections have been reported on many occasions (Steiner et al., 2014; Chrdle et al., 2019).
Based on the high diversity of the host range of F. tularensis, the World Health Organization (WHO) has categorised animal hosts into three main classes; even identifying incidental or reservoir hosts is challenging (Pilo, 2018). The classification is based on susceptibility and sensitivity, or severity of infection. Class one hosts develop acute disease after inoculation with as few as 1–10 organisms, with rapid multiplication of the bacterium in blood and tissue (Pilo, 2018). Class two hosts are reported to suffer fatalities after inoculation with 10⁸–10⁹ organisms, while animals may survive a low infectious dose and develop immunity (Pilo, 2018; WHO, 2007). Class three hosts are essentially resistant to the infection (Pilo, 2018). The disadvantage of this classification is that hosts are classified only on the basis of challenge experiments through blood or lymphatic routes (Pilo, 2018); other natural routes of infection and accidental or reservoir hosts have not been evaluated. Furthermore, tularemia has been reported in companion animals such as dogs and cats in North America (Pilo, 2018). According to recent findings, 50% of human tularemia infections in the USA were cat-associated, while only 3% were associated with canine contact (Seiwald et al., 2020). Tularemia is also a disease of wild and captive animals, and identification of the organism in animals and humans simultaneously has been reported (Faber et al., 2018). Neurological and respiratory signs are common in infected animals. Meningoencephalitis, pneumonia and myocarditis were observed with unusual mortalities in fox squirrels in the USA (Vincent et al., 2020). Seropositive cases have previously been encountered in wild animals such as hares, foxes (Vulpes vulpes), raccoon dogs (Nyctereutes procyonoides), wild boar (Sus scrofa), bank voles (Myodes glareolus), water voles (Arvicola terrestris), field voles (Microtus agrestis), common voles (Microtus arvalis), and yellow-necked field mice (Apodemus flavicollis), as well as in zoo animals in Europe (Faber et al., 2018). Feline tularemia has only been reported in North America, while canine tularemia has been observed in Europe (Pilo, 2018). In addition, tularemia has also been noted in sheep as a livestock species (Pilo, 2018).
The first genome of F. tularensis was sequenced and published in 2005; the genetic similarity between the A and B types was 97.63% (Gunnell et al., 2016). The genome size is around 1.7 to 2.0 Mb, and the 16S rDNA gene does not have sufficient discriminative power to differentiate the subspecies (Gunnell et al., 2016). Some molecular tools cannot be usefully applied to the molecular epidemiology of F. tularensis, especially in outbreak investigation: repetitive extragenic palindromic element PCR (REP-PCR), enterobacterial repetitive intergenic consensus sequence PCR (ERIC-PCR), random amplified polymorphic DNA (RAPD), pulsed-field gel electrophoresis (PFGE), and restriction fragment length polymorphism (RFLP) assays have met with limited success (Pilo, 2018). However, multiple-locus variable number tandem repeat (VNTR) markers have been developed and proven successful in molecular epidemiology, and geographically specific clades have been identified with VNTR markers in Europe and North America (Pilo, 2018). The lack of standard databases remains a limiting factor in VNTR analysis. Moreover, VNTR markers may not work in phylogenetic analysis, whereas canonical single nucleotide polymorphisms (canSNPs) and canonical insertions/deletions (INDELs) have shown success in such analyses (Pilo, 2018).
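To illustrate how VNTR (MLVA) profiles of the kind described above are compared, the sketch below computes a simple categorical distance between isolates; the locus count and repeat numbers are invented for illustration and do not come from the cited studies.

# Toy MLVA comparison: distance = fraction of VNTR loci at which two
# isolates carry different repeat copy numbers (hypothetical profiles).
from itertools import combinations

profiles = {  # isolate -> repeat copy numbers at four hypothetical loci
    "isolate_A": (4, 7, 2, 11),
    "isolate_B": (4, 7, 3, 11),
    "isolate_C": (6, 5, 2, 9),
}

def vntr_distance(p, q):
    """Fraction of loci with differing repeat counts."""
    return sum(a != b for a, b in zip(p, q)) / len(p)

for a, b in combinations(profiles, 2):
    print(f"{a} vs {b}: distance = {vntr_distance(profiles[a], profiles[b]):.2f}")

Isolates separated by small distances would fall into the same MLVA cluster; real analyses use many more loci and standardized allele calling.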
Pathogenesis, immune responses and virulence mechanism
The average incubation period of Francisella infection in humans is as short as 3–5 days, with a maximum reported time of two weeks (Abdellahoum et al., 2020). Humans show nonspecific, flu-like symptoms including fever, lymphadenopathy, headache, chills, myalgia, and arthralgia (Seiwald et al., 2020). Tularemia is a sporadic disease, and only a low infectious dose is required for pathogenesis (Celli and Zahrt, 2013). The severity of infection depends on the portal of entry, the infectious dose, and the subspecies (biovar) of the infecting strain (Celli and Zahrt, 2013). Six clinical forms have been identified: ulceroglandular, glandular, oropharyngeal, oculoglandular, pneumonic and typhoidal (Celli and Zahrt, 2013). The first two forms (ulceroglandular and glandular) are mostly associated with skin inoculation of bacteria, either by arthropod bites or by contact with the skin (hair), which are the main routes of infection from infected animals (Caspar and Maurin, 2017; Yeni et al., 2021). Contaminated hands may touch the conjunctiva of the eye, allowing the organism to enter the body through the conjunctival route (Yeni et al., 2021). Contaminated food and water are also sources of infection in animals and humans, through which a number of outbreaks have been reported (Yeni et al., 2021). Inhalation of infected aerosols, especially contaminated dust, is another important route of infection and has caused clinical disease in humans and animals (Yeni et al., 2021). Furthermore, untreated cases of tularemia can result in up to 60% mortality, while the infectious dose can be as low as 10³ CFU for F. tularensis subsp. holarctica (Type B strains) and as low as 10 CFU for F. tularensis subsp. tularensis (Type A strains) (Celli and Zahrt, 2013). F. tularensis is designated a potentially weaponizable bacterium, or Category A agent, by the CDC owing to its severe clinical disease, low infectious dose and life-threatening outcome (Celli and Zahrt, 2013). However, human-to-human transmission has not been reported (Caspar and Maurin, 2017).
Innate immune responses against F. tularensis are not fully understood (Kubelkova and Macela, 2019). Innate immunity is the first-line defence mechanism against infection, and F. tularensis represses inflammasome activation at the cellular level to bypass innate immune responses (Krocova et al., 2017). In addition, Francisella enhances protein secretion in infected cells (Krocova et al., 2017). The pathogen invades different mammalian cell types such as macrophages, dendritic cells, polymorphonuclear neutrophils, hepatocytes, endothelial cells, and type II alveolar lung epithelial cells (Celli and Zahrt, 2013; Krocova et al., 2017). Serum opsonisation is important in the uptake of the organism into phagocytic cells (Celli and Zahrt, 2013). Furthermore, scavenger receptor A, Fcγ receptors, nucleolin, and lung surfactant protein A also play a vital role in the uptake of serum-opsonised Francisella into macrophages (Celli and Zahrt, 2013). Francisella may interfere with host metabolism, including the glycosylation pathway of human macrophages (Ziveri et al., 2017). In addition, the pathogen utilises host cell substrates to meet its nutritional requirements (Ziveri et al., 2017). The bacteria survive and reside in the early phagosome and interact with early and late endocytic compartments, but not the lysosome (Celli and Zahrt, 2013). The bacteria disrupt the early phagosomal membrane, and rapid replication occurs in the cytosol, followed by cell death, bacterial release, and subsequent infection. In addition, Francisella inhibits NADPH oxidase activity, limiting the activation of polymorphonuclear cells, and the pathogen halts the oxidative burst of reactive oxygen species in macrophages (Celli and Zahrt, 2013).
The O antigen of LPS has been identified as a main virulence factor in Francisella species (Nicol et al., 2021). The O antigen also counteracts IgM- and complement-mediated mechanisms within the host (Rowe and Huntley, 2015). The capsule-like complex (CLC) protein and a high-molecular-weight carbohydrate have been shown to contribute to pathogenesis (Nicol et al., 2021). In addition, the ability to alter LPS and the presence of pili are considered protection mechanisms against host immunity (Rowe and Huntley, 2015). The Francisella pathogenicity island consists of 17 open reading frames, which are believed to be essential for pathogenesis (Steiner et al., 2014). Resistance to reactive oxygen species (ROS) and reactive nitrogen species is also an essential component of the Francisella virulence mechanism; the enzyme KatG deactivates these two protective mechanisms of the host cell (Steiner et al., 2014). SodB and SodC, which encode superoxide dismutases, are also required for resistance against superoxide radicals (Steiner et al., 2014). Francisella enters macrophages by a specific mechanism called "looping phagocytosis", which involves a large volume of space around the bacterium, with surface receptors such as mannose receptors, Fc receptors and complement receptors playing a vital role in phagocytosis (Steiner et al., 2014). In addition, iron is an essential element for the survival of Francisella in a host cell (Steiner et al., 2014). Importantly, bacterial sepsis and inflammation cause death more often than the pneumonic condition itself in humans infected with F. tularensis (Steiner et al., 2014). High levels of inflammatory cytokines and chemokines, including IL-6, macrophage inflammatory protein, and chemokine ligand 2, released in the lung and spleen contribute to death (Steiner et al., 2014). Excessive neutrophil recruitment is also an important factor in pathogenesis (Steiner et al., 2014).
Clinical infection with F. tularensis has been reported in dogs, with lethargy, pyrexia, anorexia, and lymphadenopathies as the common clinical signs (Kwit et al., 2020). Prognosis was good in hospitalised canine patients (Kwit et al., 2020). Some dogs developed fever and lethargy 2 to 4 days after a hunting session and recovered spontaneously (Kwit et al., 2020). The disease has also been reported in cats; F. tularensis has been isolated from a number of feline carcasses at post-mortem in Europe (Stidham et al., 2018). Domestic cats also play a vital role in spreading Francisella infection to humans in Europe (Frischknecht et al., 2019).
Diagnosis and laboratory tests
Diagnosis of tularemia in humans is based on clinical findings, epidemiology and serological testing (Kinkead and Allen, 2016). Fever with lymphadenopathy and proven contact with animals should raise suspicion of tularemia in humans (Kinkead and Allen, 2016). Q fever, plague, and psittacosis are at the top of the differential diagnosis (Kinkead and Allen, 2016). The microagglutination test (MAT), indirect immunofluorescence assay (IFA) and ELISA are widely used in the field as serological tests to detect tularemia in humans (Kinkead and Allen, 2016; Kubelkova and Macela, 2019). However, cross-reactivity is commonly observed with a number of bacterial organisms such as Salmonella, Brucella, Legionella and Yersinia species (Kinkead and Allen, 2016). As a rule of thumb, a four-fold rise in antibody titres against Francisella within a period of 2–4 weeks is regarded as indicative of tularemia infection in humans (Kinkead and Allen, 2016). MAT and IFA have proven able to detect specific antibodies at 2–3 weeks post-infection with F. tularensis. In addition, tularemia can be diagnosed as early as 2 weeks post-infection by ELISA (Maurin, 2020). However, high percentages of false positives have also been observed with ELISA (Maurin, 2020). Cross-reactivity with other bacterial pathogens and the long-term persistence of natural antibodies from previous infections negatively affect serological testing of acute infections in humans (Maurin, 2020). Furthermore, immunochromatography and immunoblot tests have also been used in research laboratories with variable success (Maurin, 2020).
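The four-fold rule above amounts to a simple ratio check on paired acute and convalescent sera. A minimal sketch, with hypothetical titre values:

# Check for a four-fold rise in antibody titre between paired sera
# (hypothetical values; reciprocal titres, e.g. 1:40 recorded as 40).
def fourfold_rise(acute_titre, convalescent_titre):
    """True if the convalescent titre is at least four times the acute titre."""
    return convalescent_titre >= 4 * acute_titre

acute, convalescent = 40, 320  # hypothetical paired titres taken 2-4 weeks apart
print(fourfold_rise(acute, convalescent))  # True: an eight-fold rise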
Conventional isolation and identification of Francisella is not widely practised since it can be done only in special laboratory facilities such as Biosafety Level III (BSL-III) (Kinkead and Allen, 2016). Therefore, conventional methods cannot be performed in the local context in Sri Lanka, since a BSL-III facility is found in only one or two laboratories in the country. A number of conventional and qPCR protocols, including multiplex PCR assays, have been optimized to detect the organism in clinical submissions (Gunnell et al., 2016). In addition, Matrix-Assisted Laser Desorption/Ionization Time-of-Flight Mass Spectrometry (MALDI-TOF MS) is used for rapid identification of the organism with a high level of accuracy (Kinkead and Allen, 2016).
Treatment
The efficacy of antimicrobial treatment in human infections depends on the time of initiation of therapy; it is greatly reduced in clinical cases in which antimicrobial therapy is started more than 24–48 hours post-infection (Caspar and Maurin, 2017). Antimicrobials are used for a comparatively long period of 10–21 days to minimize clinical complications.
According to the literature, aminoglycosides, tetracyclines, quinolones, and chloramphenicol have been used in the past (Kinkead and Allen, 2016). Antimicrobial susceptibility testing (AST) is challenging since BSL-III facilities are required; thus, automated AST systems are recommended. In addition, beta-lactams, rifampicin and linezolid have also been used (Caspar and Maurin, 2017). Antimicrobial treatment also depends on a number of other factors, such as the physiological condition of the patient, the degree of dehydration, and the presence of metabolic disease or immune compromise.
According to Caspar and Maurin (2017), the recovery rate in human tularemia is 60–100%, depending on the type of antimicrobial, the time of commencement of treatment, the duration of treatment, and the presence of other clinical complications. Information regarding antimicrobial resistance in Francisella is scarce owing to limited studies. However, resistance has been reported for penicillins, cephalosporins, carbapenems, macrolides, and clindamycin (Kinkead and Allen, 2016). In contrast, moxifloxacin has proven successful in cases of delayed diagnosis in humans (Caspar and Maurin, 2017). A combination of an aminoglycoside with tetracycline, a quinolone, or chloramphenicol is used for meningitis and endocarditis, while paediatric cases are often treated with gentamicin (Kinkead and Allen, 2016). Ciprofloxacin and doxycycline are widely used in mild to moderate cases of tularemia in adults, while azithromycin is widely used in pregnant women.
Prevention and control
A number of guidelines have been developed by different authorities, including the WHO, to prevent infection in humans. Use of treated water for daily activities, wearing gloves when handling wild rabbits and rodents, thorough cooking of bush meat, use of insect repellents when travelling outdoors, protection of stored food from rodents, and wearing masks have been recommended to prevent tularemia in humans. In addition, protection from ticks, minimising contact with weeds (when travelling on natural trails), and controlling external parasites in pets are considered strategies to prevent Francisella infection in endemic regions (Kinkead and Allen, 2016).
Vaccination is practised as an alternative method to control the infection in endemic regions; however, no commercial vaccine is yet available. Killed, live attenuated and subunit vaccines have been developed in different countries (Kinkead and Allen, 2016), and a live attenuated vaccine was tried in Russia with variable success (Celli and Zahrt, 2013; Kinkead and Allen, 2016). A number of studies are underway around the world to determine the efficacy and potency of Francisella vaccines (Elkins et al., 2016; Mulligan et al., 2017). Importantly, a live attenuated vaccine has been developed that significantly reduced the incidence of clinical disease in experimental models (Mulligan et al., 2017). Because the organism is an intracellular pathogen, studies are targeting T cell responses in experimental models (Elkins et al., 2016). A number of animal models have been suggested, and the efficacy of live vaccines has been evaluated in rats, rabbits, mice and non-human primates (Roberts et al., 2018). Although extensive studies are required for a solid conclusion, variable degrees of immune response have resulted from different routes of infection in humans (Nicol et al., 2021).
CONCLUSION
F. tularensis is considered a potential future risk to humans and animals in the country, given the growing trend of rearing rodents as pets. In addition, the varied host range, low infective dose and severity of the infection may aggravate the risk, together with emerging antimicrobial resistance. On the other hand, a wide gap remains in our understanding of the pathogenesis and virulence mechanisms of Francisella species in humans and animals, which warrants further research.
ACKNOWLEDGEMENT
The author would like to acknowledge the Fleming Fund Fellowship programme (UK), the University of Hong Kong, and mentor Dr. Wu Peng for their tremendous support, encouragement, and facilities during the fellowship programme. | 2023-09-08T15:20:03.512Z | 2023-09-06T00:00:00.000 | {
"year": 2023,
"sha1": "4c05214309a326689378767bdf4a66c51129508d",
"oa_license": "CCBY",
"oa_url": "https://storage.googleapis.com/jnl-sljo-j-slvj-files/journals/1/articles/76/64f6faf922214.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4e00262b99224ed89696d11a83299470b7c13152",
"s2fieldsofstudy": [
"Environmental Science",
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
246165710 | pes2o/s2orc | v3-fos-license | ROSIE, a database of reptilian offspring sex ratios and sex-determining mechanisms, beginning with Testudines
In contrast to genotypic sex determination (GSD), temperature-dependent sex determination (TSD) in amniotic vertebrates eludes intuitive connections to Fisherian sex-ratio theory. Attempts to draw such connections have driven over 50 years of research on the evolution of sex-determining mechanisms (SDM), perhaps most prominently among species in the order Testudines. Despite regular advancements in our understanding of this topic, no efforts have been published compiling the entirety of data on the relationships between incubation temperature and offspring sex in any taxonomic group. Here, we present the Reptilian Offspring Sex and Incubation Environment (ROSIE) database, a comprehensive set of over 7,000 individual measurements of offspring sex ratios in the order Testudines as well as SDM classifications for 149 species. As the name suggests, we plan to expand the taxonomic coverage of ROSIE to include all non-avian reptiles and will regularly release updates to maintain its comprehensive nature. This resource will enable crucial future research probing the ecology and evolution of SDM, including the presumed sensitivity of TSD to rapid environmental change.
Investigations of aspects of the ecology and evolution of TSD in chelonians are published routinely, and the state of our understanding of SDM in reptiles more broadly is summarized every few years 6,17–21. However, despite the metronomic publication of knowledgeable reviews, limited effort has been made to compile and publish the data from which this knowledge is drawn. In particular, only two efforts have attempted to organize chelonian offspring sex-ratio data, each with its own shortcomings. Paukstis & Janzen 25 represents the first effort, which includes offspring sex-ratio data spanning the diversity of non-avian reptiles, but only includes results from constant-temperature incubation experiments and can no longer be considered up to date. The more recent compilation 24 likewise spans non-avian reptiles while also including measurements of additional phenotypes beyond sex that are influenced by incubation temperatures. However, this database excludes studies on natural incubation and exogenous hormone application, two topics often investigated in the ecological/evolutionary literature 18,26–28. In addition, the authors' methods largely excluded data outside the scope of Web of Science (e.g., unpublished theses/dissertations, select journals).
Such offspring sex-ratio data are necessary to characterize TSD, but a complete understanding of SDM evolution is impossible without comprehensive data on the taxonomic distribution of both TSD and its counterpart, GSD, as well. Several sources present these data for turtles but suffer from shortcomings much like those described above. For example, the Tree of Sex Database 4 has, to the best of our knowledge, not been directly updated since its initial release in 2014. In addition, both it and subsequent publications that indirectly expand its taxonomic coverage (e.g., ref. 29) have not taken advantage of SDM classifications presented in gray literature, such as publications from conservation breeding programs or unpublished theses/dissertations.
Here, we present the Reptilian Offspring Sex and Incubation Environment (ROSIE) database, a comprehensive compilation of offspring sex ratios and SDM in chelonians, with future plans to include data from all non-avian reptiles. Our database is easily updatable, and can be used to address a variety of key questions, including: 1) What is the ancestral SDM in chelonians, and how often have transitions between mechanisms occurred? 2) How does the relationship between incubation temperature and offspring sex (i.e., the sex-ratio reaction norm) evolve within and among species? 3) To what extent does TSD vary geographically? Temporally? Among species? Within species? Among clutches? Across generations?
Methods
We obtained hatchling sex-ratio data in turtles using Web of Science (v.5.35) to search for research published since the discovery of TSD in 1966 12 until 31 December 2020. On two separate occasions (17 June 2020 and 7 January 2021), we searched all databases for topics including the following terms: sex AND determin* AND incubat*, along with either turtle* or a wildcard version of each species' taxonomic name to account for suffix variation (e.g., apalon* mutic* for both Apalone muticus and Apalone mutica). Taxonomy followed the 356 chelonian species identified in Turtle Taxonomy Working Group 30. We reviewed additional publications and gray literature known to contain hatchling sex-ratio data, as well as research referenced within sources from the systematic search. Altogether, these methods returned 910 sources for evaluation. We evaluated sources obtained in the literature search based on the full text, and exclusions fell into the following categories: (1) inaccessible (n = 14), (2) study species was not a turtle (n = 116), (3) hatchling sex ratios were not reported (n = 269), (4) hatchling sex ratios were estimated based on incubation temperatures or durations (n = 48), and (5) hatchling sex-ratio data were previously reported elsewhere (n = 63). After exclusion, 400 sources remained for data extraction (Fig. 1).
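To make the wildcard construction concrete, the sketch below derives truncated search terms from species binomials in the manner described above; the suffix-stripping rule is our illustrative assumption, not code used by the authors.

# Build Web of Science wildcard terms from species binomials (illustrative;
# the Latin-suffix rule here is an assumption, not the authors' procedure).
SUFFIXES = ("us", "um", "is", "a", "e")  # common Latin gender/case endings

def wildcard(word):
    """Strip a common Latin ending, if any, and append the '*' wildcard."""
    for s in SUFFIXES:
        if word.endswith(s) and len(word) - len(s) >= 4:
            return word[: -len(s)] + "*"
    return word + "*"

def species_term(binomial):
    genus, epithet = binomial.lower().split()
    return f"{wildcard(genus)} {wildcard(epithet)}"

print(species_term("Apalone mutica"))  # -> "apalon* mutic*"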
From each source, we extracted data on incubation conditions and offspring sex measurements, including additional variables such as hatching success, incubation duration, and sexing methodology (Online-only Table 1) 31 . When variable values (mean incubation duration, sex ratio, etc.) were not provided in the text, tables, or figure legends, we extracted values from figures using WebPlotDigitizer (v4.4 32 ). For a number of sources (n = 42), we contacted the corresponding authors to request relevant materials to clarify sample sizes or other questions about the data. We examined all data and exclusions twice to ensure accuracy and, to avoid data replication, we examined data and manuscripts from lab/author groups to determine whether multiple sources analyzed the same information. When sources shared data, we excluded measurements from the more recent source(s) unless (1) additional samples were included, or (2) data were presented in a different format (e.g., sex ratio per shelf in each incubator vs. sex ratio per whole incubator).
We gathered chelonian SDM information in a stepwise manner. First, we compiled classifications from existing databases 4,29 , which were next evaluated for accuracy and supplemented with SDM classifications based on the offspring sex-ratio data collected as described above. Finally, we performed extensive online searches to identify sources supplying SDM classifications for additional species. Where possible for each species, we include relevant citations, SDM classifications, and classification confidence based on a combination of available data, data in closely related species, and author expertise.
Our offspring sex-ratio compilation contains data on 32.9% (117/356) of recognized chelonian species (Fig. 2), though the taxonomic distribution is highly skewed; just 7 species from 3 families (Chelydridae: Chelydra serpentina; Cheloniidae: Caretta caretta, Chelonia mydas, Lepidochelys olivacea; Emydidae: Trachemys scripta, Chrysemys picta, Emys orbicularis) were the focus of nearly half of all studies (45.2%; 241/533; note that some sources contain data on multiple species, hence the difference between total sources [n = 400] and total studies [n = 533]). The geographic distribution is likewise biased, with most studies on wild populations of North American species, starkly contrasting the sparse sampling of African populations (Fig. 3; but note that several African species are represented in captive colonies located outside the continent). Study design is likewise biased, with most (269/400) sources focusing solely or partly on the results of constant-temperature incubation, whereas 85 employ other forms of controlled or semi-controlled incubation conditions (e.g., fluctuating temperatures, temperature switch experiments, room temperature incubation), 124 contain results from natural regimes, and 16 do not define incubation conditions. In addition, 71 studies investigate the influence of chemical applications on offspring sex ratios. In all, the database contains over 7,000 individual measurements of offspring sex ratios, ranging from data on individual eggs to a whole population's nesting season and representing the sexing of nearly 200,000 turtle hatchlings and embryos.
Our SDM database contains confident SDM classifications for 149 chelonian species (Fig. 2) and unsupported or unlikely classifications for an additional 13 species. Of those with confidently assigned SDM, 24% (36/149) exhibit GSD, 19 of which are also represented in the sex-ratio database (Fig. 2). Apart from one species with a low-confidence SDM classification (Chitra chitra), the remaining species in the sex-ratio database (n = 97) comprise a subset of the 113 species confidently known to exhibit TSD. Overall, our collection of chelonian SDM represents a nearly 50% increase in taxonomic coverage relative to recently published summaries (149 vs 101 species 29).
As indicated by the name, we plan to expand ROSIE to encompass all non-avian reptile species. In our next update, we will incorporate data from the remaining reptilian orders (Crocodylia, Rhynchocephalia, and Squamata) following the methods described herein, including all data published through the end of 2020. Once ROSIE has reached this final taxonomic scope, we will push updates every other year to include newly available data and maintain the up-to-date nature of this resource.
Data Records
This database is hosted by GitHub (https://github.com/calebkrueger/ROSIE), and the raw data can be accessed via a unique, stable DOI through Zenodo 31 . The database consists of csv files of (1) extracted offspring sex ratio and incubation environment data with complete references, (2) SDM classifications, (3) excluded sources with complete references and exclusion criteria, and (4) metadata.
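As a sketch of how a user might load the database and reproduce the species-coverage figure quoted above, assuming pandas is available; the file name and the species column are assumptions based on the description here, so consult the repository's metadata file for the actual schema.

# Load ROSIE sex-ratio data and count species covered (illustrative only;
# the csv file name and "Species" column name are assumed, not verified).
import pandas as pd

url = "https://raw.githubusercontent.com/calebkrueger/ROSIE/main/sex_ratio_data.csv"  # assumed path
df = pd.read_csv(url)

n_species = df["Species"].nunique()  # assumed column name
print(f"{n_species} species covered ({n_species / 356:.1%} of 356 recognized chelonians)")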
Technical Validation
The data have been thoroughly checked for accuracy by C.J.K. prior to release. The authors urge users to report errors or submit additional data and updates by emailing the corresponding author. Any errors identified can readily be corrected in future updates, which will occur biennially.
Code availability
No custom code was used in the creation of this database. | 2022-01-22T14:32:08.195Z | 2022-01-21T00:00:00.000 | {
"year": 2022,
"sha1": "ef571d136ba20cb098911cebe3be3e66c47663f7",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41597-021-01108-1.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "acf32346a1518216ca4b9d8a7f3ddde595413e06",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
119304913 | pes2o/s2orc | v3-fos-license | Strong cohomological rigidity of toric varieties
Every cohomology ring isomorphism between two non-singular complete toric varieties and quasitoric manifolds, respectively, with second Betti number $2$ is realizable by a diffeomorphism and homeomorphism, respectively.
Introduction
A toric variety is a normal algebraic variety of complex dimension $\ell$ with an action of the algebraic torus $(\mathbb{C}^*)^\ell$ having an open dense orbit. A typical example of a non-singular complete toric variety is the projective space $\mathbb{C}P^\ell$ of complex dimension $\ell$ with the standard action of $(\mathbb{C}^*)^\ell$.
The cohomological rigidity problem for toric varieties asks whether two non-singular complete toric varieties are diffeomorphic if their cohomology rings are isomorphic as graded rings. Although the cohomology ring is known to be a weak invariant even under homotopy equivalence, no example providing a negative answer to the problem has been found yet. On the contrary, many results support an affirmative answer. We refer the reader to the survey paper [3] on this problem. One of the remarkable results on this topic is that two non-singular complete toric varieties with second Betti number 2 (or Picard number 2) are diffeomorphic if and only if their cohomology rings are isomorphic as graded rings (see [5]).
On the other hand, one can ask a stronger version of the cohomological rigidity problem for toric varieties as follows. Throughout this paper, $H^*(X)$ denotes the integral cohomology ring of a topological space $X$.
Strong cohomological rigidity problem for toric varieties. Let $M$ and $M'$ be non-singular complete toric varieties. If $\varphi$ is a graded ring isomorphism from $H^*(M)$ to $H^*(M')$, is there a diffeomorphism which induces the isomorphism $\varphi$?
The realizability of a given cohomology ring automorphism by a diffeomorphism is a classical and important problem in algebraic topology. The projective space $\mathbb{C}P^1$ is the only non-singular complete toric variety of complex dimension 1, and it is easy to show that every cohomology ring automorphism of it is realizable by a diffeomorphism. We note that since any toric variety of complex dimension $\ell$ admits an action of $(\mathbb{C}^*)^\ell$, it also admits a canonical action of the $\ell$-dimensional compact torus $T^\ell = (S^1)^\ell \subset (\mathbb{C}^*)^\ell$. In complex dimension $\ell = 2$, Orlik and Raymond [15] showed that real 4-dimensional compact manifolds which admit well-behaved $T^2$-actions can be expressed as connected sums of copies of $\mathbb{C}P^2$, $\overline{\mathbb{C}P^2}$ and $\mathbb{C}P^1 \times \mathbb{C}P^1$, and that they are classified by their cohomology rings up to diffeomorphism, where $\overline{\mathbb{C}P^2}$ denotes $\mathbb{C}P^2$ with reversed orientation. By Wall [17], any cohomology ring automorphism of such a 4-dimensional manifold with second Betti number $\beta_2 \le 10$ is induced by a diffeomorphism. Hence, one can conclude that the answer to the strong cohomological rigidity problem is affirmative for complex 2-dimensional non-singular complete toric varieties with $\beta_2 \le 10$.
However, a negative answer is also known. For instance, not all cohomology ring automorphisms are realizable by diffeomorphisms for complex 2-dimensional non-singular complete toric varieties with $\beta_2 > 10$; see [9]. Furthermore, this implies that the answer to the strong cohomological rigidity problem for toric varieties of arbitrary dimension with sufficiently large $\beta_2$ may be negative. Hence, it is reasonable to ask the strong cohomological rigidity problem for toric varieties of arbitrary dimension $\ell$ with small $\beta_2$. We note that since a non-singular complete toric variety with $\beta_2 = 1$ is a complex projective space $\mathbb{C}P^\ell$ and any automorphism of $H^*(\mathbb{C}P^\ell)$ is induced by a diffeomorphism of $\mathbb{C}P^\ell$, strong cohomological rigidity holds for non-singular complete toric varieties with $\beta_2 = 1$.
Motivated by the above, in this paper we study the strong cohomological rigidity problem for non-singular complete toric varieties with $\beta_2 = 2$. The algebraic and smooth classifications of non-singular complete toric varieties with $\beta_2 = 2$ are well studied in [14] and [5]. To do so, we show that any cohomology ring automorphism of such a variety is realizable by a diffeomorphism. Combining this with the fact that non-singular complete toric varieties with $\beta_2 = 2$ are smoothly classified by their cohomology rings [5], we obtain the following theorem. Theorem 1.1. Any cohomology ring isomorphism between non-singular complete toric varieties with second Betti number 2 is realizable by a diffeomorphism.
The notion of a quasitoric manifold was introduced in [8] as a topological analogue of a non-singular projective toric variety * . A quasitoric manifold M is a 2ℓ-dimensional compact smooth manifold with a locally standard T ℓaction whose orbit space can be identified with a simple polytope P . Every complex ℓ-dimensional non-singular projective toric variety with a restricted action of (C * ) ℓ to T ℓ is a quasitoric manifold of dimension 2ℓ. One remarks that any non-singular complete toric varieties with β 2 = 2 is projective, and hence, it is a quasitoric manifold. However, not all quasitoric manifolds can be toric varieties. For example, an equivariant connected sum CP 2 #CP 2 * The authors would like to indicate that the notion of quasitoric manifolds originally appeared under the name "toric manifolds" in [8]. Later, it was renamed in [1] in order to avoid confusion with smooth compact toric varieties. As far as the authors know, there has been a dispute about the terminology. The authors have no preference; however, in this paper, they follow the terminology used in their previous papers.
For example, an equivariant connected sum CP^2 # CP^2 of two CP^2's with an appropriate T^2-action is a quasitoric manifold whose orbit space is the square ∆^1 × ∆^1, although it is not a toric variety since it does not admit an almost complex structure. Hence, the class of quasitoric manifolds is larger than that of non-singular projective toric varieties.†
Note that quasitoric manifolds with β 2 = 2 are topologically classified by their cohomology rings [7]. In this paper, we also investigate the strong cohomological rigidity for quasitoric manifolds as follows.
Theorem 1.2. Any cohomology ring isomorphism between two quasitoric manifolds with second Betti number 2 is realizable by a homeomorphism.
The remainder of this paper is organized as follows. In Section 2, we review the properties of quasitoric manifolds and the topological classification of quasitoric manifolds with β_2 = 2. In Section 3, we introduce a weighted projective space CP^{n+1}_a and obtain quasitoric manifolds over ∆^n × ∆^1 by taking an equivariant connected sum CP^{n+1}_a # CP^{n+1}_a or CP^{n+1}_a # \overline{CP^{n+1}_a}. Using this, we show that any cohomology ring automorphism of such a quasitoric manifold is realizable by a diffeomorphism. In Section 4, we show the realizability of cohomology ring automorphisms for non-singular complete toric varieties with β_2 = 2. In Section 5, we consider quasitoric manifolds over a product of simplices ∆^n × ∆^m which are not non-singular complete toric varieties. Finally, we complete the proofs of Theorems 1.1 and 1.2 in Section 6.
Quasitoric manifolds with second Betti number 2
In this section, we first review general properties of quasitoric manifolds from [8], [1] and [4], focusing in part on the case where the second Betti number is 2. In addition, we recall the classification results of [5] and [7].
Let M be a 2ℓ-dimensional quasitoric manifold over an ℓ-dimensional simple polytope P with d facets (codimension-1 faces). Let F be a k-dimensional face of P. Note that for the orbit map ρ: M → P and a point x ∈ ρ^{-1}(F°), the isotropy subgroup at x is independent of the choice of x and is a codimension-k subtorus of T^ℓ, where F° denotes the relative interior of F. If F is a facet of P, then ρ^{-1}(F) is fixed by a circle subgroup of T^ℓ. We define a function λ: {F_1, ..., F_d} → Hom(S^1, T^ℓ) ≅ Z^ℓ, called the characteristic function of M, such that λ(F_i) fixes the characteristic submanifold M_i := ρ^{-1}(F_i) for i = 1, ..., d, where {F_1, ..., F_d} is the set of facets of P. We note that λ satisfies the following non-singularity condition:

(2.1) λ(F_{i_1}), ..., λ(F_{i_α}) form a part of an integral basis of Z^ℓ whenever the intersection F_{i_1} ∩ ··· ∩ F_{i_α} is non-empty.
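As a standard illustration of (2.1) (added here for concreteness; it is not part of the original text), the characteristic function of CP^2 over the 2-simplex ∆^2 assigns

\[
\lambda(F_1) = e_1, \qquad \lambda(F_2) = e_2, \qquad \lambda(F_3) = -e_1 - e_2,
\]

and any two of these three vectors form an integral basis of Z^2, while every pair of facets of ∆^2 intersects, so the condition holds.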
Conversely, consider a function λ: {F_1, ..., F_d} → Z^ℓ satisfying (2.1) and its matrix representation Λ = (λ(F_1) | ··· | λ(F_d)), called a characteristic matrix. For a characteristic matrix Λ and a face F of P, we denote by T(F) the subgroup of T^ℓ corresponding to the unimodular subspace of Z^ℓ spanned by λ(F_{i_1}), ..., λ(F_{i_α}), where F = F_{i_1} ∩ ··· ∩ F_{i_α}.
† In general, a non-singular non-projective complete toric variety may fail to be a quasitoric manifold; however, no such example is known.
Given a characteristic matrix Λ on P, we construct a manifold

(2.2) M(P, Λ) := (T^ℓ × P)/∼,

where (t, p) ∼ (s, q) if and only if p = q and t^{-1}s ∈ T(F(p)), and F(p) is the face of P containing p in its relative interior. The standard T^ℓ-action on T^ℓ then induces a locally standard T^ℓ-action on M(P, Λ), and M(P, Λ) is indeed a quasitoric manifold over P whose characteristic function is λ.
Note that if Λ′ is a matrix obtained from Λ by changing the signs of some columns, then M(P, Λ′) is equal to M(P, Λ). Define a map Θ: {F_1, ..., F_d} → Z^d by Θ(F_i) = e_i, where e_i is the i-th coordinate vector. Using Θ, we get a T^d-manifold Z_P = (T^d × P)/∼ as in construction (2.2). The dimension of Z_P is d + ℓ. The T^d-manifold Z_P is called the moment-angle manifold of P. For instance, Z_{∆^ℓ} is the (2ℓ+1)-dimensional sphere S^{2ℓ+1}. Note that for two simple polytopes P and Q we have Z_{P×Q} = Z_P × Z_Q; hence Z_{∆^n × ∆^m} = S^{2n+1} × S^{2m+1}. Consider the map Z^d → Z^ℓ sending Θ(F_i) to λ(F_i) for each i; it can be regarded as the homomorphism defined by x ↦ Λx for x ∈ Z^d. Throughout this paper, we denote this homomorphism by λ if there is no confusion. Let K be the subtorus of T^d corresponding to ker λ. Then K acts freely on Z_P, and the orbit space of this K-action on Z_P is the quasitoric manifold M(P, Λ). Two quasitoric manifolds M and M′ over P are said to be equivalent if there is a θ-equivariant homeomorphism f: M → M′, i.e., f(t · x) = θ(t) · f(x) for t ∈ T^ℓ and x ∈ M, covering the identity map on P for some automorphism θ of T^ℓ. One can see that M(P, Λ) and M(P, Λ′) are equivalent if there is an element G of the general linear group GL(ℓ, Z) of rank ℓ over Z such that Λ′ = GΛ.
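For instance (a standard worked example added for illustration, not taken from the paper), take P = ∆^n with Λ = (e_1 | ··· | e_n | −e_1 − ··· − e_n). The homomorphism λ: Z^{n+1} → Z^n has kernel generated by (1, ..., 1), so K is the diagonal circle in T^{n+1} and

\[
M(\Delta^n, \Lambda) \;=\; Z_{\Delta^n}/K \;=\; S^{2n+1}/S^1_{\mathrm{diag}} \;=\; CP^n .
\]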
There is a well-known formula for the cohomology ring of a quasitoric manifold with Z-coefficients. Let M be a quasitoric manifold over P with characteristic matrix Λ = (λ_{ij}), 1 ≤ i ≤ ℓ, 1 ≤ j ≤ d. Then

(2.3) H^*(M; Z) = Z[x_1, ..., x_d]/(I_P + J),

where x_i is the degree-two cohomology class dual to the characteristic submanifold M_i, I_P is the homogeneous ideal generated by all square-free monomials x_{i_1} ··· x_{i_α} such that F_{i_1} ∩ ··· ∩ F_{i_α} is empty, and J is the ideal generated by the linear forms λ_{i1}x_1 + ··· + λ_{id}x_d for i = 1, ..., ℓ.

Let M be a quasitoric manifold with second Betti number β_2 = 2. Then the orbit space of M is an ℓ-dimensional polytope with ℓ + 2 facets; hence the orbit space is a product of two simplices ∆^n × ∆^m (see [11]) for some n and m with n + m = ℓ. Let {F_1, ..., F_{n+1}} and {F′_1, ..., F′_{m+1}} be the sets of facets of ∆^n and ∆^m, respectively. Then each facet of ∆^n × ∆^m is of the form either F_i × ∆^m or ∆^n × F′_j, and we order the facets as F_1 × ∆^m, ..., F_n × ∆^m, ∆^n × F′_1, ..., ∆^n × F′_m, F_{n+1} × ∆^m, ∆^n × F′_{m+1}. Since the first n + m = ℓ facets meet at a vertex, up to equivalence we may assume that the first ℓ columns of the characteristic matrix Λ corresponding to M form an identity matrix. Furthermore, by the non-singularity condition (2.1), the remaining two columns are determined by integer vectors a = (a_1, ..., a_m) and b = (b_1, ..., b_n) satisfying

(2.4) a_j b_i = 0 or a_j b_i = 2

for i = 1, ..., n and j = 1, ..., m (see more details in [7]). From now on, we denote such an M by M_{a,b}, and its cohomology ring can be presented as

(2.5) H^*(M_{a,b}) ≅ Z[x_1, x_2]/( x_1∏_{i=1}^{n}(x_1 + b_i x_2), x_2∏_{j=1}^{m}(a_j x_1 + x_2) ).

A generalized Bott tower of height h, or an h-stage generalized Bott tower, is a sequence of projectivizations

B_h → B_{h-1} → ··· → B_1 → B_0 = {a point}, with B_i = P(C ⊕ ξ_{i,1} ⊕ ··· ⊕ ξ_{i,n_i}),

where C is the trivial line bundle, each ξ_{i,j} is a complex line bundle over B_{i-1} for i = 1, ..., h, and P(·) stands for projectivization. We call B_i an i-stage generalized Bott manifold. We remark that a 2-stage generalized Bott manifold with n = m = 1 is known as a Hirzebruch surface [8]. Note that h-stage generalized Bott manifolds are non-singular projective toric varieties with β_2 = h, and are quasitoric manifolds over a product of h simplices. Moreover, by [4], a quasitoric manifold over a product of simplices has a non-singular complete toric variety structure if and only if it is equivalent to a generalized Bott manifold. Hence, every non-singular complete toric variety with β_2 = 2 is a two-stage generalized Bott manifold.
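To see the presentation (2.5) in the smallest case (an illustrative computation added here, not part of the original text), take n = m = 1, so M_{a,b} is a quasitoric manifold over the square ∆^1 × ∆^1 and

\[
H^*(M_{a,b}) \;\cong\; \mathbb{Z}[x_1, x_2]\big/\big(x_1(x_1 + b_1 x_2),\; x_2(a_1 x_1 + x_2)\big).
\]

For b = 0 this becomes Z[x_1, x_2]/(x_1^2, x_2(a_1x_1 + x_2)), the cohomology ring of the Hirzebruch surface P(C ⊕ γ^{a_1}) over CP^1.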
For simplicity, for any complex line bundle L over a base B we denote by L^a the a-fold tensor power of L. If b = 0, then M_{a,0} is equivalent to the two-stage generalized Bott manifold P(C ⊕ ⊕_{j=1}^{m} γ^{a_j}), where γ is the tautological line bundle over CP^n. Furthermore, in (2.5), the generator x_1 of H^*(M_{a,0}) is the negative of the first Chern class of γ, and the generator x_2 of H^*(M_{a,0}) is the negative of the first Chern class of the tautological line bundle over P(C ⊕ ⊕_{j=1}^{m} γ^{a_j}). On the other hand, if a = 0, then the quasitoric manifold M_{0,b} is equivalent to the two-stage generalized Bott manifold P(C ⊕ ⊕_{i=1}^{n} η^{b_i}), where η is the tautological line bundle over CP^m, see [4]. Similarly, in (2.5), the generator x_2 of H^*(M_{0,b}) is the negative of the first Chern class of η, and the generator x_1 of H^*(M_{0,b}) is the negative of the first Chern class of the tautological line bundle over P(C ⊕ ⊕_{i=1}^{n} η^{b_i}). The following theorem gives a smooth classification of two-stage generalized Bott manifolds.
Theorem 2.1 ([5]). Let B_2 = P(C ⊕ γ^{a_1} ⊕ ··· ⊕ γ^{a_m}) and B′_2 = P(C ⊕ γ^{a′_1} ⊕ ··· ⊕ γ^{a′_m}), where γ denotes the tautological line bundle over B_1 = CP^n. The following are equivalent.
(1) There exist ǫ = ±1 and w ∈ Z such that ∏_{j=1}^{m}(1 + a′_j x) ≡ ∏_{j=1}^{m}(1 + (ǫa_j + w)x) mod x^{n+1}.
(2) H^*(B_2) and H^*(B′_2) are isomorphic as graded rings.
If neither a nor b is a zero vector, then M_{a,b} cannot be equivalent to a two-stage generalized Bott manifold. Moreover, by (2.4), either the nonzero entries of a are ±2 and the nonzero entries of b are ±1, or the nonzero entries of a are ±1 and the nonzero entries of b are ±2.
The following theorem, in whose statement # denotes an equivariant connected sum and \overline{CP^{m+1}} denotes CP^{m+1} with reversed orientation, gives a topological classification of quasitoric manifolds with β_2 = 2.
Weighted projective spaces and their connected sum
It is well known that the quasitoric manifold M_{1,0} over ∆^n × ∆^1 is the connected sum CP^{n+1} # \overline{CP^{n+1}}, and that the quasitoric manifold M_{1,(2,0,...,0)} is the connected sum CP^{n+1} # CP^{n+1}. In this section, we show that the quasitoric manifolds M_{2,0} and M_{2,(1,0,...,0)} over ∆^n × ∆^1 can be expressed as equivariant connected sums of weighted projective spaces, and then consider the realizability of automorphisms of H^*(M_{a,b}) when a = 1 or a = 2.
Let us first look at the definitions and properties of weighted projective spaces.
For a vector q = (q_0, q_1, ..., q_ℓ) of positive integers, the weighted projective space CP^ℓ_q is the quotient of C^{ℓ+1} \ {0} by the C^*-action ζ · (z_0, ..., z_ℓ) = (ζ^{q_0}z_0, ..., ζ^{q_ℓ}z_ℓ). Alternatively, CP^ℓ_q can be realized as the quotient of the unit sphere S^{2ℓ+1} ⊂ C^{ℓ+1} by the S^1-action obtained by restricting the above C^*-action to the unit circle S^1.
Note that CP^ℓ_q is equipped with an action of the ℓ-dimensional torus T^ℓ_q = (S^1)^{ℓ+1}/j_q(S^1), where j_q: S^1 → (S^1)^{ℓ+1} is the embedding defined by j_q(ζ) = (ζ^{q_0}, ..., ζ^{q_ℓ}). It is well known that CP^ℓ_q with the action of T^ℓ_q is a toric Kähler orbifold; see [10] for more details.
We can also consider the real weighted projective space RP^ℓ_q, defined analogously with R in place of C; it is the fixed point set of the conjugation action on the weighted projective space. Hence, if all the q_i are odd, then RP^ℓ_q is the ordinary real projective space RP^ℓ.
Let D (respectively, D′) be a closed ball in CP^{n+1}_a (respectively, \overline{CP^{n+1}_a}) containing the singular point; it is a sub-orbifold with boundary, diffeomorphic to D^{2(n+1)}/µ_a, where D^{2(n+1)} is the closed unit ball in C^{n+1}. By removing D and D′ from CP^{n+1}_a and \overline{CP^{n+1}_a}, respectively, and gluing the complements along the boundary ∂D ≅ S^{2n+1}/µ_a ≅ ∂D′ via a diffeomorphism, we obtain a smooth manifold CP^{n+1}_a # \overline{CP^{n+1}_a}.‡ As we have seen in the introduction, a quasitoric manifold is a topological generalization of a non-singular projective toric variety. In fact, there is also a notion generalizing projective toric varieties that are not necessarily non-singular, called a quasitoric orbifold, introduced in several papers such as [8], [12], and [16].
‡ Note that if we take the connected sum at non-singular points, then the resulting connected sum still has singular points and is not a smooth manifold.
A quasitoric orbifold is determined by a pair (P, λ) of a simple polytope P and a rational characteristic function λ on its facets; each vector λ(F_i) is the rational characteristic vector corresponding to F_i. Let K be the subtorus of T^d corresponding to the kernel of λ. Then K acts on Z_P with finite isotropy groups. We denote the orbit space of the K-action on Z_P by Q(P, λ) and call it the quasitoric orbifold corresponding to (P, λ). After giving an ordering on the set of facets of P, it is convenient to represent the rational characteristic function λ by the rational characteristic matrix Λ = (λ(F_1) | ··· | λ(F_d)); for simplicity, we sometimes write Q(P, Λ) instead of Q(P, λ). We note that a singular projective toric variety is a quasitoric orbifold. Since Z_{∆^ℓ} is S^{2ℓ+1} and the subtorus K corresponding to the kernel of a rational characteristic function on ∆^ℓ is a circle with a suitable weight, a weighted projective space CP^ℓ_q is a quasitoric orbifold over ∆^ℓ.
Let us find the characteristic function λ corresponding to CP^{n+1}_a. Note that for each i = 0, ..., n + 1, the sub-orbifold Q_i of CP^{n+1}_a defined by z_i = 0 is fixed by the quotient of the (i + 1)-th coordinate circle of T^{n+2}. We identify T^{n+1}_a with T^{n+1} via the map which sends the (i + 1)-th coordinate circle of T^{n+2} to the i-th coordinate circle of T^{n+1} for i = 1, ..., n + 1. Since [ζ, 1, ..., 1] = [1, ζ^{-1}, ..., ζ^{-1}, ζ^{-a}] in T^{n+1}_a, the first coordinate circle of T^{n+2} is identified with the circle subgroup of T^{n+1} generated by (ζ^{-1}, ..., ζ^{-1}, ζ^{-a}). Moreover, the torus T^{n+1} acts on CP^{n+1}_a coordinatewise through this identification. Then, for each i = 1, ..., n + 1, Q_i is fixed by the i-th coordinate circle of T^{n+1}, and Q_0 is fixed by the circle generated by (−1, ..., −1, −a) ∈ Z^{n+1} = Hom(S^1, T^{n+1}). Denote the facet of ∆^{n+1} corresponding to Q_i by F_i, and the characteristic function by λ: {F_0, F_1, ..., F_{n+1}} → Z^{n+1}. Then the rational characteristic matrix corresponding to CP^{n+1}_a is

Λ = ( (−1, ..., −1, −a)^T | e_1 | ··· | e_{n+1} ).

In particular, a fan of CP^{n+1}_a as a projective toric variety is obtained by taking the cones generated by all proper subsets of {−e_1 − ··· − e_n − ae_{n+1}, e_1, ..., e_{n+1}}.

Now, consider the two-stage generalized Bott manifold P(C ⊕ γ^a), where γ is the tautological line bundle over CP^n. The relationship between the weighted projective space CP^{n+1}_a and the projective bundle P(C ⊕ γ^a) over CP^n is provided by the following lemma: P(C ⊕ γ^a) is the blow-up of CP^{n+1}_a at its singular point. Indeed, set S = {−e_1 − ··· − e_n − ae_{n+1}, e_1, ..., e_{n+1}, −e_{n+1}}; we get a fan of CP^{n+1}_a by taking the cones generated by all proper subsets of the set S \ {−e_{n+1}}, see Figure 1. Note that the cone generated by {−e_1 − ··· − e_n − ae_{n+1}, e_1, ..., e_n} corresponds to the singular point [0, ..., 0, 1] of CP^{n+1}_a, and −ae_{n+1} = (−e_1 − ··· − e_n − ae_{n+1}) + e_1 + ··· + e_n. Hence, P(C ⊕ γ^a) is the blow-up of CP^{n+1}_a at the singular point.

Consider (n + 1) × (n + 2) matrices of the form

(3.1) Λ_± = ( E_{n+1} | v_± ), where v_± = ±(−1, ..., −1, −a)^T.

Then Λ_± is a rational characteristic matrix on ∆^{n+1}. Since a row operation on Λ with determinant ±1 corresponds to an automorphism of T^{n+1}, and a sign change of a column vector does not affect the subgroup generated by the column vectors, we can see that Q(∆^{n+1}, Λ_±) is equivalent to the weighted projective space CP^{n+1}_a with a suitable T^{n+1}-action.
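For concreteness (an added example with n = 2 and a = 2, not from the original text), the rational characteristic matrix of CP^3_2 reads

\[
\Lambda_+ \;=\; \begin{pmatrix} 1 & 0 & 0 & -1 \\ 0 & 1 & 0 & -1 \\ 0 & 0 & 1 & -2 \end{pmatrix},
\]

and the 3 × 3 minor on the columns {e_1, e_2, (−1, −1, −2)^T} has determinant −2, detecting the µ_2-singularity at the fixed point [0, 0, 0, 1].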
We can show statement (1) in another way. Note that Λ_+ in (3.1) is a rational characteristic matrix corresponding to CP^{n+1}_a, where the orientation of CP^{n+1}_a is associated with the orientation of ∆^{n+1} ⊂ R^{n+1}; then the characteristic matrix of \overline{CP^{n+1}_a} is Λ_−. Since the characteristic vector is determined up to sign, CP^{n+1}_a # \overline{CP^{n+1}_a} is equivalent to M_{a,0}.

Now let us prove statement (2). Let r be the number of nonzero components of r. Then, as in (2.4), the characteristic function of M_{2,r} takes the normal form of Section 2 with a = 2. We note that its orbit space ∆^n × ∆^1 can be identified with ∆^{n+1} # ∆^{n+1}, where # is a polytopal connected sum; the two subsets of facets coming from the two ends of ∆^1 become the sets of facets of the two copies of ∆^{n+1}. Hence, by an appropriate re-ordering of the facets of each simplex, M_{2,r} is equivalent to an equivariant connected sum of two weight-2 weighted projective spaces. Therefore, M_{2,r} is equivalent to either CP^{n+1}_2 # CP^{n+1}_2 or CP^{n+1}_2 # \overline{CP^{n+1}_2}.

Note that X is homotopy equivalent to the characteristic sub-orbifold Q_{n+1} = CP^n defined by z_{n+1} = 0 in CP^{n+1}_a, and so is Y. Since ∂D = S^{2n+1}/µ_a is a lens space, its cohomology is

(3.2) H^i(S^{2n+1}/µ_a; Z) ≅ Z for i = 0 or 2n + 1, Z/a for even i with 0 < i < 2n + 1, and 0 otherwise;

see [13] for the cohomology of lens spaces. Consider the Mayer-Vietoris sequence for the pair (X, Y):

(3.3) ··· → H^k(X ∪ Y) → H^k(X) ⊕ H^k(Y) → H^k(X ∩ Y) → H^{k+1}(X ∪ Y) → ···.

Since H^{odd}(CP^n) = 0 and H^{even}(CP^n) = Z, from (3.2) and (3.3) we can determine the cohomology of M_{a,0} and M_{2,(b,0,...,0)}. Hence, by using the cohomology formula (2.5), we compute their cohomology rings; writing u and v for the generators coming from X and Y, (3.3) shows that the images of ax_1 + x_2 and x_2 are au and av, respectively.

Proof. If n = 1, then CP^2_a # \overline{CP^2_a} is a Hirzebruch surface, and CP^2_2 # CP^2_2 is diffeomorphic to CP^2 # CP^2. By [2] or [17], all ring automorphisms of their cohomology rings are realizable by diffeomorphisms. From now on, let us assume that n > 1.
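Specializing (3.2) to a = 2 and n = 2 (a worked instance added for illustration), the boundary lens space S^5/µ_2 has

\[
H^i(S^5/\mu_2;\mathbb{Z}) \;\cong\; \mathbb{Z},\; 0,\; \mathbb{Z}/2,\; 0,\; \mathbb{Z}/2,\; \mathbb{Z} \qquad (i = 0, 1, \ldots, 5),
\]

and it is this a-torsion in the even degrees that forces the images of ax_1 + x_2 and x_2 in (3.3) to be divisible by a.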
We first compute the ring automorphism groups of H^*(CP^{n+1}_a # CP^{n+1}_a) and H^*(CP^{n+1}_a # \overline{CP^{n+1}_a}) as subgroups of GL(2, Z) (see Remark 2.3). In each case, since x_2(ax_1 + x_2) = 0 is the only relation, up to scalar multiplication, expressing that a product of two degree-two elements is zero, an automorphism must send the set {x_2, ax_1 + x_2} to itself up to sign. Hence, there are at most 8 automorphisms.
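To make the bound explicit (a short verification added here, using only the relation x_2(ax_1 + x_2) = 0), write ε, δ ∈ {±1}. If ϕ(x_2) = εx_2 and ϕ(ax_1 + x_2) = δ(ax_1 + x_2), then

\[
a\,\varphi(x_1) = \delta(ax_1 + x_2) - \varepsilon x_2, \qquad \varphi(x_1) = \delta x_1 + \tfrac{\delta - \varepsilon}{a}\,x_2,
\]

which is integral if and only if a divides δ − ε; if instead ϕ(x_2) = ε(ax_1 + x_2) and ϕ(ax_1 + x_2) = δx_2, then ϕ(x_1) = −εx_1 + ((δ − ε)/a)x_2. Each of the 2 × 2 × 2 sign and swap choices yields at most one candidate, giving at most 8 automorphisms; for a = 2 the divisibility 2 | (δ − ε) always holds.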
Note that the cohomology rings of CP^{n+1}_a # CP^{n+1}_a and CP^{n+1}_a # \overline{CP^{n+1}_a} behave differently according to the parity of n. If n is odd, we consider an involution s on the connected sum exchanging the two summands.
Two-stage generalized Bott manifolds
In this section, we restrict our attention to two-stage generalized Bott manifolds and show that any cohomology ring automorphism of a two-stage generalized Bott manifold is realizable by a diffeomorphism. In order to do that, we prepare two lemmas. Proof. Since any cohomology ring automorphism of a product of complex projective spaces is induced by a diffeomorphism [6], we may assume that P(E) is a non-trivial fiber bundle.
Note that P (E) is a Hirzebruch surface if n = m = 1. Any cohomology ring automorphism of a Hirzebruch surface is realizable by a diffeomorphism by [2] or [17].
Let ϕ be a ring automorphism of H * (P (E)). By the above claim, ϕ(x 1 ) = ±x 1 . Since any automorphism of H * (CP n ) is induced by a diffeomorphism, we may assume that ϕ(x 1 ) = x 1 .
(I) We assume that ϕ(x_2) = x_2 + Ax_1 for some A ∈ Z. Since ϕ(x_2∏_{j=1}^{m}(a_jx_1 + x_2)) = 0 in H^*(P(E)), we have

(4.1) (x_2 + Ax_1)∏_{j=1}^{m}((a_j + A)x_1 + x_2) = x_2∏_{j=1}^{m}(a_jx_1 + x_2)

as polynomials in the variable x_2 with coefficients in H^*(CP^n). Comparing the coefficients of x_2^m on both sides of (4.1), one can see that A = 0. Hence, ϕ is the identity, which is obviously induced by the identity map of P(E).
(II) Now assume that ϕ(x_2) = −x_2 + Ax_1 for some A ∈ Z. Applying ϕ to the relation x_2∏_{j=1}^{m}(a_jx_1 + x_2) = 0 gives

(4.2) (−x_2 + Ax_1)∏_{j=1}^{m}((a_j + A)x_1 − x_2) = 0

as a polynomial. By substituting x_2 = 1 into (4.2), we obtain a relation (4.3) between total Chern classes in H^*(CP^n). Since E possesses a Hermitian metric, its dual bundle E^* = Hom(E, C) is canonically isomorphic to the conjugate bundle C ⊕ γ^{−a_1} ⊕ ··· ⊕ γ^{−a_m}. By Lemma 4.1, the equation (4.3) implies that E is isomorphic to E^* ⊗ γ^{−A}. Let h̄: P(E) → P(E^*) be the bundle isomorphism of fiber bundles induced by the map h: E → E^* given by h(u) = ⟨u, ·⟩, where ⟨ , ⟩ is a Hermitian metric on E. If y is the negative of the first Chern class of the tautological line bundle over P(E^*), then h̄^*(y) = −x_2. For each q ∈ CP^n, we choose a non-zero vector v_q in the fiber of γ^{−A} over q and define a map g: E^* → E^* ⊗ γ^{−A} by g(u_q) = u_q ⊗ v_q, where u_q is an element of the fiber of E^* over q. The map g depends on the choice of the v_q, but the induced map ḡ: P(E^*) → P(E^* ⊗ γ^{−A}) does not, because γ^{−A} is a line bundle. The map ḡ: P(E^*) → P(E^* ⊗ γ^{−A}) = P(E) preserves the complex structure on each fiber; therefore, it induces a complex vector bundle isomorphism T^fP(E^*) → T^fP(E^* ⊗ γ^{−A}) between the tangent bundles along the fibers. According to the Borel-Hirzebruch formula, their total Chern classes are, respectively,

(1 + y)(1 − a_1x_1 + y) ··· (1 − a_mx_1 + y)

and

(1 − Ax_1 + x_2)(1 − Ax_1 − a_1x_1 + x_2) ··· (1 − Ax_1 − a_mx_1 + x_2).
By (I) and (II), any ring automorphism ϕ is induced by a diffeomorphism.
5. Quasitoric manifolds over ∆^n × ∆^m
In this section, we show that any cohomology ring automorphism of a quasitoric manifold with second Betti number 2 is realizable by a homeomorphism. As we have seen in the previous section, any cohomology ring automorphism of a two-stage generalized Bott manifold is realizable by a diffeomorphism. Hence, we only need to consider quasitoric manifolds over ∆^n × ∆^m which are not equivalent to a two-stage generalized Bott manifold.
Proof. Before the proof, the authors would like to note that the detailed computation of Aut(H^*(M_{s,r})) can be found in the proof of Theorem 6.2 in [7]. Although it is one of the key parts of this proof, in order to avoid repeating an elementary computation, we use the result here without detailed calculation.
If n = 1 or m = 1, any automorphism of H * (M s,r ) is realizable by a homeomorphism by Corollary 3.7. Now, assume that both n and m are greater than 1.
Then f preserves the orbits of the K_{s,r}-action defined in Section 2. Hence, f induces a homeomorphism from M_{s,r} = (S^{2n+1} × S^{2m+1})/K_{s,r} to itself. Let f̄ be the homeomorphism induced from f. Then f̄^* is represented by the matrix (−1, 0; 0, −1), and hence {f̄^*} generates Aut(H^*(M_{s,r})).
"year": 2013,
"sha1": "003359d453d2f75cafeb6140edb6c4fdd3a445a7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1302.0133",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "003359d453d2f75cafeb6140edb6c4fdd3a445a7",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Chirality Construction from Preferred π-π Stacks of Achiral Azobenzene Units in Polymer: Chiral Induction, Transfer and Memory
The induction of supramolecular chirality from achiral polymers has been widely investigated in composite systems consisting of a chiral guest, an achiral host, and solvents. To further study and understand the process of chirality transfer from a chiral solvent or chiral molecules to an achiral polymer backbone or side-chain units, an alternative is to reduce the number of components in the supramolecular assembled system. Herein, achiral side-chain azobenzene (Azo)-containing polymers, poly(6-[4-(4-methoxyphenylazo)phenoxy]hexyl methacrylate) (PAzoMA), with different number-average molecular weights (M_n), were synthesized by atom transfer radical polymerization (ATRP). Preferred chirality from supramolecularly assembled trans-Azo units of the PAzoMAs was successfully induced solely by neat limonene. The aggregates of the polymers in limonene solution were characterized by circular dichroism (CD), UV-vis spectroscopy, and dynamic light scattering (DLS) at different temperatures. Temperature plays an important role in the course of the chiral induction. Meanwhile, supramolecular chirality can be constructed in solid films of the achiral side-chain Azo-containing polymers triggered by limonene vapors. It can be erased after heating above the glass transition temperature (T_g) of the polymer and recovered after cooling down in the limonene vapors, so a chiroptical switch can be built by alternately changing the temperature. The solid films show good chiral memory behavior. The current results will facilitate study of the mechanism of chirality transfer induced by chiral solvents and improve potential application possibilities for chiral film materials.
Introduction
Chirality is a common phenomenon in nature and in organisms, found for instance in amino acids, spiral vines, conches, and even the galactic system [1]. The study of chirality in terms of elemental distribution and molecular structure has always attracted the curiosity of scientists [2]. At the supramolecular level, supramolecular chirality refers to chirality generated by non-covalent interactions between building units, such as hydrogen bonding [3], π-π stacking [4,5], electrostatic interactions [6], and host-guest interactions [7]. The production of supramolecular chirality from achiral polymer building blocks is of great significance for both theoretical research and practical application. Many strategies, such as chiral liquid-crystal fields [8], chiral solvents [9-11], circularly polarized light (CPL) [12-14], interfacial interactions [15], and gelation [16], have been reported to produce supramolecular chirality from achiral polymer systems.
Chiral induction of achiral substances by a chiral solvent is a flexible and effective method suitable for small organic molecules [17], oligomers [18], and polymer systems [19,20]. The first example of chiral-solvation-induced chirality in an achiral polymer was demonstrated by Green et al. [21]: dynamically interconverting helical senses were observed in the circular dichroism (CD) spectra of achiral polyisocyanates dissolved in several chiral chlorinated solvents. Motivated by this pioneering study, optically active polymers or polymer aggregates, such as polysilanes [22], polyfluorenes [23-25], polyacetylenes [26], and azobenzene (Azo)-containing polymers [9,27], have been shown to acquire chirality in the aggregated state through chiral solvation. Azo-containing polymers are very promising materials in the field of photoswitchable molecular systems and liquid crystalline materials. Due to the dramatic changes in the polarity, shape, and size of the Azo units during photoisomerization, Azo-containing polymers show unique chiroptical switching behavior, observable by UV-vis and circular dichroism (CD) spectra [28]. In our group, we have introduced Azo groups into the main chain [9] and side chain [27] of achiral polymers, and successively achieved supramolecular chirality from the achiral building blocks induced by chiral limonene. However, the induction and assembly process of the main-chain Azo-containing polymers requires a complex system consisting of a good solvent, a chiral solvent, and a weak solvent. Recently, we further simplified the assembled components of a side-chain Azo-containing polymer by using limonene simultaneously as the weak and chiral solvent [27,29,30]. On the one hand, although much effort has been devoted to simplifying the assembled components and studying the inducing mechanism, it remains an open question whether the construction of supramolecular chirality in an achiral side-chain Azo-containing polymer can be achieved by pure limonene under controlled temperature. A neat chiral solvent avoids interactions between other solvents (good and poor) and the building blocks.
On the other hand, chirality transfer and amplification phenomena have been widely observed in polymer solutions [22,31,32]. However, chiral transfer, chiral switching, and memory of supramolecular chirality in solid polymer films are more promising in practical applications, such as chiral nonlinear optics and data storage. Guerra et al. [33] showed that chiral s-PS (syndiotactic polystyrene) films can be obtained through induction by chiral limonene molecules, which are then replaced with their achiral counterparts. Circularly polarized light (CPL) has also been demonstrated as a method for the chiral induction [34] and chiral regulation [13] of polymer films. Wu et al. reported that the photoinduced circular dichroism of polymer liquid crystals in thin films can be erased by heating the films above the clearing temperature or by annealing the films in the liquid-crystalline phase [35]. However, chiral induction of side-chain Azo-containing polymer films by chiral solvent vapors, and modulation of the chirality by variable temperature, have rarely been reported.
In this work, we present the construction of supramolecular chirality from the aggregation of an achiral side-chain Azo-containing polymer in the solution state induced by pure limonene, and in solid films triggered by chiral limonene vapor. Through a heating-cooling treatment of the polymer solution, chiral aggregation of the achiral Azo-containing polymers can be effectively generated. In the solid film, chiral limonene vapors can also induce well-organized stacking of the Azo units, forming supramolecular chirality. Meanwhile, the CD signal of the polymer films can be readily erased after heating above the glass transition temperature (T_g) of the polymer and recovered after cooling it down in the chiral solvent vapors; this chiroptical switching can be repeated effectively at least five times. Furthermore, the supramolecular chirality of the polymer film can be memorized well.
In this work, we present the construction of supramolecular chirality from the aggregation of an achiral side-chain Azo-containing polymer in the solution state induced by pure limonene, and in the solid films triggered by chiral limonene vapor. Through a heating-cooling treatment of the polymer solution, the chiral aggregation of achiral Azo-containing polymers can be effectively generated. In the solid film, the chiral limonene vapors can also induce well-organized stacks of Azo units, forming the supramolecular chirality. Meanwhile, the CD signal of the polymer films can be readily erased after heating above the glass transition temperature (T g ) of the polymer, and recovered after cooling it down in the chiral solvent vapors. This regular pattern of chiroptical switch can be repeated effectively at least five times. Besides, the supramolecular chirality of the polymer film can be memorized well. phenoxy]hexyl methacrylate (AzoMA 6 ) were synthesized as described previously [36], which was confirmed by the 1 H and 13 C NMR data ( Figure S1). 1 AzoMA (0.5 g, 1.26 mmol), EBiB (12.3 mg, 0.063 mmol), PMDETA (10.94 mg, 0.063 mmol), CuBr (9.05 mg, 0.063 mmol), and THF (1.5 mL) were added to a 5-mL ampoule. Then, the ampoule was flame-sealed after being deoxygenated with three standard freeze-pump-thaw cycles. The polymerization under argon atmosphere was carried out at 70 • C for 1.5 h. The mixture was diluted with 1 mL of THF, passing a column of neutral Al 2 O 3 , and then precipitated into an excess of methanol (50 mL) twice. After collection by filtration, the polymer was dried in a vacuum oven overnight at 30 • C (0.348 g, 69.5%). Another polymer with different M n was prepared by adjusting the molar ratio of the monomer and initiator with the similar procedures. The molecular weights and molecular weight distributions of the obtained two polymers are listed in Table 1.
Preparation of the Optically Active Polymer Aggregates in Solution
A small amount of the polymer (0.1 mg) was added to 3 mL of (R)-(+)-limonene (1R) in a quartz cell. The suspension was stirred at 100 °C for 1 h to ensure that the polymer dissolved completely. The concentration of polymer repeating units was 8.42 × 10⁻⁵ mol L⁻¹. The optically active polymer aggregates formed after the solution was cooled to room temperature. The other polymer aggregates were prepared in a similar way. This yellowish turbid solution of PAzoMA aggregates was used for CD/UV-vis measurements.
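The quoted repeat-unit concentration follows directly from the mass and volume; the snippet below reproduces it, again assuming a repeat-unit molar mass of about 396.5 g/mol for AzoMA (our assumption, not a value given in the text):

```python
# Repeat-unit concentration of PAzoMA in limonene (hypothetical check).
mass_g = 0.1e-3          # 0.1 mg of polymer
M_repeat = 396.48        # assumed molar mass of one AzoMA repeat unit, g/mol
volume_L = 3e-3          # 3 mL of limonene

c = mass_g / M_repeat / volume_L
print(f"{c:.2e} mol/L")  # ~8.41e-05 mol/L, matching the reported 8.42e-05
```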
Preparation of the Polymer Solid Films
A 10 mg/mL polymer solution was obtained by dissolving 10 mg of the polymer in 1 mL of CHCl₃. A thin film of PAzoMA was prepared by spin-coating about 0.1 mL of the polymer solution onto a clean quartz plate at a speed of 0.5 rpm for 6 s and then 1.9 rpm for 20 s. After being dried and annealed under vacuum at 90 °C for 12 h to drive off residual solvent, the film was stored in darkness for further study.
Chiral Induction Process of the Films
The films obtained by the above method were measured by UV-vis and CD spectroscopy. First, a film was fixed in the cell and suspended above the surface of limonene (0.1 mL). As the temperature increased, limonene vapor was produced, and the optically active polymer film was obtained at the same time. The results of the chiral assembly were recorded by UV-vis and CD spectra as the temperature was alternately changed between 70 °C and 60 °C.
Characterization
Gel permeation chromatography (GPC) measurements were conducted on a TOSOH HLC-8320 gel permeation chromatograph (Tokyo, Japan) equipped with refractive index and UV detectors, using two TSKgel SuperMultiporeHZ-N columns (4.6 × 150 mm, 3.0 µm bead size; Tokyo, Japan) arranged in series, which separate polymers in the molecular weight range of 500-190k Da. THF was used as the eluent at a flow rate of 0.35 mL/min at 40 °C. The number-average molecular weight (M_n) and molecular weight distribution (M_w/M_n) of the samples were calculated against poly(methyl methacrylate) (PMMA) standards. ¹H NMR spectra of the polymers were recorded on a Bruker nuclear magnetic resonance instrument (300 MHz, Bruker, Karlsruhe, Germany) using CDCl₃ as the solvent and tetramethylsilane (TMS) as the internal standard at 25 °C. UV-vis spectra were recorded on a UV-2600 spectrophotometer (Shimadzu, Nakagyo-ku, Kyoto, Japan). CD spectra were recorded on a JASCO J-815 spectropolarimeter equipped with a Peltier-controlled housing unit, using an SQ-grade cuvette, a single accumulation, a path length of 10 mm, a bandwidth of 2 nm, a scanning rate of 200 nm min⁻¹, and a response time of 1 s. The samples were measured at different temperatures. The magnitude of the circular polarization in the ground state was defined as g_CD = 2 × (ε_L − ε_R)/(ε_L + ε_R), where ε_L and ε_R denote the extinction coefficients for left and right circularly polarized light, respectively. Experimentally, the g_CD value was obtained as Δε/ε = [ellipticity/32,980]/absorbance at the CD extremum. Elemental analyses (C, H, and N) were performed with an EA1110 CHNO-S instrument. The thermal behavior and glass transition temperatures of PAzoMA were measured using a TA-Q100 DSC instrument (New Castle, DE, USA). Dynamic light scattering (DLS) measurements were performed with a Zetasizer Nano ZS instrument (Brookhaven, Holtsville, TX, USA) at different temperatures.
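The experimental g_CD definition above is a one-line computation; the sketch below implements it exactly as stated (ellipticity divided by 32,980, then by the absorbance). The example numbers are placeholders for illustration, not measured values from this work:

```python
def g_cd(ellipticity_mdeg: float, absorbance: float) -> float:
    """Dissymmetry factor g_CD = [ellipticity/32980]/absorbance, per the text."""
    return (ellipticity_mdeg / 32980.0) / absorbance

# Placeholder values, for illustration only:
print(g_cd(ellipticity_mdeg=15.0, absorbance=0.5))  # ~9.1e-4
```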
Synthesis and Characterization of Side-Chain Azo-Containing Polymers
Homopolymers of AzoMA (PAzoMA) (Scheme 1) were prepared by atom transfer radical polymerization (ATRP) [36] with controlled molecular weights (M_n) and relatively low molecular weight distributions (M_w/M_n) (Figure S2). As presented in Table 1, side-chain Azo-containing polymers with different M_n (7400 g mol⁻¹ and 12,400 g mol⁻¹) and relatively low M_w/M_n (1.14 and 1.20) were successfully prepared by changing the molar ratio of monomer to initiator during the polymerization.
The Chiral Aggregation of the Azo-Containing Polymer in Neat Limonene Solution
As previously reported, a good solvent, a chiral solvent, and a poor solvent are typically required to produce supramolecular chiral aggregation from achiral conjugated polymers [27,29]. Chiral-solvation-induced chirality in a single solvent can greatly simplify the research system. In this work, limonene (1R and 1S in Scheme 1) simultaneously served as the chiral solvent, poor solvent, and good solvent. The polymer solid dissolved completely in limonene after stirring for 1 h at relatively high temperature (100 °C). Supramolecular aggregation of the polymer then occurred upon cooling the solution. Temperature is a very important factor in the supramolecular assembly of Azo-containing polymers. When the temperature is higher than 70 °C, the well-dissolved polymer chains in limonene show only relatively weak π-π stacking of the Azo units in the polymer side chains; optically active polymer aggregates are produced by stronger π-π stacking of the Azo units once the temperature drops below 70 °C. These results are revealed by DLS, UV-vis, and CD spectra.
For the side-chain Azo-containing polymers (PAzoMA₁ and PAzoMA₂), the UV-vis spectra (Figure 1b,d and Figure S3) of the polymer aggregates in limonene (1R or 1S) consist of two absorption bands. The bands from 320 nm to 400 nm are attributed to the π-π* electronic transition of trans-Azo, and those from 400 nm to 500 nm to the n-π* electronic transition of cis-Azo. With decreasing temperature, the absorption of the π-π* band became lower and broader, indicating strong π-π stacking of the Azo units in the polymer side chains.
Intense, mirror-image Cotton effects were found in the CD spectra. The CD signals correspond to the π-π* electronic transition of trans-Azo, demonstrating that supramolecular chirality was successfully introduced into the side chains of the Azo-containing polymers by the neat chiral solvent (limonene). As shown in Figure 1a,c,e,f, with the temperature decreasing from 70 °C to 20 °C, the CD and g_CD values of the aggregates gradually increased. The relatively poor solubility of the polymers in limonene increased the degree of aggregation, which resulted in increasing CD signal intensity. These results confirm our conjecture. Compared with chiral assembly in mixed solvents [27], the maximum CD and g_CD values of the aggregates in neat limonene are much higher. Furthermore, PAzoMA₂, with the higher molecular weight, gives higher absolute maximum CD and g_CD values (Figure 1e,f), resulting from a higher degree of chiral aggregation. The stronger chiral aggregation of PAzoMA₂ compared with PAzoMA₁ can also be observed in the UV-vis spectra (Figure 1b,d): the lower absorption intensity and broader spectra demonstrate stronger aggregation by π-π stacking of the Azo units in the polymer side chains. The above results are also supported by dynamic light scattering (DLS): with the temperature decreasing from 70 °C to 20 °C, the size of the polymer aggregates grows, reaching up to 810 nm in R-limonene and up to 784 nm in S-limonene (Figure 2).
Supramolecular Chirality of Polymer Solid Films
The liquid-crystalline phase transitions of side-chain Azo-containing polymers have been intensively investigated by many groups [37-39]. According to the DSC results (Figure S5), the glass transition temperature is around 65 °C for both PAzoMA₁ and PAzoMA₂. For the polymer solid films, a small amount of 1R or 1S limonene (0.1 mL) was added to the quartz cell; the polymer film was then fixed in the cell and suspended above the limonene surface. With 1R, an intense negative CD signal related to the π-π* electronic transition of the trans-Azo chromophore was observed at 60 °C, indicating that an optically active polymer film was successfully produced by the limonene vapor. The absorption of the π-π* band began to decrease in the UV-vis spectra (Figure S6) when the temperature reached 70 °C, above T_g, and the chiral signal in the CD spectra disappeared immediately. The reason may be that the polymer chain segments begin to move randomly on heating above T_g, destroying the well-organized helical π-π stacking of the Azo units induced by the limonene vapor. Similar results were found in thin films of achiral liquid-crystal polymers induced by CPL [35]. Interestingly, when the polymer films were cooled to 60 °C again in the chiral vapor environment, the chiral aggregates re-formed, and CD signals of similar intensity to before were observed. We thus built a chiral switch in the polymer solid film by alternately changing the temperature between 70 °C and 60 °C. Five switching cycles were tested, and no obvious decline of the maximum absolute CD amplitude was found, as presented in Figure 3. Nearly mirror-image CD spectra of comparable intensity were obtained when 1R was replaced by 1S.
Chiral Memory Property of Azo-Containing Polymer Film
Due to the strong molecular interactions among polymer chains in the solid state, polymer films generally exhibit superior chiral memory performance [40]. In this case, supramolecular chirality was first induced in thin films of the achiral side-chain Azo-containing polymers by limonene vapors. After that, the residual limonene molecules were removed at 40 °C under vacuum for 24 h, and the films were then stored in a fume hood for several months. The content of limonene was measured by ¹H NMR, as described in Figure S7. There was a slight attenuation of the CD signals of the PAzoMA films at the beginning, after which the signals maintained a stable intensity for several months, as shown in Figure 4.
Chiral Amplification
By varying the enantiomeric excess (ee) of the chiral limonene, the possibility of chiral amplification in the aggregation of the Azo units was investigated. Keeping the temperature of the system constant (60 °C), the linear plots of the maximum CD and g_CD values (Figure 5) demonstrated that the chiral aggregation of the Azo units was linearly controlled by the molar ratio of the enantiomers (1R/1S). No obvious chiral amplification occurred when changing the enantiomeric excess of the chiral solvent. A similar result was found for the polymer solid films: the chiral signals of the aggregates in the thin films can be adjusted by changing the ee of the chiral limonene vapors. Similar results were also reported for the limonene-induced supramolecular chirality of π-conjugated main-chain polymers [9,40] and side-chain polymers [27,29].
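Since the response is reported to scale linearly with the enantiomer ratio, a minimal model of the expected signal is just linear scaling between the pure-1R and pure-1S values. The sketch below assumes exactly this linearity (no amplification); the endpoint amplitude is a placeholder, not a measured value:

```python
def expected_cd(ee: float, cd_pure_1R: float) -> float:
    """Linear (no-amplification) model: CD scales with ee = (R - S)/(R + S).

    ee = +1 is neat 1R, ee = -1 is neat 1S; the mirror symmetry of the
    spectra means CD(-ee) = -CD(ee)."""
    return ee * cd_pure_1R

# Placeholder endpoint amplitude (mdeg), for illustration only:
for ee in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(ee, expected_cd(ee, cd_pure_1R=-30.0))
```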
Conclusions
In summary, supramolecular chirality was successfully introduced into achiral side-chain Azo-containing polymers by a heating-cooling treatment in pure limonene solution. With decreasing temperature, the chiral signals of the aggregates tended to increase. In addition, supramolecular chirality could be induced in the polymer solid films by limonene vapors. The chiral supramolecular structure can be destroyed by heating the film above the glass transition temperature (T_g) of the polymer, owing to the irregular movements of the polymer chains, and it can be recovered by cooling the film down in a limonene vapor environment. This reversible chiral-achiral switching process can be repeated more than five times. Furthermore, the supramolecular chirality can be memorized well in the solid state. We thus achieved supramolecular chirality both from achiral side-chain Azo-containing polymers in neat chiral limonene and in thin films induced by chiral limonene vapor.
"year": 2018,
"sha1": "d8b2c58da3f90398b8d214da100f3b9011dc214d",
"oa_license": "CCBY",
"oa_url": "https://res.mdpi.com/d_attachment/polymers/polymers-10-00612/article_deploy/polymers-10-00612.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d8b2c58da3f90398b8d214da100f3b9011dc214d",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
The G1 phase optical reporter serves as a sensor of CDK4/6 inhibition in vivo
Visualization of the G1 phase of the cell cycle for monitoring the early response to cell-cycle-specific drugs remains challenging. In this study, we developed genetically engineered bioluminescent reporters by fusing full-length cyclin E upstream of luciferase (named CycE-Luc and CycE-Luc2). Next, the HeLa cell line or an ER-positive breast cancer cell line, MCF-7, was transfected with these reporters. In cellular assays, the bioluminescent signals of CycE-Luc and CycE-Luc2 accumulated in the G1 phase and decreased after exit from the G1 phase. Expression of the CycE-Luc and CycE-Luc2 fusion proteins was regulated in a cell-cycle-dependent manner, mediated by proteasomal ubiquitination and degradation. Our in vitro and in vivo experiments confirmed that cell cycle arrest by anti-cancer agents (palbociclib or 5-FU) could be monitored quantitatively and dynamically by bioluminescent imaging of these reporters in a real-time and non-invasive manner. Thus, these optical reporters reflect G1-phase alterations of the cell cycle and might become a clinically translatable approach for predicting and monitoring the response to palbociclib in patients with ER-positive breast cancer.
Introduction
Extensive clinical evidence supports targeting the cell cycle as a therapeutic approach for cancer. Discovery of cell-cycle-specific anti-cancer drugs involves several major processes, including target identification and validation, high-throughput screening, lead optimization, and finally preclinical and clinical trials [1]. One of the key objectives of drug discovery is to evaluate whether a compound engages its target, i.e., whether it alters downstream molecular pathways in cultured cells and living animals. Traditionally, in animal studies, target validation is performed invasively by immunohistochemistry and/or molecular profiling after dissection of the targeted organs/tissues [2]. In contrast, non-invasive reporter imaging approaches not only provide a longitudinal and temporal pharmacodynamic readout in the same group of animals but also measure real-time dynamic changes in drug targets. Among MR imaging, nuclear imaging, and optical imaging technologies [3], optical imaging is based on quantitative or qualitative changes in light emission by fluorescent or bioluminescent proteins.
Many pharmaceutical companies are currently developing cell-cycle-specific drugs as potential anti-cancer agents, such as 5-FU and potent, selective CDK (2, 4/6, 7, and 9) inhibitors. Treatment of cells with 5-FU leads to the accumulation of cells in S phase, whereas treatment with CDK2 or CDK4/6 inhibitors prevents phosphorylation of the tumor suppressor RB, thereby arresting cancer cells in the G1 phase. Recently, owing to striking clinical trial results demonstrating substantial improvements in progression-free survival [4-8], CDK4/6 inhibitors have transitioned rapidly from preclinical studies to the clinical arena, and three of them (palbociclib [6], abemaciclib [9,10], and ribociclib [11]) have already been approved for the treatment of advanced, estrogen receptor (ER)-positive breast cancer. In the clinic, the effect of cell-cycle-specific anti-cancer drugs on tumor burden has been assessed by the Response Evaluation Criteria in Solid Tumors (RECIST) [12]. However, a reduction in tumor size can take several weeks to become manifest and, in some cases, may not occur at all. Thus, efforts have focused on developing molecular imaging methods to distinguish responders from non-responders at early time points.
The cell cycle is regulated by both intracellular and extracellular signals. The transition from M to G1 phase during cell division can be identified by morphological changes, while the G1/S transition is usually detected by nuclear bromodeoxyuridine (BrdU) staining.
Additionally, fluorescent protein engineering and live-cell imaging techniques have fueled the concurrent development and application of genetically encoded fluorescent reporters for tracking the different phases of the cell cycle. For example, fluorescent indicators for S phase and the subsequent transition to G2 in live cells have been developed by fusing a fluorescent protein to proliferating cell nuclear antigen (PCNA) [13] or the C terminus of helicase B (GE Healthcare) [14]. Moreover, in the widely used Fucci reporter system, G1-phase cells are labeled with a fluorescent protein fused to Cdt1, while cells in S, G2, or M phase are labeled with another fluorescent protein fused to Geminin [15,16]. However, since identifying cell-cycle transitions requires detecting subtle and often minute changes in the distribution pattern and intensity of fluorescence signals, these markers cannot track phase transitions with high contrast. By contrast, bioluminescent imaging systems are based on luciferase, which modifies its substrate in vivo and in so doing produces light that can be detected using sensitive cooled charge-coupled device cameras. An advantage of bioluminescence over fluorescence imaging is that its detection limit is very low (10⁻¹⁷-10⁻¹⁵ M), with essentially no imaging background. Furthermore, this technique offers a non-invasive way to monitor biological events rapidly and in real time in living cells [17] and animals [18,19]. Thus, as an alternative to fluorescence assays, bioluminescence imaging avoids the problem of tissue autofluorescence, resulting in a high signal-to-noise ratio, and provides complementary advantages for preclinical applications in vivo.
The firefly luciferase (Luc) protein is the most widely used bioluminescent imaging reporter for monitoring the status of proteins such as p53 [20], NFκB activity [18,21], or proteasome inhibition [22][23][24]. Key proteins that drive cell cycle progression have been selected and developed as optical reporters for different cell cycle phases based on their specificity, sensitivity, and versatility, in combination with the non-invasive and non-destructive nature of bioluminescent imaging and the power of genetic encoding. These proteins are mainly being studied to determine whether they are uniquely suited for monitoring the therapeutic response. So far, our research group has developed a series of bioluminescence reporters to visualize cell cycle changes by detecting the accumulation of cell cycle proteins, and has used these tools to monitor the treatment response to cell cycle-specific drugs. For example, p27-Luc was downregulated when the cell cycle was arrested in late G1 or S phase, and non-invasive bioluminescent imaging showed its accumulation after treatment with CDK2 inhibitors (flavopiridol and R-roscovitine) [25]. Similarly, cyclin A2-luciferase was used to monitor S-phase arrest by the drug 10-hydroxycamptothecin (HCPT) [26]. Thus, bioluminescence imaging reporters are helping to bridge the gap between our understanding of critical biologic events and the clinical application of specific cell cycle inhibitors.
Cyclin E is expressed from mid G1 through late G1 phase and is degraded in S phase by proteasomal, ubiquitin-dependent proteolysis. Cyclin E promotes cell cycle progression and drives cells into S phase by activating CDK2. In the present study, we harnessed this feature of cyclin E to develop cyclin E-Luc fusion proteins as genetically encoded indicators of the G1 phase. The application of a G1-phase reporter for monitoring or predicting the early response to cell cycle-specific drugs might help to identify patients who are most likely to benefit from these drugs alone or in combination therapy. The preclinical study described herein was designed to test the feasibility of this approach.
Construction of plasmids
CycE-Luc plasmid: Human full-length cyclin E was amplified by PCR using a primer pair (Forward: 5'-CAGGATCCCCAAGCTTCCATGAAGGAGGACGGCGGCGC-3'; Reverse: 5'-CCGGAATGCCAAGCTTGCGCCATTTCCGGCCCGCTGC-3'). The PCR fragment of cyclin E was digested with Hind III restriction enzyme and ligated into the expression plasmid 10-4 cyclin E promoter (Addgene, Sidney St., Cambridge, MA, #8458), in which firefly luciferase cDNA is under the control of the cyclin E promoter. The cloned expression construct for the cyclin E-luciferase fusion protein was verified by automated DNA sequencing (Sangon Co. Ltd, Shanghai, China).
CycE-Luc2 plasmid:
Similarly, human full-length cyclin E was amplified by PCR using the same forward and reverse primers as for the CycE-Luc plasmid. The CycE cDNA fragment was cloned into the pcDNA3.1(+)/Luc2=tdT vector (Addgene, Sidney St., Cambridge, MA), in which firefly luciferase 2 cDNA is under the control of the CMV promoter. The cloned expression construct for the cyclin E-luciferase 2 fusion protein was verified by automated DNA sequencing (Sangon Co. Ltd, Shanghai, China).
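Both constructs rely on Hind III sites introduced by the PCR primers. As a minimal illustrative sketch (plain Python, no external dependencies), one can verify that each primer carries the Hind III recognition sequence (AAGCTT) before ordering; the primer strings below are those listed above with internal spacing removed.

```python
# Check that each cloning primer contains the Hind III recognition site (AAGCTT).
HINDIII_SITE = "AAGCTT"

primers = {
    "CycE_forward": "CAGGATCCCCAAGCTTCCATGAAGGAGGACGGCGGCGC",
    "CycE_reverse": "CCGGAATGCCAAGCTTGCGCCATTTCCGGCCCGCTGC",
}

for name, seq in primers.items():
    pos = seq.find(HINDIII_SITE)
    if pos >= 0:
        print(f"{name}: Hind III site found at position {pos}")
    else:
        print(f"{name}: no Hind III site -- fragment cannot be cloned this way")
```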
Transfection of cyclin E and Luc fusion plasmids in cell lines
HeLa cells and MCF-7 cells were maintained in Dulbecco's Modified Eagle Medium supplemented with 10% fetal bovine serum and 1% penicillin/streptomycin. Plasmid transfections were performed with Lipofectamine 2000 (Invitrogen, Camarillo, USA) according to the manufacturer's instructions. To establish stable cell lines expressing cyclin E-Luc fusion proteins, cells were transfected with the expression vectors. Forty-eight hours later, cells were exposed to 1000 μg/mL G418 (Gibco, USA). After 14 days of selection, G418-resistant clones were randomly picked and cultured in medium containing 500 μg/mL G418. Single clonal cell lines were established by single-cell deposition into the wells of a 96-well plate.
siRNA transfection
Cells were seeded in 6-cm dishes and transfected with siRNA targeting Fbw7 using Lipofectamine 2000 transfection reagent (Invitrogen, Camarillo, USA) according to the manufacturer's recommendations. Scramble siRNA was used as a negative control for comparison. All siRNAs were transfected at a final concentration of 100 nM. The transfection reagent was replaced with complete medium after incubation for 6 h, and cells were collected 48 h post-transfection for western blot analysis.
Cell cycle synchronization
Nocodazole block: Cells were cultured in regular media for 24 h to achieve cell adherence before fresh culture medium containing 0.4 μg/mL nocodazole (Sigma, St. Louis, MO, USA) was added, and then cells were further incubated for 18 h.
Double thymidine block: Following overnight adherence, the culture medium on cells was replaced with fresh medium supplemented with 2 mmol/L thymidine (Sigma, St. Louis, MO, USA) and incubation continued for 14 hours. The monolayers were then washed with PBS and incubated for 12 hours in fresh medium. At this point, the medium was again replaced with fresh medium containing 2 mmol/L thymidine and incubation continued for a further 14 hours.
Cell cycle analysis
Cells were fixed with ice-cold 70% ethanol at 4 °C overnight in the dark, then centrifuged and incubated in 0.5 mL staining buffer containing 10 µL propidium iodide and 10 µL RNase A (YEASEN, Shanghai, China) for 30 min at 37 °C in the dark. DNA content was analyzed using a FACScan flow cytometer. DNA histograms were obtained using ModFit LT 3.1 (Verity Software House, Shanghai, China).
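ModFit fits a full cell-cycle model to the DNA histogram; the underlying idea, locating the G1 peak in the propidium iodide intensity distribution and assigning events relative to it (G2/M cells carry twice the G1 DNA content), can be sketched as follows. This is a crude illustration with arbitrary ±15% gates and synthetic data, not ModFit's actual algorithm.

```python
import numpy as np

def cell_cycle_fractions(pi_intensity, bins=256):
    """Crude G1/S/G2M gating from single-cell DNA-content values.

    Assumes the tallest histogram peak is G1 and that G2/M cells carry
    twice the G1 DNA content; the +/-15% windows are illustrative gates,
    not a fitted model.
    """
    counts, edges = np.histogram(pi_intensity, bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2
    g1_peak = centers[np.argmax(counts)]   # histogram mode = G1 position
    g2_peak = 2 * g1_peak                  # G2/M at twice the G1 DNA content
    x = np.asarray(pi_intensity)
    g1 = np.mean(np.abs(x - g1_peak) < 0.15 * g1_peak)
    g2m = np.mean(np.abs(x - g2_peak) < 0.15 * g2_peak)
    s = np.mean((x > 1.15 * g1_peak) & (x < 0.85 * g2_peak))
    return {"G1": g1, "S": s, "G2/M": g2m}

# Synthetic example: roughly 60% G1, 20% S, 20% G2/M
rng = np.random.default_rng(0)
data = np.concatenate([
    rng.normal(100, 5, 6000),        # G1 peak
    rng.uniform(115, 170, 2000),     # S-phase plateau
    rng.normal(200, 8, 2000),        # G2/M peak
])
print(cell_cycle_fractions(data))
```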
Bioluminescent imaging in vitro and in vivo
For in vitro studies, D-luciferin was added to the tissue culture medium to a final concentration of 150 µg/mL. Five minutes later, photons were counted using the IVIS imaging system (Xenogen) according to the manufacturer's instructions. Data were analyzed using Living Image software (version 4.5.5, Xenogen).
Western blot analysis
Proteins from cultured cells were extracted by lysis in Reporter Lysis Buffer (Promega) and quantitated by BCA protein assay (Pierce Biotechnology). Equal amounts of protein per lane (30 µg) were separated by 10% SDS-PAGE, transferred to nitrocellulose or polyvinylidene difluoride membranes, and immunoblotted with antibodies against cyclin E, β-actin, GAPDH and Fbw7. CycE-Luc and CycE-Luc2 fusion proteins were detected with an antibody against cyclin E (Santa Cruz), with β-actin as a loading control. Protein bands were developed using horseradish peroxidase-conjugated secondary antibodies (Santa Cruz Biotechnology) and visualized using ECL detection reagents (Pierce Biotechnology) according to the manufacturer's instructions.
Luciferase assay in vitro
Luciferase activity in cell extracts was assessed using the Luciferase Assay System (Promega) according to the manufacturer's instructions. Briefly, cells were lysed by rocking in passive lysis buffer (Promega) for 15 min at room temperature. For each assay, 10 µL of cell extract was used to measure luminescence intensity after administration of D-luciferin using a Lumat LB9507 luminometer (Berthold Technologies).
In vivo mouse imaging experiments
All experimental procedures with animals used in this study were approved by the Institutional Animal Care and Use Committee (IACUC) of Shantou University Medical College, China (SUMC2018-306), in accordance with the standards of the Association for Assessment and Accreditation of Laboratory Animal Care International (AAALAC). Approximately 1×10⁷ monoclonal HeLa cells producing CycE-Luc in 0.2 mL PBS were injected subcutaneously into the right flank of Nu/Nu nude mice under anesthesia (sodium pentobarbital). For in vivo studies, at 0, 24, and 48 h after intraperitoneal injection of PBS, MG132 (2 mg/kg per mouse), or 5-FU (25 mg/kg per mouse), mice were administered D-luciferin (150 mg/kg) and imaged using the Xenogen IVIS Lumina imaging system (Xenogen/Caliper) as described before.
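The weight-based doses above translate into small injection volumes at the bench. A minimal sketch of the arithmetic follows; the 15 mg/mL D-luciferin and 1 mg/mL MG132 stock concentrations are illustrative assumptions, not values stated in this protocol.

```python
def injection_volume_ml(dose_mg_per_kg, body_weight_g, stock_mg_per_ml):
    """Volume to inject for a weight-based dose."""
    dose_mg = dose_mg_per_kg * body_weight_g / 1000.0  # convert g to kg
    return dose_mg / stock_mg_per_ml

# D-luciferin, 150 mg/kg i.p. in a 20 g mouse from an assumed 15 mg/mL stock:
print(injection_volume_ml(150, 20, 15))   # 0.2 mL
# MG132, 2 mg/kg in a 20 g mouse from an assumed 1 mg/mL stock:
print(injection_volume_ml(2, 20, 1))      # 0.04 mL
```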
Alternatively, Nu/Nu nude mice were randomized and implanted s.c. with CycE-Luc2-expressing monoclonal cells in the right axilla. When tumors reached ~100 mm³, palbociclib treatment was initiated by daily oral gavage at 150 mg/kg for 8 days, and the mice were imaged twice a week after implantation using the Xenogen IVIS Lumina imaging system. At the end of the experiment, mice were necropsied and tumors were flash-frozen for molecular and histological studies. Data were analyzed using Living Image software.
Immunohistochemical analyses
Mice were euthanized and tumor tissues were collected 48 h after intraperitoneal administration of PBS, MG132 (2 mg/kg per mouse), or 5-FU (25 mg/kg per mouse). Tissue processing and immunohistochemical staining were performed as previously described [27]. The expression of cyclin E in tumors was detected using a goat polyclonal antibody against cyclin E (R&D Systems, 1:100 dilution).
Statistical analysis
Data were analyzed with SPSS 11.5 software. All values are presented as mean ± SD. Statistical significance among groups was calculated by one-way ANOVA with post hoc multiple comparisons; p<0.05 was considered statistically significant.
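For readers reproducing the analysis outside SPSS, the same test chain (one-way ANOVA followed by post hoc pairwise comparisons) can be sketched in Python; the group labels and luminescence values below are invented placeholders, and Tukey's HSD stands in for whichever post hoc test SPSS was configured to run.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Placeholder luminescence readings for three treatment groups.
pbs   = np.array([1.00, 1.10, 0.95, 1.05, 0.98])
mg132 = np.array([2.10, 1.95, 2.30, 2.05, 2.20])
fu5   = np.array([0.45, 0.50, 0.40, 0.55, 0.48])

# One-way ANOVA across the three groups.
f_stat, p_value = stats.f_oneway(pbs, mg132, fu5)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2g}")

# Tukey HSD post hoc pairwise comparisons at alpha = 0.05.
values = np.concatenate([pbs, mg132, fu5])
groups = ["PBS"] * 5 + ["MG132"] * 5 + ["5-FU"] * 5
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```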
Construction of CycE-Luc reporter
We fused luciferase to cyclin E, a tightly regulated cyclin that is expressed in the G1 phase and subsequently degraded during the G1/S transition, to develop bioluminescent probes indicating whether individual live cells are in the G1 phase. First, an expression vector encoding the fusion protein of cyclin E linked to firefly luciferase under the control of the cyclin E promoter was generated and named CycE-Luc (Figure 1A). As shown in Supplementary Figure
Expression of CycE-Luc fusion protein is regulated in a cell cycle-dependent manner
To investigate whether CycE-Luc is regulated in a cell cycle-dependent manner, HeLa cells stably expressing CycE-Luc were blocked with nocodazole and then released into the cell cycle. Cells were subsequently harvested for the luciferase activity assay or flow cytometric analysis at various post-release time points. Determination of the DNA content by flow cytometry revealed that the expression of the CycE-Luc fusion protein (Figure 1C) and luciferase activity (Figure 1D) were lowest at 0 h, rose to a maximum (18×10⁴) between 3 and 9 h after release, when cells entered the G1 phase, and then decreased as cells entered the S and G2/M phases (Figure 1E and 1F).
Establishment of CycE-Luc2 monoclonal HeLa cells for highly sensitive monitoring of the G1 phase
As shown in Figure 1, the luciferase signal of CycE-Luc was relatively low because of the weak cyclin E promoter activity. To increase the abundance of this reporter, we constructed another expression vector encoding a fusion protein of cyclin E linked to firefly luciferase 2 under the control of the CMV promoter, named CycE-Luc2 (Figure 2A and Supplementary Figure S1B). As displayed in Figure 2B, Western blotting showed that the CycE-Luc2 fusion protein ran at a size of 110 kDa, similar to CycE-Luc. Through 96-well plate screening, 4 stable CycE-Luc2-expressing HeLa cell clones were established, of which clone 2 showed a significantly higher bioluminescence signal intensity than the other 3 clones (Figure 2C and 2D). When CycE-Luc2 HeLa clone 2 was serially two-fold diluted from 16×10⁴ cells (Figure 2E), the luciferase activity decreased correspondingly, with a high correlation between the intensity of the bioluminescent signal and the cell number (R²=0.9925, Figure 2F). Importantly, the luciferase activity in CycE-Luc2 HeLa clone 2 cells increased from 3 to 9 h of release time after nocodazole blocking (Figure 2G). Flow cytometry analysis showed that the cells entered the G1 phase during this period (Figure 2H and 2I), suggesting the successful establishment of a sensitive CycE-Luc2 reporter for monitoring the G1 phase.
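The cell-number calibration behind the R² in Figure 2F is an ordinary least-squares fit of signal against cell number. A minimal sketch with scipy follows; the signal values are invented placeholders tracking the two-fold dilution series from 16×10⁴ cells.

```python
import numpy as np
from scipy import stats

# Two-fold dilution series starting at 16e4 cells (placeholder signals).
cells  = np.array([160000, 80000, 40000, 20000, 10000, 5000])
signal = np.array([3.20e6, 1.55e6, 8.10e5, 4.00e5, 2.10e5, 9.80e4])

fit = stats.linregress(cells, signal)
print(f"slope = {fit.slope:.3g} photons/cell, R^2 = {fit.rvalue**2:.4f}")
```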
CycE-Luc2 reporter monitors the G1 phase arrest in MCF-7 breast cancer cells in vitro
To test whether the CycE-Luc2 reporter could monitor cell cycle changes in other cells, the ER-positive breast cancer cell line MCF-7, which is more sensitive to CDK4/6 inhibitors, was used. CycE-Luc2-overexpressing MCF-7 cells were synchronized with either nocodazole or a double thymidine block. When cells were blocked with nocodazole, the luciferase activity was lowest (4.4×10⁶) and increased steadily, peaking at 12 h (8.49×10⁶) after release (Figure 3A). Flow cytometry analysis verified that the cells were blocked at the G2/M phase and re-entered the G1 phase after release (Figure 3B and 3C). When blocked with the double thymidine method, MCF-7-CycE-Luc2 cells showed higher luciferase activity (1.17×10⁶) at 0 h, which decreased to the lowest value (0.76×10⁶) at 6 h after release and then increased thereafter (Figure 3D). CycE-Luc2 fusion protein expression detected by Western blotting was consistent with the luciferase activity and cell cycle distribution (Supplementary Figure S2A, S2B). Flow cytometry analysis demonstrated that the cells entered the S and G2/M phases from 0 to 6 h after release and the G1/S phase from 6 to 12 h (Figure 3E and 3F). All these results confirmed that CycE-Luc2 is a sensitive reporter for monitoring the G1 phase in breast cancer cells.
CycE-Luc and CycE-Luc2 degradation is mediated by the ubiquitination-proteasome system
Previous research has shown that SCF/Fbw7 mediates the degradation of cyclin E [28]. To investigate whether SCF/Fbw7 regulates the turnover of CycE-Luc, it was depleted from CycE-Luc cells using SCF/Fbw7 siRNA. Forty-eight hours after siRNA transfection, Fbw7 was effectively depleted (Figure 4A), and the expression of CycE-Luc was significantly increased compared with mock-transfected cells, as shown by immunoblotting as well as luciferase activity analysis (Figure 4A and 4B). We further examined whether CycE-Luc is eliminated via proteasomal degradation. After HeLa cells were treated with the proteasome inhibitor MG132, we observed a significant increase in CycE-Luc protein as well as in luciferase activity (Figure 4C and 4D). The degradation of CycE-Luc2 was also controlled by the proteasome complex (Figure 4E and 4F). Thus, these results revealed that the degradation of CycE-Luc and CycE-Luc2 is mediated by the proteasome complex.
CycE-Luc reporter can monitor cell cycle inhibition by 5-FU in vitro and in vivo
To evaluate the effect of an anti-tumor drug on the cell cycle using this reporter, cells stably expressing CycE-Luc were treated with 5-FU at doses of 0.5 g/L and 1.0 g/L. Cells were collected for analysis of relative luciferase activity and immunoblotting after 20 h of incubation with 5-FU. The CycE-Luc level as well as the luciferase activity decreased significantly after 5-FU treatment in a dose-dependent manner (Figure 5A and 5B). The 5-FU treatment also resulted in increased accumulation in the S phase and decreased accumulation in the G1 phase in a dose-dependent manner (Figure 5C and 5D). Thus, the CycE-Luc reporter system can be used to monitor the antitumor efficacy of 5-FU. Next, we evaluated the effect of 5-FU on the cell cycle using the bioluminescent CycE-Luc reporter in a subcutaneous mouse tumor model. First, HeLa-CycE-Luc cells were implanted subcutaneously in nude mice. Three weeks after implantation, the tumor nodules became noticeable and were imaged using the Xenogen IVIS Lumina imaging system. After 48 h of treatment with 5-FU (25 mg/kg per mouse, n=5), the bioluminescent signal of CycE-Luc was significantly decreased, in contrast to the increased signal observed with MG132 treatment (2 mg/kg per mouse, n=5) and the unchanged signal in the PBS control group (Figure 5E and 5F). To assess whether the change in bioluminescence activity was concomitant with the expression of CycE-Luc, the level of cyclin E was determined by immunohistochemistry. The results showed that MG132 treatment significantly increased and 5-FU treatment significantly decreased cyclin E protein levels in the tumor tissue (Figure 5G). Thus, these results demonstrated that the CycE-Luc reporter is suitable for monitoring cell cycle status affected by anti-tumor drugs in a dynamic, non-invasive way.
CycE-Luc2 reporter can monitor G1-phase arrest by palbociclib treatment in vitro and in vivo
Palbociclib, a highly selective CDK4/6 inhibitor that blocks the replication cycle and proliferation of tumor cells, is a promising anticancer drug for breast cancer patients. MCF-7 cells transiently transfected with CycE-Luc2 were treated with palbociclib for 24 h, and palbociclib at 100 nM or above caused G1 arrest (Supplementary Figure S3). Palbociclib treatment also increased CycE-Luc2, as detected by luciferase assay and in vitro imaging. MCF-7-CycE-Luc2 cells were treated with 1 μM palbociclib for 24 h or 48 h, and CycE-Luc2 increased in a time-dependent manner (Figure 6A). These results were consistent with G1-phase arrest (Figure 6B, 6C). Moreover, the luciferase activity also increased in a time-dependent manner (Figure 6D, 6E).
To further explore whether the CycE-Luc2 reporter could monitor G1-phase arrest by palbociclib in vitro and in vivo, HeLa-CycE-Luc2 clone 2 cells were used. Luciferase activity analyses on synchronized cells revealed a significant 3- to 4-fold increase in the bioluminescent signal of HeLa-CycE-Luc2 clone 2 cells after 24 or 48 h of incubation with 8 μM palbociclib (Figure 7A and 7B). Furthermore, flow cytometric analyses showed that cells arrested in the G1 phase after 24 or 48 h of incubation with palbociclib (Figure 7C). To validate the ability of the CycE-Luc2 reporter to monitor G1-phase arrest by palbociclib in vivo, HeLa-CycE-Luc2 clone 2 cells were injected subcutaneously into nude mice. Three weeks after implantation, the mice were treated with palbociclib (150 mg/kg) and imaged using the Xenogen IVIS Lumina imaging system once a day. Our results showed that the bioluminescence signals from HeLa-CycE-Luc2 tumors increased significantly in response to palbociclib (Figure 7D and 7E), while the tumor volume showed no significant difference between the palbociclib and PBS groups (Figure 7F).
Discussion
Bioluminescence reporter proteins have been widely used in the development of tools for monitoring biological events in living organisms in real time [19]. Many aspects of drug development can be facilitated by using bioluminescence reporters as indicators to discover new targets, identify novel drug candidates, and validate their potency [29]. In the present study, we developed two optical reporters consisting of genetically encoded cyclin E fused to luciferase (Luc or Luc2) that reflect changes in the G1 phase of the cell cycle. We then used the two G1-phase reporters to evaluate the cell cycle-specific anti-cancer drugs 5-FU and palbociclib in vitro and in vivo using the IVIS imaging system, and demonstrated their ability to provide a pharmacodynamic readout of cell cycle-specific anti-cancer effects. We showed that these fusion proteins accumulate in the G1 phase and are degraded during the S/G2/M phases, similar to endogenous cyclin E. Importantly, the altered luciferase activity of the reporters tracked cell cycle progression, indicating their potential application for monitoring the response to cell cycle-specific anti-cancer drugs.
As the bioluminescence signal is amplified through an enzymatic reaction, these optical reporters can be applied to cell lysates, live cells in culture, and small animals at limited depths. For example, p27-luciferase-expressing tumor cells were used to monitor CDK2 activity in vivo [25], and tumor cells stably expressing an NFκB-responsive element-luciferase reporter were used to monitor the response to LPS or TNF-α stimulation in mice [18]. In the present study, we generated two constructs driven by either the native cyclin E promoter or the artificial CMV promoter. In the cyclin E promoter-driven reporter, luciferase expression was too weak for in vivo studies. We observed high luciferase activity with the CMV promoter-driven CycE-Luc2 reporter and further demonstrated cell cycle-dependent regulation of the CycE-Luc2 protein, in particular its ubiquitination-dependent degradation. When we employed these reporters in an in vivo setting, the CycE-Luc2 reporter provided a stronger bioluminescent signal than CycE-Luc. Thus, the high sensitivity, simplicity, cost-effectiveness, and wide dynamic range of bioluminescent reporters are valuable features for drug screening, toxicological testing, and monitoring therapeutic response, making them highly attractive for the development of novel anticancer drugs. Directly targeting aberrant cell-cycle progression, as chemotherapy and radiotherapy do, represents a potential therapeutic modality. In this respect, over the past two decades, efforts have focused on developing inhibitors of the cyclin-dependent kinases (CDKs). Active kinase complexes of CDK4/6 and D-type cyclins promote the G1/S phase transition through phosphorylation of Rb, p107, and p130 [30,31]. Palbociclib was reported in 2004 as a specific, high-affinity inhibitor of CDK4 and 6 with no appreciable activity against 36 additional kinases [32]. Treatment with palbociclib led to profound G1-phase arrest and cytostasis, especially in mice bearing human breast and colorectal xenografts. Oral administration of palbociclib has been shown to induce tumor regression with inhibition of RB phosphorylation [32,33]. In subsequent randomized trials, palbociclib in combination with endocrine therapy prolonged progression-free survival [5,7,8], leading to its approval in patients with metastatic HR-positive/HER2-negative breast cancer [34]. In the clinic, besides RECIST as the standard method to assess the effect of anti-cancer drugs, 3′-deoxy-3′-[¹⁸F]fluorothymidine (¹⁸F-FLT) and [¹⁸F]fluoro-2-deoxy-D-glucose (¹⁸F-FDG) positron emission tomography (PET) imaging provide a non-invasive approach to monitor the early response to palbociclib. Decreased ¹⁸F-FLT accumulation and S-phase depletion after palbociclib treatment were observed in MCF-7 cells and MCF-7 xenografts [35].
In addition, several clinical trials of surrogate markers for palbociclib verified that an early decrease of RB phosphorylation and Ki67 correlated with the effect of the drug on cell proliferation [36,37]. Recently, ¹⁸F-palbociclib has been developed as a promising positron emission tomography (PET) imaging agent for CDK4/6 activation in MCF-7 xenografts [38]. Thus, molecular imaging methods for monitoring the alteration of the molecular targets of palbociclib allow fast evaluation of drug efficacy and minimize its toxicity, ultimately reducing cost.
In the present study, based on our understanding of the molecular events that follow palbociclib treatment, we developed a molecular imaging method to monitor the drug response using the CycE-Luc2 reporter. Intriguingly, G1-phase arrest caused by palbociclib, as evidenced by the induction of bioluminescence, correlated with the flow cytometry results. In particular, our in vivo experiments showed that bioluminescence can circumvent tissue autofluorescence, resulting in a better signal-to-noise ratio. The luciferase signal increased at 24 h and reached peak levels lasting up to 96 h. Clearly, the development of these reporters opens up the possibility of monitoring the pharmacological effects of G1 phase-specific anti-cancer drugs. Thus, this G1 phase-specific reporter has provided evidence for its potential use in determining the fraction of tumor cells in the G1 phase after treatment with palbociclib. Bioluminescence imaging can also enhance the analysis of drug response in patient-derived xenografts (PDX) [39]; thus, PDX could be engrafted into our model to open new avenues for developing personalized therapeutic approaches in the future. However, these imaging biomarkers need to be extensively evaluated, particularly for late-stage clinical studies, to determine whether they can be used as primary measures of drug effectiveness and translated into a clinically meaningful benefit. Currently, there are hardly any FDA-accepted imaging biomarkers, and in particular there are no validated imaging-based surrogate endpoints.
Concluding remarks
CDK4/6 is required for cell cycle entry and is an attractive target for new developments in cancer therapeutics. The clinical use of CDK4/6 inhibitors requires robust validation of the physiological and metabolic treatment response to these drugs by obtaining quantitative imaging endpoints from PET, MRS, and/or optical imaging platforms [40]. The novel G1-phase reporters (CycE-Luc and CycE-Luc2) were successfully applied for molecular and functional imaging of cell cycle-specific agents to elucidate their biological activity and drug-target effects both in vitro and in vivo. They are also useful for investigating basic processes of cell cycle regulation and their modulation by intracellular pathways and receptor signaling. Our results suggest that cyclin E-Luc reporters might serve as a clinically translatable approach for predicting and monitoring the response to CDK4/6 inhibitors in patients with ER-positive breast cancer. Nevertheless, the evaluation of these optical reporters with optimized properties remains of outstanding interest for monitoring the early response to cell cycle-specific drugs in patients who are more likely to benefit from them.
"year": 2021,
"sha1": "28acbbb382d3a3d4121c3b7f8ecd27c94cc6b37c",
"oa_license": "CCBY",
"oa_url": "https://www.ijbs.com/v17p0728.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "28acbbb382d3a3d4121c3b7f8ecd27c94cc6b37c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Safety of Endostar in combination with chemotherapy in patients with cancer
Background: A number of studies have indicated the benefits and safety of Endostar in combination with chemotherapy in cancer, but the exact real-life safety of Endostar is poorly known.
Background
Cancer is currently one of the leading causes of death globally, with both its incidence and mortality rates having increased dramatically over the past few decades [1]. In 2018, there were an estimated 18.1 million new cancer cases and 9.6 million cancer deaths worldwide [1], with lung cancer as the most common diagnosis (11.6%) and cause of death (18.4%). These data highlight the challenges in cancer prevention and screening, as well as the urgent need for new and effective treatment modalities.
Angiogenesis is a process encountered in solid tumors and is necessary to provide optimal nutrient and oxygen supplies to the cancer cells; targeting angiogenesis is therefore a viable option to delay cancer growth [2][3][4]. Endostatin is a 20-kD internal fragment of collagen XVIII with anti-angiogenic properties [5]. Endostar, a recombinant human endostatin purified from transgenic Escherichia coli, was approved in 2005 in China for the treatment of advanced non-small cell lung cancer (NSCLC) in combination with platinum-based chemotherapy, with improved treatment response rates [6,7]. A meta-analysis showed that, in patients with NSCLC, the relative improvement in the objective response rate (ORR) attributable to Endostar combined with chemotherapy was 74% compared with chemotherapy plus placebo [8]. A number of studies have also indicated the benefits of Endostar in liver cancer [9] and osteosarcoma [10,11].
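To make the meta-analysis figure concrete: a 74% relative improvement in ORR corresponds to a risk ratio of 1.74 versus the comparator arm. A minimal arithmetic sketch, in which the 20% comparator ORR is an assumed value for illustration only:

```python
# A 74% relative improvement in ORR means a risk ratio of 1.74.
rr = 1.74
orr_control = 0.20                 # assumed comparator ORR, illustrative only
orr_endostar = rr * orr_control
print(f"ORR with Endostar: {orr_endostar:.1%}")   # 34.8%
```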
With anti-angiogenesis drugs, safety can be a concern. Treatment-related adverse events (AEs) including cardiovascular toxicity [12], proteinuria, hypertension, hemorrhagic events, and neutropenia [13,14] have been reported previously. A meta-analysis of Endostar showed that Endostar combined with chemotherapy did not increase the incidence or severity of leukopenia, thrombocytopenia, anemia or nausea/vomiting in patients with NSCLC, compared with chemotherapy alone. In addition, no increased risk was observed for other common AEs. Nevertheless, data for Endostar are mostly from selected patients in lung cancer trials [8,[15][16][17]. Data from other cancer types and from real-life clinical practice are limited. Real-life studies are an essential component of evidence-based medicine, allowing informed decision-making based on the balance between effectiveness and safety [18].
Therefore, this study aimed to assess the safety of Endostar in combination with chemotherapy in patients with various types of cancer in a real-life setting.
Study design and patients
This was a retrospective study of patients treated with Endostar for cancer in the Chinese PLA General Hospital (Beijing, China) from 1 January 2006 to 31 December 2017. The study was approved by the ethics committee of the Chinese PLA General Hospital. The need for written informed consent was waived by the committee.
The major inclusion criteria were: 1) locally advanced or metastatic solid tumor, such as NSCLC, liver cancer, or osteosarcoma, confirmed by histology or cytology; 2) available baseline and treatment plan records; 3) use of Endostar (manufactured by Shandong Simcere Bio-pharmaceutical Co., Ltd.) for the treatment of a solid tumor during the study period; 4) receipt of Endostar at least once; 5) available hematology or laboratory examination records within 30 days after the administration of Endostar.
The major exclusion criteria were: 1) non-solid tumor, such as hematological malignancy; 2) no baseline or treatment plan records available; 3) no use of Endostar during the study period; 4) no hematology or laboratory examination records within 30 days after the administration of Endostar.
Data collection and outcomes
All data of patients who were prescribed and administered Endostar, including the patients' basic information, laboratory test results, Endostar dose, case records, etc., were identified and retrieved from the Hospital Information System (HIS) of the Chinese PLA General Hospital.
The outcomes for the study were the occurrence of laboratory adverse events (AEs).
Statistical analysis
Categorical data are presented as numbers and percentages. Continuous variables are presented as mean ± standard deviation or median (interquartile range, IQR), as appropriate based on the Kolmogorov-Smirnov test. For subgroup analysis, descriptive safety data were summarized by main cancer type (lung, osteosarcoma and liver), main chemotherapy regimen (platinum-based and doxorubicin-based), and total dose of Endostar (≤210 mg and >210 mg). The association of Endostar administration with grade ≥3 thrombocytopenia or abnormal white blood cell count was assessed using univariable and multivariable Cox models. The standard dose for 1 complete cycle of Endostar administration was 210 mg, and the total dose of Endostar was therefore classified as ≤210 mg vs. >210 mg. Owing to the small number of events, no Cox regression analysis was performed for anemia or abnormal hepatic and renal function. SAS 9.4 (SAS Institute, Cary, NC, USA) was used for statistical analysis. P values <0.05 were considered statistically significant.
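As a minimal sketch of how such a multivariable Cox model for a grade ≥3 event could be specified, here is an illustrative Python version using the lifelines package; the toy data frame layout and covariate coding are assumptions for demonstration, and the actual analysis was run in SAS 9.4.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Illustrative layout: one row per patient, follow-up capped at 30 days.
# The values below are a toy dataset, far smaller than the 825-patient cohort.
df = pd.DataFrame({
    "time_days":     [30, 10, 30, 8, 25, 30, 14, 30, 20, 30],
    "grade3_event":  [0,  1,  0,  1, 1,  0,  1,  0,  1,  0],  # grade >=3 AE observed
    "lung_cancer":   [1,  1,  0,  0, 1,  1,  0,  0,  1,  0],  # vs. liver cancer
    "doxorubicin":   [0,  1,  1,  0, 0,  1,  1,  0,  0,  1],  # vs. platinum-based
    "dose_gt_210mg": [1,  0,  1,  1, 0,  1,  0,  0,  1,  1],  # total Endostar dose
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="grade3_event")
cph.print_summary()  # hazard ratios with 95% CIs and p-values
```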
Characteristics of the patients
During the study period, a total of 825 patients had available hematology or laboratory test data at baseline and during 30-day follow-up (Table 1).
Factors associated with severe AEs when using Endostar in combination with chemotherapy

When considering all patients, no factors were found to be associated with severe thrombocytopenia (Table 3). Lung cancer and osteosarcoma (as compared with liver cancer) and doxorubicin-based chemotherapy (as compared with platinum-based chemotherapy) were associated with an increased risk of seriously abnormal white blood cell count (Table 3). The total dose of Endostar was not associated with an increased risk of severe thrombocytopenia or seriously abnormal white blood cell count when used in combination with chemotherapy (Table 3).
Discussion
A number of studies have indicated the benefits and safety of Endostar in combination with chemotherapy in solid tumors [8-11, 15, 16], but the exact real-life safety of Endostar is poorly known. Therefore, this study aimed to assess the safety of Endostar in combination with chemotherapy in patients with cancer in a real-life setting. The results suggested that no new unexpected AEs related to Endostar were observed when it was used in combination with chemotherapy.
Endostar was approved in 2005 for the treatment of NSCLC in China, but it has not yet been approved in other countries. Therefore, all data from clinical trials and real-world studies are from Chinese populations. The present study has a large sample size compared with the available real-world studies. A real-world study of 88 patients with sensitizing mutation-negative NSCLC showed that Endostar combined with platinum-based doublet chemotherapy improved survival, but at the cost of a higher occurrence of AEs [20]. A real-world study of 14 patients with pancreatic neuroendocrine tumors showed that the most common grade 1-2 AEs were neutropenia (43%) and leucopenia (21%) [21]. A real-world study of 23 patients with advanced mucosal melanoma showed that the most common severe AEs of Endostar with chemotherapy were leucopenia (13%), thrombocytopenia (13%), anemia (4%), nausea and vomiting (4%), and elevated transaminases (4%) [22]. A large study of 2725 patients with NSCLC showed that Endostar added to standard NCCN-recommended chemotherapy improved treatment response without resulting in excess AEs [17]. In the present real-world study, grade 3-4 anemia occurred in 10.2% of the assessed patients, thrombocytopenia in 9.8%, abnormal white blood cell count in 22.9%, abnormal liver function in 0.8%, and increased creatinine in 0%; no definite bleeding events or postoperative wound healing complications associated with Endostar were found in the case records. This is generally consistent with the available real-world data, as well as with the data from clinical trials [8-11, 15, 16].
In the present study, the pattern of laboratory AEs seemed to vary with the type of cancer. Indeed, the frequencies of thrombocytopenia (46.3%) and abnormal kidney function tests (6.7%) were highest in patients with liver cancer, while the frequencies of anemia (80.5%), abnormal white blood cell count (77.6%) and abnormal liver function tests (17.4%) were highest in osteosarcoma. Multivariable analysis showed that lung cancer and osteosarcoma were associated with an increased risk of abnormal white blood cell count compared with liver cancer (P<0.01).
Previous studies have shown that Endostar combined with chemotherapy in liver cancer is associated with leucopenia and liver function damage [9], whereas Endostar combined with chemotherapy in lung cancer did not increase the toxicity of chemotherapy alone [8,15], and Endostar in osteosarcoma showed myelosuppression and transient elevation of liver enzymes [10]. Of course, the differences in AE patterns might be due to disease stage, affected organs, preconditioning regimens, and especially the concurrent chemotherapy, among other factors. Notably, in the present study, over 90% of patients with liver cancer and over 80% of patients with lung cancer were treated with platinum-based chemotherapy, whereas over 80% of patients with osteosarcoma were treated with doxorubicin-based chemotherapy. Additional studies are necessary to determine the exact AE patterns of Endostar in combination with chemotherapy across different cancer types. Interestingly, a study showed that Endostar could decrease the occurrence of nasopharyngeal mucosal necrosis in patients with nasopharyngeal carcinoma who received chemoradiotherapy [23].
Potential protective effects of Endostar should also be investigated.
The occurrence of laboratory AEs with different total doses of Endostar was also investigated in this study. Descriptive results showed that the frequencies of overall thrombocytopenia (34.9%) and increased creatinine (7.1%) were higher with the low total dose of Endostar (≤210 mg), while the frequencies of overall anemia (77.4%), abnormal white blood cell count (64.4%) and abnormal liver function (13.5%) were higher with the high total dose of Endostar (>210 mg). Multivariable analysis showed that the total dose of Endostar was not associated with an increased risk of severe thrombocytopenia or seriously abnormal white blood cell count (P>0.05).
As for the occurrence of laboratory AEs of Endostar combined with different chemotherapy regimens, the present study suggests that the frequencies of thrombocytopenia (33.2%) and abnormal kidney function tests (2.8%) were highest with platinum-based regimens, whereas the frequencies of abnormal white blood cell count (67.9%) and abnormal liver function tests (15.3%) were highest with doxorubicin-based regimens. Doxorubicin-based regimens were associated with an increased risk of seriously abnormal white blood cell count (P<0.05) compared with platinum-based regimens. In fact, these patterns are more similar to the already known AE patterns of platinum- vs. doxorubicin-based chemotherapies [24][25][26][27], suggesting that the AEs of the chemotherapy component of the treatment are likely more significant than the AEs from Endostar. Additional studies are necessary to discriminate between Endostar-specific AEs, chemotherapy-specific AEs, and AEs that arise from the combination of the two. Other regimens should also be investigated.
For example, a study of the combination of Endostar with taxane-based regimens in breast cancer showed that the vast majority of patients experienced neutropenia (80.7%) and leukopenia (77.2%) [28].
This study has limitations. First, it was a single-center, retrospective study, and the data that could be analyzed were limited to those available in the database. Second, not all patients received laboratory examinations before and after administration of Endostar, and a bias could have been introduced; for example, patients with lung cancer were more likely to undergo blood testing, as were those receiving multiple doses of Endostar. Third, all patients in the present study used Endostar in combination with chemotherapy and no controls were included. Finally, the high diversity of individual treatment regimens may introduce some confounding (e.g., dose and type); only a general classification was used for the present study.
Conclusion
The occurrence of AEs during treatment with Endostar in combination with chemotherapy differed across tumor types and chemotherapy regimens. No new unexpected AEs related to Endostar were observed in this study. The total dose of Endostar was not associated with an increased risk of severe (grade ≥3) thrombocytopenia or abnormal white blood cell count when used in combination with chemotherapy.

Ethics approval and consent to participate

The study was approved by the ethics committee of the Chinese PLA General Hospital. The need for written informed consent was waived by the committee.
Consent for publication
Not applicable.
Availability of data and materials
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Competing interests
All authors declare that they have no competing interests.
"year": 2020,
"sha1": "418146827379bfeb9bcd6e8d2cefcf551f057531",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-36626/v1.pdf?c=1593109832000",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "009cf2ee3c59d9b4ade8985ed5efd94e164ccac2",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Strategies for identification and coping with the violence situation by intimate partners of pregnant women
Objective: To identify the strategies used by nurses of Family Health Strategy units to recognize and cope with situations of intimate partner violence against pregnant women. Method: Descriptive study with a qualitative approach, in which semi-structured interviews were conducted with 23 primary care nurses from September 2015 to April 2016. Thematic content analysis was used. Results: The category "It's very complex" emerged – actions to identify and cope with situations of intimate partner violence against pregnant women. Physical injuries were the main indicator of violence identified during prenatal care. The coping strategies were referrals to specialized services and joint discussion with the healthcare team. Conclusion: There is a need to organize a nursing protocol that helps in the identification and classification of the risk of exposure to violence, to promote the permanent education of these professionals, and to strengthen intersectoral actions.
INTRODUCTION
Violence against women is widely recognized as a serious public health problem. For the World Health Organization (WHO), intimate partner violence (IPV) is defined as behavior within an intimate relationship that causes physical, sexual or psychological harm, including acts of physical aggression, sexual coercion, psychological abuse, and controlling behavior; this definition applies to both spouses and current or former partners (1).
Reports of violence appear at all stages of women's growth and development; during pregnancy, however, acts of violence may decrease, pregnancy acting as a protective factor, or may intensify, becoming a risk factor for the health of the mother-baby binomial (2). Regarding this intensification, women with a higher number of pregnancies are even more susceptible to violence. The damage from these attacks can have various effects on the health and quality of life of the mother-baby binomial, such as premature labor, bleeding, low birth weight, abortion, premature rupture of membranes, and maternal death (3)(4). Such conditions affect women's physical and mental health and can generate increased demand for health services. In this sense, professionals have an important role in receiving and listening to these women, being strategic allies in the fight against violence (4)(5).
In Brazil, in 2007, the National Pact for Combating Violence against Women was launched as a management strategy for the guidance and implementation of policies to combat violence against women, in order to ensure prevention and to provide assistance guaranteeing women's rights. This federal arrangement among the federal government, the state governments and the Brazilian cities supported the planning of actions to consolidate the National Policy for Combating Violence Against Women, in 2008, through the implementation of integrated public policies (6).
This political mechanism reaffirms the need for joint action by the various sectors involved, such as health, public security, justice and education, in order to propose actions that deconstruct inequalities and combat gender discrimination and violence against women, interfere with the chauvinist patterns of Brazilian society, and guarantee a qualified and humanized service to those in situations of violence (6).
Within the network of healthcare services to combat violence against women, Basic Healthcare (BH) stands out for its potential to become the gateway for women experiencing violent gender relations. The participation of BH professionals and their insertion into the community, through the Family Health Strategy (FHS), can favor the early identification of risk factors for violence and intervention in situations of vulnerability (7).
Studies indicate the importance of deepening investigations of violence against pregnant women by intimate partners, since the literature is scarce despite the severe repercussions on the life and health of the mother-baby binomial. Regarding the role of nursing in caring for these women, a gap in the production of this knowledge can be observed (7)(8)(9).
Since the BH nurse, by being close to people's lives, develops the listening and embracement of women in the preconception and parturition periods and is attentive to their care needs, the following guiding question was posed: What strategies are used for the identification of, and coping with, violence by intimate partners of pregnant women attended by nurses working in the FHS?
The objective of the present study was to identify the strategies used by nurses of Family Health Strategy units to recognize and cope with situations of intimate partner violence against pregnant women.
METHOD
This is a descriptive study with a qualitative approach, which is based on human perceptions and interpretations of experiences and feelings (10). The study setting consisted of twenty Family Healthcare Units in the city of Porto Alegre, Rio Grande do Sul, located in territories of greater social vulnerability and violence. The districts where the research took place present the highest general coefficient of mortality for the external causes group (accidents and violence) in the city. The participants were 23 nurses. The inclusion criteria were: working in this activity for more than six months and carrying out programmatic activities aimed at the care of women in prenatal care. The exclusion criterion was absence from work on leave of any nature during the period established for data generation.
A semi-structured interview was used as the data generation technique. Interviews were held in the service where the participants worked, in a reserved room, by prior appointment. They were performed individually, with an average duration of 30 minutes, and recorded in MP3 (audio). The interview script contained questions on sociodemographic and academic data and on strategies for identifying and coping with situations of intimate partner violence against pregnant women. Data generation occurred from September 2015 to April 2016 and was closed using the data saturation criterion, that is, the interruption of the inclusion of new participants when the data, in the researcher's analysis, begin to show a certain repetition and it is no longer pertinent to continue the data generation process (10).
The data obtained through the interviews were systematized and analyzed through content analysis in its three phases: pre-analysis, material exploration, and treatment and interpretation of the results. Initially, the recordings of the semi-structured interviews were transcribed verbatim in a text editor, composing the corpus of the study. In the pre-analysis, the NVivo 10 software supported the coding and treatment of the material. Then, through exhaustive readings, analogous ideas present in the participants' excerpts were highlighted.
The material exploration phase made it possible to extract common elements from the transcribed statements, constituting categories. For this purpose, the registration units were listed, consisting of words, sentences and expressions that give meaning to the content of the testimonies and support the determination of the categories (10). Thus, at this stage, the topics constituting the registration units were first sought; after these units were found, the thematic category was determined. Categorization involves reducing the text to the most meaningful words and expressions within the corpus of analysis.
In the last phase, treatment and interpretation of the data obtained, inferences and interpretations about the results were proposed, returning to the objective of the study. The previous clippings were thus analyzed in light of the related literature.
It should be noted that, before the beginning of the data generation stage, the participants were informed about the study objectives and methodological procedures through the Informed Consent Form. Those who agreed to participate signed this form in two copies, one for the participant and the other for the researcher. To guarantee the anonymity of the participants, they were identified by the letter 'E', for nurse, followed by an ordinal number according to the order of the interviews (e.g., E1, E2... E23).
The study was approved by the Research Ethics Committees of the Universidade Federal do Rio Grande do Sul and the Municipal Health Department of Porto Alegre (CAAE: 38025914.8.30015338/2015).
RESULTS AND DISCUSSION
Among the research participants, 20 were female and three male; the most frequent marital status was single (17). The participants' ages ranged from 27 to 58 years; the self-declared race/color was dark-skinned (1), black (3) and white (19). Time since training ranged from three to 34 years, and time working in BH from six months to 12 years. All the interviewees had some type of specialization, mainly in public health, intensive care, and urgency and emergency care.
The results are presented in a thematic category constituted from the nurses' statements regarding the strategies used to identify and cope with situations of intimate partner violence against pregnant women.
"It is very complex": actions of identification and coping with the VIP of pregnant women
The interviewees' testimonies show that most of them have already attended women in situations of violence. However, when it comes to pregnant women, few have identified it in the daily routine of their care practices. The identification of situations of violence was described as complex, since gestation is a period in which emotions are more exacerbated, and crying and sadness can mask the occurrence of violence. It was emphasized that, when there is verbalization or the presence of physical signs, such as bruising and lesions, identification occurs more easily, especially at the prenatal appointment.
Anything that happens to the woman in this (gestational) period is going to be much bigger than the hormonal issue. Thus, it is difficult to perceive violence in a pregnant woman compared with a woman who is not pregnant. (E7)

She had her arm in a plaster cast, so it could not be seen, but she did not mention what had happened; it was later, in another appointment, that she told everything that had happened. (E16)
At the first prenatal appointment, she had marks and bruises on her arm. The woman and her husband are crack and cocaine users, and she would prostitute herself to maintain their addiction, otherwise he would beat her. (E22)
Violence, considered a public health problem and a violation of human rights, remains taboo for women, and often for healthcare providers, as it is seen as an intimate and painful matter to be resolved in the domestic sphere. This attitude indicates that healthcare services distance themselves from the responsibility of facing the problem, and pregnant women are even more vulnerable given the fragility shown by these services (7).
Violence is understood as a complex and multiple phenomenon. It can be understood through social, historical, cultural and subjective factors, but should not be limited to any one of them. Discussion of the subject must cover two fundamental aspects: the conceptualizations of violence, which allow the identification of the violent experience, and the perspectives of those involved in the violent situation, since the way an experience is perceived is related to the way it is felt and identified (11). The valuation of the subjective and social dimensions in nurses' approach to women experiencing IPV is necessary, since attending only to the objective dimension of care limits their response to the situation, basing it solely on objective signs and symptoms, with a curative focus and little resolution of the social and health demands of these women.
Prenatal care is an appropriate moment to recognize and identify cases of violence, since visits to healthcare services are more frequent during this period. However, when there is no evidence of physical bruises resulting from aggression, this finding is difficult; greater attention is therefore needed from the healthcare teams, with close monitoring of each case (12). A systematic review indicates that health professionals know the benefits of prenatal care that covers IPV, but are unable to perform these specific appointments routinely, because the partner is present, the available time is highly variable, or they do not know how to broach the issue and how to address the situation (13). Establishing a relationship of trust and being able to understand what the health user does not verbalize but expresses in her body and behavior were the means health professionals used to facilitate the identification of IPV (13).
The more frequent presence of women in the services can generate a stronger link with the healthcare team and favor the identification of cases of violence. However, most cases are not registered, rendering the situation invisible. Regarding the nurse's performance in prenatal care, a study recommends the adoption of an intervention plan with questioning in the team's records, asking all users directly whether they face or suffer any type of violence (2).
Considering that violence rarely starts during gestation, since its frequency follows a regular and systematic pattern in the couple's relationship, it is fundamental that the topic is approached from the first appointment in the healthcare services, seeking to identify families that experienced marital conflicts even before the current gestation (14).
Nurses reported manifestations that help in identifying and understanding that violence was repeated in the current gestation, such as absence from prenatal appointments due to alternating addresses, reports of living with the partner at some times and with relatives at others, together with direct verbalization of the occurrence of violence in the previous gestation.
Her husband used to beat her during the gestation, so she separated from him and came back several times. At one moment she was being taken care of by our unit, and at another she was going to another unit. We found out that sometimes she would go to her husband's house, sometimes to her mother's house, as he would beat and fight with her. (E11)

The pregnant woman was sloppy in her prenatal care, she did not attend the activities (appointment and group), and she lived a little bit in each place [...]. Maybe she did not want this pregnancy from this relationship with her partner, who uses drugs, beats her and their other child, and gets arrested. (E15)

Good quality prenatal care can reduce physical aggression against women during pregnancy; however, the appointments should cover the health of the pregnant woman in an integral way, including psychosocial aspects, configuring fundamental actions for the development of care and protection of the pregnant woman (14).
Women's inadequate or late access to prenatal care may be due to the partner's prohibition of this search or to the intense psychological stress experienced by the woman during pregnancy as a result of the abuse. Another aspect to consider concerns the shame and fear of being discovered by the health professionals. Thus, the woman moves away from adequate assistance and becomes more exposed to violence by the aggressor (4,7).
The findings of this study reveal, from the nurses' perspective, the withdrawal of the pregnant woman from prenatal care actions, implying discontinuity of care and difficulty in helping her cope with violence.
Only after her gestation did the patient tell me that she had suffered aggression during her pregnancy, according to her, only at that time. But she stopped attending the prenatal care, she left the territory of our scope, it was difficult to help her. (E18)
The embracement of the pregnant woman and the development of an "interested" listening were indicated as a powerful tool to identify the violence:
It was during the embracement, after identifying that the patient had a very sad face. During the embracement, the professional has to be open to listen, because, in general, women are ashamed to speak, they feel guilty. (E1)
What should be done is to have a look at this person, to value her as a person, to value her as a woman. So that she will reveal herself in all her facets. If you are too superficial or just look at the body of the pregnant mother, she will probably not feel free to reveal her life if the other person is not showing the slightest interest in hearing. (E8)
The woman must have confidence in you and come to get help, to create a bond with our UBS, to be heard. (E17)
In this sense, the embracement by the healthcare professionals is an opportunity for accountability in listening to women's reports and complaints, allowing the free expression of their concerns and anxieties (2). Committed listening by the healthcare professionals is an element that allows the recognition of VIP against pregnant women, so the team must be prepared to establish a care relationship that earns the patient's trust (15).
As a difficulty in the identification of violence, the woman's fragile attachment to the service and to the professionals has been mentioned; this can be explained by the high turnover of Primary Healthcare professionals in the local reality of the research, along with the fear of reprisal. The woman's difficulty in revealing the violence situation to the professional was also perceived, owing to her shame in exposing the experience of violence and her emotional and financial dependence on the companion. These elements constitute challenges in facing violence, since the nurses understand that the pregnant woman's lack of financial autonomy makes it difficult for her to break with the context of the violence, besides weakening her self-esteem and mental health.
Financial and emotional dependence as the motivation/justification of some women to remain in a situation of violence is sustained by issues that constitute gender relations, which assign to the woman the responsibility for maintaining the marriage (16); in this study, it was reinforced by the prospect of constituting a family through the gestation and raising the children next to the paternal figure.
Even when they are suffering from violence while pregnant, they take the relationship forward because, in their thinking, if they become pregnant, they have to continue together, keeping the family. (E3)

I perceive the fear, the shame and the guilt felt by the pregnant woman, and the need to stay with this man, for several reasons. (E12)
I made the notification form, I guided her on looking for the Women's Police Station, I sent her to the social service and to the psychologist, I helped her to get "Bolsa Família", but at last she said: "I cannot leave the house now, I cannot do anything, I depend on him, so what am I going to do with this unborn baby?" (E15)

When questioned about ways of coping with VIP situations involving pregnant women, the nurses interviewed report referral to a specialized service in the field of tertiary healthcare. Also highlighted are referrals for mental health demands, in addition to the discussion of cases and conduct within the healthcare team.
The first step would be to address the psychological aspects. Then, refer her to the social worker and psychologist. (E9)

I discuss with my healthcare team when I suspect or identify it, and with the community agents who are very familiar with that situation, so we can make joint decisions. (E14)

Refer to a reference hospital. (E21)
Thus, referral to other services was somewhat recurrent in the speeches, denoting that, in cases of VIP, the FHS is limited in assisting and accompanying pregnant women in situations of violence. By transferring the responsibility for this care to another service/professional, often without knowing the consequences of this action, it creates a care network with little communication between its points, which at times is not resolutive.
The construction of appropriate and sensitive local practices and conducts occurs through referral to specialized services in a committed way, with effective communication, with a view to optimizing available resources and services, greater agility in referrals and, consequently, a more qualified and humanized service to women. It is up to the healthcare professionals to care for the mother-baby binomial, to carry out surveillance and monitoring, and to undertake prevention and health promotion, considering intersectoral articulation.
The Family Health Support Center (FHSC) has been mentioned by some nurses as a coping strategy in cases of violence, since the proposal of this nucleus to share health practices and knowledge broadens the scope of primary care actions in coping with violence, as well as its resolutiveness. In addition, for the nurses, the support offered by the FHSC in increasing the capacity of analysis of, and intervention in, the problems and the social and health needs of these pregnant women would fill gaps in training/qualification on the theme.

FHSC's help, too, because of the concern for the woman's mental health, and on how to carry out this pregnancy. (E10)

I would like more training and that there were professionals at FHSC so we would be able to discuss the cases. We are very lacking in guidance on the subject. (E5)

Studies point out that the strategies to deal with women in situations of VIP include the creation of bonds and the appreciation of women's speech; the institutionalization of spaces for discussion of the issue; the involvement of professionals of the FHSC referral team; and knowledge about the services that make up the network of attention to women in situations of violence and the corresponding referrals (15). These aspects corroborate the findings of the present study.
The notification of suspected and confirmed cases of violence against pregnant women is understood as a device that gives visibility to the problem in question. It denotes the State's commitment to the diagnosis of violence against women through the construction of official statistics. The notification must be made in all healthcare services, including the FHS, and it thus also becomes a support for professionals, in the sense that they have acted in the face of such a situation.
The notification, reporting and internal referrals provide protection for the professional as well, since there are other devices involved. So I would send it to the network, even though it does not work as it should, but it is a protection for the professional that it does not just stay inside the unit. (E20)
The first thing to be done is to notify, notify this case for surveillance. And look for all the support, like the FHSC, the psychological, the psychiatric appointment. Try to engage this woman in a whole network of facilitating services so that she can somehow solve it. But the main thing is the notification, and getting her into that psychosocial help network. (E23)

The records made by health professionals in BH have been examined in a study, which demonstrated that the occurrence of violence is not clearly described in the medical records: owing to a lack of data, most do not contain records of socioeconomic data or the user's life history, showing gaps in the understanding of the context in which violence occurs. The actions are focused on the physical or psychological consequences of violence on women's health (2).
Although they speak specifically about the healthcare network, referrals, and notification as possibilities for coping with violence, the nurses reveal little training focused on the topic and little knowledge of the devices that should guide their actions.
I would have to have more command of the subject to know where to refer. If you direct her to the wrong place, she keeps coming and going all the time, and it ends up discrediting the situation of the woman. (E2)

The continuity of care for pregnant women (...) runs into the lack of training and the lack of a network of services with good communication. (E13)

We are not prepared to respond, nor do we have a network of services to give us this support. (E19)
The professionals are not qualified to deal with this type of situation in the FHS. This is largely due to the absence of the topic in the academic training of nurses and other health professionals. The fear that professionals feel when acting on the topic of violence against women is notorious; because they do not perceive themselves as qualified to act in these situations, the attitude taken is often reduced to withdrawal or denial (7,17). It is necessary for professionals to move beyond impotence and to become new agents of social change, capable of giving direction to women who live in situations of violence (17).
Nurses express concern about acting in a territory where violence and drug dealing are present, and note that acting in a purposeful and co-responsible manner in confronting violence against women requires safer and more protective spaces for the health professionals themselves.
Many professionals are afraid to make the notification; there is the question of the identification, which we know is not something that needs to be identified. There is also the issue of drug dealing, of violence, with which we know that the professionals are involved, and that they have to be careful to deal with such situations, be it violence against women, the elderly or children. Be it any kind of violence, we would have to have more security as professionals. (E06)
We work in areas of social vulnerability, violence, drug dealing... This exposes our users as well as us. (E08)
Violence at work is present in all sectors; however, it is more frequent in services where women predominate, such as the health sector and social services. In general, it is more frequent in healthcare services because the worker has a very close relationship with the object of their work, namely the health needs of the patient. In this sector, violence is invisible, and any measure adopted in the formation of public policies aimed at workers' health will have repercussions on the quality of care provided to the population (18).
The professionals involved in the care of this clientele perceive the complexity and intensity of the violence experienced, which sometimes mobilizes personal issues for each of them. There is also difficulty in establishing strategies to deal with the repercussions of violence on the health and daily life of the professionals, especially given the scarcity and fragility of a support network to deal with these situations (19).
Thus, it is recognized that, in order to identify and cope with intimate partner violence against pregnant women, a support network that goes beyond the basic healthcare services is needed, so as to guarantee humanized and qualified care to those in situations of violence through the continuous training of public and community agents, through the creation of specialized services, and through the creation/strengthening of the Healthcare Network for the establishment of a network of partnerships to combat violence against women, in order to guarantee the integrality of care.
It is also reinforced that professionals working in the FHS, which occupies a strategic and privileged position for the detection of violence against pregnant women, should be able to act in this context, provided they are guaranteed resources and support (20). However, for this to become operational, there is an immediate need to develop professionals' skills and to modify the work processes in order to combat violence (7).
FINAL CONSIDERATIONS
The study showed potentialities and weaknesses in the role of nurses in identifying and coping with VIP. Regarding identification strategies, the embracement was pointed out as a tool to identify the social and/or health needs of pregnant women through attentive and sensitive listening to the suffering caused by violence. On the other hand, in an ambivalent way, the fragile relationship with the service and the professionals, the fear, the shame and the financial and emotional dependence on the aggressor appeared as factors that hinder the process of revealing the situation and its consequent identification.
The ways of coping demonstrate the nurses' need to share, with the health team, their experiences of receiving women in situations of VIP, through referrals to specialized services or even through devices such as the FHSC. The tendency to refer in a punctual way, uncoordinated with other strategic points of the network, indicates the professionals' unpreparedness in working with such demands. When and how to notify unveil the fears and insecurities of these professionals in BH.
It has been observed that the identification of and coping with VIP require specific actions, such as the introduction of a prenatal care protocol that covers the specificity of violence in this vulnerable group, and actions that are broad, involving the nurses and other FHS members, especially regarding the permanent education of the teams, with a focus on the responsibilities of each professional and of the collective, a space for listening to the insecurities and fears of the professionals, and intersectoral actions that guarantee compliance with the National Policy for Combating Violence against Women.
Aiming at the organization of nurses' actions in prenatal care, in the context of coping with violence in BH, one strategy is to include in the nursing protocol an interview script for the first and/or subsequent appointments, with questions to the pregnant women that would assist the professional in identifying and classifying the risk of exposure to violence.
It is noteworthy that this study presented the identification actions and strategies of FHS nurses in combating violence; however, there are other actors involved in the healthcare of pregnant women, such as other professionals in the health team, whose perceptions should also be evaluated in the production of knowledge, thus opening space for new studies on the subject in question. In addition, this is qualitative research, subject to the limitations of this type of study. Although the results do not allow wider generalizations, they are thought to provide an overview of the daily challenges and limitations of nurses in the context of intimate partner violence.
"year": 2017,
"sha1": "e8d347718d52a52f904058067008f14e9c44144c",
"oa_license": "CCBYNC",
"oa_url": "http://www.scielo.br/pdf/rgenf/v38n3/en_0102-6933-rgenf-38-3-e67593.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "28ea7572f9b4603ff3fb5a8a3101a644a0fdef52",
"s2fieldsofstudy": [
"Medicine",
"Political Science"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
Novel and accurate non-linear index for the automated detection of haemorrhagic brain stroke using CT images
Brain stroke is an emergency medical condition which occurs mainly due to insufficient blood flow to the brain. It results in permanent cellular-level damage. There are two main types of brain stroke: ischemic and haemorrhagic. Ischemic brain stroke is caused by a lack of blood flow, while the haemorrhagic form is due to internal bleeding. The affected part of the brain will not function properly after such an attack. Hence, early detection is important for more efficacious treatment. Computer-aided diagnosis is a type of non-invasive diagnostic tool which can help in detecting life-threatening disease in its early stage by utilizing image processing and soft computing techniques. In this paper, we have developed one such model to assess intracerebral haemorrhage by employing non-linear features combined with a probabilistic neural network classifier and computed tomography (CT) images. Our model achieved a maximum accuracy of 97.37% in discerning normal versus haemorrhagic subjects. An intracerebral haemorrhage index is also developed using only three significant features. The clinical and statistical validation of the model confirms its suitability for improved treatment planning and strategic decision-making.
Introduction
Stroke is a life-threatening cerebrovascular disease that occurs due to the abrupt termination of blood supply to the brain. As the brain tissues cannot receive oxygen and nutrients, they begin to die, which can result in substantial lifetime disability and even death [1,2]. Stroke is the second leading cause of lifetime disability as well as death worldwide [3]. Intracerebral haemorrhage (ICH) is one of the most devastating stroke subtypes, with a poor outcome and a high mortality rate within the first year. Hence, early diagnosis and management of ICH are essential for improved functional outcomes and to save patient lives [4].
Computed tomography (CT) is the modality of choice for assessment of haemorrhage due to its wide availability, lower cost, rapidity, and high sensitivity for haemorrhage [5]. In CT imagery, haemorrhagic lesions can be characterised as brighter areas compared to the surroundings. Manual segmentation of haemorrhage from CT imagery remains challenging due to uneven boundaries, overlapping pixel intensities, noise, and artefacts in CT scans. Even though manual demarcation and estimation appear accurate, they are time-consuming, heavily dependent on the expertise of clinicians, and subject to intraobserver and interobserver variability [6-9]. Furthermore, the irregularities and complexities associated with the varied shapes and sizes of haemorrhagic lesions, which evolve over time, also make the process more difficult and strenuous. Moreover, the process can become a laborious and daunting task, particularly in large clinical settings, which can introduce inadvertent error and delay [10-14]. This can cause additional morbidity and even mortality to the patient. Thus, the development of automated methods that support clinicians in making rapid, reliable, and efficient diagnoses of haemorrhage from CT images would be highly beneficial.
Computer aided diagnosis (CAD) for haemorrhage detection
CAD systems can play an important role in early diagnosis of haemorrhage. CAD offers quick and consistent results with improved accuracy, thus enabling clinicians to make better treatment planning and strategic decisions [15,16]. The triage workflow can also be streamlined and optimized by automating the routine scanning of CT images with little or no human intervention. The following subsection discusses the existing works related to detection of haemorrhage using CAD. Several semi-automated and automated methods have been developed to detect ICH from CT images. Prakash et al. [17] applied the modified distance regularized level set (MDLRSE) for the detection of ICH and intraventricular haemorrhage (IVH) in CT scans. This two-step method involves shrinking and expansion, is highly sensitive to several DRLSE parameters, and requires a long computation time. Shahangian and Pourghassem [18] presented another modified version of the DRLSE method to detect small and uncertain haemorrhages. A hierarchical classifier was further used to distinguish the segmented regions into three different types of haemorrhage. However, the control parameters require proper adjustment for accurate segmentation. Gillebert et al. [19] developed a voxel-based outlier detection procedure using a set of control images to classify infarct and haemorrhage in stroke CT slices. They reported a dice similarity index (DSI) of 0.52-0.89 for 500 CT scans. Muschelli et al. [20] proposed a fully automated method based on voxel-level probabilities to segment the haemorrhage. They obtained a higher median DSI of 0.899 using the random forest algorithm on 112 3D CT scan images. Al-Ayoob et al. [21] presented an automated segmentation method using Otsu's thresholding, morphological operations, and a region-growing technique. The method was tested on 76 CT images and achieved a 92% classification accuracy. Zhang et al. [22] applied adaptive thresholding and case-based reasoning to detect ICH and IVH in CT head scans. They achieved a segmentation accuracy of 95% for a set of ten CT images with haemorrhage sizes ranging from small to large. Kumar et al. [23] combined fuzzy c-means (FCM) clustering and DRLSE for haemorrhage segmentation using CT images. An entropy-based thresholding is applied to automatically selected FCM-clustered imagery, and DRLSE is applied to the resultant images for ICH segmentation. They reported an average sensitivity of 87.06% for 35 brain CT images. Phan et al. [24] developed another method using Hounsfield values and k-nearest neighbour to classify four types of haemorrhage. The method obtained an accuracy of 93.33% on 150 images. Gautam and Raman [25] used white matter fuzzy c-means (WMFCM) and wavelet-based thresholding to detect and localise haemorrhage in 20 CT images. Nag et al. [26] delineated haemorrhagic regions in 48 CT images by combining an autoencoder and the Chan-Vese model. Foo et al. [27] developed a CAD system using multiple thresholding and symmetry detection to identify acute ICH in 108 CT volumes.
From the above studies, it is evident that limited datasets of CT images have been utilized for ICH detection and classification. Furthermore, a few of the existing techniques involve intricate engineering steps such as skull stripping, rigid body transformation, and feature extraction based on voxel intensity and local-moment information. It can also be noted that some of the segmentation methods are time consuming and complex due to the intricacies associated with the initialisation of mathematical models, and they may also require manual intervention. In addition, some of these methods are based on hardcoded logic and presumptive rules, which limits their application to certain specific scenarios. Hence, there is a need for a quick, robust, efficient, and cost-effective automated method which can accurately detect and classify ICH for a large set of CT images. The main goal of this paper is to identify ICH very quickly in CT images using an efficient feature extraction method. The contributions of the paper are two-fold:

• Entropy-based non-linear features are efficiently extracted on a significantly large dataset to characterise haemorrhagic brain stroke using CT images.

• An ICH risk index is formulated using only three significant features. This helps to perform the categorisation using a single number.
The remainder of the paper is organized as follows: "Materials" describes the materials used and the acquisition method. The framework of the proposed assessment tool is described in "Framework of our approach". The experimental results and their analysis are discussed in "Experimental results and discussion". Finally, we conclude the paper in "Conclusion".
Materials
The CT images were collected from ICH patients who were admitted for a CT scan in the Department of Radio-diagnosis of Kalinga Institute of Medical Science (KIMS), Bhubaneswar, India. The images were retrospectively acquired on a Digital p64-Slice CT scan machine in consultation with the radiologist. Based on visual inspection by the experts, the presence or absence of lesions was considered for the image collection. A total of 1603 non-contrast head CT axial images were collected from 48 patients, of which 784 were normal and 819 showed ICH. The CT dataset was properly anonymized by an individual who was not part of the study. The CT slice which includes the largest area of haemorrhage was selected and saved in JPEG format. Figure 1 displays samples of the CT images used in the current study.
Framework of our approach
We propose a new automated technique that can classify normal and haemorrhagic CT imagery using a minimal set of highly discriminating non-linear features along with standard classifiers. The proposed brain pathology identification system comprises the following stages: (1) feature generation, (2) feature organization, and (3) classification. In the first stage, brain imagery is pre-processed and non-linear features are extracted with seven different types of entropy measures. Thereafter, features are ranked using Student's t test. Finally, the significant features are categorized using the probabilistic neural network classifier. The details of the proposed technique are explained in the following sections. The overview of the proposed technique is illustrated in Fig. 2.
Feature generation
This stage generates the features to characterise brain CT imagery. It involves pre-processing and nonlinear feature extraction stages, which are briefly explained below.
Pre-processing
The main aim of the initial pre-processing phase is to improve image quality for further processing. This phase involves the removal of noise and unwanted regions from the input images, which contributes to an overall improvement in the performance of the automated system. The following steps are applied to each input image as part of pre-processing: (a) a mask is initially created to remove the skull from the CT image [28], (b) contrast limited adaptive histogram equalisation (CLAHE) is applied to enhance the contrast and intensity of the image [29], and (c) the mask is then applied to the thresholded image to extract the brain region [28]. Figure 3 depicts the output of the pre-processing stage.
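To make the three steps concrete, the sketch below shows one possible realisation in Python with OpenCV. The threshold value, erosion kernel, and CLAHE settings are illustrative assumptions rather than the paper's exact parameters, and the function name preprocess_ct is ours.

```python
import cv2
import numpy as np

def preprocess_ct(img_gray):
    """Illustrative pre-processing: (a) skull mask, (b) CLAHE, (c) brain extraction."""
    # (a) Bone appears very bright in CT; threshold it out and keep the interior.
    #     The value 240 is an illustrative choice for 8-bit images.
    _, bone = cv2.threshold(img_gray, 240, 255, cv2.THRESH_BINARY)
    mask = cv2.bitwise_not(bone)
    mask = cv2.erode(mask, np.ones((5, 5), np.uint8), iterations=2)

    # (b) Contrast limited adaptive histogram equalisation (CLAHE).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = clahe.apply(img_gray)

    # (c) Apply the mask to retain only the brain region.
    return cv2.bitwise_and(enhanced, enhanced, mask=mask)
```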
Non-linear feature extraction
Morphological alteration in brain tissue can be captured by automated extraction of key non-linear texture-based features. In this work, textural features are extracted using statistical moments of the image gray-level histogram. Entropy is one such robust textural descriptor which can be used to accurately classify normal versus abnormal images. Entropy indicates the degree of randomness in the intensity distribution of a grayscale image. Consider an image f(x, y) with a gray-level histogram h_i, where i = 0, 1, …, L − 1 denotes the various gray levels present. Then the normalised histogram H_i of image f with dimensions M and N in the X and Y directions is given as:

H_i = h_i / (M × N).

(Fig. 1: Specimens of brain CT images used in the current study.)

Various entropy measures for textural feature extraction are as follows.

Shannon entropy (Shan. entropy) [30]: It is defined as

E_Shan = − Σ_i H_i log₂(H_i).

Rényi entropy (Ren. entropy) [30,31]: It is defined as

E_Ren = (1 / (1 − α)) log₂( Σ_i H_i^α ),

where α is the diversity index, and it is considered that α = 3.
Kapur entropy (Kap. entropy) [30,31]: It is defined as

E_Kap = (1 / (1 − α)) log₂( Σ_i H_i^(α+β−1) / Σ_i H_i^β ),

where α and β are diversity indices with α ≠ 1 and α + β − 1 > 0; here α = 0.5 and β = 0.7.

Yager entropy (Yag. entropy) [31] and Vajda entropy (Vaj. entropy) [32] are further diversity measures computed from the normalised histogram H_i; their definitions follow [31] and [32], respectively.

Maximum entropy (Max. entropy) [33]: Consider pairs (i, j), where i is the gray level of a pixel and j is the average gray level of its neighborhood in the image f(x, y), and let p(i, j) be the corresponding gray-level probability. Supposing that the threshold of maximum entropy is set at the gray-level pair [s, t], the target segmented areas will consist of pairs (i₁, j₁), where i₁ = 0, 1, …, s and j₁ = 0, 1, …, t, and the background areas will have pairs (i₂, j₂), where i₂ = s + 1, …, L − 1 and j₂ = t + 1, …, L − 1. The total entropy H(s, t) is the sum of the entropies of these two regions, and the pair (s*, t*) with the highest entropy is selected as the optimal threshold [33], such that

(s*, t*) = arg max over (s, t) of H(s, t).

Log energy entropy: It is defined as

E_LE = Σ_i log₂( H_i² ).
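As a worked example, the snippet below computes the normalised histogram and three of the entropies defined above. The Shannon and Rényi forms are the standard definitions; the Kapur form follows the constraints stated in the text; the function names are ours, not the paper's.

```python
import numpy as np

def normalized_histogram(img, levels=256):
    # H_i = h_i / (M * N) for an M x N grayscale image with L = levels gray levels.
    h, _ = np.histogram(img, bins=levels, range=(0, levels))
    return h / img.size

def shannon_entropy(H):
    p = H[H > 0]                     # convention: 0 * log(0) = 0
    return -np.sum(p * np.log2(p))

def renyi_entropy(H, alpha=3.0):
    p = H[H > 0]
    return np.log2(np.sum(p ** alpha)) / (1.0 - alpha)

def kapur_entropy(H, alpha=0.5, beta=0.7):
    # Order-alpha, type-beta entropy; requires alpha != 1 and alpha + beta - 1 > 0.
    p = H[H > 0]
    return np.log2(np.sum(p ** (alpha + beta - 1.0)) / np.sum(p ** beta)) / (1.0 - alpha)
```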
Feature organization
Student's t test is used for optimal feature selection. It measures the statistical difference between two sets, computing a ratio between the difference of the two class means and the variability of the two classes [34]. The t value indicates the significance of the difference between the two groups, and a significant p value denotes that the sample data do not occur by chance. The t value and p value are calculated for the features of both classes, and the features are ranked by selecting higher t values with correspondingly lower p values [35].

(Fig. 3: Pre-processed brain CT images.)
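A minimal sketch of this ranking step, assuming the features for the two classes are stacked in (samples × features) matrices; rank_features is our name, not the paper's:

```python
import numpy as np
from scipy.stats import ttest_ind

def rank_features(X_normal, X_ich):
    """Rank features by Student's t test: higher |t| (lower p) ranks first."""
    t_vals, p_vals = ttest_ind(X_normal, X_ich, axis=0)
    order = np.argsort(-np.abs(t_vals))   # descending |t| value
    return order, t_vals[order], p_vals[order]
```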
Classification
To classify normal versus ICH images, we have used three conventional classifiers, namely k-nearest neighbour (k-NN), probabilistic neural network (PNN) and support vector machine (SVM), to evaluate feature performance.
k-Nearest neighbour
k-NN is a supervised learning method which assigns a class label to an unknown sample based on its resemblance to known samples [36]. The k nearest points in proximity to the unknown data are evaluated, and the most common class among the k nearest neighbours becomes the class label for the unknown data [37,38]. The distance criterion used to measure the resemblance of new test data to the known data is the Euclidean distance. The set of samples with accurate classification should be selected as neighbors to predict the class for the new test data [38].
Probabilistic neural network
PNN is a multiclass classifier which uses a kernel discriminant analysis algorithm to classify the new test data into one of the various classes [39]. This multi-layered feedforward neural network architecture consists of four layers, namely the input layer, pattern layer, summation layer, and output layer [40].
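The paper does not give an implementation, but a PNN of this four-layer form reduces to Parzen-window density estimation per class. A minimal NumPy sketch follows; the smoothing parameter sigma = 0.03 matches the value quoted in the results section, while feature scaling and the function name are our assumptions.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, sigma=0.03):
    """Minimal probabilistic neural network (Parzen kernel classifier).

    Pattern layer: one Gaussian unit per training sample.
    Summation layer: mean activation within each class.
    Output layer: class with the largest summed activation.
    """
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)    # squared distances
        act = np.exp(-d2 / (2.0 * sigma ** 2))     # pattern-layer outputs
        scores = [act[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)
```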
Support vector machine
This supervised classifier aims to determine the hyperplane which maximises the margin between the different input classes, viewed in n-dimensional space [41]. The hyperplane that forms the maximum separation between the two input classes is selected for classification. SVM was initially designed for two-class problems and was later extended to address multi-class problems. The radial basis function (RBF) and polynomial kernels are the most frequently used kernel functions utilized by the classifier to solve non-linear boundary problems [42].
ICH indexing
Indexing has been employed by several research groups to demonstrate the significance of generated features. As a result, a single number is composed to discriminate healthy versus pathological images [43-51]. Hence, we have developed a novel ICH index, which is formulated using only three clinically significant features: F₁ (Max. entropy), F₂ (Yag. entropy), and F₃ (Kap. entropy). The index equation was developed with the assistance of mathematical simulation: all the constant values, and the equation itself, were formulated empirically to obtain maximum separation between the normal and ICH classes using a single index value.
Experimental results and discussion
The proposed model is tested on a set of 1603 axial CT images (normal: 784, ICH: 819). Initially, the brain tissue was separated from non-brain tissue (such as skull, fat, etc.) in all the images by thresholding in the pre-processing stage, to obtain better classification results. Then the entropies of each contrast-enhanced image are calculated, giving seven entropy features in total. A complete feature set for these brain CT images is thus generated, and the significance is tested using t values, as shown in Fig. 4. The classification performance is evaluated using true positive and negative (TP and TN), and false positive and negative (FP and FN) statistical metrics. For generality, tenfold cross-validation is used, and the average value over the ten folds is taken to assess system performance. k-NN achieved an accuracy of 97.06% for k = 5, PNN achieved an accuracy of 97.37% for σ = 0.03, and SVM with an RBF (radial basis function) kernel achieved an accuracy of 97%, in each case using only six features. Table 1 shows the maximum results obtained for the various classifiers using only six features. The complete algorithm is executed in the MATLAB environment on a standalone personal computer.
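The evaluation protocol can be reproduced along the following lines with scikit-learn (the paper itself works in MATLAB). The standardisation step is our assumption; k = 5 and the RBF kernel match the settings reported above.

```python
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def evaluate(X, y):
    """Tenfold cross-validated accuracy; X holds the six ranked entropy features."""
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, clf in [("k-NN (k=5)", KNeighborsClassifier(n_neighbors=5)),
                      ("SVM (RBF)", SVC(kernel="rbf"))]:
        pipe = make_pipeline(StandardScaler(), clf)
        scores = cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")
        print(f"{name}: {scores.mean():.4f} +/- {scores.std():.4f}")
```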
Discussion
The main goal of this study was to develop a quick assessment tool for ICH estimation with CT brain imagery. In the current study, the identification of ICH is explored, with an accuracy of 97.37%. It has been observed that entropies are used efficiently to analyse medical data, i.e., 1D and 2D signals [52-55]. Hence, seven entropies were used in our study. These entropies were able to capture the sudden changes in the pixel distribution, quantifying these variations so as to suitably discern the presence of intracerebral haemorrhage in brain CT imagery. It is observed that the system achieved maximum performance using only six features. Table 2 shows the means and standard deviations (SD) of these features.
It is noted that the first six features are statistically significant with a p value < 0.05; these features are arranged in descending order by t value. These entropies powerfully distinguish brain CT imagery, as can be seen in the box plot shown in Fig. 5. It is noted from Fig. 5 that the ranges of the distributions of the entropy features characterising brain CT imagery differ. Thus, the features provide clear structural information about the imagery and play a key role in performing accurate classification. Therefore, the combination of all six features produces a remarkable performance, shown in Fig. 6. As observed in Table 1, PNN clearly distinguishes normal versus ICH imagery efficiently, with an accuracy of 97.37%, a sensitivity of 96.94%, and a specificity of 97.83%. k-NN and SVM classify the normal versus abnormal groups with sensitivities of 97.43% and 96.21%, respectively. It is also observed that the system reached a PPV of 97.90%. Figure 6 reveals that the performance of the proposed algorithm increases monotonically with the number of features incorporated. To the best of our knowledge, this is the first method which deploys a minimal set of highly discriminative entropy-based non-linear features to classify a large dataset of 1603 CT images. The sensitivity of the system shows that it is robust and can strongly support doctors' decisions. Hence, we expect such tools to be incorporated in hospital settings for improved interpretation.
To perform improved discrimination between normal and ICH classes, we have introduced an ICH index using Max. entropy, Yag. entropy, and Vaj. entropy. Figure 7 shows the plot of the ICH index for the two classes. It can be observed from Fig. 7 that the range of ICH index values provides a clear distinction between normal versus ICH classes, and thus helps to provide quick and accurate results using a single quantity. Hence, we conclude as follows. From the results, it is evident that the false positive and false negative counts are 17 and 25 samples, respectively. It is noted that the specificity approaches 97.83%, which reduces doctor workload by approximately 50%. Furthermore, only three features are employed to develop the ICH risk index. It is noted that for ICH the mean values of Max. entropy and Kap. entropy are high when compared to normal brain images, which helps to formulate a unique equation to enhance the level of discernment between the two classes. Furthermore, the proposed methodology can be used to analyse medical images of various modalities. In the future, we would like to develop a generalised risk index using a greater number of images.

(Fig. 6: Feature-by-feature performance using the PNN classifier.)
Several studies have designed CAD tools for the assessment of ICH using brain CT imagery [56-58]. A summary of these methods is shown in Table 3. Most of the approaches are segmentation-based, whereas the proposed approach is feature-based; hence, it avoids localization of the region of interest. Our proposed method requires minimal features to model a large set of data. After validating the algorithm with a greater number of subjects, the proposed method can be used as an assistive tool for clinicians in hospital and polyclinic settings. In the future, we intend to extend this work by incorporating deep learning models with an even larger data set, as deep learning models outperform conventional machine-learning models [59-62]. Figure 8 shows the proposed architecture for future digital healthcare using the Internet of Things (IoT) for detecting ICH, wherein patients would receive an instant diagnostic result on their mobile phones through cloud-based services. This would reduce the overall turnaround time for diagnosis and decision-making.
To summarize the advantages of the proposed model:

• It is a feature-based method requiring a minimal number of features, and can be implemented in systems with limited resources. It avoids the localization of clinical features in brain CT images.

• The estimation is sped up using ICH indexing, which helps to categorise brain CT imagery with a single value.

(Fig. 7: Error plot of ICH indexing for normal and ICH categories.)
Conclusion
We have developed a fully automated system for the early diagnosis and management of intracerebral haemorrhage in CT imagery. The technique, which consists of the extraction of a minimal set of entropy features and supervised learning methods (k-NN, PNN and SVM), can clearly discern normal versus ICH imagery. Four evaluation metrics were used to assess system performance, and the PNN was found to classify CT imagery with an accuracy of 97.37%. This shows that our approach can perform better than existing state-of-the-art methods. Furthermore, the developed ICH index offers a swift, cost-effective, and highly accurate solution for the identification of ICH lesions. Moreover, this method simplifies the entire diagnostic procedure and enables doctors to analyse a large number of CT scans in quick succession with highly accurate and consistent results. We would like to extend this work by incorporating more subjects, as well as by including additional features and deeper convolutional neural network techniques.
Funding
The authors received no financial support for the research.
Compliance with ethical standards
Conflict of interest None of the authors have any conflict of interest.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2021,
"sha1": "f3332d8c013ba8e7007505fa2807bbf7d9ae9ae6",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s40747-020-00257-x.pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d1a0565479c1f940c79952101488f14f4451bd3e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Measurement of Small Scale Structure in the 2.2 ≤ z ≤ 4.2 Lyman-α Forest
The amplitude of fluctuations in the Ly-α forest on small spatial scales is sensitive to the temperature of the IGM and its spatial fluctuations. The temperature of the IGM and its spatial variations contain important information about hydrogen and helium reionization. We present a new measurement of the small-scale structure in the Ly-α forest from 40 high resolution, high signal-to-noise VLT spectra at z = 2.2-4.2. We convolve each Ly-α forest spectrum with a suitably chosen wavelet filter, which allows us to extract the amount of small-scale structure in the forest as a function of position across each spectrum. We compare these measurements with high resolution hydrodynamic simulations of the Ly-α forest which track more than 2 billion particles. This comparison suggests that the IGM temperature close to the cosmic mean density (T_0) peaks near z = 3.4, at which point it is greater than 20,000 K at 2-sigma confidence. The temperature at lower redshift is consistent with the fall-off expected from adiabatic cooling ($T_0 \propto (1+z)^2$) after the peak temperature is reached near z = 3.4. At z = 4.2 our results favor a temperature of T_0 = 15-20,000 K. However, owing mostly to uncertainties in the mean transmitted flux at this redshift, a cooler IGM model with T_0 = 10,000 K is disfavored at only the 2-sigma level here, although such cool IGM models are strongly discrepant with the z ~ 3-3.4 measurement. We do not detect large spatial fluctuations in the IGM temperature at any redshift covered by our data set. The simplest interpretation of our measurements is that HeII reionization completes sometime near z ~ 3.4, although statistical uncertainties are still large [Abridged].
INTRODUCTION
A key characteristic in our description of the baryonic matter in the Universe is the thermal state of the gas in the intergalactic medium (IGM). As such, detailed constraints on the temperature of the gas in the IGM, its spatial variation, density dependence, and redshift evolution are of fundamental importance to observational cosmology. During the Epoch of Reionization (EoR), essentially the entire volume of the IGM becomes filled with hot ionized gas. The thermal state of the IGM subsequently retains some memory of when and how the intergalactic gas was ionized (Miralda-Escude & Rees 1994, Hui & Gnedin 1997), owing to the long cooling times for this low density gas. Measurements of the thermal history of the IGM hence translate into valuable constraints on the reionization history of the Universe (e.g. Theuns et al. 2002a, Hui & Haiman 2003). Current observations suggest that there may in fact be two separate EoRs: an early Epoch of Hydrogen Reionization during which hydrogen is ionized, and helium is singly-ionized, by star-forming galaxies, followed by a later Epoch of Helium Reionization during which helium is doubly ionized by bright quasars (e.g. Madau et al. 1999). Recent measurements of the quasar luminosity function (Hopkins et al. 2007), combined with estimates of the quasar spectral shape and the clumpiness of the IGM, suggest that HeII reionization may complete somewhere near z ∼ 3 (Furlanetto & Oh 2008, Faucher-Giguère et al. 2008a, McQuinn et al. 2008). Indeed, there are some observational indications that helium is doubly-ionized close to z ∼ 3 (see e.g., Schaye et al. 2000, Furlanetto & Oh 2008, Faucher-Giguère et al. 2008a, McQuinn et al. 2008 for a discussion), although the evidence is generally weak and controversial.
Further detailed studies of the HI Ly-α forest near z ∼ 3 offer promise to pin-point when HeII reionization occurs and can potentially constrain properties of HeII reionization, such as the filling factor and size distribution of HeIII regions at different stages of reionization. Photoheating during HeII reionization impacts the thermal state of the IGM (e.g., Miralda-Escude & Rees 1994, Abel & Haehnelt 1999, McQuinn et al. 2008), and in turn influences the statistics of the HI Ly-α forest. In the midst of HeII reionization, the temperature of the IGM should be inhomogeneous (e.g. McQuinn et al. 2008): there are hot regions where HeII recently reionized, and cooler regions where helium is only singly-ionized. Additionally, regions reionized by nearby sources will typically be cooler than regions reionized by far away sources. Regions reionized by distant sources receive a heavily filtered and hardened spectrum, and experience more photoheating than gas elements that are close to an ionizing source. The average temperature, as well as the amplitude of temperature fluctuations and the scale dependence of these fluctuations, are hence closely related to the filling factor and size distribution of HeIII regions during reionization. Detailed studies of the HI Ly-α forest may allow us to detect these temperature inhomogeneities, and thereby constrain details of HeII reionization with existing data. In principle, additional processes including heating by large scale structure shocks, heating from galactic winds, cosmic-ray heating, Compton-heating from the hard X-ray background, photo-electric heating from dust grains, or even heat injection from annihilating or decaying dark matter, may also impact the temperature of the IGM (see e.g. Hui & Haiman 2003 for references and a discussion). Sufficiently detailed constraints should help determine the relative importance of photo-heating and these additional effects.
The aim of the present paper is to make a new measurement of small-scale structure in the Ly-α forest, which can be used to constrain the thermal history of the IGM, and to search for signatures of HeII reionization in the HI Ly-α forest. There have been several previous measurements of the thermal history from the Ly-α forest (Schaye et al. 2000, Ricotti et al. 2000, McDonald et al. 2001, Zaldarriaga et al. 2001, Theuns et al. 2002b, Zaldarriaga 2002). However, the agreement between these studies is somewhat marginal, and the different authors reach differing conclusions regarding the thermal history of the IGM. Note that it has been almost a decade since many of these measurements were made. In the meantime, better Ly-α forest data sets have become available, and we now have better numerical simulations to help interpret and calibrate the observational measurements. It is hence timely to revisit these issues. Of particular interest from the theoretical side is the work of McQuinn et al. (2008), who performed the first detailed, three-dimensional radiative transfer simulations of HeII reionization which self-consistently track the thermal state of the IGM during HeII reionization (see also Paschos et al. 2007). Recent analytic (Furlanetto & Oh 2008) and one-dimensional radiative transfer calculations (Tittley & Meiksin 2007) are also refining our understanding of HeII reionization. In this paper we use improved observational data, along with a somewhat refined methodology, to make a new measurement of small-scale structure in the Ly-α forest. We also make a first comparison of the results with high resolution hydrodynamic simulations of the forest, in order to explore broad implications of our measurements for the thermal history of the IGM. In future work, we will use HeII reionization simulations to obtain more detailed constraints.
The small-scale power in the Ly-α forest is very sensitive to the temperature of the IGM (e.g. Zaldarriaga et al. 2001): a hotter IGM leads to more Doppler broadening, and Jeans-smoothing, which in turn leads to less small-scale structure in the Ly-α forest. The amplitude of the transmission power spectrum on small-scales hence provides an IGM thermometer. In addition to the average temperature, we aim to measure or constrain temperature inhomogeneities, i.e., we would like to be sensitive to variations in the small-scale power across each quasar spectrum. In order to accomplish this, we convolve each spectrum with a filter that is localized in both Fourier space and configuration space, i.e., a 'wavelet' filter. For a suitable choice of smoothing scale, this provides a measurement of the IGM temperature as a function of position across each quasar spectrum. Although our basic method closely resembles that of Theuns & Zaroubi (2000) and Zaldarriaga (2002), there are some differences in the details of our implementation. For instance, we employ a different filter than these authors.
The outline of this paper is as follows. In §2 we detail our methodology for constraining the thermal history of the IGM. In §3, we describe the data set used in our analysis, and present measurements. §4 focuses on the theoretical interpretation of the measurements. Here we describe cosmological simulations which we compare with the observations, present preliminary constraints on the thermal history of the IGM, comment on the implications for our understanding of the reionization history of the Universe, and compare with previous measurements. §5 discusses cross-correlating temperature measurements from the HI Ly-α forest with HeII Ly-α forest spectra. In §6 we conclude, mentioning plans and possibilities for related future work. Several appendices explore shot-noise bias, metal line contamination, and the convergence of our numerical simulations.
METHODOLOGY
In this section, we present our method for constraining the temperature of the IGM, and illustrate its utility with cosmological simulations. First, we introduce some notation and briefly mention a few relevant facts regarding the thermal history of the IGM and the Ly-α forest.
2.1. The thermal history of the IGM and the Ly-α forest

After a low-density gas element is photo-heated during reionization, it will subsequently cool, and gas elements with similar photo-heating histories generally land on a 'temperature-density relation' (Hui & Gnedin 1997):

T(r) = T_0 [1 + δ_ρ(r)]^(γ−1).     (1)

Here δ_ρ(r) denotes the fractional gas over-density (implicitly smoothed on the Jeans scale) at spatial position r. T_0 is the temperature of a gas element at the cosmic mean density, and the power-law index γ approximates the density-dependence of the temperature field. The temperature that a gas element reaches at, say, z = 3 depends on the temperature that it reaches during reionization, and on its subsequent cooling and heating. The temperature attained by each gas element during reionization depends mostly on the shape of the spectrum of the sources that ionize it. The relevant spectrum is generally modified from the intrinsic spectral shape of an ionizing source, owing to intervening material between a source and the gas element in question, which tends to harden the ionizing spectrum. After a gas element is photoheated during reionization, adiabatic cooling owing to the expansion of the Universe is the dominant cooling mechanism (for the bulk of the low density gas that makes up the Ly-α forest; Compton cooling off of the CMB is efficient only at higher redshifts than considered here, since the Compton cooling time for gas at the cosmic mean density equals the age of the Universe at z = 6, and gas reionized sufficiently before this redshift will lose memory of its initial temperature, i.e., its temperature at reionization, by z ≤ 6, Hui & Gnedin 1997). When a gas element is significantly ionized during reionization it reaches photoionization equilibrium and receives only a small amount of additional photoheating as low levels of residual neutral material are ionized. During reionization, gas elements gain heat as hydrogen is ionized, as helium is singly ionized, and when helium is doubly ionized. If helium is doubly-ionized significantly after hydrogen is ionized, two separate 'reionization events' may be important in determining the thermal history of the IGM. As both hydrogen and helium reionization are extended, inhomogeneous processes, T_0 and γ may be strong functions of spatial position following reionization events. However, once a sufficiently long time passes after reionization, gas elements reach a 'thermal asymptote' and lose memory of the initial photoheating during reionization (Hui & Gnedin 1997). At this point the inhomogeneities in T_0 and γ should again be small. In the absence of HeII photoheating, one expects the temperature of the IGM at z ∼ 3 to be T_0 ≲ 10,000 K, with the precise temperature depending on the timing of hydrogen reionization and the nature of the ionizing sources (Hui & Haiman 2003). Sufficiently long after a reionization event, the slope of the temperature-density relation, γ, tends to γ ∼ 1.6, owing to the competition between adiabatic cooling and residual photoionization heating (Hui & Gnedin 1997). HeII reionization likely raises the temperature of the IGM by roughly 10,000 K, with the precise increase depending on the spectrum of the ionizing sources and other factors. HeII photoheating and the spread in timing of HeII reionization flatten the temperature-density relation to γ ∼ 1.3 (McQuinn et al. 2008).
The temperature of the IGM has three separate effects on Ly-α forest spectra. First, increasing the temperature of the absorbing gas increases the amount of Doppler broadening: thermal motions spread the absorption of a gas element out over a length (in velocity units) of b = √(2kT/m_p) ∼ 13 km/s for T = 10⁴ K gas. Second, the gas pressure and Jeans smoothing scale increase with increasing temperature. Since it takes some time for the gas to move around and the gas pressure to adjust to prior heating, this effect is sensitive not to the instantaneous temperature, but to prior heating (Gnedin & Hui 1998). This effect is more challenging for simulators to capture, because properly accounting for it requires re-running entire simulations after adjusting the simulated ionization/reheating history. The Jeans smoothing effect is not completely degenerate, however, with the Doppler broadening one, because Jeans-smoothing smooths the gas distribution in three dimensions, while Doppler broadening smooths the optical depth in one dimension (Zaldarriaga et al. 2001). Finally, the recombination coefficient is temperature dependent, scaling as T^(−0.7): hotter gas recombines more slowly, and reaches a lower neutral fraction than cooler gas.
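For reference, here is a short numerical sketch of the two relations used repeatedly below, the temperature-density relation of Equation (1) and the thermal Doppler width; the function names are ours:

```python
import numpy as np

K_B = 1.380649e-23    # Boltzmann constant [J/K]
M_P = 1.6726219e-27   # proton mass [kg]

def doppler_b(T):
    """Thermal Doppler parameter b = sqrt(2 k T / m_p), returned in km/s."""
    return np.sqrt(2.0 * K_B * T / M_P) / 1.0e3

def temperature(delta, T0, gamma):
    """Temperature-density relation T = T0 * (1 + delta)**(gamma - 1)."""
    return T0 * (1.0 + delta) ** (gamma - 1.0)

print(doppler_b(1.0e4))              # ~12.9 km/s for 10^4 K gas
print(temperature(0.5, 2.0e4, 1.3))  # mildly overdense gas in a 'hot' model
```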
The first two of these effects mostly impact the amplitude of small-scale fluctuations in the Ly-α forest (e.g. Zaldarriaga et al. 2001). For the range of models we are interested in presently, the first effect (Doppler broadening) should be the dominant influence on the small-scale power. At a given redshift, the small-scale structure in the Ly-α forest is most sensitive to the temperature of absorbing gas at some characteristic density, with less dense gas giving very little absorption and more dense gas giving rise to mostly saturated absorption. At z ∼ 3 the forest is sensitive mostly to the temperature of gas a little more dense than the cosmic mean (McDonald et al. 2001, Zaldarriaga et al. 2001). At higher redshifts, the absorption is sensitive to the temperature of somewhat less dense gas, while at lower redshifts the absorption depends on more dense gas (Davé et al. 1999).
2.2. Data Filtering and Constraining the Temperature

Next we describe our method for constraining T(r) (Equation 1) from absorption spectra. Following earlier work (Theuns & Zaroubi 2000, Zaldarriaga 2002, Theuns et al. 2002b), we convolve Ly-α transmission spectra with a filter that pulls out high-k modes across each spectrum. As mentioned above, Doppler broadening convolves the optical depth field with a Gaussian filter with a (temperature-dependent) width of tens of km/s. We hence desire a filter that extracts Fourier modes with wavelengths of tens of km/s across each spectrum.
We have found that a very simple choice of filter accomplishes this task. In configuration space, the filter we use may be written as

Ψ_n(x) = A exp(ik_0 x) exp(−x²/(2s_n²)).     (2)

We fix the normalization, A, by requiring the filter to have unit power, i.e., after filtering a white-noise field with noise power spectrum P_N(k) = ∆u σ², the filtered field has variance σ². (∆u denotes the size of a spectral pixel in velocity units; the variance is σ² = ∫ dk/(2π) |Ψ_n(k)|² P_N(k) for our Fourier convention.) With this normalization, the filter's Fourier transform in k-space is

Ψ_n(k) = √(2π) A s_n exp(−s_n²(k − k_0)²/2).     (3)

In configuration space this filter is simply a plane-wave, damped by a Gaussian. In Fourier space, the filter is a Gaussian centered around k = k_0. We would like the filter to have zero mean. Throughout this work we choose k_0 s_n = 6, in which case Equation 3 shows that the zero mode of the filter, Ψ_n(k = 0), is extremely close to zero, closely satisfying the zero mean requirement. This filter clearly has the properties of being localized in both configuration space and Fourier space. These are among the defining properties of a 'wavelet filter', and the filter of Equations (2) and (3) is known as a 'Morlet wavelet' in the wavelet literature (see, e.g., http://en.wikipedia.org/wiki/Wavelets and references therein). We plot its form in Figure 1 for s_n = 34.9 km/s, which, as we discuss further below, turns out to be one convenient choice. Note that the filters Ψ_n (Equation 2) do not form an orthogonal set, but this is unnecessary for our present purposes. We do not expand the entire spectrum in terms of a wavelet basis in this work; the Morlet wavelet, with locality in both configuration space and Fourier space, is simply a convenient filter.
We then convolve each observed (or simulated) spectrum with the above filter. In this paper, we consider throughout the fractional Ly-α transmission field, δ_F = (F − ⟨F⟩)/⟨F⟩. Here F = e^{−τ} is the Ly-α transmission, and ⟨F⟩ is the global average Ly-α transmission. We label the flux field, δ_F, convolved with the filter Ψ_n as a_n:

a_n(x) = ∫ dx′ δ_F(x′) Ψ_n(x − x′),   (4)

and compute the convolution using Fast Fourier Transforms (FFTs). Note that a_n(x) is a complex number for our choice of filter, Ψ_n(x). A measure of small-scale power is then

A(x) ≡ |a_n(x)|^2,   (5)

which for brevity of notation we sometimes refer to as 'the wavelet-filtered field' or as 'the wavelet amplitudes' (even though it is proportional to the transmission field squared). It is also useful to note that the average wavelet amplitude is just

⟨A⟩ = ∫_{−∞}^{∞} dk/(2π) |Ψ_n(k)|^2 P_F(k),   (6)

with P_F(k) denoting the power spectrum of δ_F. Hence, the mean wavelet amplitude is nothing more than the usual flux power spectrum for some convenient 'band' of wavenumbers (see Figure 5 for further illustration). Additional statistics of A(x), beyond the mean, characterize the spatial variations in the small-scale transmission power. We frequently find it convenient to smooth A(x) using a top-hat filter of smoothing length L:

A_L(x) = (1/L) ∫ dx′ A(x′) Θ(|x − x′|; L/2).   (7)

Here Θ(|x − x′|; L/2) = 1 for |x − x′| ≤ L/2 and is zero otherwise. Smoothing the wavelet-filtered field is desirable since the small-scale power is not a perfect indicator of the local temperature, and smoothing reduces the noisy excursions that the wavelet amplitudes can take. Since the hot regions are expected to be rather large during HeII reionization (McQuinn et al. 2008), we can smooth considerably without diluting any temperature inhomogeneities. We generally adopt L = 1,000 km/s, corresponding to roughly ∼ 10 co-moving Mpc/h at z = 3. We discuss this choice further below.

Fig. 2.- A simulated spectrum, with some portions of the spectrum drawn from a simulated 'hot' model with T_0 = 2 × 10^4 K and γ = 1.3, and other regions drawn from a 'cold' model with T_0 = 1 × 10^4 K and γ = 1.3. The hot and cold regions are alternating and are each of length 20 co-moving Mpc/h (2,230 km/s). Bottom panel: The red dashed lines and the tick marks on the right hand side of the panel indicate the temperature of the corresponding regions in the upper panel. The solid blue line shows the wavelet amplitudes (for s_n = 34.9 km/s), top-hat filtered with a L = 1,000 km/s filter. The smoothed wavelet amplitudes are a good tracer of the temperature of each region.
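In code, the full pipeline of Equations (4)-(7) amounts to two FFT convolutions. The following Python sketch is our own illustration, not the authors' pipeline; it reuses the hypothetical morlet_filter helper from the previous sketch, with a toy random input in place of a real spectrum:

```python
import numpy as np

def wavelet_amplitudes(delta_F, du, s_n, L):
    """Eqs. (4)-(7): convolve delta_F with the Morlet filter via FFT,
    square to get A(x), then top-hat smooth over a length L [km/s]."""
    n_pix = delta_F.size
    psi = morlet_filter(n_pix, du, s_n)
    # Eq. (4): a_n(x), computed with FFTs (circular convolution).
    a_n = du * np.fft.ifft(np.fft.fft(delta_F) * np.fft.fft(np.fft.ifftshift(psi)))
    A = np.abs(a_n) ** 2                       # Eq. (5)
    # Eq. (7): top-hat kernel Theta(|x - x'|; L/2) / L.
    x = (np.arange(n_pix) - n_pix // 2) * du
    theta = (np.abs(x) <= L / 2.0).astype(float) / L
    A_L = du * np.fft.ifft(np.fft.fft(A) * np.fft.fft(np.fft.ifftshift(theta))).real
    return A, A_L

# Toy usage: a random 'spectrum' with 4.4 km/s pixels.
rng = np.random.default_rng(2)
delta_F = rng.normal(0.0, 0.3, 2 ** 14)
A, A_L = wavelet_amplitudes(delta_F, du=4.4, s_n=34.9, L=1000.0)
print(A.mean(), A_L.mean())                    # equal up to FFT wrap-around
```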
Since thermal broadening smooths the optical depth field on scales of tens of km/s, A_L(x) should be a good tracer of the temperature for suitable choices of s_n. In order to illustrate this concretely, we apply the filter to a simulated spectrum from a simple toy inhomogeneous temperature model, following a similar example from Theuns & Zaroubi (2000). Specifically, we splice together simulated lines of sight (see §4.1) with alternating portions of spectrum drawn from each of a 'hot' temperature model with T_0 = 2 × 10^4 K and γ = 1.3, and a 'cold' temperature model with T_0 = 1 × 10^4 K and γ = 1.3. We refer the reader to §4.1 and §6 for details regarding the simulated spectra. If the wavelet-filtered field provides a good indicator of the temperature, regions with hot temperatures should tend to produce low wavelet amplitudes, while the cold regions should produce high wavelet amplitudes. The results of this test are shown in Figure 2, for smoothing scales of s_n = 34.9 km/s and L = 1,000 km/s. Cold regions tend to contain several narrow lines, and produce a large response after filtering: the regions near Δv = 6,000 km/s and 15,000 km/s have A_L ≳ 0.02. The hot regions typically have A_L ≲ 0.005 and never reach the large amplitudes found in the cold regions. There is some variance in the wavelet amplitude from region to region - for example, A_L is not as large in the cold region near Δv = 10,000 km/s as it is at Δv = 6,000 km/s and 15,000 km/s. Nonetheless, the smoothed wavelet amplitude is a fairly good tracer of the underlying temperature field.

Fig. 3.- PDF of the wavelet amplitudes for different models at z = 3 and s_n = 34.9 km/s. The curves show simulated models for the PDF of the wavelet amplitudes, top-hat smoothed over L = 1,000 km/s, for several temperature-density relations. The mean transmitted flux is fixed in this comparison. The black solid and red dashed curves correspond very roughly to temperature-density relations expected just after HeII reionization. The blue short-dashed and green long-dashed curves, on the other hand, loosely correspond to the temperature-density relation expected when HI and HeII are both reionized much before z = 3.
In order to quantify this further, we calculate the probability distribution function (PDF) of the smoothed wavelet amplitudes. We do this for the two choices of small-scale smoothing adopted in this paper (see §2.3): s_n = 34.9 km/s, and twice this, s_n = 69.7 km/s. The PDF of smoothed wavelet amplitudes will be the main statistic we consider in the present paper. For now, we examine models with homogeneous temperature-density relations. The models we select for the temperature-density relation loosely correspond respectively to what one expects right after HeII reionization (T_0 ∼ 20−25,000 K and γ = 1.3) (McQuinn et al. 2008), and to what one expects if HI, HeI, and HeII are all ionized much before z ∼ 3 (T_0 ∼ 7,500−10,000 K and γ = 1.6) (Hui & Haiman 2003). The latter, cooler model might be expected if, for example, the IGM is reionized by abundant faint quasars which have sufficiently hard spectra to doubly ionize helium at the same time they reionize hydrogen, or if high redshift galaxies have a surprisingly hard spectrum and can doubly ionize helium themselves. Note that the precise z ∼ 3 temperature in the early reionization models is determined by residual photoheating and depends on the reprocessed spectra of the post-reionization ionizing sources (Hui & Haiman 2003).
The PDFs in these models are shown for two choices of small-scale smoothing in Figure 3 (s_n = 34.9 km/s), and Figure 4 (s_n = 69.7 km/s). A larger range of models will be examined in §4. Considering first the smaller smoothing scale (Figure 3), one sees that the peak of the PDF in the T_0 = 20,000 K, γ = 1.3 model is reached at a smoothed wavelet amplitude that is roughly a factor of 2 smaller than the peak location in the T_0 = 10,000 K, γ = 1.6 model. The PDFs in the hotter T_0 ∼ 25,000 K model and the colder T_0 = 7,500 K model differ by even more. In the midst of HeII reionization, one expects an inhomogeneous temperature field and the true temperature-density relation may be a mix of the models shown here. At any rate, the wavelet PDFs differ significantly between the models with T_0 ∼ 20,000 K and those with cooler temperatures. This further demonstrates - beyond the visual inspection of Figure 2 - that the wavelet PDF is a useful statistic for constraining the thermal history and HeII reionization. The typical wavelet amplitude in each model is significantly larger at s_n = 69.7 km/s (Figure 4), a consequence of the roughly exponential fall-off in flux power towards high k (Zaldarriaga et al. 2001). The PDFs still vary significantly with temperature-density relation at this larger smoothing scale, although the sensitivity is a little bit reduced.
2.3. Smoothing Scales
Before we move on to analyze observational data, let us consider further the two smoothing scales, s_n and L, in our calculations. We make measurements for two choices of small-scale smoothing: s_n = 34.9 km/s and s_n = 69.7 km/s.[10] For the former choice of smoothing scale |Ψ_n(k)|^2 is proportional to a Gaussian centered on k_0 = 6/s_n = 0.17 s/km, with width σ_k = √2/s_n = 0.04 s/km. The latter choice of smoothing scale centers the Gaussian on k_0 = 6/s_n = 0.086 s/km, with a width of σ_k = √2/s_n = 0.02 s/km. The range of scales probed by these filters is shown in comparison to simulated flux power spectra in Figure 5. As illustrated in Figures 3, 4, and 5, the wavelet PDFs are slightly less sensitive to the IGM temperature for the larger smoothing scale filter. On the other hand, the results at the larger smoothing scale are less sensitive to metal line contamination and other systematics. Increasing the smoothing by still another factor of two would almost completely remove the sensitivity to temperature (see Figure 5). Decreasing s_n by an additional factor of two (to s_n = 17.4 km/s) increases the fractional difference between model curves, but brings one very far out on the exponential tail of the power spectrum (Figure 5) and makes the results very sensitive to metal line contamination, detector noise, and pixelization effects. The two choices of filtering scale used here represent a compromise between discriminating power and systematic effects. Considering both choices of filtering scale gives a consistency check on the results and helps to protect against systematic effects.

[10] The precise values are chosen because it is convenient for the smoothing scale to be related to the pixelization of our data, Δu (see §3), by s_n = 2^n Δu for some choice of n.
Let us now consider the large-scale smoothing, L. Naively, one would want to tune this filtering to precisely the scale on which the temperature field is inhomogeneous. Since the power spectrum of temperature fluctuations during HeII reionization has a relatively well defined peak (McQuinn et al. 2008), one might expect the variance of the wavelet amplitudes to also show a clear maximum at some characteristic smoothing scale. However, in practice we find that this is washed out in Ly-α forest spectra, which, as one-dimensional skewers, suffer from aliasing of high-k modes transverse to the line of sight (Kaiser & Peacock 1991). To illustrate this, consider the two-point function of the wavelet amplitudes (squared),

ξ_A(v_1 − v_2) = ⟨[A(v_1) − ⟨A⟩][A(v_2) − ⟨A⟩]⟩ / ⟨A⟩^2,

and its Fourier transform, the power spectrum of wavelet amplitudes squared, P_A(k).
Here v_1 and v_2 are two points along a quasar spectrum and ⟨A⟩ is the globally averaged wavelet amplitude squared, and we have normalized this two-point function by the (square of the) mean wavelet amplitude squared. The power spectrum of wavelet amplitude squared fluctuations encodes how much the small-scale power spectrum fluctuates across a quasar spectrum as a function of scale. It involves a product of four values of δ_F and is hence a four-point function.
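To make this estimator concrete, P_A(k) can be measured from the wavelet amplitudes with a short Python sketch (ours; the Fourier and binning conventions here are our assumptions, and `A` is the wavelet amplitude array from the earlier sketch):

```python
import numpy as np

def wavelet_power_spectrum(A, du, n_bins=20):
    """Power spectrum of normalized wavelet-amplitude-squared fluctuations,
    delta_A = (A - <A>)/<A>, averaged in logarithmically spaced |k| bins."""
    delta_A = A / A.mean() - 1.0
    n_pix = delta_A.size
    dA_k = np.fft.rfft(delta_A) * du               # continuum FT convention
    P_A = np.abs(dA_k) ** 2 / (n_pix * du)         # P(k) = |delta(k)|^2 / length
    k = 2.0 * np.pi * np.fft.rfftfreq(n_pix, d=du)
    bins = np.logspace(np.log10(k[1]), np.log10(k[-1]), n_bins + 1)
    idx = np.digitize(k[1:], bins)                 # drop the k = 0 mode
    k_binned, P_binned = [], []
    for i in range(1, n_bins + 1):
        sel = idx == i
        if sel.any():                              # skip empty k-bins
            k_binned.append(k[1:][sel].mean())
            P_binned.append(P_A[1:][sel].mean())
    return np.array(k_binned), np.array(P_binned)
```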
We show two simulated examples of P_A(k) in Figure 6 for s_n = 34.9 km/s. One can see that, except for the small-scale cut-off, the power spectra are quite flat as a function of scale. This is somewhat unfortunate, as one would naively hope that the scale dependence of P_A(k) would directly reveal the scale dependence of temperature fluctuations, but the flatness we find is a direct consequence of aliasing. We have experimented with various inhomogeneous temperature models, including simulated models from McQuinn et al. (2008), and find similarly flat power spectra. One might be able to get around this by using quasar pairs to measure the power spectrum of wavelet amplitude squared transverse to the line of sight. We defer, however, investigating this to future work. For the moment, our main conclusion is that, owing to the flatness of P_A(k), the precise smoothing scale L is relatively unimportant. Hence we generally stick to L = 1,000 km/s as a convenient choice. We nevertheless investigate the dependence on large-scale smoothing from observational and simulated data in §4.4.
To summarize, by applying a very simple filter to a quasar spectrum, we can measure the small-scale power spectrum of transmission fluctuations as a function of position across each spectrum, and thereby constrain the temperature of the IGM. Note that our procedure does not involve identifying absorption lines and fitting profiles to identified lines (although we find in §3 that it is important to identify metal absorbers in the forest, which does involve line-fitting). It is instead within the spirit of treating the forest as a one-dimensional random field and measuring the statistics of this continuous field (e.g. Croft et al. 1998). This is more appropriate given the modern understanding that the forest arises from fluctuations in the line of sight density field, rather than discrete absorbing clouds (e.g. Hernquist et al. 1996). In this way our approach is very similar to Theuns & Zaroubi (2000) and Zaldarriaga (2002), and somewhat resembles Zaldarriaga et al. (2001), but is rather different than Schaye et al. (2000), Ricotti et al. (2000) and McDonald et al. (2001).
Additionally, recall that the widths of most of the absorption lines in the Ly-α forest are dominated by the Hubble expansion across an absorber, and not by thermal broadening. In order to determine the temperature with a line fitting method, one typically looks for a low-end cut-off in the distribution of line widths (e.g. Schaye et al. 2000). One might worry that this throws out information, as thermal broadening smooths the spectrum everywhere. In practice, though, it appears that most of the signal and information in our method also arises from deep narrow lines which produce a large response after wavelet filtering. Another possible issue is that the precise interpretation of the line width cut-off in the line-fitting studies is unclear when the temperature field is inhomogeneous. It would certainly be interesting to compare more closely the different methods, but we defer this to future work. For now, note that our method is very simple to apply.
3. DATA ANALYSIS
We now move on to apply the method to observational data. The main result will be a measurement of the PDF of the smoothed wavelet amplitudes at z ∼ 2.2−4.2. Our data set consists of 40 quasar spectra observed with UVES on the VLT, described and reduced as in Dall'Aglio et al. (2008). We have identified metal lines in the Ly-α forest for 11 of these spectra, as described in §3.2. The spectra have high S/N, ranging from S/N ∼ 30−130 (quoted at the continuum level per 0.05 Å pixel), and high spectral resolution, FWHM ∼ 6 km/s. High spectral resolution and S/N are essential to reliably probe high-k modes in the spectra and to estimate the temperature of absorbing gas. A detailed list of the quasar spectra, with redshift estimates and other properties, can be found in Dall'Aglio et al. (2008).
3.1. Raw Measurements
We aim to estimate the small-scale power in a way that minimizes sensitivity to uncertainties in the quasar continuum. Dall'Aglio et al. (2008) carefully continuum fit the data we use here, and used Monte Carlo simulations to check the accuracy of their fits. We can further mitigate uncertainties by considering fluctuations in the transmission around the mean, relative to the mean. This is helpful because the overall normalization of the continuum divides out. Provided that the continuum varies slowly across each spectrum in comparison with the fluctuations in the forest, we can additionally remove any slowly varying trend produced by the quasar continuum - or any slowly varying residuals in the case of data that has previously been continuum fitted - and obtain an unbiased estimate of the small-scale structure in the forest. For each spectrum, we estimate a running mean flux by filtering the data on large scales as in Croft et al. (2002) and Kim et al. (2004). Our estimate of the fractional transmission is then:

δ̂_F(Δv) = F(Δv)/F_R(Δv) − 1.

Here F(Δv) is the flux at velocity separation Δv, and F_R(Δv) is the spectrum smoothed with a large-radius filter. We use here a Gaussian filter with radius R = 2,500 km/s. One may form δ̂_F using either the raw flux or a continuum-normalized flux. In the present work, we use the continuum fitted data from Dall'Aglio et al. (2008) throughout. The large-scale filter removes any slowly varying trend owing to structure in the underlying quasar continuum from, e.g., weak emission lines, or slowly varying residuals in the case of continuum fitted data. It also means that we sacrifice measuring large-scale modes in the Ly-α forest, but we presently focus on small-scale structure, and sufficiently large-scale modes are regardless dominated by structure in the quasar continuum. We refer the reader to Croft et al. (2002) for some tests illustrating the robustness of δ̂_F to continuum-fitting uncertainties. As a double-check that the present results are insensitive to the precise δ̂_F estimator, we also generated δ̂_F with a different choice of large-scale smoothing for one of our redshift bins, R = 10,000 km/s - i.e., close to the flat mean case - and found a nearly identical wavelet PDF. We begin by estimating δ̂_F across each spectrum, first re-binning, using linear interpolation, all of the data onto uniform pixels in velocity space with Δu = 4.4 km/s. We consistently use the same binning in constructing simulated spectra. This avoids effects from variable pixelization, while still preserving the scales of interest.[11] After forming δ̂_F across each spectrum, we break the data into several (contiguous and non-overlapping) redshift bins of full-width Δz = 0.4, centered around z̄ = 2.2, 2.6, 3.0, 3.4, 3.8, and 4.2. Owing to uneven redshift sampling in the data set, the redshift bin at z̄ = 3.8 would be almost entirely empty and so we do not consider it further here. This occurs because most of the spectra in the Dall'Aglio et al. (2008) sample have emission redshift z_em ≲ 3.7, but the sample has two high quality spectra at emission redshift z_em ≳ 4.6, which contribute extended (≳ 150 co-moving Mpc) stretches to our highest redshift bin at z̄ = 4.2. We select only spectral regions that lie between rest frame wavelengths of λ_r = 1050 Å and λ_r = 1190 Å. This conservative cut serves to remove spectral regions that may be contaminated by either the proximity effect, by the Ly-β forest (and other higher Lyman series lines), or by Ly-β and OVI emission features.

[11] We estimate that rebinning reduces the mean wavelet amplitude by ∼ 5% for s_n = 69.7 km/s.
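A schematic Python version of this estimator (our own illustration, not the authors' reduction code; edge handling and bad-pixel masking are ignored, and the input arrays are assumed):

```python
import numpy as np

def fractional_transmission(v_raw, flux_raw, du=4.4, R=2500.0):
    """Rebin a spectrum onto uniform du km/s pixels, then form
    delta_F_hat = F / F_R - 1, with F_R a Gaussian running mean (radius R)."""
    # Rebin onto uniform velocity pixels by linear interpolation.
    v = np.arange(v_raw[0], v_raw[-1], du)
    flux = np.interp(v, v_raw, flux_raw)
    # Gaussian running mean with radius R [km/s]; kernel truncated at 4R.
    half = int(4 * R / du)
    x = np.arange(-half, half + 1) * du
    w = np.exp(-0.5 * (x / R) ** 2)
    w /= w.sum()
    F_R = np.convolve(flux, w, mode="same")  # note: biased near spectrum edges
    return v, flux / F_R - 1.0
```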
We then form the wavelet amplitude squared field, smoothed at L = 1,000 km/s, using Equations 2-7. The resulting spectra and wavelet amplitudes are visually inspected. Regions impacted by DLAs, or with obvious spurious stretches, are removed from the data sample by hand.
It is instructive to examine a few example spectra visually before measuring their detailed statistical properties. In Figures 7-10 we show several spectra, along with the corresponding (smoothed) wavelet amplitudes squared for a few redshift bins. The most conspicuous change across the different redshift bins is the increasing average absorption with increasing redshift. Since we are considering fractional fluctuations, this manifests itself as an increase in the fraction of pixels with δ̂_F close to −1, with occasional excursions to very large δ̂_F. The next impression provided by the spectra appears at first tantalizing: most regions have low A_L, but there are occasional upward excursions over portions of the spectrum. This behavior is especially apparent for the smaller of the two filtering scales, and is less apparent in the highest redshift case (Figure 10).
Consider for example the spectrum Q2139-44, in the z̄ = 3.0 bin, convolved with a s_n = 34.9 km/s Morlet filter, as shown in Figure 9. In this spectrum the regions near Δv = 5,000, 7,500, and 12,500 km/s all have relatively high wavelet amplitudes, A_L ≳ 0.02, while the rest of the spectral regions have low amplitude. Inspecting the simulated PDF of Figure 3, the low-amplitude floor with A_L ∼ 0.005 seems to indicate hot T_0 ∼ 20,000 K gas, while the regions with A_L ≳ 0.02 seem to require cooler gas, T_0 ≲ 7,500 K.

[Fig. 7 caption, partial: ...The amplitude squared of the wavelet-filtered field, formed with a s_n = 34.9 km/s filter, smoothed over L = 1,000 km/s. Bottom panel: Similar to the middle panel, but using a Morlet wavelet with s_n = 69.7 km/s. Note that the y-axes in the bottom two panels have rather different ranges. This is required because of the strong dependence of small-scale power on smoothing scale.]
At first glance, these upward wavelet amplitude excursions seem to be cold regions embedded in an otherwise hot IGM. This is what one naively expects in the midst of HeII reionization: cool regions where HI and HeI reionized long ago, and hotter regions where helium is doubly ionized. Before we dispel this fantasy - these regions are contaminated by metal absorbers (see §3.2) - let us add some sightlines from simulated models to further illustrate this (Figure 11). The sightlines show that the low wavelet amplitude floor in the observed spectrum roughly matches the hot IGM sightline. This implies that there are indeed significant quantities of hot ∼ 20,000 K gas in the IGM at z = 3. However, the hot model fails to produce the high wavelet amplitude excursions seen in the data. Matching these seems, at first glance, to require a cooler model - one with roughly T_0 ∼ 7,500 K, γ = 1.6, for example. (To be clear, note that the simulation and observational data are drawn from different realizations, so one does not expect the simulated case to match the observations region-by-region or feature-by-feature. The meaningful comparison is the overall number of regions with high or low wavelet amplitude.) It is at first tempting to conclude that we are detecting temperature inhomogeneities from incomplete HeII reionization.

[Fig. 8 caption, partial: ...Figure 7, but for the spectrum of HE2243-6031. Note that the x and y axes have different ranges than in the previous figure. The x-axis range is set by the portion of the forest that we use from the example spectrum in a given redshift bin. We vary the y-axis range because the mean wavelet amplitude changes strongly with redshift, owing mostly to evolution in the mean absorption, and so a varying range is necessary for visual clarity.]
3.2. Metal Line Contamination
We need, however, to consider a very important systematic. A hot region that lands at the same wavelengths as a 'clump' of prominent narrow metal lines may look to us like a cold region. The wavelet filter just tells us the total level of small-scale power from place to place, and does not distinguish whether absorption arises from HI or some other element. To make a robust estimate of the IGM temperature, we need to identify metal line absorbers within the Ly-α forest. We expect metal line contamination to be most severe in the low redshift bins, where the fractional contribution of metals to the overall opacity in the forest is highest (e.g. Faucher-Giguère et al. 2008b), and on the smaller of our two filtering scales (see Appendix B).
Naturally, distinguishing metal absorption lines and Ly-α lines within the Ly-α forest is a challenging and imperfect process. We do, however, have a few separate handles on distinguishing metal lines from Ly-α lines within the forest. First, we identify all of the metal absorbers redward of Ly-α and look for 'partner' transitions. The partner transitions are additional transitions that lie at the same redshift as an identified red-side line, yet which land within the Ly-α forest. Next, we search for doublets within the Ly-α forest, which can be identified by their distinctive optical depth ratios and by the characteristic separation between a doublet's two components. For instance, CIV is a doublet with a strong component at λ_r = 1548.2 Å and a weaker component at λ_r = 1550.8 Å, and the ratio of the absorption cross sections of the two components is 2. So CIV should stand out as a doublet with the two components separated by ∼ 500 km/s, with the lower wavelength line a factor of two stronger than its partner component. MgII is another prominent doublet. After identifying a doublet, one can use the estimated redshift of an identified doublet to search for additional transitions at the same redshift: we look for CII/III/IV, NII/III/V, OI/VI, MgI/II, AlII/III, SiII/III/IV, SVI, and FeII, and consider further transitions for DLAs. This approach already identifies a host of metal lines within the forest, but there are inevitably some remaining metal lines left within the forest. For example, there are sometimes absorbers where the doublet features are undetectable owing to line blending. To further mitigate metal line contamination, our final step is to mark extremely narrow lines (with b-parameter b ≲ 7 km/s) as metals. This final cut amounts to only 25% of the identified lines. Clearly, one needs to be careful about making cuts based on line width: doing so could bias us against detecting cold regions. However, for an HI line to have a linewidth of b ≲ 7 km/s it needs to have an implausibly low temperature of T ≲ 3,000 K. Hence, we are confident that this final cut does not bias our results, yet it helps protect against remaining unidentified metal lines within the forest. We will subsequently present tests to check how much the results depend on the precise way in which we excise metal line contaminated regions.

Fig. 11.- Example wavelet amplitude field compared with models. The smoothing scale is s_n = 34.9 km/s here. The blue lines are the same as in Figure 9. The observed wavelet amplitudes are shown by a dashed line to avoid confusion with the model curves. The red and black lines in the bottom panel are simulated sightlines for a hot IGM model (red), and a cold IGM model (black). Random noise has been added to the simulated spectra (see §4.5). The wavelet amplitudes in most spectral regions are roughly consistent with the hot IGM model, but the high wavelet amplitude excursions (near Δv = 5,000, 7,500, and 12,500 km/s) look naively like cold gas. In §3.2, we show that these apparent cold regions are spurious and are instead consistent with being hotter gas contaminated by metal lines.
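As a concrete sketch of the doublet search step described above (our own simplified illustration; the real identifications also use the optical depth ratio, line blending checks, and visual inspection), one can scan a list of fitted line centroids for pairs with the CIV wavelength ratio:

```python
import numpy as np

# Rest wavelengths of the CIV doublet [Angstrom].
CIV_BLUE, CIV_RED = 1548.2, 1550.8
RATIO = CIV_RED / CIV_BLUE  # expected wavelength ratio of the two components

def find_civ_candidates(wavelengths, tol=3e-5):
    """Return (blue, red) wavelength pairs consistent with a CIV doublet.
    'wavelengths' is a sorted array of fitted line centroids [Angstrom];
    tol ~ 3e-5 corresponds to a velocity tolerance of roughly 10 km/s."""
    pairs = []
    for w_blue in wavelengths:
        target = w_blue * RATIO  # expected red component at the same redshift
        j = np.searchsorted(wavelengths, target)
        for w_red in wavelengths[max(j - 1, 0):j + 1]:
            if abs(w_red / w_blue - RATIO) < tol:
                # Candidate doublet; its redshift, z = w_blue/CIV_BLUE - 1,
                # can then seed a search for the other listed transitions.
                pairs.append((w_blue, w_red))
    return pairs
```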
In this paper, we have identified metal lines for 11 of the 40 spectra in our data sample. The identified metals come entirely from portions of spectrum absorbing at z ≲ 3 - where we expect the metal line contamination to be strongest - and not in the higher redshift bins. That is, we do not presently have estimates of metal line contamination in the redshift bins centered around z̄ = 3.4 and z̄ = 4.2. In these redshift bins, we will focus entirely on the larger (s_n = 69.7 km/s) filtering scale, where the metal line contamination is less of an issue (Appendix B).
In order to check the influence of metal line contamination, we calculate the wavelet amplitudes as before, and excise regions impacted by metal line contamination. An important assumption here is that gas absorbing in a metal line transition at a given wavelength is spatially uncorrelated with gas absorbing in Ly-α at nearby wavelengths. If this assumption were violated, we could bias ourselves by preferentially removing regions of above average hydrogen absorption when excising metal contaminated regions. Fortunately, most of the metal line transitions have rest-frame wavelengths that are very different than that of Ly-α, and so the gas absorbing in a metal transition at a given wavelength is very widely separated (in physical space) from that absorbing in Ly-α. Hence the metal and Ly-α absorption are uncorrelated. This justifies our approach. Since the wavelet filter is not completely local, pixels with metal line absorption will contaminate neighboring pixels after filtering. Furthermore, we generally smooth the wavelet squared field over L = 1,000 km/s. As a simple and conservative cut, we examine the fraction of contaminated pixels within a smoothing length L around each pixel, and discard a pixel if less than f_m = 95% of its neighbors (within a smoothing length) are metal free.
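This neighbor-fraction cut is easy to express in code. A minimal sketch (ours; `metal_mask` is an assumed boolean array marking metal-contaminated pixels):

```python
import numpy as np

def keep_pixels(metal_mask, du, L=1000.0, f_m=0.95):
    """Keep a pixel only if at least a fraction f_m of its neighbors within
    +/- L/2 (a smoothing length) are metal free. metal_mask: True = metal."""
    clean = (~metal_mask).astype(float)
    half = int(round(L / (2.0 * du)))
    kernel = np.ones(2 * half + 1) / (2 * half + 1)
    # Running fraction of metal-free pixels within the smoothing window;
    # zero-padding at the spectrum edges makes the cut conservative there.
    frac_clean = np.convolve(clean, kernel, mode="same")
    return frac_clean >= f_m

# Usage: A_L_clean = A_L[keep_pixels(metal_mask, du=4.4)]
```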
We find that metal line contamination can have a significant impact, especially for s_n = 34.9 km/s and at z ≲ 3. We show a few example sightlines in Figures 12-14. It is striking that the most prominent peaks in the wavelet-filtered field at s_n = 34.9 km/s, shown in the figures, correspond very closely to metal lines. Essentially, our filter was designed to look for temperature inhomogeneities, but it appears most effective at identifying metal-line contaminated regions! In fact, wavelet filtering may be a good way to quickly identify prominent metals in the forest. The metal line contamination is less severe for spectra passed through the larger wavelet filter (s_n = 69.7 km/s). The amplitude of fluctuations in the forest is much greater on this smoothing scale. The metals also generally contribute more power on the larger smoothing scale, but the amplitude of fluctuations from HI increases more strongly with smoothing scale, and so the metals are fractionally less important on larger scales. This is perhaps seen most easily in the example of Figure 13. In the s_n = 34.9 km/s panel of this figure, all of the prominent peaks are metal contaminated regions. In the larger smoothing scale panel, there are some peaks from HI and some from metals, and the heights of the various peaks are comparable. The more significant contamination of the metals on the smaller smoothing scale likely results because the metal lines tend to be narrower than the HI lines. In Appendix B we find qualitatively similar results by adding metal line absorbers, with empirically derived properties, to mock Ly-α forest spectra.
Since we can attribute many of the peaks observed in the wavelet amplitudes to metal lines, this does imply, however, that the temperature inhomogeneities cannot be too large. If temperature inhomogeneities were present and large, we would expect to see more high wavelet amplitude regions left over after excising the metals. In particular, consider Figure 11. In this example, we found that the low wavelet amplitude regions of the spectrum are consistent with hot ∼ 20,000 K gas. While we have not identified metal lines for this particular spectrum, our results from other lines of sight clearly suggest that the high wavelet amplitude regions are metal-contaminated rather than genuine cold regions with T_0 ∼ 7,500−10,000 K. The lack of high wavelet amplitude regions after metal excision implies there are few such cold regions left, and that most of the volume of the IGM at z ∼ 3 is hot with T_0 ∼ 20,000 K (although see §4.1 for a discussion regarding the dependence of our results on γ).

[Figure caption, partial: ...Figure 12 but for the spectrum HE0940-1050 in the z̄ = 2.6 bin. Notice in particular that the very large wavelet amplitudes near Δv = 2,000 km/s for s_n = 34.9 km/s correspond closely to several strong metal lines. Again the wavelet peaks at this filtering scale trace mostly metal line contaminated regions. The lower wavelet amplitude regions, and not these high amplitude portions, indicate the IGM temperature. Note that the metal line contamination is less severe for the larger smoothing scale filter in the bottom panel.]
It is clear, however, that metal line contamination is a very important systematic for these measurements, although the contamination is less of an issue on the larger smoothing scale and for the high redshift measurements. This issue is not unique to our method, although the detailed impact of metal lines will depend on the precise algorithm for constraining the IGM temperature. For instance, measurements based on fitting the minimum width of absorption lines in the Ly-α forest need to carefully avoid including metal lines in the sample of lines used to estimate the temperature. Power spectrum based temperature estimates need to account for the small-scale power contributed by metal absorbers or mask the metal absorbers before estimating the power spectrum.
3.3. The Wavelet Amplitude Squared PDF

Let us now move past mere visual inspection and measure statistical properties from the observed spectra. We focus mostly on the PDF of A_L for our fiducial choices of s_n = 34.9 km/s, s_n = 69.7 km/s, L = 1,000 km/s, and f_m = 0.95. In each redshift bin, we find the minimum and maximum A_L and then choose 10 evenly-spaced logarithmic bins in A_L for the PDF measurement. We tabulate the average A_L and the differential PDF in each A_L bin for each redshift bin. The average redshift of pixels in a redshift bin is typically close to the redshift at bin center, and the error bars are still fairly large, so we ignore any issues associated with redshift evolution across each bin and quote all results at the bin center.
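For concreteness, the binned differential PDF can be estimated as follows (a minimal sketch with our own conventions; `A_L` collects the smoothed wavelet amplitudes of all pixels in one redshift bin):

```python
import numpy as np

def wavelet_pdf(A_L, n_bins=10):
    """Differential PDF of A_L in logarithmically spaced bins between the
    minimum and maximum measured values, normalized so sum(pdf*widths) = 1."""
    edges = np.logspace(np.log10(A_L.min()), np.log10(A_L.max()), n_bins + 1)
    counts, _ = np.histogram(A_L, bins=edges)
    pdf = counts / (A_L.size * np.diff(edges))
    # Average A_L within each bin (the quantity tabulated alongside the PDF).
    idx = np.clip(np.digitize(A_L, edges) - 1, 0, n_bins - 1)
    A_mean = np.array([A_L[idx == i].mean() if (idx == i).any() else np.nan
                       for i in range(n_bins)])
    return A_mean, pdf
```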
We use a jackknife resampling technique to calculate error bars for the PDF measurements. We first estimate the PDF from the entire data sample within a given redshift bin, P̄(A_i). Here P̄(A_i) is the PDF estimate for the ith A_L bin, and A_i is the average wavelet amplitude squared and smoothed within the bin. Next we divide the data set into n_g = 10 subgroups, and estimate the PDF of the data sample omitting each subgroup. Let P_k(A_i) represent the PDF estimate omitting the pixels in the kth subgroup. Then our estimate of the jackknife covariance between bins i and j, C(i, j), is:

C(i, j) = [(n_g − 1)/n_g] Σ_{k=1}^{n_g} [P_k(A_i) − P̄(A_i)] [P_k(A_j) − P̄(A_j)].

In practice our estimates of the off-diagonal elements of the covariance matrix are very noisy. Consequently, we will be forced to ignore the off-diagonal elements of C(i, j). We have tested the jackknife error estimator with lognormal mocks (see McDonald et al. 2006) that approximately mimic the properties of the current data set. We generate 10,000 mock realizations of a z = 3 data set and compare error bars estimated from the dispersion across the mock realizations with the jackknife error estimates. In the mock data, we find that neglecting the off-diagonal elements in the covariance matrix increases the average value of χ² by ∼ 1 for 14 degrees of freedom (the mock PDFs had 15 rather than 10 A_L bins), and so ignoring the off-diagonal elements is likely a good approximation. The jackknife estimates of the diagonal elements of the covariance matrix agree with direct estimates of the dispersion across the mock data to better than 20% on average, although the jackknife estimator sometimes under-predicts the errors in the tails of the PDF more severely. We provide tables of the wavelet PDF measurements in Tables 1-5.
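In code, this group-wise jackknife is only a few lines (a schematic sketch under the standard delete-one-group jackknife formula, which we assume here; `pixels` holds the A_L values in one redshift bin):

```python
import numpy as np

def jackknife_pdf_errors(pixels, edges, n_g=10):
    """Jackknife variance of the binned A_L PDF: split pixels into n_g
    contiguous subgroups, re-estimate the PDF omitting each one, and
    combine with the (n_g - 1)/n_g jackknife prefactor."""
    def pdf(x):
        counts, _ = np.histogram(x, bins=edges)
        return counts / (x.size * np.diff(edges))

    groups = np.array_split(np.arange(pixels.size), n_g)
    P_full = pdf(pixels)
    P_k = np.array([pdf(np.delete(pixels, g)) for g in groups])
    var = (n_g - 1.0) / n_g * np.sum((P_k - P_full) ** 2, axis=0)
    return P_full, np.sqrt(var)   # PDF and its 1-sigma jackknife errors
```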
3.4. Shot Noise
We plot the measured wavelet PDF in the next section, but pause to consider first the impact of shot noise. The observed Ly-α forest spectra are contaminated by random noise owing to Poisson fluctuations in the discrete photon counts from the quasar and the mean night sky background, as well as by random read-out noise in the CCD detector (see e.g. Hui et al. 2001 for discussion). We need to consider how this noise impacts the wavelet PDF measurements.
In Appendix A, we derive estimates of the noise bias for measurements of the first two moments of the wavelet amplitude PDF. We apply these formulae here to estimate the impact of noise on the present measurements. On the larger smoothing scale, s_n = 69.7 km/s, we find that shot-noise bias is unimportant for our present data set. For example, at z = 3, applying the formulae of Appendix A, we find that the noise contamination to the mean wavelet amplitude is less than one-third of the 1σ statistical error on this quantity for our present data sample. Similarly, in this redshift bin and for this smoothing scale, we find that the wavelet amplitude variance is biased by random noise only at the ∼ 1% level. However, the shot-noise bias is not negligible on the smaller smoothing scale, s_n = 34.9 km/s. For instance, a quasar spectrum with S/N ∼ 50 at the continuum contributes a mean wavelet amplitude owing to noise (Hui et al. 2001) that is comparable to the wavelet amplitude signal in the tail of the PDF in the favored hot IGM models (see Figure 3). The more significant noise contamination on the smaller smoothing scale owes to the rapid decline in signal power towards small scales. To guard against noise bias at the smaller smoothing scale, we cut spectra with S/N ≤ 50 redward of Ly-α from the sample used in the smaller smoothing scale measurement. We cut based on the red side noise, rather than using a noise estimate in the forest, to avoid introducing any possible selection bias. Further, we add noise to the mock spectra when comparing with the measurement on the smaller smoothing scale (§4.5).
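A simple way to add such noise to mock spectra (our own sketch; we assume Gaussian pixel noise at a fixed continuum signal-to-noise, which ignores the Poisson dependence of the noise on the local flux level):

```python
import numpy as np

def add_noise(flux, snr_continuum, rng=np.random.default_rng(3)):
    """Add Gaussian noise to a mock transmission spectrum, with the noise
    amplitude set by the signal-to-noise per pixel at the continuum (F = 1)."""
    sigma = 1.0 / snr_continuum
    return flux + rng.normal(0.0, sigma, flux.size)

# e.g., mimic a S/N ~ 50 spectrum:
# noisy = add_noise(mock_flux, snr_continuum=50.0)
```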
4. THEORETICAL INTERPRETATION
In this section, we compare the wavelet PDF measurements with cosmological simulations in order to estimate the implied IGM temperature. A particular goal here is to determine whether the IGM is closer to the thermal state expected in the midst of HeII reionization (T_0 ∼ 20−25,000 K, γ = 1.3) or whether it more closely resembles the state much after a reionization event (T_0 ∼ 7,500−10,000 K, γ = 1.6). Furthermore, we aim to check whether the data indicate large temperature inhomogeneities. We perform this comparison over the full redshift range of our data set, z̄ = 2.2−4.2.
4.1. Cosmological Simulations
For the purpose of this project and related Ly-α forest work, we have run a new suite of cosmological smoothed particle hydrodynamics (SPH) simulations using the simulation code Gadget-2 (Springel 2005). The simulations adopt a ΛCDM cosmology parameterized by: n_s = 1, σ_8 = 0.82, Ω_m = 0.28, Ω_Λ = 0.72, Ω_b = 0.046, and h = 0.7 (all symbols have their usual meanings), consistent with the WMAP constraints from Komatsu et al. (2009). Each simulation was started from z = 299, with the initial conditions generated using the Eisenstein & Hu (1999) transfer function. We ran several simulations to test the convergence of our results with box size, as well as mass and spatial resolution (see §6). From these tests, we determined that the best choice of simulation for the present project has a box size of L_box = 25 Mpc/h and N_p = 2 × 1024³ particles, and this run is the fiducial simulation in what follows. This simulation represents a fairly significant improvement in box size and resolution compared to most previous work (see §4.10 for details). It has approximately the gas mass recommended for resolution convergence in a recent study by Bolton & Becker (2009), and tracks over an order of magnitude more particles than the simulations of these authors. In each run, the softening length was taken to be 1/20th of the mean inter-particle spacing. In order to speed up the calculation, we chose an option in Gadget-2 that aggressively turns all gas at density greater than 1,000 times the cosmic mean density into stars (e.g. Viel et al. 2004). Since the forest is insensitive to gas at such high densities, this is a very good approximation.
The simulations were run using the Faucher-Giguère et al. (2009) photoionizing background, which is an update of the Haardt & Madau (1996) model (see also Katz et al. 1996a, Springel & Hernquist 2003).[15] The ionizing background was turned on at z = 7 in the simulations. This model for the ionizing background determines the photoheating and gas temperature in the simulation. We would like, however, to explore a wide range of thermal histories. In order to do this, we make an approximation. The approximation is to fix the fiducial ionization history to the Faucher-Giguère et al. (2009) model for the purpose of running the simulation and accounting for gas pressure smoothing, but to vary the temperature-density relation (Equation 1) when constructing simulated spectra. This 'post-processed spectra' approximation neglects the dependence of Jeans smoothing on the detailed thermal history of the IGM, but correctly incorporates thermal broadening for a given temperature-density relation model, parametrized by T_0 and γ. It also neglects the inhomogeneities in T_0 and γ expected during HeII reionization. Finally, by assuming a perfect temperature-density relation in constructing mock absorption spectra, we also neglect the impact of shock heating - which adds scatter to the temperature-density relation (Hui & Gnedin 1997) - on the amount of thermal broadening. We caution against taking the results of these first-pass, homogeneous temperature-density relation calculations too literally: if the IGM temperature is significantly inhomogeneous, these calculations provide only a crude approximation. The calculations are meant only to get a sense for whether the IGM is mostly at T_0 ∼ 20,000 K, or instead at T_0 ∼ 10,000 K, and to check whether large temperature inhomogeneities are present. We intend to make more detailed theoretical calculations in future work.

[15] Tables are available electronically at http://www.cfa.harvard.edu/∼cgiguere/uvbkg.html
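Assuming Equation 1 is the usual power-law temperature-density relation, T = T_0 Δ^{γ−1}, the post-processing step amounts to a single operation on the simulated gas densities. A minimal sketch (ours) of how a given (T_0, γ) model would be imposed:

```python
import numpy as np

def assign_temperature(Delta, T0, gamma):
    """Impose a power-law temperature-density relation, T = T0 * Delta^(gamma-1),
    on gas overdensities Delta = rho/<rho>, in post-processing."""
    return T0 * np.asarray(Delta) ** (gamma - 1.0)

# e.g., a 'hot' model just after HeII reionization:
# T = assign_temperature(Delta, T0=2.5e4, gamma=1.3)
```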
Although our measurements of the wavelet amplitude PDF are robust to uncertainties in fitting the quasar continuum, our interpretation of the measurements still relies on estimates of the mean transmitted flux. Specifically, we follow the normal procedure of adjusting the intensity of the simulated ionizing background in a post-processing step, so that the simulated mean transmitted flux (averaged over all sightlines) matches the observed mean flux. We assume here the best-fit measurements of Faucher-Giguère et al.
(2008b), and subsequently explore the impact of uncertainties in the mean flux (§4.3). We adopt their estimates in Δz = 0.2 bins, and use their measurements that include a correction for metal line opacity based on Schaye et al. (2003), and a continuum-fitting correction (which accounts for the rarity of regions with nearly complete transmission, F = 1, at high redshift). The corresponding Faucher-Giguère et al. (2008b) values at our bin centers are listed in §4.3. We output simulation data at every Δz = 0.5 between z = 4.5 and z = 2. In order to generate model wavelet PDFs at redshifts in between two stored snapshots, we measure the simulated wavelet PDF from each stored snapshot and linearly interpolate to find the PDF at the precise desired redshift.
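The snapshot interpolation is straightforward; a minimal sketch (ours), assuming the PDFs are tabulated on a common set of A_L bins:

```python
import numpy as np

def pdf_at_redshift(z, z_lo, pdf_lo, z_hi, pdf_hi):
    """Linearly interpolate, bin by bin, between wavelet PDFs measured from
    two stored snapshots at z_lo < z < z_hi."""
    w = (z - z_lo) / (z_hi - z_lo)
    return (1.0 - w) * np.asarray(pdf_lo) + w * np.asarray(pdf_hi)

# e.g., the z = 4.2 model PDF from the z = 4.0 and z = 4.5 outputs:
# pdf_42 = pdf_at_redshift(4.2, 4.0, pdf_z40, 4.5, pdf_z45)
```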
4.2. Comparison with Measurements
Let us first compare the measured PDF in the different redshift bins for s_n = 69.7 km/s. The results of these calculations are shown in Figures 15-19. We start with a qualitative 'chi-by-eye' assessment, and provide more quantitative constraints in §4.6.
The blue histogram with error bars in Figure 15 shows the measured PDF at z̄ = 4.2, uncorrected for metal line contamination. We have not identified metal lines in the high redshift spectra contributing to this redshift bin, but we expect metal line contamination to have only a small effect on the wavelet PDF at this redshift and smoothing scale (see Appendix B). The model curves with T_0 ∼ 7,500−10,000 K and γ = 1.6 correspond roughly to models in which HI is reionized early, and HeII is not yet ionized. One expects a similarly low temperature in models in which each of HI, HeI and HeII are all ionized early. Interestingly, these models produce too many large wavelet amplitude regions and too few small wavelet amplitude regions compared to the data. The model curves with T_0 = 15,000 K, and each of γ = 1.3 and γ = 1.6, are fairly close to the measurements, but overproduce slightly the high amplitude tail. These two curves are almost completely degenerate because the wavelet amplitude PDF is sensitive to the temperature over only a limited range in overdensity. At this redshift the measurements appear most sensitive to the temperature at densities near the cosmic mean, and so the models depend sensitively on T_0 but not on γ. The model with T_0 = 2 × 10^4 K, γ = 1.3 is the best overall match to the data of the models shown, although it over-predicts the point near A_L ∼ 0.4 by more than 2.5σ. Finally, the model with T_0 = 2.5 × 10^4 K seems to produce too many low wavelet amplitude regions, and too few high amplitude pixels. It is also interesting that the measured PDF is not much wider than the model PDFs. Taken at face value, this argues against the temperature field being very inhomogeneous at this redshift.
The results are tantalizing because they suggest the IGM is fairly hot with T_0 ∼ 15−20,000 K at z̄ = 4.2. This requires some amount of early HeII photoheating and/or HI reionization to end late and heat the IGM to a high temperature. If metal line contamination is in fact significant, this only strengthens the argument for a high temperature at z̄ = 4.2: metal lines can only add power and increase the number of high wavelet amplitude regions. Similarly, the finite resolution of our numerical simulations causes us to underestimate the IGM temperature (see §6). While we show that our results are mostly converged with respect to simulation resolution in §6, convergence is most challenging at high redshift and this may lead to a small systematic underestimate in this redshift bin. On the other hand, we show in §4.3 that a cooler IGM model (T_0 ∼ 10,000 K) can match the PDF measurement at this redshift if the true mean transmitted flux is 2σ higher than the best fit value estimated by Faucher-Giguère et al. (2008b).
The measurements in our next redshift bin (z̄ = 3.4) suggest the presence of even hotter gas in the IGM (Figure 16). At this redshift the best overall match is the model with T_0 = 2.5 × 10^4 K, γ = 1.3. Even a fairly hot model with T_0 ∼ 2 × 10^4 K, γ = 1.3 produces too few low wavelet amplitude regions, and too many high amplitude ones. Models with lower temperatures are clearly quite discrepant. At this redshift, the measured PDF is a bit wider than the simulated ones. This might owe to temperature inhomogeneities, or it may indicate some metal line contamination since, as with the z̄ = 4.2 data, we have not identified and excised metal lines in this redshift bin. In either of these cases, the measurements may allow for some even hotter gas at T_0 ∼ 3 × 10^4 K.
The measurements at z̄ = 3.0 indicate similarly hot gas (Figure 17). By this redshift, the average absorption in the forest is increased and the wavelet PDF is most sensitive to gas a little more dense than the cosmic mean, at roughly 1 + δ = Δ ∼ 2 for our method. This means that models that have a lower temperature at mean density (T_0), yet a steeper temperature-density relation (γ), give similar wavelet PDFs to models with higher T_0 and flatter γ at this redshift. This explains why the model curves with T_0 = 2.5 × 10^4 K, γ = 1.3 and T_0 = 2 × 10^4 K, γ = 1.6 are nearly identical to each other, as are the models with T_0 = 2 × 10^4 K, γ = 1.3, and T_0 = 1.5 × 10^4 K, γ = 1.6. At this redshift, the metal line correction appears fairly important: it shifts the peak of the PDF to lower amplitude and narrows the histogram somewhat (as seen by comparing the black dashed histogram and the blue solid histogram in the figure). The error bars are significantly larger for the metal-cleaned measurement than for the full measurement. This is mostly because we only have metal line identifications for some of the spectra in this bin and the metal-cleaned measurement hence comes from a smaller number of spectra, and also because we use a smaller portion of each spectrum after metal cleaning. The mean wavelet amplitude changes by less than the 1σ error bar as we vary f_m between f_m = 0.8 and f_m = 1, and so f_m = 0.95 is a conservative choice, and we hence stick to this choice throughout. After accounting for metal contamination, the model curves with T_0 = 2 × 10^4 K, γ = 1.3, and T_0 = 1.5 × 10^4 K, γ = 1.6 are somewhat disfavored. Again, the cooler models with T_0 = 7,500−10,000 K differ strongly with the measurement, regardless of the metal correction. The hottest model shown, with T_0 = 3.0 × 10^4 K, γ = 1.3, produces too many low-amplitude, and too few high-amplitude regions. The models with T_0 = 2.5 × 10^4 K, γ = 1.3 and T_0 = 2.0 × 10^4 K, γ = 1.6 are strongly degenerate and each roughly match the measured PDF.
Proceeding to lower redshift, the data at z̄ = 2.6 disfavor some of the hotter IGM models (Figure 18). At this redshift, the models shown with T_0 = 3.0 × 10^4 K, γ = 1.3; T_0 = 2.5 × 10^4 K, γ = 1.3; and T_0 = 2.0 × 10^4 K, γ = 1.6 all produce too many low amplitude regions, and too few high amplitude ones. The other models shown, with T_0 = 2.0 × 10^4 K, γ = 1.3; T_0 = 1.5 × 10^4 K, γ = 1.6; and T_0 = 1.0 × 10^4 K, γ = 1.6, are closer to the measurements, although none of the models are a great fit. The cooler model with T_0 ∼ 7,500 K is again a poor match to the measurement. The preference for somewhat more moderate temperatures at this redshift may result from cooling after HeII reionization completes at higher redshift.
Finally, the measurement in the z̄ = 2.2 bin is shown in Figure 19. The general features are similar to the results at z̄ = 2.6: the hotter models are clearly a poor match to the data, and there is some preference for cooler temperatures, although none of the models are a great match to the data. The models with (T_0, γ) = (2.0 × 10^4 K, 1.3), (1.5 × 10^4 K, 1.6), and (1.0 × 10^4 K, 1.6) are the closest matches of the models shown. At this redshift, the mean transmission is high (⟨F⟩ = 0.849), and the method is sensitive to somewhat overdense gas as a result. The similarity between the models with T_0 = 3.0 × 10^4 K, γ = 1.3 and T_0 = 2.0 × 10^4 K, γ = 1.6 suggests that the PDFs are most sensitive to densities around Δ = 3.9 at this redshift. We expect scatter in the temperature-density relation from shock-heating to be most important at this low redshift, especially since the wavelet PDF is becoming sensitive to the temperature of moderately overdense gas. This may be part of the reason for the poorer overall match between simulations, where the effects of shocks on T are ignored in post-processing, and observations at this redshift. We will investigate this in more detail in the future.
In summary, our measurements appear to support a picture where the IGM is being heated in the middle of the redshift range probed by our data sample, with the temperature likely peaking between z = 3.0−3.8, before cooling down towards lower redshifts. The favored peak temperature appears to be around T_0 ∼ 25−30,000 K, somewhat hotter than found by most previous authors (see §4.10), although consistent with theoretical expectations from photoheating during HeII reionization, especially if the quasar ionizing spectrum is on the hard side of the models considered by McQuinn et al. (2008) (see their Figure 12).
4.3. Uncertainties in the Underlying Cosmology and Mean Transmitted Flux

In the previous section we showed model wavelet PDFs for varying temperature-density relations, but left the underlying cosmology and mean transmitted flux fixed. Here we consider how much the wavelet PDF varies with changes in these quantities. As far as the underlying cosmology is concerned, we restrict our discussion to uncertainties in the amplitude of density fluctuations. Note that there is some (2σ level) tension between the amplitude of density fluctuations determined from the Ly-α forest and WMAP constraints (Seljak et al. 2006).
In order to gauge the impact of uncertainties in the amplitude of density fluctuations, we generate mock spectra for a given model using simulation outputs of varying redshift. In particular, we consider a model at z̄ = 4.2 with ⟨F⟩ = 0.346, T_0 = 2 × 10^4 K, and γ = 1.3, which roughly matches the measured PDF. We generate mock spectra in this model from outputs at z_o = 3.5, 4.0, and 4.5. For the prediction in our fiducial cosmology, we linearly interpolate between wavelet PDFs generated from the z = 4.0 and z = 4.5 outputs. Using instead the model PDFs at z_o = 3.5 or 4.0 (with the mean transmitted flux fixed at ⟨F⟩ = 0.346) - in which structure formation is more advanced - should mimic a model with a higher amplitude of density fluctuations, while using the z = 4.5 snapshot should correspond to a model with smaller density fluctuations. Our fiducial model has σ_8(z = 0) = 0.82, roughly in between the preferred values inferred from the forest alone and that from WMAP-3 alone (Seljak et al. 2006). Using the outputs at z_o = 3.5, 4.0, and 4.5 for the z̄ = 4.2 mock spectra should roughly correspond to models with σ_8(z = 0) = 0.95, 0.85 and 0.78, respectively. The results of these calculations, shown in Figure 20, illustrate that the wavelet PDF is only weakly sensitive to the underlying amplitude of density fluctuations. The mean small-scale power is exponentially sensitive to the temperature, which is uncertain at the factor of ∼ 2 level, and so it is unsurprising that ∼ 10% level changes in the amplitude of density fluctuations have relatively little impact. The small effect visible in the plot is that the wavelet PDF shifts to smaller amplitudes for the outputs in which structure formation is more advanced. This likely owes to the enhanced peculiar velocities in models with larger density fluctuations, which suppress the small-scale fluctuations in the forest via a finger-of-god effect (e.g. McDonald et al. 2006). The impact of uncertainties in the amplitude of density fluctuations on the wavelet PDF is similarly small at other redshifts, and so we do not discuss this further here.
The amplitude of fluctuations in the forest, and the wavelet PDF, are sensitive to the mean transmitted flux, and uncertainties in this quantity impact constraints on the temperature from the PDF measurements. The mean transmitted flux partly determines the 'bias' between fluctuations in the transmission and the underlying density fluctuations, with the bias increasing as the mean transmitted flux decreases. This impacts the small-scale transmission power spectrum, and the wavelet PDF, as well as fluctuations on larger scales. When the gas density is sufficiently high, and/or the ionizing background sufficiently low - i.e., when the mean transmitted flux is small - even slight density inhomogeneities produce absorption features, yielding large transmission fluctuations on small scales.

[Figure caption, partial: ...The red dotted line shows the same model, except adopting a mean transmitted flux that is 1σ less than the best fit value. The blue dashed line shows the same, except for a mean transmitted flux 1σ larger than the best fit. The magenta line shows a cooler IGM model, where the mean transmitted flux is 2σ higher than the best fit.]
In the previous section, we adopted the best fit values of the mean transmitted flux from Faucher-Giguère et al. (2008b), but now consider variations around these values. These authors provide estimates of the statistical and systematic errors on their mean transmitted flux measurements. Their 1σ errors at our bin centers are: (z̄, ⟨F⟩ ± 1σ) = (2.2, 0.849 ± 0.017), (2.6, 0.778 ± 0.017), (3.0, 0.680 ± 0.020), (3.4, 0.566 ± 0.022), (4.2, 0.346 ± 0.042). Their systematic error budget accounts for uncertainties in estimating metal line contamination, and for uncertainties in corrections related to the rarity of true unabsorbed regions at high redshift, among other issues. Nonetheless, there is some tension between the measurements of different groups. We refer the reader to Faucher-Giguère et al. (2008b) for a discussion.
Below z ≲ 4, uncertainties in the mean transmitted flux have a noticeable yet fairly small impact on our constraints. A typical example, in the z̄ = 3.4 redshift bin, is shown in Figure 21. The solid black line in the figure shows a model with T_0 = 2.5 × 10^4 K, γ = 1.3 that adopts the best fit value for the mean transmission, ⟨F⟩ = 0.566. The blue dashed line is the same model, but with the mean transmitted flux shifted up from the central value by 1σ. This reduces the amplitude of transmission fluctuations in the model, and shifts the wavelet PDF towards slightly lower amplitudes. Reducing the transmission by 1σ has the opposite effect of boosting the typical wavelet amplitudes slightly, as illustrated by the red dotted line in the figure. While the uncertainty in the mean transmitted flux can shift the preferred temperature around slightly, the effect at this redshift is relatively small and has little impact on our main conclusions. For example, a cooler IGM model with T_0 = 1.0 × 10^4 K and γ = 1.6 still differs greatly from the PDF measurement, even after assuming a mean transmitted flux that is 2σ higher than the central value. This is demonstrated by the magenta line in Figure 21.
The impact of uncertainties in the mean transmitted flux is more important in our highest redshift bin, at z̄ = 4.2. The impact is larger at this redshift both because data samples are smaller and the fractional error on the mean transmitted flux is larger at this redshift, and because the wavelet amplitudes are more sensitive to the mean transmission once the transmission is sufficiently small. We repeat the exercise of the previous figure at z̄ = 4.2 and present the results in Figure 22. In this case, the model that roughly goes through the PDF measurement with our best fit mean transmitted flux has T_0 = 2.0 × 10^4 K and γ = 1.3. After shifting the mean transmitted flux up in this model by 1σ, it produces too many low wavelet amplitude regions, and too few high amplitude ones, in comparison to the measurement. Indeed, at this redshift, even the cooler IGM model with T_0 = 1.0 × 10^4 K, γ = 1.6 will pass through the measurement after a 2σ upwards shift in the mean transmitted flux. In other words, accounting for uncertainties in the mean transmitted flux, the cool IGM model with T_0 = 1.0 × 10^4 K, γ = 1.6 can only be excluded at roughly the 2σ level.
Furthermore, systematic concerns with direct continuum-fitting are most severe at high redshift, and the agreement between different measurements, while generally good at lower redshifts, is marginal above z ∼ 4 or so (Faucher-Giguère et al. 2008b). Direct continuum measurements must correct for the fact that there are few genuinely unabsorbed regions at high redshift, which can cause one to systematically underestimate the mean transmitted flux. Part of the disagreement can be traced to the fact that some of the measurements in the literature do not make this important correction. Since Faucher-Giguère et al. (2008b) make a correction using cosmological simulations, we consider their measurement to be more reliable than many of the other previous measurements. However, McDonald et al. (2006) constrain the mean transmitted flux based on a multi-parameter fit to their SDSS power spectrum measurements, which should be immune to this concern. Their best fit to the redshift evolution of the mean transmitted flux gives F = 0.41 at z = 4.2. This disagrees with the Faucher-Giguère et al. (2008b) measurement at this redshift by 1.6 − σ. The overall disagreement is in fact more severe than this, because there is a similar level of disagreement in neighboring redshift bins. Dall'Aglio et al. (2008) also perform a direct continuum-fit, correct for the rarity of unabsorbed regions at high redshift with a different methodology, and find a best fit to the redshift evolution of the opacity that implies F = 0.40 at z = 4.2. Again, this measurement is in tension with the measurement we adopt. Adopting either of these measurements for the best fit mean transmitted flux would favor a cooler IGM temperature.
4.4. Dependence on Large-Scale Smoothing
The measured PDF in the z̄ = 3.4 redshift bin requires hot (T_0 ≳ 20,000 K) gas. Interestingly, the PDF in this redshift bin is somewhat broader than the theoretical model curves, which assume a homogeneous temperature-density relation. This may be the result of uncleaned metal line contamination, but a more interesting possibility is that the wide measured PDF indicates temperature inhomogeneities from ongoing HeII reionization. We argued in §2.3 that the precise choice of large scale smoothing, L, should be relatively unimportant. Nevertheless, to further explore the exciting possibility that the data indicate temperature inhomogeneities in this redshift bin, we measure the PDF for a few additional choices of L and compare with theoretical models.
The results of these calculations are shown in Figure 23. In addition to our usual large scale smoothing of L = 1,000 km/s, we also compare simulated and observational wavelet PDFs for L = 200; 2,000; and 5,000 km/s. Here we use 15 logarithmically spaced A_L bins for the PDF measurement, rather than 10 as in the previous sections, to increase our sensitivity to any bi-modality in the PDF. The mean of the model curves with different smoothing scales is of course fixed, while the width of the PDF increases with decreasing smoothing scale (see §2.3, Figure 6). At all smoothing scales, the simulated model with T_0 = 25,000 K and γ = 1.3 is the best overall match to the data. The fit is poorest at L = 2,000 km/s, but it is not clear precisely how to interpret this, since the model is a formally poor fit at each smoothing scale. There does appear to be a slight, yet tantalizing, hint that the PDF is bimodal on large smoothing scales: this trend is most apparent at L = 2,000 km/s and L = 5,000 km/s. This may be a first indication of temperature inhomogeneities from ongoing HeII reionization, or it may be the result of uncleaned metal line contamination, as the abundance of metals can vary significantly on large smoothing scales. It will be interesting to revisit this measurement with larger data samples in the future.
[Figure 23 caption: The blue histogram in the panels is the wavelet PDF for a large-scale smoothing L of, top to bottom: 200; 1,000; 2,000; and 5,000 km/s. The color code for the different temperature-density relation models is identical to that in Figure 16.]
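To make the binning concrete, here is a minimal Python sketch (NumPy only; the function name and defaults are our own illustrative choices, not code from any released pipeline) of how smoothed wavelet amplitudes might be binned into 15 logarithmically spaced bins:

```python
import numpy as np

def wavelet_pdf(A_L, n_bins=15, lo=None, hi=None):
    """PDF of smoothed wavelet amplitudes A_L in logarithmically spaced
    bins (15 here, versus 10 in the earlier sections). Returns geometric
    bin centers and the normalized PDF."""
    A_L = np.asarray(A_L, dtype=float)
    lo = A_L.min() if lo is None else lo
    hi = A_L.max() if hi is None else hi
    edges = np.logspace(np.log10(lo), np.log10(hi), n_bins + 1)
    pdf, _ = np.histogram(A_L, bins=edges, density=True)
    centers = np.sqrt(edges[:-1] * edges[1:])  # geometric bin centers
    return centers, pdf
```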
4.5. Dependence on Small-Scale Smoothing
We found in the previous sections that our results at s_n = 34.9 km/s are quite susceptible to metal-line contamination, and somewhat to shot-noise bias. Because of this, we will not presently use the results at this smoothing scale in constraining the thermal history of the IGM. Nevertheless, as a consistency check, we compare here the measured wavelet PDF at this smoothing scale with simulated models.
As mentioned previously, to guard against shot-noise bias, we cut spectra with a (red-side) S/N ≤ 50 and add random noise to the mock spectra. Provided we cut out the low S/N data, the random noise mainly impacts only the low wavelet amplitude tail, by decreasing the number of very low amplitude wavelet regions. We add Poisson distributed noise to the mock spectra, assuming that the noise is dominated by Poisson fluctuations in the photon counts from the quasar itself. We have experimented with incorporating Poisson distributed sky noise, and Gaussian random read-noise, and find qualitatively similar results at fixed noise level. We estimate the average wavelet amplitude in the forest contributed by noise (after our S/N cut) as described in Appendix A, and find that it corresponds to S/N ∼ 70 per 4.4 km/s pixel at the continuum for the z̄ = 3.0 bin. In Figure 24 we compare some example model PDFs with the measurements, and find results gratifyingly close to those at the larger smoothing scale. In particular, the model with T_0 = 2.0 × 10^4 K and γ = 1.6 at z̄ = 3.0 that roughly matched the measurement on larger scales matches the PDF on this smaller scale as well. For contrast, we show a hotter and a colder IGM model, which are again a poor match. At z̄ = 2.2 and z̄ = 2.6 the results are similar to the previous ones, suggesting a cooler IGM at these redshifts. Comparing the blue and black dashed histograms, it is clear that the metal contamination correction is quite important at this scale, and we do not use these results in what follows.
We have also compared the s_n = 34.9 km/s wavelet PDF in the two highest redshift bins, where we have not identified metal lines, with model PDFs. The measured PDF at z̄ = 3.4 looks similar to the T_0 = 2.5 × 10^4 K, γ = 1.3 model that we previously identified as the best general match of our example models at s_n = 69.7 km/s, except with a fairly prominent tail towards high wavelet amplitudes. We expect more significant metal contamination at this smoothing scale (Appendix B), and so this is in line with our expectations. Indeed, the tail towards high wavelet amplitude looks similar to the one in the top panel of Figure 32. Similar conclusions hold at z̄ = 4.2, except the agreement without excising metals is better, likely owing to the smaller impact of metals at this redshift (Appendix B).

4.6. Approximate Constraints on the Thermal History of the IGM

In this section, we perform a preliminary likelihood analysis, in order to provide a more quantitative constraint on the thermal history of the IGM from the wavelet measurements. We confine our analysis to a three-dimensional parameter space, spanning a range of values for T_0, γ, and F. The results of the previous section suggest that CDM models close to a WMAP-5 cosmology should all give similar wavelet PDFs, and so it should be unnecessary to vary the cosmological parameters in this analysis. In order to facilitate this calculation, we adopt an approximate approach to cover the relevant parameter space. We generate the wavelet PDF for a range of models by expanding around a fiducial model in a first order Taylor series (see Viel & Haehnelt 2006 for a similar approach applied to SDSS flux power spectrum data). In particular, let p denote a vector in the three-dimensional parameter space. Then we calculate the wavelet PDF at a point in parameter space assuming that

P(A_L; p) ≈ P(A_L; p_fid) + Σ_i [∂P(A_L; p)/∂p_i]_{p = p_fid} (p_i − p_fid,i).   (11)

Although inexact, this approach suffices to determine degeneracy directions, approximate confidence intervals, and the main trends with redshift. We use the results of the previous section to choose the fiducial model to expand around: at each redshift we choose the best match of the example models in the previous section as the fiducial model. Using the Taylor expansion approximation of Equation 11, we then estimate the wavelet PDF for a large range of models, spanning T_0 = 5,000 − 35,000 K, γ = 1.0 − 1.6, and F = F_c ± 3σ_F (subject to a prior on F). Here F_c denotes the central value from Faucher-Giguère et al. (2008b), and σ_F denotes their estimate of the 1 − σ uncertainty on the mean transmitted flux. For each model PDF in the parameter space, we first compute χ^2 between the model and the wavelet PDF data, ignoring off-diagonal terms in the covariance matrix. We then add to this χ^2 an additional term to account for the difference between the model mean transmitted flux and the best fit value of Faucher-Giguère et al. (2008b). Finally, we marginalize over F (subject to the above prior based on the results of Faucher-Giguère et al. 2008b) to compute two-dimensional likelihood surfaces in the T_0 − γ plane at each redshift, and marginalize over γ to obtain reduced, one-dimensional likelihoods for T_0. We assume Gaussian statistics, so that 1 − σ (2 − σ) two-dimensional likelihood regions correspond to Δχ^2 = 2.30 (6.17), while one-dimensional constraints correspond to Δχ^2 = 1 (4).

[Figure 24 caption: Wavelet PDFs for the s_n = 34.9 km/s filter at z̄ = 3.0, 2.6 and z̄ = 2.2 (from top to bottom). Similar to previous plots at s_n = 69.7 km/s, the black dashed histograms with error bars show the measured wavelet PDFs, corrected for metal line contamination. The blue solid histogram is the same, without masking metal lines. A few example model curves are shown at each redshift, with random noise added to the mock spectra. The models that match the measurements at this smoothing are similar to the ones at the larger smoothing scale.]
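To make Equation 11 and the fit procedure concrete, here is a minimal Python sketch (the function names, array layouts, and the diagonal-covariance simplification mirror the description above, but everything here is our illustrative construction, not released analysis code):

```python
import numpy as np

def taylor_pdf(p, p_fid, pdf_fid, dpdf_dp):
    """First-order Taylor expansion of the wavelet PDF around a fiducial
    model (Equation 11): PDF(p) ~ PDF(p_fid) + sum_i dPDF/dp_i (p_i - p_fid_i).
    p, p_fid: parameter vectors (T0, gamma, F); pdf_fid: fiducial PDF in the
    A_L bins; dpdf_dp: finite-difference derivatives, shape (3, n_bins)."""
    return pdf_fid + (np.asarray(p, float) - np.asarray(p_fid, float)) @ dpdf_dp

def chi2_total(pdf_model, pdf_data, sigma_data, F_model, F_c, sigma_F):
    """chi^2 between model and measured PDFs, ignoring off-diagonal terms
    of the covariance, plus a term penalizing departures of the model mean
    transmitted flux from the measured central value (the prior on F)."""
    chi2 = np.sum(((pdf_model - pdf_data) / sigma_data) ** 2)
    return chi2 + ((F_model - F_c) / sigma_F) ** 2
```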
The best fit models at z = 4.2, 3.4, 3.0, 2.6, and 2.2 have χ^2 = 9.5, 19.8, 5.7, 8.0, and 23.1, respectively, for 7 degrees of freedom (10 A_L bins, minus 1 constraint since the PDF normalizes to unity, minus two free parameters). The fits at z = 4.2, 3.0, and 2.6 are acceptable, while the χ^2 values in the z = 3.4 and z = 2.2 bins are high (p-values of 6 × 10^−3 and 2 × 10^−3, respectively). The poor χ^2 in these redshift bins results because the measured PDFs are broader than the theoretical models in these bins, as discussed previously. We will nevertheless consider how χ^2 changes around the best fit models in these redshift bins, although we caution against taking the results too literally.
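As a quick check of the quoted p-values, one can evaluate the χ^2 survival function for 7 degrees of freedom directly (a small sketch using SciPy; the numbers are the best fit χ^2 values quoted above):

```python
from scipy.stats import chi2

# Best fit chi^2 values from the text, each with 7 degrees of freedom
# (10 A_L bins - 1 normalization constraint - 2 free parameters):
for z, chisq in [(4.2, 9.5), (3.4, 19.8), (3.0, 5.7), (2.6, 8.0), (2.2, 23.1)]:
    print(f"z = {z}: chi^2 = {chisq:5.1f}, p = {chi2.sf(chisq, df=7):.1e}")
# The z = 3.4 and z = 2.2 bins give p ~ 6e-3 and ~ 2e-3, as quoted.
```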
The constraints from these calculations are shown in Figure 25 and Figure 26. They are qualitatively consistent with the example models shown in the previous section. The degeneracy direction of the constraint ellipses results because the z = 4.2 measurements are sensitive only to the temperature close to the cosmic mean density, while the lower redshift measurements constrain only the temperature of more overdense gas. The best fit model at z = 4.2 has T_0 ∼ 20,000 K, but uncertainties in the mean transmitted flux allow cooler models with T_0 ∼ 10,000 K at ∼2 − σ, as discussed previously. The z = 3.4 measurements indicate the largest temperatures, and require T_0 ≳ 20,000 K at 2 − σ confidence. The lower redshift measurements, particularly that at z = 2.6, generally favor cooler temperatures, although at only moderate statistical significance. Figure 26 shows 2 − σ constraints on the temperature at mean density after marginalizing over γ and F. We conservatively allow γ to vary over γ = 1.0 − 1.6, even though γ ≳ 1.2 is expected theoretically (McQuinn et al. 2008). If we enforced a prior that γ be steeper than 1.2, then the results at z ≲ 3.4 would disfavor some of the higher T_0 models. The T_0 results are consistent with the IGM temperature falling off as T_0 ∝ (1 + z)^2 below z = 3.4; i.e., below this redshift the temperature evolution appears consistent with simple adiabatic cooling owing to the expansion of the Universe. Theoretically, we expect the temperature fall-off to be similar to, but slightly slower than, the adiabatic case just after reionization, with the temperature evolution eventually slowing owing to residual photoionization heating (Hui & Gnedin 1997). The statistical errors are however still large, and a flat temperature evolution is also consistent with the T_0 constraints, although this case is disfavored theoretically (see below). Note also that enforcing a γ ≥ 1.2 prior would disfavor the high T_0 models that are otherwise allowed at z = 2.2 and z = 2.6, strengthening the case for cooling below z ∼ 3.4.
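For intuition, here is a short sketch of the adiabatic scaling (illustrative only; the normalization is anchored by hand to roughly the z = 3.4 best fit value quoted above):

```python
def T0_adiabatic(z, T0_peak=2.0e4, z_peak=3.4):
    """T0(z) under pure adiabatic cooling below the peak, T0 ~ (1+z)^2,
    anchored (illustratively) to roughly 20,000 K at z = 3.4."""
    return T0_peak * ((1.0 + z) / (1.0 + z_peak)) ** 2

for z in (3.4, 3.0, 2.6, 2.2):
    print(f"z = {z}: T0 ~ {T0_adiabatic(z):,.0f} K")
# 20,000 K at z = 3.4 falls to roughly 10,600 K by z = 2.2.
```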
Moreover, the high temperatures at z = 3.0 and z = 3.4 suggest recent HeII photoheating. To illustrate this point, we show several example thermal history models in Figure 26, considering both cases without any HeII photoheating, and ones in which HI/HeI/HeII are all reionized together at high redshift (z ≥ 6). The upper blue dashed line is a late HI reionization model (z_r = 6), with a high temperature at reionization (T_r = 3 × 10^4 K), and a hard spectrum near the HI/HeI ionization thresholds (with a specific intensity near threshold of J_ν ∝ ν^−α and α = 0). This case should roughly indicate the highest possible temperature without HeII photoheating over the redshift range probed. Note that this is a rather extreme situation, since even if reionization completes as late as z = 6, much of the volume will be reionized significantly earlier (e.g. Lidz et al. 2007). The lower blue dashed line is an early reionization model (z_r = 12 and α = 2) that approximately indicates the lowest plausible temperature without HeII photoheating.
[Figure 26 caption: The upper blue dashed line shows a model in which HI/HeI reionization completes late, the IGM is reionized to a high temperature, and HeII is not yet reionized. The lower blue dashed line is similar, except in this case HI/HeI reionize early. The black dot-dashed line is for a model in which HI/HeI/HeII are all reionized together at z = 6 by sources with a quasar like spectrum. This curve is roughly an upper limit to the temperature without late time HeII reionization. A flat T_0 ∼ 20,000 K thermal history is consistent within the errors, but an implausibly hard ionizing spectrum is required to achieve such a high temperature from residual photoheating after reionization. This comparison suggests late time HeII reionization, perhaps completing near z ∼ 3.4.]
Finally, perhaps the most interesting case is the black dot-dashed line, which shows a model in which HI/HeI/HeII are all reionized together at z = 6. Here we assume that the temperature at reionization is T_r = 3 × 10^4 K, since atomic hydrogen line cooling should keep the temperature less than this when all three species are ionized simultaneously (Miralda-Escudé & Rees 1994; Abel & Haehnelt 1999). The temperature after reionization depends on the ionizing spectrum, which determines the amount of residual photoheating. The curve here adopts a quasar like spectrum, reprocessed by intervening absorption, to give α_HI = 1.5 near the HI ionization threshold and α_HeII = 0 near the HeII ionization threshold (Hui & Haiman 2003). This case is hence similar to the other z = 6 reionization model, except with the addition of residual HeII photoheating. Each of the examples considered gives too low a temperature in the z = 2.2, z = 3.0, and z = 3.4 redshift bins, particularly at z = 3.4 and z = 3.0. One can further ask how hard the post-reionization ionizing spectrum would need to be to give a thermal asymptote as large as ∼20,000 K. For a power law spectrum we find, using the thermal asymptote formula of Hui & Haiman (2003), that an implausibly hard spectrum with α ≲ −0.73 is required to match the 2 − σ lower limit on the z = 3.4 temperature. In fact, there is evidence that galaxies rather than quasars produce most of the ionizing background at z ≳ 3 (e.g. Faucher-Giguère et al. 2008b), and so assuming even a quasar like spectrum likely overestimates residual photoheating for plausible early HeII reionization models. In summary, although the errors allow the possibility of a slow temperature evolution and T_0 ∼ 20,000 K, this temperature is higher than expected from residual photoheating long after reionization.
The simplest interpretation is that HeII reionization heats the IGM, with the process completing near z ∼ 3.4, at which point there is relatively little additional heating and the Universe expands and cools. The redshift extent over which the heat input occurs is, however, not well constrained by our present measurement. Clearly the large error bars on the measurements still leave room for other possibilities. For example, models in which HeII reionization completes a bit later at z ∼ 3, or perhaps even as late as z ∼ 2.7 as favored by a recent analysis of HeII Ly-α forest data by Dixon & Furlanetto (2009), or earlier at z ∼ 4, are likely consistent with our present measurements given the large error bars. We will consider this further in future work. Finally, other heating mechanisms may be at work in addition to photoionization heating.
4.7. An Inverted Temperature-Density Relation?
Recently, Bolton et al. (2008), Becker et al. (2007) and Viel et al. (2009) have suggested that measurements of the Ly-α flux PDF favor an inverted temperature-density relation (γ < 1), i.e., situations where low density gas elements are hotter than overdense ones. Bolton et al. (2008) and Viel et al. (2009) construct simulated models with inverted temperature-density relations by adding heat into the simulations in a way that depends on the local density, i.e., on the density smoothed on the Jeans scale. This particular case for an inverted temperature-density relation seems unphysical to us, since heat input from, e.g., reionization should be coherent on much larger scales. Nonetheless, we can consider this as a phenomenological example that the flux PDF data favor, and examine the implications of these models for the small-scale wavelet amplitudes. Theoretically, Trac et al. (2008) find that hydrogen reionization does produce a weakly inverted temperature-density relation. This effect is driven by the tendency for large-scale overdensities to reionize hydrogen first, coupled presumably with the small correlation between the overdensity on large scales and that on the Jeans scale. On the other hand, McQuinn et al. (2008) find that HeII reionization leads to a non-inverted equation of state, with γ ∼ 1.3 in the midst and at the end of HeII reionization. We refer the reader to this paper for further discussion.
[Figure 27 caption: Wavelet PDF in inverted temperature-density relation models compared to measurements. The dashed histogram shows the metal line corrected wavelet PDF at z = 3 (L = 1,000 km/s, s_n = 69.7 km/s), and the blue histogram is the same without correcting for metal line contamination. The colored lines show several models with γ = 0.5. One can fit the PDF with an inverted temperature-density relation, but this requires an extremely high temperature at mean density.]
To explore this, we generate mock spectra and measure wavelet amplitudes for several inverted temperature-density relation models, and compare with our z = 3 measurements. As before, we are considering the impact of the temperature in a post-processing step, and so we are not accounting for differences between the gas pressure smoothing in the inverted models and that in the simulation. Likewise, we incorporate thermal broadening assuming a perfect temperature-density relation, and so the impact of scatter in the temperature-density relation is ignored in this part of the calculation. We consider inverted temperature-density relations with a power law index of γ = 0.5, close to the value suggested by Bolton et al. (2008) and Viel et al. (2009) from their flux PDF measurements near z = 3. The results of these calculations are shown in Figure 27. These cases also roughly match the observed PDF, but require a very high temperature at mean density of T_0 ∼ 40,000 − 45,000 K. The reason for this is that the wavelet PDF measurements are sensitive mostly to the temperature around a density of Δ ∼ 2 at this redshift. In the previous section we found that models with, for example, T_0 ∼ 25,000 K and γ = 1.3 roughly match the data. A model with an inverted temperature-density relation (γ = 0.5) produces the same temperature at a density of Δ ∼ 2 only for a much higher temperature (at mean density) of T_0 ∼ 45,000 K. The figure suggests that the expected degeneracy between T_0 and γ indeed extends to even these inverted temperature-density relations. Hence one can fit the measurements with a very high T_0, small γ model, although the inverted cases produce slightly wider PDFs. While these can fit the data, the high required temperatures seem unlikely to us, and we disfavor inverted models for this reason. Bolton et al. (2008) and Viel et al. (2009) found that inverted models with substantially smaller temperatures at mean density match their flux PDF measurements. On the other hand, Viel et al. (2009) did a joint fit to the flux PDF and the SDSS flux power spectrum from McDonald et al. (2006). Recall that the SDSS measurements are sensitive only to the large scale flux power spectrum (k ≲ 0.02 s/km), and thus depend on IGM parameters differently than the small-scale wavelet measurements explored here. Their joint fit requires high T_0 for cases with inverted temperature-density relations, similar to our conclusions from a different type of measurement. There thus appears to be some tension with the flux PDF measurement, which may reflect systematic errors in one or more of the measurements and/or the modeling. We intend to consider this further in future work.
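The degeneracy can be made quantitative with a few lines of arithmetic, since the measurements mostly pin down T(Δ ∼ 2) = T_0 Δ^{γ−1}; a minimal sketch reproducing the rough numbers above:

```python
# The PDF near z = 3 mostly constrains T(Delta ~ 2), with
# T(Delta) = T0 * Delta**(gamma - 1):
T0_fid, gamma_fid, Delta = 2.5e4, 1.3, 2.0
T_at_2 = T0_fid * Delta ** (gamma_fid - 1.0)   # ~3.1e4 K
gamma_inv = 0.5
T0_inv = T_at_2 / Delta ** (gamma_inv - 1.0)   # ~4.4e4 K
print(f"T(Delta=2) = {T_at_2:.3g} K -> gamma = 0.5 needs T0 ~ {T0_inv:.3g} K")
```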
4.8. Inhomogeneities in the Temperature-Density Relation

Let us further consider the implications of our measurements for the presence or absence of temperature inhomogeneities in the IGM. In most redshift bins, the measured PDF has comparable width to the simulated PDFs, which assume a perfect temperature-density relation. The possible exceptions are the z̄ = 3.4 bin (where metal contamination is a possible culprit) and the z̄ = 2.2 bin (where scatter from shocks may be most important). One might wonder if the widths of the wavelet PDFs are too small to be compatible with ongoing or recent HeII reionization, which is presumably a fairly inhomogeneous process. A related question regards the precise meaning of our temperature constraints in the presence of inhomogeneities: which temperature do we measure exactly (the mean temperature, the minimum temperature, etc.)? We intend to address these issues in detail in future work, but we outline a few pertinent points here. In this discussion, we draw on the results of McQuinn et al. (2008).
The first point is that temperature inhomogeneities during HeII reionization, while likely important, are smaller than one might naively guess. McQuinn et al. (2008) emphasized the importance of hard photons, with long mean free paths, for HeII photoheating: much of the heating during HeII reionization by bright quasars occurs far from sources, rather than in well-defined 'bubbles' around ionizing sources. This is quite different than during HI/HeI reionization by softer stellar sources, where the ionizing photons have short mean free paths and heating does occur within well-defined bubbles. Since the hard photons have long mean free paths, and a 'background' radiation field from multiple sources needs to be built up before these photons appreciably ionize and heat the IGM, the heating is much more homogeneous than might otherwise be expected. The softer photons, typically absorbed in bubbles around the quasar sources, only heat the IGM by δT ≲ 7,000 K. Consider the temperature PDFs in Figure 11 of McQuinn et al. (2008). This figure illustrates that by the time any gas is heated significantly, there are very few completely cold regions left over in the IGM: the temperature field is more homogeneous than might be expected.
Simplified models with discrete ∼30,000 K bubbles around quasar sources and a cooler IGM outside (e.g. Lai et al. 2006) are hence not realistic, and overestimate the temperature inhomogeneities. In the McQuinn et al. (2008) simulations, the temperature inhomogeneities peak at a level of σ_T/⟨T⟩ ∼ 0.2, which is reached in the early phases of HeII reionization. For contrast, a toy two-phase hot/cold IGM, with hot bubbles that are 3 times as hot as a cooler background IGM, gives a more substantial peak fluctuation level of σ_T/⟨T⟩ = 0.58, reached when the hot bubbles fill 25% of the IGM. In the midst of HeII reionization, the McQuinn et al. (2008) simulations predict roughly 10% level temperature fluctuations on large scales from inhomogeneous HeII heating. This level of scatter may be hard to discern with our existing measurement.
To illustrate this, we compare the z̄ = 4.2, s_n = 69.7 km/s measurement to a simplified and extreme two-phase model. This redshift bin probes extended stretches of spectrum along just two lines of sight. Imagine a model where one line of sight passes entirely through cold regions of the IGM with T_0 = 10^4 K, γ = 1.6, while the other line of sight passes entirely through hot regions with T_0 = 2.5 × 10^4 K, γ = 1.3. This is a contrived example, since each sightline probes hundreds of co-moving Mpc, and so each sightline should in reality probe a mix of temperatures, but this simple case nonetheless illustrates the challenge of detecting temperature inhomogeneities. For simplicity, in this toy model we imagine that each line of sight probes an equal stretch through the IGM, so that the wavelet PDF is a fifty-fifty mix of the hot and cold models. In this toy scenario the mean IGM temperature is ⟨T⟩ = 17,500 K and the fluctuation level is σ_T/⟨T⟩ ∼ 0.43, i.e., substantially larger than we expect. The wavelet PDF in this toy model is shown in Figure 28. This simple model clearly produces too broad a PDF, but it is also apparent that smaller, likely more realistic, levels of inhomogeneity will be hard to distinguish with the existing data. For example, an inhomogeneous model with fewer cold regions than in the toy two-phase model would agree with the measurement. Indeed, the data may even favor slightly inhomogeneous models, but we leave exploring this to future work. The z = 4.2 and z = 3.4 data, which may be in the midst of HeII reionization, and which are sensitive to the temperature near the cosmic mean density, are the best redshift bins for further exploration. Provided the inhomogeneities are relatively small, as suggested by the measurements in most redshift bins, ambiguities in which temperature we constrain precisely are unimportant, and our temperature estimates should be accurate.
[Figure 28 caption: The blue histogram with error bars is the wavelet PDF at z̄ = 4.2 and s_n = 69.7 km/s. The curves show theoretical models: the red line is a hot model, the black line is a cold model, while the blue curve shows a fifty-fifty mix between the hot and cold models. This extreme model can be ruled out as it is too broad, and produces too many high amplitude regions compared to the data, but one can see that detecting smaller levels of inhomogeneity is challenging.]
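A minimal sketch of the toy two-phase bookkeeping (the mixture PDF is just a weighted sum of the single-phase PDFs in the same A_L bins; the values are those quoted above):

```python
import numpy as np

# Toy two-phase model: one sightline entirely cold (T0 = 1.0e4 K), one
# entirely hot (T0 = 2.5e4 K), mixed fifty-fifty.
T_cold, T_hot = 1.0e4, 2.5e4
T_mean = 0.5 * (T_cold + T_hot)                          # 17,500 K
sigma_T = np.sqrt(0.5 * (T_cold - T_mean) ** 2
                  + 0.5 * (T_hot - T_mean) ** 2)         # 7,500 K
print(T_mean, sigma_T / T_mean)                          # ~0.43, as quoted

def pdf_mix(pdf_hot, pdf_cold, f_hot=0.5):
    """Wavelet PDF of the mixture: a weighted sum of the single-phase
    PDFs evaluated in the same A_L bins."""
    return f_hot * pdf_hot + (1.0 - f_hot) * pdf_cold
```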
Another possible issue, related to the discussion in §2.3, is that the one-dimensional nature of the Ly-α forest may obscure detecting temperature inhomogeneities from HeII reionization. Consider the three-dimensional power spectrum of temperature fluctuations in Figure 10 of McQuinn et al. (2008). There is a large scale peak in the three-dimensional power spectrum, owing to inhomogeneous heating, and a prominent small-scale ramp-up that results from the temperature-density relation and small-scale density inhomogeneities. The large scale peak in the power spectrum is essentially the signal we are after, while the small scale ramp-up is noise as far as extracting inhomogeneities is concerned. However, the one-dimensional temperature power spectrum may be more relevant than the three-dimensional one for absorption spectra. In the one-dimensional temperature power spectrum, high-k transverse modes, which are dominated by the small-scale ramp-up, will be aliased to large scales, swamping the temperature inhomogeneities. This argument is imperfect though, since the one-dimensional temperature power spectrum is not exactly the relevant quantity either: absorption spectra are insensitive to the temperature of large overdensities, which regardless produce saturated absorption. It will be interesting to consider this further in the future, and to consider the potential gains from cross-correlating the wavelet amplitudes of pairs of absorption spectra.
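The aliasing argument can be illustrated with the standard relation between an isotropic three-dimensional power spectrum and its one-dimensional skewer, P_1D(k) = (1/2π) ∫_k^∞ q P_3D(q) dq, so that all transverse power at q > k contributes at k. A rough numerical sketch (our own illustration, not tied to any particular simulation output):

```python
import numpy as np

def p1d_from_p3d(k_par, q_grid, p3d):
    """One-dimensional (line-of-sight) power spectrum of an isotropic 3D
    field: P_1D(k) = (1/2pi) * integral_k^inf q P_3D(q) dq. All transverse
    power at q > k is aliased down to k, which is how the small-scale
    'ramp-up' can swamp the large-scale temperature inhomogeneity peak."""
    k_par = np.atleast_1d(np.asarray(k_par, dtype=float))
    p1d = np.empty_like(k_par)
    for i, k in enumerate(k_par):
        m = q_grid >= k
        p1d[i] = np.trapz(q_grid[m] * p3d[m], q_grid[m]) / (2.0 * np.pi)
    return p1d
```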
A final issue, particularly relevant in the highest redshift bin, is that the temperature inhomogeneities may depend on the timing and nature of hydrogen reionization. The temperature contrast between regions with doubly ionized helium and those in which only HI/HeI are ionized depends on when hydrogen (and HeI) reionized. Specifically, the temperature contrast between HII/HeII and HII/HeIII regions will be reduced if hydrogen is reionized late to a high temperature, and increased if hydrogen reionizes early to a smaller temperature. Moreover, heating from hydrogen reionization will itself be inhomogeneous (e.g. Cen et al. 2009). Extending the measurements in this paper to higher redshift can help disentangle the impact of hydrogen and helium photoheating. Further modeling will also be helpful.
4.9. The Impact of Jeans Smoothing

As mentioned previously, a shortcoming of our modeling throughout is that we have run only a single simulated thermal history in describing the gas density distribution in the IGM: we vary the thermal state of the gas only as we construct mock absorption spectra and incorporate thermal broadening. Similar approximations are common in the Ly-α forest literature. The gas density distribution is sensitive to the full thermal history of the IGM (Gnedin & Hui 1998), and so properly accounting for a range of thermal histories requires running many simulations. This certainly deserves further exploration, but we do not expect a big impact on our present results. Thermal broadening directly smooths the optical depth field and results in a roughly exponential decrease in small-scale flux power (Zaldarriaga et al. 2001), while Jeans smoothing acts on the three-dimensional gas distribution and has a less direct impact. Properly accounting for the impact of HeII photoheating in the simulation run should smooth out the gas distribution a bit, and reduce the wavelet amplitudes in these models slightly. This might reduce our favored temperatures during HeII reionization, but we expect this effect to be small compared to other uncertainties. Observational studies of the absorption spectra of close quasar pairs may help disentangle the effects of thermal broadening and Jeans smoothing.
4.10. Comparison with Previous Measurements
A detailed comparison with previous measurements is difficult since our methodology differs from that of most previous work. Instead, we will simply compare the bottom line, and make a few remarks about the differences. Figure 29 shows our constraints on T_0(z), compared to the results of Schaye et al. (2000), Ricotti et al. (2000), McDonald et al. (2001), and Zaldarriaga et al. (2001). It is encouraging that some of the main trends are similar across all of the measurements: for example, all of the measurements favor a fairly hot IGM near z ∼ 3. In this sense, our work reinforces the previous results. There are differences in the details, however: the peak temperatures in Ricotti et al. (2000) and Schaye et al. (2000) are reached at lower redshift than in our analysis. The McDonald et al. (2001) and Zaldarriaga et al. (2001) results are, on the other hand, flat as a function of redshift, although they adopt wide redshift bins and may average over any temperature increase. Our measurements are also fairly consistent with a flat temperature evolution given the large error bars on our measurements. Our results mostly favor higher temperatures than the previous measurements, particularly the high redshift points of Schaye et al. (2000).
One possible reason for some of the differences is related to improvements in simulations of the forest over the past decade or so. In Appendix A, we found that our method -and we suspect related methods -require fairly large simulation volumes and high mass and spatial resolution, particularly at high redshift (see also e.g. Bolton & Becker 2009). The requisite particle number, while achievable today, was of course prohibitive for past studies. Indeed, this was one of our motivations for revisiting the temperature measurements. While some of the previous studies varied simulation resolution and boxsize, they often considered only a single additional run, which may have been inadequate to fully assess convergence. Finite resolution, in particular, can bias temperature estimates low.
It is instructive to compare our fiducial simulation, with a boxsize of L_b = 25 Mpc/h and N_p = 2 × 1024^3 particles, to the main runs of previous work. Schaye et al. (2000) used a (L_b, N_p) = (2.5 Mpc/h, 2 × 64^3) SPH simulation, Ricotti et al. (2000)'s main runs were (2.56 Mpc/h, 2 × 256^3) HPM calculations, McDonald et al. (2001) used an Eulerian hydrodynamic simulation with L_b = 10 Mpc/h and 288^3 cells, and Zaldarriaga et al. (2001) used a dark matter only simulation with L_b = 16 Mpc/h and 128^3 dark matter particles. Given the differences between methods, we will not try to estimate the impact of systematic errors from finite boxsize and resolution on previous results. However, it is clear that increases in computing power allow us to do a much better job with respect to boxsize and resolution than previous work. Finally, improved estimates of the mean transmitted flux (Faucher-Giguère et al. 2008b), and improved masking of metal lines, may also contribute to some of the differences between our results and previous work.
CROSS-CORRELATING WITH THE HEII LY-α FOREST
An interesting possibility is to cross-correlate wavelet amplitude measurements from HI Ly-α forest spectra with measurements in the corresponding regions of HeII Ly-α forest spectra. It is timely to consider this, as larger samples of HeII Ly-α forest spectra will soon be available (Syphers et al. 2009), especially given the recent installation of the Cosmic Origins Spectrograph on the Hubble Space Telescope.
A fundamental difficulty with HeII Ly-α forest observations is that the HeII Ly-α cross section is relatively large, and so even a mostly ionized (mostly HeIII) medium may give rise to complete absorption. McQuinn (2009) recently emphasized, however, that this problem is not as acute as it is for the z ∼ 6 HI Ly-α forest. First, the z ∼ 3 HeII Ly-α optical depth is significantly smaller than the z ∼ 6 HI Ly-α optical depth, owing to the lower cosmic helium abundance, the smaller absorption cross section, and the lower mean gas density at z ∼ 3. Moreover, one can locate low density gas elements using high transmission regions from HI Ly-α forest observations of the same quasar: if even these low density regions manage to give complete absorption, these elements and surrounding gas in the absorption trough must be significantly neutral (see McQuinn 2009 for details). As a quantitative measure, it is helpful to note that a gas element at the z = 3 cosmic mean density with a HeII fraction of only X_HeII = 10^−3 produces a significant HeII optical depth of τ_HeII = 3.6 (e.g. Furlanetto 2008). A gas element at one tenth of the cosmic mean density will give the same optical depth when it is one percent neutral.
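A small sketch of the quoted scaling, treating the optical depth as linear in the overdensity Δ and the HeII fraction at fixed z = 3 (an assumption adequate for these order-of-magnitude statements):

```python
def tau_HeII(Delta, X_HeII):
    """HeII Ly-alpha optical depth at z = 3, scaled linearly in the gas
    overdensity Delta and HeII fraction X_HeII, anchored to the quoted
    tau = 3.6 for Delta = 1, X_HeII = 1e-3 (e.g. Furlanetto 2008). Redshift
    evolution and thermal broadening are ignored in this rough sketch."""
    return 3.6 * Delta * (X_HeII / 1.0e-3)

print(tau_HeII(1.0, 1.0e-3))  # 3.6: mean density gas, 0.1% HeII
print(tau_HeII(0.1, 1.0e-2))  # 3.6: ten times underdense, 1% HeII
```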
While constraining on their own, HeII Ly-α observations may be fruitfully combined with our methodology to extract still more information. Specifically, we propose to measure wavelet amplitudes from the HI Ly-α forest for quasar spectra with existing HeII Ly-α observations, contrasting the wavelet amplitudes in HeII absorption trough regions with those in HeII transmission regions. If the HeII troughs correspond to purely neutral HeII regions, untouched by high energy quasar photons, we expect them to be cold, provided that HeI and HI in the region were ionized long ago, as one expects for absorbing gas at say z ∼ 3 − 4. The temperature-density relation in the neutral HeII regions should be at T_0 ≲ 10,000 K and γ ∼ 1.6, depending on the nature of the HI ionizing sources, and on when HI reionization occurs (Hui & Haiman 2003). On the other hand, if the regions instead contain mostly ionized HeII (yet are nevertheless opaque in HeII Ly-α owing to the large absorption cross section), they will be at similar temperature to the transmission regions. In this case all of the gas will be hot, unless HeII reionization completed at much higher redshift. A final, somewhat subtle, possibility relates to the fact that towards the end of HeII reionization there will likely be very hot gas elements with neutral fractions as large as X_HeII ∼ 0.1 that are (partly) ionized by a heavily filtered ionizing spectrum from distant quasars (McQuinn 2009). Such regions will give rise to troughs, will generally be hotter than more ionized regions, and occur before HeII reionization completes. Hence, at the end of HeII reionization, we may expect the HeII troughs to be hotter than transmission regions. Only troughs of purely neutral HeII gas, untouched by quasar photons, should be cold. Discovering any cold regions in the HI Ly-α forest that correspond to HeII troughs would also make the presence of cold regions and their connection to HeII reionization more plausible. A detailed study of this type will certainly await future HeII Ly-α observations, but we can nevertheless illustrate the main idea with the single spectrum from our sample, HE 2347-4342, for which there is an existing HeII Ly-α forest spectrum (e.g. Smette et al. 2002). These authors identify two spectral regions that are consistent with complete HeII absorption troughs to within the signal to noise of their measurement. Specifically, an observed spectral region between λ_obs = 1165.00 − 1173.50 Å (an absorption redshift of z̄_gas = 2.849) is estimated to have a mean HeII Ly-α transmission of F = −0.001 ± 0.007, and a region between λ_obs = 1150.00 − 1154.95 Å (z̄_gas = 2.7938) has F = 0.024 ± 0.030 (Smette et al. 2002).
We plot the transmission, δ_F, for the corresponding portion of the VLT HI Ly-α spectrum in Figure 30 (top panel). In the bottom panel of the figure, we show the wavelet amplitudes (for s_n = 34.9 km/s, L = 1,000 km/s) of the corresponding stretch of spectrum, and compare them to the amplitudes along typical sightlines drawn from a hot T_0 = 20,000 K, γ = 1.6 model and a cold T_0 = 10,000 K, γ = 1.6 model. The cold model produces larger wavelet amplitudes than the data, and the hot model matches more closely (although even it has two regions of higher wavelet amplitude than found in the observed spectrum). Hence, our measurement suggests that the high opacity regions are already quite hot. This is unsurprising based on the findings of the previous section that the IGM is mostly quite hot at z ∼ 3. These special HeII trough regions do not appear cooler than typical regions, and this argues against the gas in these regions being purely neutral. In addition, the trough regions are not obviously hotter than the transmission regions.
We tentatively suggest that the trough regions are hot and ionized. For now, our argument is based only on a small portion of a single spectrum, and so we caution against drawing strong conclusions from it. We regard it as suggestive, and eagerly await further HeII Ly-α spectra to perform a more complete study, hopefully out to higher redshift. Note that there will likely be significant HeII transmission before HeII reionization completes (Furlanetto 2008; McQuinn et al. 2008), and so we should be able to contrast the temperature in trough and transmission regions even in the midst of HeII reionization and fully exploit this method.
CONCLUSIONS
In this paper, we used a method similar to that of Theuns & Zaroubi (2000) and Zaldarriaga (2002) to quantify the amount of small-scale structure in the Ly-α forest. In particular, we convolved Ly-α forest spectra with suitably chosen Morlet wavelet filters, and recorded the PDF of the smoothed wavelet amplitudes. Using cosmological simulations, we showed that this measure of small-scale structure in the forest can be used to extract information about the temperature of the IGM and its inhomogeneities. We then applied this methodology to 40 VLT spectra, spanning absorption redshifts between z = 2.2 and z = 4.2 and presented tables of the resulting smoothed wavelet PDFs. The tables (Tables 1-5) of smoothed wavelet PDFs are the main result of this paper.
In order to examine the main implications of our measurements for the thermal history of the IGM, we made an initial comparison with high resolution cosmological simulations. This comparison suggests that the temperature of the IGM, close to the cosmic mean density, peaks in the redshift range studied near z = 3.4, at which point T_0 ≳ 20,000 K at 2 − σ confidence. At lower redshift, the data appear roughly consistent with a simple adiabatic fall-off (T_0 ∝ (1 + z)^2) from the peak temperature at z = 3.4. The high temperature measurements require significant amounts of late time heating, and are inconsistent with models in which HeII reionization completes much before z ∼ 3.4. At the highest redshift considered, the temperature in our best fit model is rather high, T_0 ∼ 15,000 − 20,000 K, but cooler T_0 ∼ 10,000 K models are still allowed at 2 − σ confidence at this redshift, owing mostly to uncertainties in the mean transmitted flux. We believe that the most likely explanation for our results is that HeII reionization completes sometime around z ∼ 3.4, although the statistical errors are still large and other heating mechanisms may conceivably be at work. In general, our analysis favors higher temperatures and higher redshift HeII reionization than most previous analyses in the literature (see §4.10).
This work can be extended and improved upon in several ways, some theoretical and some observational. First, we intend to compare our measurements to more detailed theoretical models which follow photoheating and radiative transfer during HeII reionization. Next, the wavelet PDF measurements can be combined with measurements of the large scale flux power spectrum from the SDSS (McDonald et al. 2006). This should tighten our constraints, and hopefully break some of the degeneracies present with the mean transmitted flux at high redshift. It would also be interesting to apply our method to a larger data set, beating down the statistical error bars, and filling in the redshift gap in our present data set around z = 3.8. Identifying metal line absorbers in additional spectra would help further control metal line contamination, an important systematic for small-scale measurements. Particularly interesting would be to apply our methodology at higher redshifts. This would help disentangle the effects of hydrogen and helium photoheating, and perhaps provide interesting constraints on hydrogen reionization (Theuns et al. 2002a; Hui & Haiman 2003). A similar analysis applied to the Ly-β region of a quasar spectrum would be sensitive to the temperature of more overdense regions, and help constrain γ(z) (Dijkstra et al. 2004). Finally, it would be interesting to consider the implications of our measurements for cosmological parameter constraints from the Ly-α forest, for which the temperature of the IGM is an important nuisance parameter. Although challenging to extract, the small-scale structure in the Ly-α forest contains a wealth of information regarding the thermal and reionization histories of the Universe!

APPENDIX A: ESTIMATING THE NOISE BIAS

For white noise, the mean noise wavelet amplitude is the same on a small scale s_m as on the analysis scale s_n: ⟨|a^noise_m(x)|^2⟩ = ⟨|a^noise_n(x)|^2⟩. Provided we can find a scale s_m at which the noise dominates over the signal, that the noise is white, Gaussian random, and uncorrelated with the signal, we can construct an un-biased estimator of the signal's mean wavelet amplitude. We simply subtract the average of the small-scale filtered wavelet amplitudes from that on larger scales. Our estimate of the noise bias comes from filtering the data on a scale s_m = 17.4 km/s, and assuming ⟨|a^tot_m(x)|^2⟩ ∼ ⟨|a^noise_m(x)|^2⟩ on this scale, after metal excision. In a spectrum with low noise, the signal may still dominate over the noise even on this smoothing scale, and in this case we overestimate the noise bias. However, since the signal drops off strongly with wavenumber, we conclude in this case that the noise bias is unimportant.
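A minimal sketch of this estimator (our own illustration; a_n and a_m stand for the Morlet-filtered spectrum on the analysis and noise-dominated scales, respectively):

```python
import numpy as np

def mean_signal_amplitude(a_n, a_m):
    """Un-biased estimate of the mean signal wavelet amplitude on scale s_n.
    a_n, a_m: complex wavelet coefficients of the same spectrum, filtered on
    the analysis scale s_n and on the noise dominated scale s_m = 17.4 km/s
    (after metal excision). Assumes white, Gaussian noise uncorrelated with
    the signal, so the mean noise amplitude is the same on the two scales
    and simply subtracts off."""
    return np.mean(np.abs(a_n) ** 2) - np.mean(np.abs(a_m) ** 2)
```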
We would also like to estimate the noise bias in the wavelet amplitude power spectrum, and the bias in the variance of the wavelet amplitudes, smoothed on length scale L. To begin with, we neglect any variations in the noise power spectrum, P_N, from sightline to sightline, and assume that it is independent of scale. Using the notation Â(x) = |a^sig_n(x) + a^noise_n(x)|^2, let us consider the (configuration space) two-point function of Â(x), Equation 14, which separates into a signal term, a pure noise term, and cross-terms. Here ξ^sig_A(|x_1 − x_2|) denotes the two-point function of the underlying signal (i.e., Equation 8, although in Equation 14 we have not yet normalized by ⟨A⟩ in the denominator), and ξ^noise_A(|x_1 − x_2|) is the pure noise term, while the other terms are cross-terms.
The power spectrum of Â(x) is the Fourier transform of Equation 14. Using the convolution theorem and Equation 3, the pure noise part of the power (i.e., the Fourier transform of ξ^noise_A(|x_1 − x_2|)) can be written as Equation 15. Here B = π^{−1/4} (2π s_n/Δu)^{1/2} is a normalization factor (Equation 3), and we abbreviate s_n as s. The noise contribution to the wavelet amplitude (squared) power spectrum is proportional to P_N^2, because A is a quadratic function of δ_F (Equations 4-5).
Next we consider the cross terms. The terms on the third line of Equation 14 can be shown to be very small. The important cross terms can be derived by again applying the convolution theorem, and using the Fourier transform of the Morlet wavelet filter and its complex conjugate; the result is Equation 16. Here P_F(k) denotes the flux power spectrum. The power spectrum of the underlying signal, P_A(k), is related to the one we measure, P_Â(k), by P_A(k) = P_Â(k) − P^cross_A(k) − P^noise_A(k). Note that in order to estimate the bias in the measured power spectrum we need to first estimate the underlying flux power spectrum P_F(k). The expressions also require an estimate of the noise power spectrum, which we derive from the small-scale filtered field, P_N(k) = ⟨|a^tot_m|^2⟩ Δu, under the assumption that ⟨|a^tot_m|^2⟩ = ⟨|a^noise_m|^2⟩ = ⟨|a^noise_n|^2⟩. Finally, we want to estimate the bias on the variance of the (smoothed) wavelet amplitude squared. The variance follows from the power spectrum (Equation 17). It is also useful to note that the noise contribution to the variance can be calculated analytically from Equation 15 and Equation 17. Integrating over the power spectrum, the variance we measure, σ^2_Â(L), is related to the underlying signal variance, σ^2_A(L), by σ^2_Â(L) = σ^2_A(L)|_noise + σ^2_A(L)|_cross + σ^2_A(L). This expression almost provides us with an un-biased estimate of the signal variance, but we still need to take into account sightline-to-sightline variations in the noise power spectrum. The above expression for σ^2_Â(L) can be interpreted as a conditional variance, var(A_L|P_N), i.e., the variance in A(L) given that the noise power is P_N. The unconditional variance is then given (for uniform weighting) by

var(A_L) = ⟨var(A_L|P_N)⟩_Noise + ⟨(⟨A_L|P_N⟩ − ⟨A_L⟩)^2⟩_Noise,

where ⟨…⟩_Noise denotes averaging over the ensemble of sightlines with different noise properties, and ⟨A_L⟩ is the global average wavelet amplitude. With these formulae in hand, we can estimate the bias in our variance estimates owing to random noise in the spectra. The cross term in Equation 16 requires an estimate of the flux power spectrum. We use here a simulated model for the flux power spectrum.
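A sketch of the last step, combining per-sightline conditional means and variances into the unconditional variance via the law of total variance (the function and argument names are ours):

```python
import numpy as np

def total_variance(cond_var, cond_mean, weights=None):
    """Law of total variance over the ensemble of sightlines: combine the
    per-sightline conditional variance var(A_L | P_N) and conditional mean
    <A_L | P_N> into the unconditional variance of A_L (uniform weighting
    by default). cond_var, cond_mean: arrays, one entry per sightline."""
    mean_of_var = np.average(cond_var, weights=weights)
    global_mean = np.average(cond_mean, weights=weights)      # <A_L>
    var_of_mean = np.average((cond_mean - global_mean) ** 2, weights=weights)
    return mean_of_var + var_of_mean
```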
APPENDIX B: SIMULATED METAL LINE ABSORPTION
In this Appendix, we explore the impact of metal line contamination on the wavelet PDF measurements theoretically. Our main goal here is to build some intuition for the contamination and its relative importance at different smoothing scales and redshifts; i.e., we expect this investigation to be useful qualitatively, but do not expect quantitatively accurate estimates of metal line contamination. Our strategy is to randomly populate mock spectra with metal lines in a way that roughly matches empirical constraints on metal line absorbers, rather than attempting to directly simulate metal absorbers from first principles. Ideally, our prescription for including metal lines would match the column density distribution, two-point correlation function, b-parameter distribution, and overall opacity for many different species of metal line absorbers. In practice, the relevant statistical properties have not been measured for all of the metal absorbers that may contaminate the forest. We instead populate mock spectra only with lines that match the observed properties of CIV lines, which produce the strongest contamination to the forest. To roughly account for absorption by additional metal line species, we generate three independent sets of absorption lines, with each set of lines drawn according to the statistical properties of CIV. This crude approximation is adequate to the extent that the statistical properties of other metal line absorbers are similar to those of CIV. Generating three sets of CIV-like lines is also somewhat arbitrary of course, and we find that, somewhat surprisingly, even with three sets of strong CIV absorbers we underestimate the fractional contribution of metals to the opacity of the forest by a factor of a few (Schaye et al. 2003; Faucher-Giguère et al. 2008b).

We generate mock metal absorption lines by first generating a lognormal random field, and then Poisson sampling from the lognormal field to produce random realizations of discrete metal lines. The measured two-point correlation function of CIV absorbers has the form given by Boksenberg et al. (2003). We want to generate realizations of a random field with this clustering, which we do approximately with a lognormal model. Specifically, we generate a Gaussian random field δ_G and then form a lognormal field via the mapping 1 + δ_CIV = A exp(δ_G), with the parameter A chosen so that the field δ_CIV has mean zero, A = exp(−⟨δ_G^2⟩/2). In order for δ_CIV to have the correct two-point function, the Gaussian random field δ_G must be drawn from a model with an appropriate power spectrum. By experimentation, we find that a model with amplitude A_G = 1.11 × 10^3 and width σ_G = 135 km/s gives roughly the correct clustering. Given a line of sight realization of the random field δ_CIV, the average number of CIV lines expected in a simulated cell of velocity width Δv_cell, and density δ_CIV, at spatial position x is ⟨N_CIV(x)⟩ = n̄_CIV [1 + δ_CIV(x)]. We denote the cosmic average number of lines per velocity increment, Δv_cell, as n̄_CIV. This can be computed from the average number of lines per unit redshift, which in turn follows from the CIV column density distribution: the average number of lines per unit redshift above some minimum column density N_CIV,min is the integral of f(N) over N > N_CIV,min, multiplied by the absorption pathlength per unit redshift, dX/dz. We adopt N_CIV,min = 10^12 cm^−2 throughout. Given the average number of CIV lines in a cell, ⟨N_CIV(x)⟩, the exact number of CIV lines to place in the cell is determined by drawing from a Poisson distribution.
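A compact sketch of this recipe (our own illustration; in practice δ_G should be drawn with the power spectrum tuned as described above, whereas here an uncorrelated stand-in field is used):

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is arbitrary

def mock_civ_counts(delta_G, n_bar):
    """Poisson-sample discrete CIV line counts per cell from a lognormal
    density field. delta_G: Gaussian random field along the sightline
    (zero mean), ideally drawn with the power spectrum tuned to reproduce
    the observed CIV clustering; n_bar: cosmic mean number of lines per
    cell of width dv_cell."""
    A = np.exp(-np.var(delta_G) / 2.0)      # enforces <delta_CIV> = 0
    delta_CIV = A * np.exp(delta_G) - 1.0   # lognormal mapping
    N_mean = n_bar * (1.0 + delta_CIV)      # mean line count per cell
    return rng.poisson(N_mean)

# Crude uncorrelated stand-in for delta_G (a correlated field should be
# used in practice):
counts = mock_civ_counts(rng.normal(0.0, 0.5, size=4096), n_bar=0.02)
```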
Each absorption line is then assigned a column density by drawing from a power-law fit to the observed column density distribution (Scannapieco et al. 2006). This power-law fit has f(N) ∝ (N/N_0)^−α, with α = 1.8, and is normalized to f = 10^−12.7 cm^2 at N_0 = 10^13 cm^−2. We use this fit at all redshifts, since the observed distribution evolves only weakly over the redshifts of interest. Since CIV is a doublet, we create a weaker partner line for each mock absorption line generated. We give each absorption line a Gaussian profile, and approximate the b-parameter distribution as a delta-function. We have experimented with delta functions around b = 5, 10 and 20 km/s, comparable to the observed values (Boksenberg et al. 2003). For reference, the stronger CIV absorption component has a rest frame wavelength of λ_r = 1548.2 Å, while the weaker component is at λ_r = 1550.8 Å. The cross section of the stronger component is σ_1,CIV = 2.6 × 10^−18 cm^2, and is σ_2,CIV = 1.3 × 10^−18 cm^2 for the weaker component. It is also useful to note that the line center optical depth of each component is proportional to the column density and inversely proportional to the b-parameter of the line; the line center optical depth of the weaker component is a factor of two smaller than that of the stronger one. We have generated mock metal absorption lines according to the above prescription, and added them to simulated Ly-α forest spectra at z = 2.2, 3.0, 3.4 and 4.2. A typical example sightline at z = 3.4 is shown in Figure 31, assuming b = 10 km/s and T_0 = 2.5 × 10^4 K, γ = 1.3. This illustrates a few key qualitative features regarding metal line contamination, and its impact on the wavelet amplitudes. The first feature is that our mock metal absorbers do lead to prominent peaks in the wavelet amplitudes, similar to the peaks observed and associated with metal absorbers in our observational data (§3.2). The next feature one notices is the considerably larger impact of metal absorbers on the smaller smoothing scale, again consistent with our previous findings from observational data. In some cases there are peaks in the wavelet amplitude on the smaller smoothing scale that are entirely absent at the larger smoothing scale. For example, the metal line absorbers beyond Δv ≳ 10,000 km/s in Figure 31 produce peaks in the wavelet amplitude only on the smaller filtering scale. There are also cases where metal line absorbers lead to peaks for both filters (e.g. the lines near Δv ∼ 2,000 km/s). In these cases, the fractional boost in wavelet amplitude from the metal lines is larger for the smaller smoothing scale filter. The metal lines are typically narrower than the HI lines, and the fractional contamination is hence significantly larger on small scales. Finally, a metal line that lands on a pixel where there is already significant Ly-α absorption is obviously irrelevant. We find many examples from the mock spectra of strong, narrow metal lines that happen to overlap strong Ly-α lines, and have little impact as a result. The strong increase in the mean absorption with redshift, and the corresponding boost in the amplitude of fluctuations in the forest, result in significantly less contamination towards high redshift. For example, in our simulated models the fractional impact of metals on the mean wavelet amplitude (for s_n = 34.9 km/s) is ∼7 times larger at z = 2.2 than it is at z = 4.2.
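A short sketch of the column density draws via inverse-transform sampling from f(N) ∝ N^−α above N_min (the names and the seed are ours; the doublet ratio follows from the factor of two between the component cross sections):

```python
import numpy as np

rng = np.random.default_rng(1)  # seed is arbitrary

def draw_NCIV(n_lines, N_min=1.0e12, alpha=1.8):
    """Draw CIV column densities (cm^-2) from f(N) ~ N^-alpha above N_min
    by inverse-transform sampling: N = N_min * (1 - u)^(-1/(alpha - 1))."""
    u = rng.random(n_lines)
    return N_min * (1.0 - u) ** (-1.0 / (alpha - 1.0))

N_samples = draw_NCIV(5)
# Each line gets a doublet partner at 1550.8 A; its cross section, and hence
# its line-center optical depth at fixed N and b, is a factor of two smaller
# than for the stronger 1548.2 A component.
```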
In order to provide a more quantitative measure of the impact of metal lines on wavelet amplitude measurements, we measure the wavelet PDF from 1,000 mock spectra with added metal lines. Examples at z = 3.4 are shown in Figure 32. By comparing the top and bottom panels, one can see that the metal lines generally have a much larger impact on the smaller filtering scale. At s_n = 34.9 km/s, for b = 5 and 10 km/s, the mean wavelet amplitude is shifted significantly, and the PDF develops a long tail towards high wavelet amplitudes. There is relatively little impact for lines with larger b-parameters, as demonstrated by the b = 20 km/s curve, but most observed CIV lines have smaller b-parameters: b = 20 km/s is really at the upper end of the observed CIV linewidths (Boksenberg et al. 2003). We have also generated a more extreme model, with 6 independent sets of CIV-like lines. Even this model produces only a small shift in the wavelet PDF on the large smoothing scale. Although our model for metal lines is rather crude, we expect fairly small shifts in the wavelet amplitudes on the larger smoothing scale, especially in the higher redshift bins.
APPENDIX C: CONVERGENCE WITH SIMULATION RESOLUTION AND BOXSIZE
In this section we assess the convergence of the simulated wavelet PDFs with increasing simulation resolution and boxsize. It is relatively challenging to obtain fully converged results in Ly-α forest simulations. On the one hand, one needs to simulate a large volume to compare simulations with large scale flux power spectrum measurements (if desired), to sample a representative fraction of the Universe, to capture the cascade of power from large to small scales, and to simulate peculiar velocity fields, which are coherent on rather large scales. On the other hand, high mass and spatial resolution, at the level of tens of kpc (in regions of low to moderate overdensity), are required to fully resolve the filtering (Gnedin & Hui 1998) and thermal broadening scales.
In order to examine the convergence of the wavelet PDFs with simulation volume, we ran a set of cosmological SPH simulations with fixed mass and spatial resolution, yet increasing boxsize. Specifically, we ran simulations with boxsize L_b and particle number N_p of (L_b, N_p) = (12.5 Mpc/h, 2 × 256^3), (25 Mpc/h, 2 × 512^3), and (50 Mpc/h, 2 × 1024^3). To isolate resolution effects, we ran a sequence of fixed boxsize, increasing particle number simulations with (L_b, N_p) = (25 Mpc/h, 2 × 256^3), (25 Mpc/h, 2 × 512^3), and (25 Mpc/h, 2 × 1024^3). In each simulation the force softening was set to 1/20th of the mean inter-particle spacing. In general, the initial conditions in each of the fixed boxsize simulations are drawn from the same random number seeds, so that the Fourier modes of the initial displacement field are identical (for the wavenumbers common to each pair of simulations). Owing to imperfect planning, however, the highest resolution simulation with N_p = 2 × 1024^3 particles was run with different initial conditions, and so there are random differences between this simulation and the lower resolution realizations, in addition to any systematic dependence on resolution. Given that the random seed-to-seed fluctuations are fairly small, and that our results are fairly well converged, we have not rerun the (faster) lower resolution simulations with initial conditions that match the highest resolution run.
In order to test how the convergence depends on redshift (mostly owing to evolution in the mean transmitted flux), we examine simulation outputs at z = 2, 3, and 4. We re-adjust the intensity of the ionizing background in each simulation to match a given mean transmitted flux (averaged over all sightlines). At z = 3, we assume a mean transmitted flux of F = 0.680. For the tests here, we adopt F = 0.849 at z = 2, and F = 0.393 at z = 4. We assume a perfect temperature-density relation when incorporating thermal broadening in the mock quasar spectra. To test whether the convergence depends on the assumed model for the thermal state of the IGM, we consider two temperature-density relations: (T_0, γ) = (2 × 10^4 K, 1.3) and (T_0, γ) = (1 × 10^4 K, 1.6). In each case we adopt a small-scale smoothing of s_n = 34.9 km/s and a large-scale smoothing of L = 1,000 km/s (see §2.3). In the text we consider s_n = 69.7 km/s as well as s_n = 34.9 km/s, but the resolution requirements are more stringent on the smaller of these scales, and so we use it throughout this convergence study.
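The mean-flux matching step can be summarized by the following sketch: the optical depths of all pixels are rescaled by a constant factor (equivalent to adjusting the ionizing-background intensity) until the sightline-averaged flux hits the target value. The lognormal τ distribution below is a stand-in for simulated optical depths, used only for illustration.

```python
# Sketch of rescaling optical depths to match a target mean transmitted flux.
import numpy as np
from scipy.optimize import brentq

def match_mean_flux(tau, F_target):
    """Find the constant A such that <exp(-A * tau)> = F_target."""
    g = lambda A: np.exp(-A * tau).mean() - F_target
    return brentq(g, 1e-4, 1e4)   # bracketed root: mean flux falls from ~1 to ~0

tau = np.random.default_rng(1).lognormal(mean=0.0, sigma=1.0, size=100_000)
A = match_mean_flux(tau, 0.680)   # z = 3 target from the text
flux = np.exp(-A * tau)           # rescaled mock flux field
```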
The results of the boxsize convergence test are shown in Figures 33-35. The convergence with simulation boxsize is generally encouraging. In fact, the wavelet PDFs from the rather small L_b = 12.5 Mpc/h box are similar to those in the larger L_b = 25 Mpc/h and L_b = 50 Mpc/h volumes. The z = 2 results, however, suggest that the L_b = 12.5 Mpc/h box is a bit small: the wavelet PDF looks systematically narrow compared to the PDF in the larger volume simulations, although the differences are fairly small. It is not particularly surprising that this small volume run is inadequate at z = 2, even for the relatively undemanding task of characterizing the distribution of small-scale power. For one, the amplitude of the linear power spectrum at the fundamental mode of this simulation box is Δ²(k_F) ∼ 0.4 in our adopted cosmology at this redshift, and so one does expect to start seeing systematic errors from missing large-scale modes. In some of the z = 3 and z = 4 models the trend with boxsize appears to be non-monotonic. This may suggest that some of the differences are random, rather than systematic: i.e., a different choice of random number seed in the initial conditions can shift the PDF around a little bit in the smaller volumes. This scatter can be reduced by running several different realizations of each model and averaging, but the effects are small and so we do not pursue this here. It may also be that some of the non-monotonic trends result from two competing systematic effects. For present purposes, bear in mind that our main goal is to distinguish hotter T_0 ∼ 2 × 10^4 K, γ = 1.3 models from cooler T_0 ∼ 1 × 10^4 K, γ = 1.6 models: the differences between simulations of different boxsize are mostly quite small compared to the model differences. The one possible exception appears to be the cooler model at z = 4, where the peak of the PDF appears at surprisingly large amplitude in the large volume simulation, although the boxsize shift is still relatively small compared to the difference between the hot and cold models. Since we focus on small-scale fluctuations in this paper, and we find that the resolution requirements are fairly stringent at high redshift (see below), we sacrifice simulation volume slightly for resolution and adopt L_b = 25 Mpc/h as our fiducial boxsize.
Next we show the results of varying the spatial and mass resolution at fixed simulation volume (Figures 36-38). At z = 2 and z = 3, the results of the N_p = 2 × 256^3, L_b = 25 Mpc/h and the N_p = 2 × 512^3, L_b = 25 Mpc/h simulations are quite similar. This gives us confidence that even the N_p = 2 × 512^3, L_b = 25 Mpc/h simulation is adequately converged at these redshifts for measurements of the wavelet PDF. At z = 4, however, there are noticeable differences, suggesting that higher spatial resolution is required. Note that the convergence with resolution is better for the hotter model. Since the data appear to favor this model over the cooler model, its convergence properties may be more relevant. It is clear, however, that the resolution requirements are rather stringent at high redshift, and so we use the L_b = 25 Mpc/h, N_p = 2 × 1024^3 simulation as our main simulation run throughout. Note also that any bias from limited simulation resolution causes us to systematically underestimate the temperature of the IGM, and so strengthens the argument for a hot IGM.
Note. - Here the Morlet filter scale is s_n = 69.7 km/s. The first column is the bin number, the second column is the average wavelet amplitude in the bin, the third column is the differential PDF (per ln A_L) in the bin, and the fourth column is the 1σ error on the differential PDF. The measurements have not been corrected for metal line contamination.
Note. - Similar to Table 3 except at z = 2.6.
"year": 2009,
"sha1": "9f04e1052a655faa865d4db8aaa9ce06dcff4b9f",
"oa_license": null,
"oa_url": "https://dash.harvard.edu/bitstream/1/41381638/1/98399%20aam%200909.5210.pdf",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "9f04e1052a655faa865d4db8aaa9ce06dcff4b9f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
Biharmonic Split Ring Resonator Metamaterial: Artificially dispersive effective density in thin periodically perforated plates
We present in this paper a theoretical and numerical analysis of bending waves localized on the boundary of a platonic crystal whose building blocks are split ring resonators (SRR). We first derive the homogenized parameters of the structured plate using a three-scale asymptotic expansion in the linearized biharmonic equation. In the limit when the wavelength of the bending wave is much larger than the typical heterogeneity size of the platonic crystal, we show that it behaves as an artificial plate with an anisotropic effective Young modulus and a dispersive effective mass density. We then analyze dispersion diagrams associated with bending waves propagating within an infinite array of SRR, for which eigen-solutions are sought in the form of Floquet-Bloch waves. We finally demonstrate that this structure displays the hallmarks of All-Angle-Negative-Refraction (AANR), leading to superlensing and ultrarefraction effects that are interpreted, thanks to our homogenization model, as consequences of negative and vanishing effective density, respectively.
I. INTRODUCTION
Left Handed Materials (LHM) are a new kind of materials which were theoretically envisioned by Veselago 1 as early as 1967. Such materials have simultaneously negative relative permittivity (ε_r) and negative relative permeability (µ_r). This theoretical curiosity became a real field of research in 2000, after Pendry showed the potential of LHM to overcome the diffraction limit 2 and Smith et al. 3 proposed a first realization of such extraordinary materials based on periodic lattices combining Split Ring Resonators (SRR: concentric annular rings with splits) and wires. The latter work can be considered the experimental foundation of LHM (as the first experimental evidence of negative refraction), and it is based on a theoretical study by Pendry et al. which showed that negative permittivity could be obtained with a periodic arrangement of parallel wires and that a periodic lattice of SRR has a negative magnetic response around its resonance frequency 4.
In recent years, there has been a keen interest in wave propagation in periodically structured media 5. Investigation of photonic crystals has paved the way to the theoretical prediction and experimental realization of photonic band gaps [6][7][8][9][10][11][12], i.e. ranges of frequencies for which light, or a light polarization, is disallowed to propagate. Soon after, the focus was extended to the study of acoustic waves in periodic media, and the existence of phononic band gaps was verified both theoretically and experimentally [13][14][15][16][17][18]. Recently, the interest was extended even further, to other types of waves, e.g. liquid surface waves [19][20][21][22][23] or biharmonic waves [24][25][26][27][28] in perforated thin plates. It has been shown that complete bandgaps also exist for these waves when propagating through a periodic lattice of vertically standing rods or over a periodically perforated thin plate 22,26. In addition, many interesting phenomena have been reported, including negative refraction 1,29-33, the superlensing effect and cloaking [34][35][36][37][38]. The essential condition for the AANR effect is that the equifrequency surfaces (EFS) should become convex everywhere about some point in the reciprocal space, and that the size of these EFS should shrink with increasing frequency 9-11.
In this paper, we focus on the application of split ring resonator (SRR) structures 4,39,40 to the domain of elastic waves. We first derive the homogenized governing equations of bending waves propagating within a thin plate with a doubly periodic square array of freely vibrating holes shaped as SRR, starting from the generalized biharmonic equation and using an asymptotic analysis involving three scales (one for the thickness of the thin-cut of each SRR, one for the array pitch, and one for the wavelength). We then present an analysis of dispersion curves. To do this, we set the spectral problem for the biharmonic operator within a doubly periodic square array of SRR: homogeneous stress-free boundary conditions are prescribed on the contour of each resonator, and the standard Floquet-Bloch conditions are set on the boundary of an elementary cell of the periodic structure. Such a structure presents an elastic bandgap at low frequencies. It turns out that the asymptotic analysis of our structure allows us to obtain analytically the frequency of the first localized mode and hence the frequency of the first band gap. The aim of our work is to demonstrate the AANR effect at low frequencies for elastic thin perforated plates, as well as their superlensing properties. Ultrarefraction is also considered, showing the versatility and power of such structured media for realizing new functionalities for surface elastic waves.
II. HOMOGENIZATION OF A THIN-PLATE WITH AN ARRAY OF STRESS-FREE SRR INCLUSIONS NEAR RESONANCE
The equations for bending of plates can be found in many textbooks 41,42. The wavelength λ is supposed to be large compared to the thickness of the plate H and small compared to its in-plane dimension L, i.e. H ≪ λ ≪ L. In this case we can adopt the hypotheses of von Kármán plate theory 41,42. In this way, the mathematical setup is essentially two-dimensional, the thickness H of the plate appearing simply as a parameter in the governing equation.
We would like to homogenize a periodically structured thin plate involving resonant elements. The resonances are associated with fast-oscillating displacement fields in the thin bridges of perforations shaped as split ring resonators (SRR), and we filter these oscillations by introducing a third scale in the usual two-scale expansion. We start with the Kirchhoff-Love equation and consider an open bounded region Ω_f ⊂ ℝ². This region is, e.g., a slab lens consisting of a square array of SRR shaped as the letter C.
When the bending wave penetrates the structured area Ω_f of the plate, whose geometry is shown in Figs. 1(c)-(d), it undergoes fast periodic oscillations. To filter these oscillations, we consider an asymptotic expansion of the associated vertical displacement U_η. With all the above assumptions, and assuming a time-harmonic dependence exp(−iωt) with ω the angular wave frequency, the out-of-plane displacement u_η = (0, 0, U_η(x_1, x_2)) in the x_3-direction (along the vertical axis) is solution of the weak form of the heterogeneous biharmonic equation, Eq. (1), inside the heterogeneous isotropic region Ω_f (the platonic crystal, PC, in Fig. 1). Here, D_η, ν_η and β_η are nondimensionalized spatially varying parameters related to the flexural rigidity of the plate, its Poisson ratio and the wave frequency, respectively. In most cases, D and ν take piecewise constant values, with D > 0 and −1/2 < ν < 1/2. Note that β_0² = ω√(ρ_0 H/D_0), where D_0 is the flexural rigidity of the plate outside the platonic crystal, ρ_0 its density and H its thickness. Remark that (1) is written in weak form, and we notably retrieve the classical boundary conditions for a homogeneous plate with stress-free inclusions (vanishing of bending moments and shearing stress for vanishing D_η and ν_η in the soft phase) 42. Since there is only one phase in the problem which we consider (a homogeneous medium outside freely vibrating inclusions), it is also possible to recast (1) as the constant-coefficient biharmonic equation ∇⁴U_η − β_0⁴ U_η = 0 in Ω_f \ Θ_η (2), since D_η vanishes inside the inclusions Θ_η = ∪_{i∈ℤ²} {η(i + C)} and is constant in the matrix. Bear in mind that the number of SRR in Ω_f is an integer which scales as η^-2. Note also that the vanishing of bending moment and shearing stress deduced from (1) imposes free-edge conditions at the boundary ∂Θ_η of Θ_η, which is consistent with our former work on thin perforated plates 27.
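As a numerical illustration of the dispersion relation β_0² = ω√(ρ_0 H/D_0) quoted above, combined with the standard Kirchhoff rigidity D = EH³/[12(1 − ν²)], the following sketch evaluates the flexural wavenumber for an aluminium-like plate. The material values and the operating frequency are our own illustrative assumptions, not parameters from the paper.

```python
# Worked numerical example: flexural wavenumber of a homogeneous thin plate.
import numpy as np

E, nu, rho = 70e9, 0.33, 2700.0      # aluminium-like plate (assumed values)
H = 1e-3                             # plate thickness [m]
omega = 2 * np.pi * 5e3              # angular frequency (5 kHz)

D = E * H**3 / (12 * (1 - nu**2))    # standard Kirchhoff flexural rigidity [N m]
beta = np.sqrt(omega * np.sqrt(rho * H / D))   # beta^2 = omega * sqrt(rho*H/D)
wavelength = 2 * np.pi / beta
print(f"D = {D:.2f} N m, beta = {beta:.1f} 1/m, lambda = {100*wavelength:.1f} cm")
```

For these values the bending wavelength comes out at a few centimetres, which sets the scale against which the array pitch and the thin-cut thickness must be compared in the three-scale expansion.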
In the present case, perforations are shaped as split ring resonators: each SRR C can be modeled as an annular region of inner radius a and outer radius b, minus a thin ligament Π_η of length l = b − a bridging the ends of the letter C (a and b being functions of the variables x_1, x_2, unless the ring is circular); see Fig. 1(a).
Our aim is to show that the homogenized multi-structured platonic structure within Ω_f is characterized by an effective density which can take negative values near the fundamental resonance of the SRR. To do this, we need to perform a homogenization of a periodic structure involving resonant elements. The resonances are associated with fast-oscillating fields in the thin bridges of the SRR perforations of the plate, and we filter these oscillations by introducing a third scale in the usual two-scale expansion: U_η(x) = Σ_{i≥0} η^i U_i(x, y, ξ), with y = x/η and ξ = x/η², where each U_i(x, y, ξ) is Y-periodic in y, and h denotes the thickness of the thin-cut of the SRR.
The differential operator is rescaled accordingly as ∇ = ∇_x + (1/η)∇_y + (1/η²)∇_ξ, so that Eq. (2) can be reexpressed in the three-scale variables, see Eq. (7). Collecting terms of the same power of η in Eq. (7), we obtain the homogenized problem 43, in the limit when η tends to zero, in the platonic crystal Ω_f. This is the homogenized biharmonic equation with the homogenized plate rigidity D_hom = ( ∫_{Y*} D^-1(y_1, y_2) dy_1 dy_2 )^-1, where Y* = Y\C. We point out that the circular geometry of the SRR is essential: if one were to take elliptical SRR, then D_hom would be a rank-2 anisotropic tensor.
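The harmonic-type mean defining D_hom can be evaluated by simple quadrature on the unit cell. The sketch below does this for a C-shaped perforation; the radii, split width and grid size are illustrative assumptions, not the values used in the paper.

```python
# Quadrature evaluation of D_hom = ( \int_{Y*} D^{-1}(y) dy )^{-1}
# on the unit cell Y = [0,1]^2 with a C-shaped (split-ring) hole removed.
import numpy as np

def d_hom(D0=1.0, a=0.25, b=0.4, split=0.05, n=400):
    g = np.linspace(0.0, 1.0, n)
    y1, y2 = np.meshgrid(g, g, indexing="ij")
    r = np.hypot(y1 - 0.5, y2 - 0.5)
    ring = (r > a) & (r < b)                                 # annulus
    ligament = ring & (np.abs(y2 - 0.5) < split / 2) & (y1 > 0.5)
    hole = ring & ~ligament                                  # C-shaped perforation
    inv = np.where(hole, 0.0, 1.0 / D0)    # integrand D^{-1}, restricted to Y*
    return 1.0 / inv.mean()                # cell area |Y| = 1, so mean = integral

print(d_hom())   # harmonic-type mean over Y* = Y \ C
```

Because the integrand vanishes over the perforation, the result depends only on the filling fraction of the hole for a single-phase matrix.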
The homogenized density ρ_hom is given in the form of Eq. (9), where the eigen-solutions V_m correspond to longitudinal vibrations of the thin cut Π_η within the split ring resonator.
It is clear from Eq. (9) that √ρ_hom(β) takes negative values near the resonances β² = β_m². For our purpose, it is enough to look at the first few resonant frequencies β_m² (the higher the frequency, the worse the asymptotic approximation). These frequencies are associated with vibrations V_m of the thin domain Π_η 39, governed by the boundary-value problem (10)-(12), where ηh and l are the thickness and the length of the thin ligament Π_η, and Ξ is the central disc within the SRR. The ligament Π_η is connected to Ξ, hence V(l) = V, where V is the vibration of the stress-free air cavity Ξ. Note that the derivation of Eq. (12) required a boundary layer analysis, and we refer to 44 for more details.
The solution of the problem (10)-(12) has the form (13), where β_m (the square root of the frequency) is given as the solution of the transcendental equation (14). Note also that a problem similar to Eqs. (10)-(12) is deduced from Eq. (7) with a minus sign in (10), which leads to a sinh function in (14), but this does not give any additional resonant frequencies. The resonant frequencies of √ρ_hom also lead to negative values of ρ_hom, in the same way that a product of two simultaneously negative square roots gives a negative refractive index in the metamaterial literature 1,33.
In the sequel, we focus our analysis on the Floquet-Bloch bending wave problem and give some numerical results and comparisons with the homogenization theory. To investigate numerically the stop-band properties of out-of-plane bending waves propagating within the array of SRR, we use the Finite Element Method, implementing the weak form of (2) in the commercial software COMSOL, together with the corresponding Floquet-Bloch boundary conditions and perfectly matched layers (PMLs).
In Fig. 2(a), the first band gap originates from the resonance of the SRR (microscopic structure), while the second one is due to a Bragg scattering phenomenon (macroscopic structure). We also plot in Fig. 2(b) the corresponding localized eigen-function (the first eigen-mode), which corresponds, in the context of continuum mechanics, to oscillations of the central region of the SRR as a rigid solid connected by the thin cut Π_η to the fixed rigid region around it 44.
In our context of bending waves, this can be interpreted as a collective vibration of the plate elements, moving up and down together within each SRR.
The transcendental equation (14) can be further simplified if we look at the first low frequency, for which we deduce the explicit asymptotic approximation (15). The resulting numerical estimate is in excellent agreement with the finite element value β² = 4.23 for the plasmon frequency occurring at the M point, when k = (π, π).
It is interesting to also analyze the dispersive properties of surface elastic modes (waves exponentially localized at the interface of a platonic crystal). These modes originate from interference effects in phononic or platonic crystals and are important, for example, for the superlensing effect. The dispersion diagram of these modes is shown in Fig. 3(a), where the frequency is plotted versus the Bloch vector in the ΓM direction. The red dotted-dashed line represents the surface mode's dispersion and shows that it lies within the elastic bandgap (between the first and the second propagating modes); this mode is therefore of evanescent nature (Fig. 3). We then consider a 2D finite platonic crystal perforating a thin elastic plate. The crystal consists of 230 holes with a C-shaped cross-section with identical parameters as those of Fig. 2(b). The crystal has a four-fold (square) symmetry with pitch d = 1. A flexural wave source of wavenumber β = 1.98 is located at a distance 0.1d above the top of the crystal.
The field-maps in Fig. 4(a) show the existence of an image of the source, located twice the width of the crystal away on the lower side. As we can see in Fig. 4(a), the amplitude of the field is highest near the thin ligaments Π_η of the SRR. Formula (9) shows that the resonance of the field with the microscopic structure of the SRR is responsible for the appearance of negative refraction through a negative effective density ρ_hom. Finally, the resolution of the image δ is enhanced compared to that obtained using an array of perforations with a circular or square cross-section, as demonstrated in Fig. 4(b), (c), whereby the full width at half maximum of the image point is δ ≈ λ/3. We note that at the plasmon frequency the vibration of the thin ligaments is enhanced, which is in accordance with the behavior of the field as observed in Fig. 4. It is therefore important to incorporate the resonances within the thin bridges, as we did, to model the frequency at which negative refraction occurs.
C. Gaussian beam at oblique incidence on a platonic crystal
The interaction of a Gaussian beam (GB) with the platonic crystal around the AANR frequency is considered in this section. Refraction of a GB (whose size is of the order of the wavelength) obliquely incident on the SRR platonic crystal is demonstrated to be negative. The beam is characterized by its Rayleigh range x_R = πω_0²/λ, with λ the wavelength and ω_0 the waist of the beam; the radius of curvature is given by R(x) = x[1 + (x_R/x)²] and the Gouy phase shift is ζ(x) = arctan(x/x_R). Figure 5 gives a snapshot of the displacement field U in the presence of the platonic crystal at oblique incidence with angle θ_inc = 25 degrees; a reflected beam can be distinguished as well as a refracted one, which turns out to be in the negative refraction regime. To further verify this claim, another simulation using a homogeneous effective slab of the same size, with negative elastic index n_e (for flexural waves the index is governed by β²/ω = √(12ρ(1 − ν²)/(EH²))), is shown in Fig. 5(b).
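A minimal sketch of such an incident Gaussian beam, built from the textbook quantities quoted above (Rayleigh range x_R = πω_0²/λ, curvature R(x), Gouy phase ζ(x)), is given below. The grid extents and beam parameters are illustrative; the paper's ω_0 (waist) is written w0 here.

```python
# Paraxial Gaussian beam field, propagating along x with transverse coord y.
import numpy as np

def gaussian_beam(x, y, w0, lam):
    k = 2 * np.pi / lam
    x_R = np.pi * w0**2 / lam                 # Rayleigh range
    w = w0 * np.sqrt(1 + (x / x_R) ** 2)      # beam radius w(x)
    inv_R = x / (x**2 + x_R**2)               # 1/R(x), finite at x = 0
    zeta = np.arctan(x / x_R)                 # Gouy phase shift
    return (w0 / w) * np.exp(-(y / w) ** 2) * np.exp(
        1j * (k * x + 0.5 * k * y**2 * inv_R - zeta))

xx, yy = np.meshgrid(np.linspace(-10, 10, 256), np.linspace(-6, 6, 128))
U_inc = gaussian_beam(xx, yy, w0=2.0, lam=3.0)   # wavelength ~3, as in the text
```

Rotating the (x, y) frame by the incidence angle θ_inc gives the obliquely incident beam used in the simulations.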
The patterns of the wave are identical in Figs. 5(a) and 5(b) and show that the elastic index of the SRR crystal can be described as negative: n_e = −1.5 + 10^-3 j. The robustness of the AANR was further verified by changing the angle of incidence θ_inc and the wavelength of operation around their central values, and the effect could still be observed. To conclude this section, let us consider another anomalous refractive effect, namely the ultrarefraction of bending waves. This phenomenon generally occurs when the effective elastic index n_e defined above tends to zero or, equivalently, when the group velocity v_g ≈ 0.
In this case, independently of its angle of incidence, a ray propagating towards the crystal will be refracted along the normal to the crystal. The operating point can easily be deduced from the dispersion diagram by picking a point where the mode's dispersion curve becomes flat (∇_k β² ≈ 0). This allows us to build an elastic antenna (for instance, by placing a cylindrical source of flexural waves in the center of the crystal).
The wave is then refracted along the normal to the crystal, thus permitting a highly directive beam, as can be seen from Figs. 6(a) and (b), where the displacement snapshot and its energy are respectively plotted around a wavelength λ = 2π/β ≈ 3. This corresponds, as previously stated, to the M point in the diagram of Fig. 2(a) at the maximum. Using formula (9), one can indeed achieve ultrarefraction when the effective density ρ_hom of the platonic crystal is close to zero, which happens for β_2² ∼ 4.37. To get this estimate, one needs to replace the boundary condition (11) by V_m(0) = −area(Ξ)/area(Y\C), which amounts to assuming that the field takes the same non-zero values on the opposite edges of the basic cell. This leads to a new frequency estimate which is a refinement of (15). The numerical result β_2² ∼ 4.37 is in good agreement with finite element computations (4.385).
IV. CONCLUDING REMARKS
In conclusion, we have proposed an original route towards elastic negative refraction (superlensing) and ultrarefraction based on the excitation of flexural surface modes. To analyze these effects, we have derived the homogenized biharmonic equation using a multi-scale asymptotic approach. We found that the homogenized elastic parameters are described by a scalar flexural plate rigidity (since the perforations have a circular geometry, the thin bridges being invisible in the second-scale asymptotics) and a scalar density, the latter taking negative values near the thin-bridge resonances (unveiled by the third scale). We then performed numerical computations based on the finite element method, which confirmed that the asymptotic model gives accurate predictions of the dispersive properties of the elastic Lamb modes.
We believe that such a micro-structured plate could be manufactured easily, having in mind some potential applications in superlensing or directive elastic antennas. The range of industrial applications is vast, and our proof of concept should foster research efforts in this emerging area of acoustic and seismic metamaterials.
"year": 2014,
"sha1": "38cb1833a788cb105a7a72fe18b266efbd5c26cc",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1408.0004",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "38cb1833a788cb105a7a72fe18b266efbd5c26cc",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
SENSITIVITY PATTERNS OF SOME FLOWERING PLANTS AGAINST SALMONELLA TYPHI AND PSEUDOMONAS AERUGINOSA
Medicinal plants have mostly been collected from wild resources, and only some species used in larger quantities are cultivated systematically. Many medicinal plants were ignored in past years, and the number of plants still found in the wild is progressively declining, which makes their acquisition from wild sources an increasingly important concern. The antimicrobial properties of medicinal plants have been increasingly reported from different parts of the world. On this basis, the present investigation focused on screening the antimicrobial properties of Asystasia indica, Asystasia gangetica and Thunbergia alata against selected pathogenic bacteria. The main objectives of this work were as follows: to screen the antimicrobial properties of these plants on selected microorganisms, and to determine the effectiveness of inhibition by comparing their activity with known antibiotics.
INTRODUCTION
"Life is gift of God". This proverb proves that life is very precious and valuable than anything else. Although infectious diseases are the world"s leading cause of premature deaths, killing almost 50000 people every day. In recent years, drug resistant to human pathogenic bacteria has been commonly reported from all over the world Radhika (Iyengar, 1985;Chopra et al., 1992;Harborne and Baxter, 1995). The substances that can either inhibit the grown of pathogens or kill them and have no or least toxicity to host cells are considered criteria for developing new antimicrobial drugs.
India is blessed with a diversity of plant species, many of which have good medicinal value. Ayurveda, the Indian indigenous system of medicine based on plants and dating back to the Vedic ages (1500-800 BC), has been an integral part of Indian culture. In Ayurveda, either the whole plant or certain parts (leaf, flower, fruit, seed, heartwood, stem bark, root, gum and resin) have been used for the treatment of diseases. Over 7000 different plant species, found in the different ecosystems of India, are said to be used for medicinal purposes. The All India Ethnobiological Research project under the Ministry of Environment and Forests, Government of India, has recently estimated that nearly 8000 medicinal plants are used in one way or another in the preparation of medicines and medicinal compounds in India.
Plants contain valuable secondary metabolites, a very wide variety of chemicals that are not directly involved in primary metabolic processes. These are extracted from plants for use as medicinal agents, vitamins, insecticides and fine chemicals, and particular chemicals are often characteristic of certain genera or families of plants. Medicinal plants produce secondary metabolites, notably terpenes and terpenoids, that act against different pathogenic bacteria. Over the past 10 years there has been considerable interest in the use of herbal medicines all over the world, and plants may offer new potential sources of active compounds against bacterial, fungal and viral diseases.
Plant material
The following plants were collected from their natural habitats on the college campus, Kodaikanal, Tamil Nadu, and used for activity studies: Asystasia indica, Asystasia gangetica and Thunbergia alata.
The plants were shade dried at ambient temperature (31°C) and the dried materials were crushed into fine powder using an electric blender.
Extract preparation
100 g of dried powder of leaf, stem, root or tuber was soaked separately in 500 mL of each solvent, viz. ethanol, chloroform and petroleum ether, in conical flasks. Each mixture was stirred for 24 h using a sterile glass rod and then kept for 72 h. At the end of 72 h, each extract was passed through Whatman No. 1 filter paper, and the filtrates were concentrated by leaving the containers open for 1 h at room temperature in order to reduce the volume. The paste-like extracts were stored in labeled screw-capped bottles and kept in a refrigerator at 4°C. Each extract was individually reconstituted using minimal amounts of the extracting solvent prior to use.
Microorganisms selected for study
The microorganisms used in this study were collected from Eumic lab, Tiruchirappalli. The selected microbial strains were, Salmonella typhi,Pseudomonas auroginosa N o v e m b e r 2 4 , 2 0 1 4 The bacterial strains were maintained in Nutrient agar plate. After adding all the ingredients into the distilled water boiled to dissolve the medium completely and sterilized by autoclave at 121 C, 15 lbs pressure for 15 minutes. After adding all the ingredients into the distilled water boiled to dissolve the medium completely and sterilized by autoclave at 121 C, 15 lbs pressure for 15 minutes.
Preparation of the media
a) Nutrient agar medium
The following methods were selected for the antimicrobial studies: 1. disc diffusion method; 2. streak plate method.
Principle
The disc-diffusion method provides a simple and reliable test in routine clinical bacteriology for determining the effect of a particular substance on a specific bacterium.
This method consists of impregnating small circular discs of standard filter paper with a given amount of a chosen concentration of the substance. The discs are placed on plates of culture medium previously spread with the bacterial inoculum to be tested. After incubation, the degree of sensitivity is determined from the zone of inhibition produced by diffusion of the substance from the discs into the surrounding medium.
Preparation of Discs
Discs usually consist of absorbent paper impregnated with the compound (plant extract). It is most convenient to use Whatman No. 1 filter paper for preparing the discs. Dry discs of 6 mm diameter were prepared from Whatman No. 1 filter paper and sterilized in an autoclave. These dry discs were used for the assay.
Procedure
Circular discs of 6 mm diameter were prepared from Whatman No. 1 filter paper and sterilized in an autoclave. These paper discs were impregnated overnight with the test compounds (plant extracts) in the respective solvents, and placed on nutrient agar plates seeded with the test bacterium.

Preparation of test solutions of different concentrations for the streak plate method
Each extract taken for activity studies was made into solutions of different concentrations, as given below.
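As an arithmetic illustration (not part of the original protocol description), the volumes needed for such a dilution series (25%, 50%, 75%, 100%) follow from C1V1 = C2V2; the 10 mL final volume below is an assumption.

```python
# Dilution-series helper based on C1*V1 = C2*V2.
def dilution_volumes(stock_pct, target_pct, final_ml):
    v_extract = final_ml * target_pct / stock_pct   # volume of stock extract
    return v_extract, final_ml - v_extract          # (extract, solvent) volumes

for pct in (25, 50, 75, 100):
    v_e, v_s = dilution_volumes(100, pct, final_ml=10.0)
    print(f"{pct:3d}%: {v_e:4.1f} mL extract + {v_s:4.1f} mL solvent")
```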
Preparation of control
The control was prepared with nutrient agar medium without plant extract. Each of the above concentrations was tested for antibacterial activity, using Asystasia indica, Asystasia gangetica and Thunbergia alata against the test bacteria Salmonella typhi and Pseudomonas aeruginosa. The plates were incubated at 37°C for 24 h, after which the zone of inhibition around each disc was measured and the diameter recorded. The tests were repeated 4 times to ensure reliability of the results.
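Since each test was repeated 4 times, results of this kind are naturally reported as the mean ± standard deviation of the zone diameter. The sketch below shows this bookkeeping; all zone values are made up for illustration.

```python
# Summarize replicated disc-diffusion readings as mean +/- SD.
import statistics

zones_mm = {
    ("ethanol tuber extract", "S. typhi"): [14, 15, 13, 14],   # made-up values
    ("ciprofloxacin disc", "S. typhi"): [22, 23, 22, 21],
}
for (treatment, organism), reps in zones_mm.items():
    mean = statistics.mean(reps)
    sd = statistics.stdev(reps)
    print(f"{treatment} vs {organism}: {mean:.1f} +/- {sd:.1f} mm (n={len(reps)})")
```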
Preparation for control
The control was prepared with nutrient agar medium without plant extract and tested for antibacterial activity using a standard antibiotic disc, i.e., ciprofloxacin.
Streak plate method
The streak plate assay is generally used to obtain discrete, pure colonies. Here, it was used to determine the growth inhibition of bacteria by plant extracts. A sterilized loop was dipped in the culture of the appropriate organism and streaked on the surface of already solidified nutrient agar plates containing plant extracts, making a series of parallel non-overlapping streaks. The effect of the plant extracts was expressed as the growth variation found on the agar plates after incubation at 37°C for 24 hours.
The extracts used for the study were the ethanol, chloroform and petroleum ether extracts prepared as described above.
Disc diffusion method
The effects of the different solvent extracts of Asystasia indica leaf, stem and tuber on the test bacteria Salmonella typhi and Pseudomonas aeruginosa were examined by the disc diffusion method, comparing the various solvents, namely ethanol, chloroform and petroleum ether, alongside the standard antibiotic. Among the three solvent extracts used, the ethanolic extract was found to be more effective than the others, and among the plant parts used, the tuber showed the highest inhibitory effect.
For Asystasia gangetica, the results indicated that the leaf showed better inhibition than the other parts (stem and root) with each of the solvents. The antibacterial activity of Thunbergia alata showed better results for the ethanolic leaf and stem extracts than for the root. Control plates were prepared for comparison with the standard antibiotic, ciprofloxacin.
Streak plate method
The effect of the tuber extract of Asystasia indica in all solvents on the growth of Salmonella typhi and Pseudomonas aeruginosa was examined by the streak plate method (Radhika et al., 2002). The results show that excessive growth of both bacteria was observed on the control plate, whereas on the experimental plates a gradual increase in the concentration of the extract produced decreasing growth: at 25% concentration moderate growth was observed, while at 100% concentration there was no growth. However, the stem and leaf extracts did not show any significant effect in any solvent compared to the tuber. When the test organisms were streaked against the leaf, stem and root extracts of Asystasia indica, the leaf extract showed a more significant effect than the stem and root in all solvents: no growth occurred at 100% concentration, while excessive growth was seen in the controls for both organisms.
The streaking effect of the ethanolic and chloroform leaf extracts of Asystasia gangetica showed that even lower concentrations inhibited the growth of both organisms: at 25% concentration growth was already reduced, and at 50% only trace growth of Salmonella typhi was noticed, whereas Pseudomonas aeruginosa showed no growth at 75% concentration with the chloroform and petroleum ether solvents. The ethanolic and petroleum ether stem extracts of Asystasia gangetica showed a gradual decrease in the growth of Salmonella typhi, from moderate growth at 25% concentration to trace growth at 50%; no growth occurred at 75% and 100% concentrations. For Pseudomonas aeruginosa, reduced growth was observed at 25% concentration with the ethanolic and petroleum ether solvents.
The streaking effect of the ethanolic leaf extract of Thunbergia alata showed excessive growth at 25% concentration and reduced growth at 50%, whereas no growth occurred at either 75% or 100% concentration. The chloroform stem extract of Thunbergia alata showed a significant effect on the growth of Pseudomonas aeruginosa, more so than the other solvents: trace growth occurred at 50% concentration, and no growth was found at 75% and 100% concentrations.
The results indicate that the rate of inhibition was directly proportional to the concentration of the extracts for most of the experimental plants: at lower concentrations inhibition was insignificant, whereas at higher concentrations the growth inhibition was greater.
SUMMARY AND CONCLUSION
The present experimental study concludes with the following:
1. All the experimental plants were found to possess antibacterial properties to some extent, especially against a few bacteria.
2. Among the plant parts used for the antibacterial assay, the tuber, stem and leaf were found to inhibit the growth of the test bacteria.
3. Among the solvents used for extraction, ethanol and chloroform showed more activity than petroleum ether.
4. With regard to the microbes tested, both organisms were found to be susceptible to the plant extracts.
"year": 2015,
"sha1": "7fe44a3f0d008822b79b80a2937730991fcce52d",
"oa_license": "CCBY",
"oa_url": "https://cirworld.com/index.php/jns/article/download/5018/pdf_25",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fae13bcbb8aea94725f7396c1de72cdb4fa96da1",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
Downregulation of hsa_circ_0005243 induces trophoblast cell dysfunction and inflammation via the β-catenin and NF-κB pathways in gestational diabetes mellitus
Background: Gestational diabetes mellitus (GDM) is a common complication in pregnancy that poses a serious threat to the health of both mother and child. While the specific etiology and pathogenesis of this disease are not fully understood, it is thought to arise due to a combination of insulin resistance, inflammation, and genetic factors. Circular RNAs (circRNAs) are a special kind of non-coding RNA that have attracted significant attention in recent years due to their diverse activities, including a potential regulatory role in pregnancy-related diseases such as GDM. Methods: We previously reported the existence of a novel circRNA, hsa_circ_0005243, which was identified by RNA sequencing. In this study, we examined its expression in 20 pregnant women with GDM and 20 normal controls using quantitative reverse transcription PCR analysis. Subsequent in vitro experiments were conducted following hsa_circ_0005243 knockdown in HTR-8/SVneo cells to examine the role of hsa_circ_0005243 in cell proliferation and migration, as well as the secretion of inflammatory factors such as tumor necrosis factor alpha (TNF-α) and interleukin 6 (IL-6). Finally, we examined the β-catenin and nuclear factor kappa-B (NF-κB) signaling pathways to assess their role in GDM pathogenesis. Results: Expression of hsa_circ_0005243 was significantly reduced in both the placenta and plasma of GDM patients. Knockdown of hsa_circ_0005243 in trophoblast cells significantly suppressed cell proliferation and migration ability. In addition, increased secretion of inflammatory factors (TNF-α and IL-6) was observed after hsa_circ_0005243 depletion. Further analyses showed that knockdown of hsa_circ_0005243 reduced the expression of β-catenin and increased NF-κB p65 nuclear translocation. Conclusions: Downregulation of hsa_circ_0005243 may be associated with the pathogenesis of GDM via regulation of the β-catenin and NF-κB signaling pathways, suggesting a new potential therapeutic target for GDM.
Introduction
Gestational diabetes mellitus (GDM) is a form of diabetes characterized by glucose intolerance and insulin resistance beginning, or first recognized, during pregnancy (1, 2). GDM may affect as many as 3-17.5% of all pregnant women in China (3), resulting in a significantly increased risk of developing metabolic syndrome and type 2 diabetes after delivery (4). Therefore, timely diagnosis and appropriate therapeutic intervention are important for reducing the risk of adverse pregnancy outcomes in GDM patients.
While the specific etiology and pathogenesis of GDM are not fully understood, the disease is thought to arise as a result of insulin resistance, inflammation, dysfunction of islet beta cells, and genetic factors. The placenta is a highly specialized organ that serves as the interface between maternal and fetal circulation during pregnancy. The mechanisms of placental pathology during diabetes are still largely unclear. However, research has demonstrated a significant inflammatory response in the placental tissues of GDM patients, characterized by an increase in the number of macrophages and in the content of saturated fatty acids, leading to the release of the inflammatory factors interleukin (IL)-6 and IL-8 and increased expression of toll-like receptor 2 by trophoblast cells (5).
Circular RNAs (circRNAs) are a form of RNA consisting of a closed loop, and have attracted major attention in recent years (6,7). With the development of high-throughput sequencing technology, increasing numbers of circRNAs have been discovered, revealing a wide array of biological characteristics and regulatory functions. circRNAs are highly conserved across species in terms of stability, diversity, and tissue specificity (6,8). They participate in the regulation of gene expression at both the transcriptional and post-transcriptional levels, through which they have been shown to play roles in numerous physiological and pathological processes (9,10). Emerging evidence has shown that circRNAs are strongly associated with the occurrence and development of preeclampsia, GDM, and other pregnancy-related diseases (11,12). Previously, we reported a significant decrease of hsa_circ_0005243 in the placenta of GDM pregnant women by high-throughput RNA sequencing (13).
Here, we investigated the possible regulatory function of this circRNA in the trophoblast cell line HTR-8/SVneo and explored its potential mechanisms of action.
Patients
From April 2017 to December 2018, we identified 20 parturient women diagnosed with GDM and 20 parturient healthy control women in the obstetrics department of the Changzhou Maternal and Child Health Care Hospital, an affiliated hospital of Nanjing Medical University. Placenta tissues were obtained within 10 min after delivery, while the maternal plasma specimens were collected on an empty stomach early in the morning on the day of hospitalization during the third trimester (37-40 weeks). After collection, the plasma and placenta samples were stored at −80°C. All GDM and control women were matched by age and body mass index. The diagnosis of GDM was made by a 75-g oral glucose tolerance test at 24-28 weeks. Exclusion criteria included multiple births, premature delivery, delivery age < 20 years or > 40 years, diabetes, chronic liver and kidney diseases, thyroid and other endocrine diseases, and hypertension prior to pregnancy. Informed consent was obtained from each participant, and this study was approved by the ethics committee of the hospital.
CCK8 assay
Cells were trypsinized and seeded into 96-well cell culture plates (Corning Inc., Corning, NY, USA) at a concentration of 3 × 10^3 cells/mL. Cell viability was measured after culture for 24, 48, and 72 h by adding 10 μL of CCK8 reagent (DOJINDO Laboratories, Kumamoto, Japan). Cells were then incubated at 37°C for 3 h, after which the optical density at 450 nm (OD450) was assessed using a microplate reader (BioTek, Winooski, VT, USA).
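The usual reduction of such readings, shown in the sketch below with made-up values, is to blank-subtract the OD450 and normalize to the control wells; the paper does not spell this step out, so treat it as the conventional assumption.

```python
# Blank-subtracted, control-normalized CCK8 viability (illustrative numbers).
import numpy as np

od_blank = 0.08                               # medium-only wells
od_ctrl = np.array([1.21, 1.18, 1.25])        # si-NC replicate wells
od_si = np.array([0.84, 0.80, 0.86])          # si-circRNA replicate wells
viability = (od_si - od_blank).mean() / (od_ctrl - od_blank).mean() * 100
print(f"relative viability: {viability:.1f}% of si-NC control")
```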
Colony formation assay
Cells in logarithmic phase growth were trypsinized, re-suspended, and inoculated into 6-well cell culture plates containing 2 mL of medium. After plating, the culture plates were gently shaken to ensure even distribution of the cells within the wells and placed in an incubator at 37°C and 5% CO2 for 24 h until full adherence was obtained. After 12 days, the medium was discarded, and the cells were carefully washed twice with phosphate-buffered saline (PBS). Cells were then fixed for 15 min with 5 mL of absolute ethanol. After discarding the fixative solution, the cells were treated with Giemsa dye solution (Thermo Fisher Scientific, Waltham, MA, USA) for 10-30 min, followed by slow washing with running water. Finally, cells were air dried, photographed, and counted.
EdU assay
To evaluate the proliferation ability of trophoblast cells, an EdU assay was performed using a keyFluor 555 Click-iT EdU imaging detection kit (Keygen Tec, Nanjing, China) according to the manufacturer's protocol. Briefly, cells were fixed with 4% paraformaldehyde and then incubated with 2 mg/mL glycine.
Migration assay
For the in vitro transwell migration assay, the transfected cells were trypsinized and adjusted to a density of 1 × 10^5 cells/mL; 100 µL of cell suspension and 700 µL of medium containing FBS were added to the upper and lower chambers of a transwell plate (Corning Inc.), respectively. The cell culture plates were then placed in an incubator at 37°C with 5% CO2 for 24 h. Cells in the upper chamber were removed using a cotton swab, while the cells on the lower surface of the membranes were fixed with formaldehyde and stained with 0.1% crystal violet (Sigma-Aldrich, St. Louis, MO, USA). After incubation at 37°C for 30 min, cells were washed with PBS, and three to five fields were randomly selected and photographed, with the number of migrated cells counted under an inverted microscope (Olympus, Tokyo, Japan).
For the wound-healing assay, cells in logarithmic growth phase were trypsinized and inoculated into a 6-well plate. After 24 h, when cell confluence reached ~60%, a sterile pipette tip was used to evenly draw scratch lines in the monolayer. Floating cells were removed by washing with PBS, and fresh medium was then added for further culture. After 24 h, the cells were photographed (200× magnification), and the migration distance of the cells was measured.
Enzyme-linked immunosorbent assay
Cells were seeded in 6-well plates (Corning Inc.) and transfected as described above, after which the medium was collected and replaced with fresh culture medium. The culture medium was then centrifuged for 20 min at 1000 × g to remove cell debris and impurities. The concentrations of tumor necrosis factor alpha (TNF-α) and IL-6 in the medium were detected using a commercially available enzyme-linked immunosorbent assay (ELISA) kit (Mlbio, Shanghai, China) according to the manufacturer's protocol. The absorbance (OD450) of each group was measured using a microplate reader (MD SpectraMax M3; Molecular Devices, San Jose, CA, USA).
Western blot
Transfected cells were harvested and lysed in lysis buffer containing protease inhibitors. Protein concentration was determined using a BCA kit (Thermo Fisher Scientific). After denaturation, the proteins were separated using 12% sodium dodecyl sulfate polyacrylamide gel electrophoresis, then transferred to a polyvinylidene fluoride membrane (Merck Millipore, Darmstadt, Germany) and blocked with 5% skim milk. Membranes were then incubated with primary antibodies.
qRT-PCR
Total RNA was extracted using TRIzol reagent (Thermo Fisher Scientific). The expression of circRNA and GAPDH was detected using the SYBR Premix Ex Taq system (Takara, Madison, WI, USA) following the manufacturer's instructions. qRT-PCR was performed using the following primers: hsa_circ_0005243, forward, 5'-TTATCTACATGCACCTGCGCT-3', reverse, 5'-AAGTGACAAGCTAGCCCTCAT-3'; GAPDH, forward, 5'-CAAATTCCATGGCACCGTCA-3', reverse, 5'-AGCATCGCCCCACTTGATTT-3'. PCR reactions were conducted as follows: denaturation at 95°C for 10 min; 40 cycles of amplification at 95°C for 10 s and 58°C for 15 s; and elongation at 70°C for 30 s. Relative expression levels were determined by comparing the Ct values of the target genes to those of the GAPDH gene.
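The text states only that target Ct values were compared with those of GAPDH; the standard way to do this is the 2^-ΔΔCt method, sketched below with made-up Ct values (the exact formula used in the paper is an assumption).

```python
# Relative quantification via the standard 2^-ddCt method.
def rel_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    d_ct = ct_target - ct_gapdh              # normalize sample to GAPDH
    d_ct_ref = ct_target_ref - ct_gapdh_ref  # normalize reference (control) group
    return 2.0 ** (-(d_ct - d_ct_ref))       # fold change relative to reference

# GDM sample vs control reference; a result < 1 indicates downregulation.
print(rel_expression(27.1, 18.0, 25.9, 18.1))
```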
Flow cytometry
To assess cell apoptosis, transfected cells were digested with 0.25% trypsin (without EDTA) and collected, washed twice with PBS, stained using an annexin V-FITC apoptosis detection kit (Beyotime), and analyzed by flow cytometry (FACSCalibur; BD, Franklin Lakes, NJ, USA). The number of apoptotic cells was determined by counting and expressed as a ratio relative to live cells.
Immunofluorescence
Cells were fixed in 4% paraformaldehyde, washed with PBS, and treated with 0.5% Triton X-100, after which the slides were blocked with 5% bovine serum albumin for 30 min. Cells were then incubated with antibodies against β-catenin and p65 (1:100; Abcam) at 4°C overnight. Next, slides were washed three times in PBS, after which FITC-conjugated secondary antibody (1:100; Abcam) was added and incubated at 37°C for 1 h in the dark. Finally, slides were stained with DAPI for 5 min, and the expression of the proteins in the cells was observed under a laser confocal microscope (LSM710; Zeiss, Oberkochen, Germany). Three photographs were randomly taken per slide.
Statistical analysis
All data were analyzed using SPSS ver. 22 software (IBM Corp., Armonk, NY, USA), with continuous variables expressed as the mean ± standard error. A two-tailed Student's t-test was used to compare the means of the two sets of samples. The diagnostic value of hsa_circ_0005243 for GDM was established by a ROC curve, and the AUC was calculated (adjusted for BMI and age). P values < 0.05 were considered statistically significant.
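A sketch of the analysis described above, run on simulated data, using a two-tailed t-test and the ROC/AUC; the covariate adjustment for BMI and age is omitted here for brevity.

```python
# Two-tailed t-test and ROC/AUC on simulated expression values.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
ctrl = rng.normal(1.0, 0.3, 20)    # relative expression, controls (simulated)
gdm = rng.normal(0.7, 0.3, 20)     # relative expression, GDM (simulated)

t, p = ttest_ind(ctrl, gdm)                       # two-tailed by default
labels = np.r_[np.zeros(20), np.ones(20)]         # 1 = GDM
auc = roc_auc_score(labels, -np.r_[ctrl, gdm])    # lower expression -> GDM
print(f"t = {t:.2f}, p = {p:.4f}, AUC = {auc:.2f}")
```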
Characterization of hsa_circ_0005243 in GDM
Previously, we found that the expression of hsa_circ_0005243 was significantly lower in GDM placenta relative to the control group by high-throughput RNA sequencing (13). qRT-PCR analyses verified these results, showing that the expression of hsa_circ_0005243 was significantly lower in both the placenta (Fig. 1A) and plasma (Fig. 1B) of GDM patients compared with controls. Based on these findings, the diagnostic value of plasma hsa_circ_0005243 was further evaluated by receiver operating characteristic (ROC) curve analysis (Fig. 1C), revealing an area under the curve of 0.75 and indicating that reduced plasma hsa_circ_0005243 significantly discriminates GDM patients from controls (p < 0.01). hsa_circ_0005243 originates from the TMEM184B (transmembrane protein 184B) gene and consists of the head-to-tail splicing of exons 2, 3, and 4, with a total length of 517 bp (Fig. 1D). In addition, we found that hsa_circ_0005243 was resistant to RNase R treatment compared with linear mRNA (Fig. 1D).
Downregulation of hsa_circ_0005243 suppresses trophoblast proliferation and induces apoptosis
To further explore the potential functions of hsa_circ_0005243 in trophoblast cells, siRNAs (si-circRNA and si-NC) targeting hsa_circ_0005243 were transfected into the human trophoblast cell line HTR-8/SVneo. Interference efficiency was verified after transfection, with all three si-circRNA constructs shown to significantly reduce the expression of hsa_circ_0005243 (Fig. 2A). Of these three constructs, si-circRNA-2 exhibited the most potent effect, so it was chosen for further experiments.
CCK8 analyses revealed a significant reduction in cell viability following si-circRNA treatment relative to the si-NC group at both 48 and 72 h after transfection (Fig. 2B). EdU staining revealed significantly fewer EdU-positive cells in the si-circRNA group compared to the control group (Fig. 2C). Similar results were observed in the colony formation experiment (Fig. 2D), with significantly fewer colonies in the si-circRNA group compared to the si-NC control group. Flow cytometry analysis revealed a higher proportion of apoptotic cells after hsa_circ_0005243 knockdown (Fig. 2E).
hsa_circ_0005243 knockdown inhibits migration of trophoblast cells
Normal migration of trophoblast cells is important for the maintenance of placental function (14).
Therefore, we investigated the effect of hsa_circ_0005243 on the migration ability of trophoblast cells.
The transwell assay showed a significant decrease in the migration ability of trophoblast cells after transfection with si-circRNA (Fig. 3A), with fewer migratory cells in the knockdown group relative to controls (Fig. 3B). These observations were further supported by the results from the wound-healing assay, which showed a significantly shorter migration distance in the si-circRNA group compared with the control group ( Fig. 3C and D).
Knockdown of hsa_circ_0005243 elevates TNF-α and IL-6 levels
Inflammation plays a significant role in GDM pathogenesis, with numerous inflammatory mediators regarded as risk factors for GDM development (15). To assess the role of these factors in the context of hsa_circ_0005243, ELISA was used to detect inflammatory factor levels in culture medium. After hsa_circ_0005243 knockdown, the levels of TNF-α (Fig. 4A) and IL-6 ( Fig. 4B) were significantly increased in the culture medium compared with the si-NC group.
Potential regulatory mechanism of hsa_circ_0005243 in trophoblast cell function and inflammation
To further investigate the potential molecular mechanisms underlying hsa_circ_0005243 activity in trophoblast cells, the expression of various signaling pathway proteins was detected by Western blot.
β-catenin was significantly decreased after hsa_circ_0005243 knockdown, along with the expression of its related downstream molecules, including c-myc, cyclinD1, and survivin ( Fig. 5A and B). In addition, nuclear NF-κB expression was increased after hsa_circ_0005243 depletion ( Fig. 5C and D), with evidence of increased nuclear translocation of its p65 subunit via immunofluorescence assay (Fig. 5E).
Discussion
Gestational diabetes is characterized by varying degrees of abnormal glucose metabolism during pregnancy, which can significantly affect both maternal and infant health. About 2-5% of all pregnant women develop GDM, with considerable increases in disease prevalence observed during the last decade (16). GDM is associated with a wide range of serious complications, including dystocia, macrosomia, and neonatal hypoglycemia (2,17). Therefore, timely diagnosis and appropriate therapeutic intervention are important for reducing the risk of adverse pregnancy outcomes in GDM patients. Although the pathogenesis of GDM is not completely understood, recent studies have shown considerable overlap with the pathogenesis of type 2 diabetes mellitus, characterized by a synergistic effect of external environmental factors and permissive genetics. Pregnant women with a family history of type 2 diabetes mellitus had a significantly increased risk of GDM (18).
CircRNAs are a class of non-coding RNA molecules characterized by a closed circular structure. With the rise of high-throughput sequencing technologies, the number of reported circRNAs has risen dramatically in recent years, revealing important regulatory functions (19,20) and indicating their potential value as both diagnostic and therapeutic targets for various diseases. From a mechanistic standpoint, circRNAs are thought to participate in gene regulation by acting as sponges for miRNAs, thereby limiting their inhibitory effects on their target genes (9). Alternatively, circRNAs have been shown to target RNA-binding proteins as a mechanism of gene regulation (21), while other circRNAs have protein-coding functions of their own (22).
The placenta facilitates the transport of nutrients, gases, and other compounds between mother and fetus. A wide array of placental changes have been reported in patients with GDM (16,23), including the differential expression of circRNAs (11,13). While preliminary studies have suggested a link between circRNA expression, disease pathogenesis, and pregnancy outcomes (24), the role of circRNAs in GDM remains poorly understood. In this study, we found that the expression of hsa_circ_0005243 was significantly reduced in both the placenta and plasma of GDM patients. Placental trophoblasts are among the most active cells in pregnancy, and dysfunction of these cells results in abnormal placental exchange between mother and fetus and increased inflammation, which may lead to adverse pregnancy outcomes. Although the pathogenesis of GDM is still unclear, chronic inflammation has been shown to play an important role (25). We found that the levels of the inflammatory factors TNF-α and IL-6 were increased after hsa_circ_0005243 knockdown. TNF-α is a cytokine secreted mainly by monocytic macrophages. During pregnancy, the placenta secretes TNF-α, which may result in increased aggregation and adhesion of inflammatory cells, and damage to the vascular endothelium. TNF-α plays an important role in glucose and lipid metabolism, is closely related to insulin resistance and GDM, and is positively correlated with body mass index (26-28). IL-6 is not only involved in the regulation of immune and inflammatory responses, but also plays an important role in the balance of energy metabolism. IL-6 is directly involved in the development of GDM, with its expression significantly increased in the placenta and plasma of GDM pregnant women (29,30). In this study, ELISA analyses revealed significantly higher TNF-α and IL-6 levels in the hsa_circ_0005243 knockdown group, further implicating this circRNA as an immune regulator.
To explore the potential mechanism of hsa_circ_0005243 activity, we examined a variety of signaling pathway molecules related to GDM, revealing significant decreases in the expression of β-catenin and its downstream targets. β-catenin is a component of the Wnt signaling pathway, which plays an important role in cell proliferation, apoptosis, migration, and invasion (31). The WNT/β-catenin signaling pathway has been shown to play a role in trophoblastic stem cell differentiation, chorionic allantoic cell fusion, and placental morphological development in pregnant rats (32). Downregulation of β-catenin was observed in the placental tissues of patients with preeclampsia and during hypoxia/reoxygenation of HTR-8/SVneo cells (33).
Changes in other genes associated with the canonical Wnt pathway have also been observed, including downregulation of Wnts, Fzds, β-catenin, Apc, and GSK-3β, suggesting regulation of Wnt expression by hyperglycemia in different embryonic tissues (34). These findings suggest that maternal diabetes may suppress Wnt signaling (35). Therefore, we speculate that trophoblast dysfunction after hsa_circ_0005243 depletion may be involved in the pathogenesis of placental dysfunction. In addition, an increase in nuclear NF-κB p65 expression was observed. The NF-κB pathway is a central regulator of the immune system, controlling stress responses, apoptosis, and inflammation. NF-κB belongs to the nuclear transcription factor family, existing as a dimer composed of p50 and p65 protein subunits. NF-κB p65 can interact with cytokines such as TNF-α and IL-6, forming a positive feedback loop and thereby amplifying the inflammatory response in GDM (36-38). Over time, the chronic inflammatory state characteristic of GDM patients (42) leads to an increase in the number of placental macrophages, which further exacerbates the inflammatory response (5).
GDM placental morphology is characterized by a wide array of abnormalities, including increased placental volume and weight, fibrinoid necrosis, infarcts, immature villous differentiation, and capillary hyperplasia (39). In this study, we found that downregulation of hsa_circ_0005243 suppressed the proliferation and migration of trophoblast cells, combined with increased expression of IL-6 and TNF-α. Based on these observations, we speculate that these attributes are associated with the long-term effects of inflammation on placental development. The structural and functional changes of the placenta in GDM patients are driven by a host of variables, including the quality of blood glucose control during the critical period of placental development, the treatment mode, and the period and duration of the metabolic disorder (43). The placental villi that arise during early pregnancy exhibit significant self-protection mechanisms, protecting normal trophoblast function and inhibiting the process of inflammation (42), because failure or serious defects in placenta formation will lead to the loss of the pregnancy. Therefore, a successful pregnancy depends on the feedback, regulation, and adaptation of cytokines secreted by the maternal immune system and placenta (44). Once the duration or degree of diabetes driven by maternal hyperglycemia, hyperinsulinemia, or dyslipidemia exceeds the placental regulatory capacity, fetal overgrowth will occur (43).
Conclusions
In this study, we found that hsa_circ_0005243 was downregulated in the GDM placenta. In vitro experiments verified that downregulation of hsa_circ_0005243 suppressed trophoblast proliferation and migration, while driving the production of the inflammatory cytokines IL-6 and TNF-α. Subsequent mechanistic studies showed that depletion of hsa_circ_0005243 significantly reduced the expression of β-catenin and its downstream targets. Furthermore, expression of NF-κB, which mediates the inflammatory response, was also increased, along with increased nuclear translocation of the p65 subunit. Although our research provides new potential molecular targets for the treatment of GDM, additional in vivo and in vitro studies are recommended due to the complexity of the regulatory mechanisms. (Figure legend fragment: NF-κB p65 protein in the nucleus was elevated after hsa_circ_0005243 knockdown. E, Confocal immunofluorescence assay showed that hsa_circ_0005243 depletion increased NF-κB p65 subunit nuclear translocation. Data are mean ± SEM, **p < 0.01.) | 2020-03-19T10:40:32.575Z | 2020-03-16T00:00:00.000 | {
"year": 2020,
"sha1": "660f8543f25e99fb16c02acfe4525ec0515e65ff",
"oa_license": "CCBY",
"oa_url": "https://rbej.biomedcentral.com/track/pdf/10.1186/s12958-020-00612-0",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "78d9753fe85d992600376e42d6df44128aca6956",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |
96847706 | pes2o/s2orc | v3-fos-license | Nanoscale Silicon Dioxide Prepared by Sol-Gel Process
Nanoscale silicon dioxide, with the presence of hydroxyl and absorbed water on the surface, is an amorphous, non-toxic, odorless and eco-friendly white powder. It is characterized by fine particle size, high purity, low density, large specific surface area, and good dispersing performance. As a result, it has excellent stability, reinforcing ability, thixotropy, optical and mechanical properties. Therefore, it is widely applied in ceramics, rubber, plastics, paint, pigment and catalyst carrier, and plays an important role in improving the properties of final products (Milin, Zhang et al, 2003, p. 11-13; Lide Zhang et al, 2001; Liangzhen Cai et al, 2002, p. 20-23). At present, there are quite a few methods for preparing nanoscale silicon dioxide, such as precipitation process, sol-gel process, micro-emulsification process, as well as vacuum distillation and condensation process (Chanli, Zhou et al, 2001, p. 22-24; Qishu, Zhai et al, 2000, p. 57-62; Junmin, Qian et al, 2001, p. 1-5). However, sol-gel process is easy to control, and the obtained particles are evenly distributed (Yu, Guo et al, 2005, p. 34-35; Ning, Zhang et al, 2003, p. 267-269; Yang, Li et al, 2007, p. 707-710).
Reagents
TEOS, hydrochloride, distilled water and absolute alcohol.
Preparation of nanoscale silicon dioxide
Add a certain amount of TEOS and absolute alcohol into a beaker, then slowly add dropwise a mixture of distilled water, absolute alcohol, and hydrochloride at the proper ratio under constant temperature and magnetic stirring. Place the obtained sol in a fume hood for 1 h so that it gradually evolves into a gel, which is then ground after drying for 4 h in an oven, and finally calcined for 3 h in a box-type resistance furnace to obtain the nanoscale silicon dioxide powder.
Effect of the ratio of TEOS to alcohol on the preparation of nanoscale silicon dioxide
Table 1 shows the effect of the ratio of TEOS to alcohol on the preparation of nanoscale silicon dioxide.
The results indicate that the gel time becomes longer as the dosage of alcohol increases. This is because the alcohol-diluted sol reacts more slowly as the concentration decreases, which leads to an increase in gel time. However, TEOS is insoluble in water before the reaction, and absolute alcohol, as a solvent, increases the contact surface area between TEOS and water, which shortens the sol-gel reaction time. Therefore, the dosage of absolute alcohol must be sufficient. A small amount of light blue powder is observed on the surface of the porcelain boat; this is because there is too much powder in the porcelain boat to be completely processed, which leads to overload. All powder is ivory white when the processing amount is appropriate. It is concluded that the proper ratio of TEOS to alcohol is 1 to 2.
Effect of the dosage of distilled water on the preparation of nanoscale silicon dioxide
Table 2 shows the effect of the dosage of distilled water on the preparation of nanoscale silicon dioxide.
The results indicate that the gel time becomes longer as the ratio of TEOS to water rises from 1/1 to 1/5, and the ivory white powder can be obtained. However, with the increase of the water dosage, the concentration of the condensation polymer and the viscosity of the sol decrease, and the gel time increases. The hydrolysis reaction can be expressed by the following formula:

Si(OR)4 + H2O → Si(OR)3OH + ROH

It is concluded from the above hydrolysis procedure of TEOS that, in the presence of an acidic catalyst, the hydrolysis rate of TEOS increases with the reduction of steric hindrance around the silicon atom. In addition, an electron-donor group attached to the silicon atom, such as −OR, can stabilize the cationic intermediate formed during the hydrolysis, and in turn increase the hydrolysis rate to some extent. Likewise, an electron-acceptor group next to the silicon atom, such as −OH and −OSi, results in a decrease of the hydrolysis rate of TEOS. For this reason, it is almost impossible for −OH-substituted TEOS or TEOS polymer to be further substituted with −OH. Therefore, the hydrolysate of TEOS comprises no more than 2 hydroxyl groups per molecule.
The hydrolysis rate of TEOS slows down and Si(OR)3OH dominates when the dosage of distilled water is not sufficient for the further hydrolysis of TEOS. Water produced from the dehydration condensation of silicon alkoxide can partially promote the hydrolysis of TEOS. Therefore, the dealcoholization condensation dominates over the dehydration condensation. However, it is directly affected by the hydrolysis of TEOS. Due to the steric hindrance around the silicon atom, the terminal Si−O bond is hydrolyzed much more easily. Hydrolysis promotes the formation of Si−O−Si linear chains, which in turn interact with each other and form a linear polymer structure.
Excess distilled water can increase the hydrolysis rate of TEOS. However, excess distilled water results in a low concentration of reactants, which in turn leads to a decrease of the condensation rate. Thus, the hydrolysis of TEOS is almost completed during the early stage of polymerization. As a result, the concentration of hydrolysate and the number of hydroxyl groups per molecule increase, and Si(OR)2(OH)2 dominates over Si(OR)3OH in the reaction system. Therefore, dealcoholization condensation is dominant when excess water is used, and promotes the formation of short crosslinked network structures, which in turn interact with each other to form a gel. Ge, Manzhen et al. analyzed the structure of silicate polymer by gel permeation chromatography, and observed that, in the presence of an acidic catalyst, the formed gel exhibited a linear polymer structure when water was insufficient, whereas it exhibited a crosslinked network structure when water was in excess.
The test results indicate that the best volume ratio of TEOS to water is 1 to 2.
Effect of temperature on the preparation of nanoscale silicon dioxide
Table 3 shows the effect of temperature on the preparation of nanoscale silicon dioxide.
From Table 3, we can see that the higher the temperature, the shorter the gel time, and the lower the stability of the sol. This is because the collision probability of the low-molecular-weight condensation products increases with the rise of the hydrolysis temperature, and small particles grow bigger by aggregation. On the other hand, the higher the temperature, the faster the volatile compounds evaporate, and the larger the concentration of reactants and products. Consequently, the increase of concentration induces further aggregation of the condensation products, thereby further accelerating the gelation.
The following formula exhibits the effect of reaction temperature on the particle size of monodisperse gel:

J = J0 exp[−ΔGD/(KT)] exp[−ΔG*/(KT)]

Here, J represents the nucleation rate; J0 represents the initial nucleation rate; ΔGD represents the change of diffusion activation free energy; ΔG* represents the change of critical nucleation free energy; K represents the Boltzmann constant; and T represents the reaction temperature. The above formula indicates that the nucleation rate increases by geometric progression with the rise of the reaction temperature, which leads to the formation of small particles in a narrow size range. The reaction temperature has an obvious effect on the particle size of silicon dioxide. It is concluded that the best reaction temperature is between 60 ℃ and 70 ℃.
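To get a feel for how strongly this Arrhenius-type product depends on temperature, the short Python sketch below evaluates the relative nucleation rate at the two temperatures discussed above; the prefactor J0 and the two free-energy barriers are assumed placeholder values, not quantities measured in this work.

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_rate(T, J0=1.0, dG_D=4.0e-20, dG_star=6.0e-20):
    """Relative nucleation rate J = J0*exp(-dG_D/KT)*exp(-dG*/KT).

    J0 and the two free-energy barriers are assumed placeholder values.
    """
    return J0 * math.exp(-dG_D / (K_B * T)) * math.exp(-dG_star / (K_B * T))

# Compare the two reaction temperatures discussed in the text.
j60 = nucleation_rate(273.15 + 60)
j70 = nucleation_rate(273.15 + 70)
print(f"J(70 C) / J(60 C) = {j70 / j60:.2f}")
```

Even with these modest assumed barriers, a 10 ℃ rise nearly doubles the nucleation rate, consistent with the geometric-progression behavior described above.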
Effect of the dosage of hydrochloride on the preparation of nanoscale silicon dioxide
Table 4 shows the effect of the dosage of hydrochloride on the preparation of nanoscale silicon dioxide.
The test results indicate that the gel time decreases with the increase of the dosage of hydrochloride. This is because water molecules directly attack TEOS in an acid medium and induce the hydrolysis reaction. The higher the concentration of HCl, the faster the rate of reaction, and thus the less time it takes for the gel to form. In an acid medium, H+ attacks the −OR group in the TEOS molecule; its protonation induces the electron cloud to migrate towards −OR and enlarges the space on the reverse side of the silicon nucleus, which in turn increases the electrophilic ability of Si and accelerates the attack of negatively charged species, such as Cl−. It is concluded that the best dosage of HCl is 2 mL.
Conclusion
(1) The proper volume ratio of TEOS to alcohol is 1 to 2.
(2) The best volume ratio of TEOS to water is 1 to 2.
(3) The best reaction temperature is between 60 ℃ and 70 ℃.
(4) The best volume ratio of TEOS to HCl is 5 to 2.
| 2018-11-19T04:24:38.367Z | 2010-08-25T00:00:00.000 | {
"year": 2010,
"sha1": "8a161df35196c2d987f6f01c0e417ba2de930423",
"oa_license": "CCBY",
"oa_url": "https://ccsenet.org/journal/index.php/mas/article/download/7377/5756",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "8a161df35196c2d987f6f01c0e417ba2de930423",
"s2fieldsofstudy": [
"Materials Science",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
73635465 | pes2o/s2orc | v3-fos-license | Cultural Well-Being and GDP Per Capita in Italy: Empirical Evidence
Social and cultural participation has always been a means to promote high economic growth and the development of the individual in a given society. Public and private institutions often try to make available efforts and resources to promote the usability of free time and social participation. This paper uses data from the multipurpose survey on Italian households, conducted by the Italian Institute of Statistics, and it analyses the relationship between cultural capability and the income and economic means of Italian households through the construction of a composite indicator of cultural well-being. The analysis of the individual indicators of education and cultural participation makes it possible to study issues related to the development of social capital in the northern and southern areas, thus highlighting the gap and enabling the decision-maker to apply possible corrections.
Introduction
The concept of human well-being has been changing over time and space. It changes over time, because what defined it in past times is different from what guaranteed well-being a few decades ago, or what guarantees it today. The society of mass phenomena, consumer goods, and fashions has made it more complex to evaluate what ensures well-being: there are no longer only basic needs to satisfy, nor a fixed degree of personal satisfaction to achieve because, in advanced countries, needs themselves are created by rapid economic and social dynamics.
The idea of well-being also changes through space: certainly, there are communities that are happy to just satisfy basic needs, and which do not suffer consumerist influences; others, instead, have to manage a complex set of relationships between health, needs, and what is necessary for their fulfillment.
For many years, quantifying and measuring the level of well-being of a society has been the object of research for many researchers, economists, international organizations, and institutions. Many conceptualizations of well-being have been provided over the years. The first conceptualizations were utilitarian and reduced well-being to a hedonistic dimension and to subsequently scalar-valued utility. Subsequently, it has become more common, and perhaps necessary, to consider well-being as a multidimensional concept. In this regard, McGillivray identifies some of the multidimensional conceptualizations of well-being [1], such as the capabilities approach by Sen [2], the basic human values approach [3], the intermediate needs approach [4,5], the universal psychological needs approach [6], the axiological categories approach [7], the universal human values approach [8], the domains of subjective well-being approach [9], the dimensions of well-being approach [10], and the central human capabilities approach [11]. Therefore, the term "well-being" is used to refer to all those areas (or dimensions, indeed) taken into consideration for an assessment of one's life. Seligman adds a positive sense, defining well-being as a positive assessment of an individual's life, which includes positive emotions, engagement, satisfaction, and meaning [12].
For decades, the idea has been accepted that this kind of research was useful when it referred to third world countries, later labeled "developing," which did not enjoy high enough levels of income and consumption to guarantee a sufficient degree of well-being. It was believed that the Western countries, given the well-established high standard of living and the ability to access a wide range of services in all areas of life (health, education, social security, etc.), did not need such studies. For this reason, the attention of scholars and economists has often been given to the measurement of strictly economic variables, such as production, consumption, and per capita income. In this way, the increase of wealth and social progress was closely linked to the growth of GDP per capita [13].
However, the political and economic events that have occurred since the early 1970s and 1980s have questioned this paradigm, highlighting an increase of inequalities even in highincome countries, both in access to resources and to services [14].
The relationship between the individual's well-being and economic well-being, while being in part undeniable, is therefore not so solid; the interpretation proposed in the literature by Amartya Sen shows that well-being depends on many other factors [2,[15][16][17][18][19][20]. The attempts to identify and quantify these factors, developed over the years, are an integral part of welfare economics and statistics, but they are now also very important for the initiatives of national and international institutions, which have the primary goal to provide guidelines and directions for policy interventions [21]. It is possible to make a distinction based on the origin of the different measures; the first category is based on the idea that access to certain goods or services constitutes a prerequisite of well-being, and this access is quantified through the use of "Social indicators of objective variables." A second type of measure directly regards the psychological states of individuals, through the results obtained from investigations on the subjective well-being, and from research on the dynamics concerning emotional experiences; these measures are called "Subjective Indicators of Well-being." Not only is the multidimensionality of well-being taken for granted (multidimensionality is now widely accepted in the literature: for example, see Ref. [22][23][24][25]), but also primary importance is attached to dimensions closely related to the participation of citizens in activities that allow the growth of the person and of human relations. Therefore, a concept of "cultural well-being" arises, which is seen as a means of fostering high economic growth and the development of the individual in a given society [26].
The growth of an individual's cultural well-being is therefore based on two different areas that contribute to its formation: on the one hand, education, because everything that a society has made for itself is made available through education to its future members [27]; on the other hand, the factors of cultural participation constitute the specific and personal resources used to meet a more general need called "cultural appreciation," together of course with some degree of personal taste [26]. Several studies have shown that participation rates in events of a cultural nature can be associated with urban residence [28], income [29,30], pressures stemming from membership in a particular social group [28], and age [31].
The measurement of cultural well-being is therefore essential to define one of the components of well-being that affects the material and social aspects of a person. Following the path traced by Dasgupta [32] in wanting to quantify well-being using indicators [33], one has to remember that in processes of aggregation it is possible to use different types of indicators and variables. In this case, we prefer to provide a measurement of cultural well-being at the regional level, which is a sufficiently uniform level to ensure consistent conclusions in order to understand and analyze how Italian citizens in different contexts have reached different levels of education and use their leisure time by investing resources in cultural activities.
To achieve the aim, we will build an indicator of cultural well-being, assuming that a higher level of education and greater participation in educational and cultural activities result in a better quality of life. It will be interesting to see whether an active participation in sociocultural activities and lifelong learning can be related with the strictly economic aspects of life through the use of the variable of disposable income per capita. An examination of this kind assumes considerable importance because, as noted, the reference economic measures used as standards for measuring the quality of life, do not take into account the activities covered by this analysis.
Materials and methods
The analysis of the literature offers several solutions to derive a priori what should be the most appropriate variables to be included in an indicator. The choice, of course, is conditioned both by the availability of data and by the purposes of the indicator itself [34][35][36][37][38][39][40]. The choice of variables depends on several considerations, although there seem to be no strict and validated criteria prevailing. However, we may highlight some common practices. The availability of the data constitutes one of the most difficult problems to solve; in fact, it conditions the choice of variables to be included, and so, ultimately, the very composition of the indicator [41,42].
To define the scope of investigation, we performed a preliminary analysis on the data availability and opted to build an index on a regional basis, using variables included in the multipurpose survey on Italian households, carried by the Italian Institute of Statistics (ISTAT).
With the aim of a subsequent aggregation of the variables, we chose to maintain the same weight to each indicator, reducing interference to a minimum, even if the choice to assign the same weight to all the variables implies an implicit judgment on the variables [43]. We also note that some authors [44,45] identify the equal weighting of the variables as the preferred procedure adopted in most applications, failing to agree [46] on the adoption of alternative procedures [47,48].
In order to proceed with the construction of the indicators, after identifying the reference area to select the variables and build the indexes, we proceeded according to the classic additive model, which represents the most used model in the literature. The classic methodology aims to realize an index [34,36,49,50], consisting of the weighted [34] or non-weighted sum [36,49,50] of the previously selected partial indicators.
The indicators were obtained by adding the contributions of selected variables ( Table 1). Due to the lack of homogeneity of the selected variables, in terms of units of measurement, it was considered necessary to carry out standardization. The variables with different scales, in fact, would have a different weight in the calculation of the total score compared to the other variables [51]. Therefore, for each observation, we calculated the z-scores for each of the variables under consideration, obtained by subtracting at each observation the value of the average of the regions and dividing the result by the standard deviation of the regions. The index of education consists of the non-weighted sum of three z-scores, and the one related to cultural participation, instead, uses six z-scores [36,49,50,52].
The index of education is calculated as the non-weighted sum of the standardized scores Zi, given by:

Zi = (xi − μxi) / σxi

where μxi and σxi (i = 1, 2, 3) are the averages and standard deviations of the variables under consideration for the area. The index is therefore:

Education index: Z1 + Z2 + Z3

Similarly, the cultural participation index is calculated as the non-weighted sum of the remaining Zi, where μxi and σxi are the averages and standard deviations of the corresponding cultural participation variables for the area:

Cultural participation index: ∑i Zi over the cultural participation variables
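As an illustration of the additive model just described, the following Python sketch standardizes a hypothetical regions-by-variables matrix into z-scores, sums them into the two thematic indices, and ranks the composite; the column names and the random data are placeholders, not the ISTAT survey variables.

```python
import numpy as np
import pandas as pd

# Hypothetical regions-by-variables matrix: the first three columns stand
# for the education variables, the remaining six for cultural participation.
rng = np.random.default_rng(0)
data = pd.DataFrame(rng.normal(size=(20, 9)),
                    columns=[f"x{i}" for i in range(1, 10)])

# z-scores: subtract the regional mean, divide by the regional std deviation.
z = (data - data.mean()) / data.std(ddof=0)

education_index = z.iloc[:, :3].sum(axis=1)      # non-weighted sum of Z1..Z3
participation_index = z.iloc[:, 3:].sum(axis=1)  # non-weighted sum of the rest
icic = education_index + participation_index     # composite ICIC index

rank = icic.rank(ascending=False).astype(int)    # 1 = best region, 20 = worst
```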
Results
In this section, using the set of indicators and the above methodology, we proceed to the analysis of the results obtained through a Composite Index of Education and Cultural Participation (ICIC) built ad hoc from the synthesis of the two thematic indicators of education and cultural participation.
The ICIC index takes both positive and negative values, with higher values indicating better education and participation, and vice versa. Therefore, in order to allow an easier reading, we report the rank value in a ranking that assigns 1 to the best region and 20 to the worst (Table 2).
First, in order to evaluate the correct choice of the indicator, we proceeded to evaluate the relationship existing between the two thematic indicators. Then, we observed the link between the composite indicator and the GDP per capita.
The link between the indicators of education and cultural participation is measured by the Pearson coefficient of linear correlation-which is equal to 0.73636-indicating that the two variables express a direct relationship between them.
Table 1. Thematic areas and variables. Education: percentage of population aged 25-64 who received education or training in the four weeks preceding the interview, excluding self-learning activities (lifelong learning); ratio, for each region, of people aged 15-24 enrolled in a course of studies.

In the variables that make up the indicator, we chose to exclude cinema and concerts (except for those of classical music) because they are considered mainly as leisure activities. It appears useful at this point to compare the ICIC index with a variable that expresses economic well-being. As a proxy, we chose GDP per capita. The data is provided by ISTAT, with reference to the year 2011 (see Appendix Table 1).
The existing relationship can be measured on the actual values of the variables by using the Pearson coefficient. This turns out to be 0.80473, indicating that cultural well-being is higher in regions where the GDP per capita is higher. Figure 1 relates the rank of the two classifications, highlighting the situations of greater disparity between the two quantities analyzed.
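For reference, both correlation measures used in this section can be computed in a few lines; the two vectors below are made-up illustrative values, not the actual regional data.

```python
from scipy.stats import pearsonr, spearmanr

# Made-up illustrative vectors: ICIC values and GDP per capita by region.
icic = [1.2, 0.8, 0.5, -0.3, -1.1]
gdp = [34000, 31000, 29000, 24000, 18000]

r, _ = pearsonr(icic, gdp)      # linear correlation on the actual values
rho, _ = spearmanr(icic, gdp)   # rank correlation, as used for Figure 1
print(f"Pearson r = {r:.3f}, Spearman rho = {rho:.3f}")
```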
We observe a difference between the position in the ranking of Valle D'Aosta as regards GDP per capita (2nd) and the level of the ICIC (14th), of Umbria, 12th and 6th, respectively, of Friuli-Venezia Giulia, 9th for GDP per capita and 3rd for the ICIC index, of Sicily, 17th for GDP per capita and last for the ICIC. The other regions have positions that deviate slightly from one variable to the other.
Discussion
First, it is helpful to look at some situations closely linked to the Italian context. Contrary to what is suggested by Checchi [53], the Italian regions have neither an enrolment rate nor a cultural participation rate that is homogeneous from north to south. The regions are not arranged randomly but, with the exception of the Valle d'Aosta/Vallée d'Aoste, all northern regions appear in the top positions above the average; on the contrary, most of the regions of Southern Italy appear in the last places with considerable differences between them and the northern regions. The strong presence of criminal organizations could partly explain this result, at least for Sicilia, Calabria, and Campania, which occupy the last three positions in the ranking [54,55]. The geographical stratification is similar to that shown by the disposable income per capita, as confirmed by the calculation of the Pearson coefficient between the two values (0.80), which implies a very close relationship between education and cultural participation, and income. On closer analysis, separating the two thematic indicators and later correlating them with GDP per capita, it can be observed that cultural participation is the driving indicator, with a Bravais-Pearson linear correlation coefficient equal to 0.89, versus 0.61 for the thematic education index. Both indicators appear to be influenced by the economic possibilities of the households. Costs, such as the price of a ticket, are elements that prevent participation in an event, just like education can be heavily influenced by economic aspects. In addition to these economic barriers, there are also some informational and social ones: certain activities (such as attendance at theater performances and classical music concerts), used as variables in the indicator, are in fact traditionally linked to more affluent social classes. This interpretation also explains the good relationship recorded between educational and cultural participation indicators and GDP per capita, because people from the upper classes tend to continue their studies more easily for economic reasons [56,57]. The relationship between the ICIC index and the GDP per capita implies a clear relationship between education, cultural participation, and available resources; one can easily observe that, in the richest regions, people have more resources to be allocated for this purpose, while in the poorest regions that possibility is more difficult to realize.
However, a rather important difference remains, which is not explained by income availability and that persists in the identification of cultural well-being, income per capita being equal. Although this conclusion confirms a concept related to cultural participation already found in the literature [28][29][30], it can be said that such relationship can also be applied to a more general idea of participation, which includes social participation and the use of free time and continuing education.
Conclusion
This research developed from the assumptions that basing the measurement of cultural wellbeing only on economic parameters can be misleading, and that the use of social indicators can be a way to overcome this obstacle; they also provide a useful tool for evaluating policies implemented by policy-makers.
In this light, the results have produced conflicting directions: on one hand, even if GDP per capita can be considered a reasonable approximation to the well-being, it is still not enough to give a complete and comprehensive description, making it necessary to expand the amount of essential information to complete the evaluation as much as possible. However, the good value of the Spearman's Rho coefficient leads to think that, all in all, GDP per capita gives an approximately similar result to the index of cultural well-being; on the other hand, it does not account for a number of essential elements related to cultural training, which can create more or less useful bases for an improvement in well-being.
The assumptions that form the basis for this research are that basing a measure of cultural well-being only on a few parameters can be misleading, and that using social indicators can be a way to overcome this obstacle. Indicators are a sensitive tool and each of them represents a number that can have greater or lesser significance depending on the importance that is given to them and on the knowledge available to interpret them. On the other hand, it is evident that in societies that are oriented towards performance, the indicators are valuable data. Furthermore, an indicator is the only means that we have at our disposal to give a numerical form to the efforts and results of economic and social policies, so it would be a mistake to deny their usefulness or underestimate their use.
It is necessary to point out that the quantitative exercise carried out is structurally based on the use of statistical data. The information provided by the data should not be considered an indisputable truth, nor the best description possible of the phenomenon; it should instead be taken for what it is, i.e., the representation of the phenomenon analyzed, extrapolated from a set of proxies. This representation, however true and accurate, is still only a representation. What statistical data gathers is only a small portion of the truth, an approximation of it. A large amount of data can, therefore, guarantee a satisfactory approximation of reality, but it is not a substitute for it. This is particularly true when the intention is to provide a synthesis of the data available, which leads to a further loss of information. It is only by bearing in mind these aspects that it is possible to evaluate what the indicator reveals.
It is possible to open debates and make hypotheses, considering cultural well-being either a universal or totally subjective concept; it is possible to discuss abilities and applications. However, social policies, public investment, and government decisions are real quantities, economic means, and resources that are distributed among the population. Here lies the importance of the indicator: it is a tool to direct policies and its aim is to provide information to decision-makers.
In this perspective, the analysis of the cultural well-being distribution in the Italian regions makes it possible to study issues related to the development of social capital in the northern and southern areas, thus highlighting the gap and enabling the decision-maker to apply possible corrections. | 2018-12-28T04:16:49.021Z | 2017-08-23T00:00:00.000 | {
"year": 2017,
"sha1": "6c0364bb59750944c8e8657b182efbe6aee9c804",
"oa_license": "CCBY",
"oa_url": "https://www.intechopen.com/citation-pdf-url/54311",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "72600319656cd948ce2a9e7593715298d6f44fc9",
"s2fieldsofstudy": [
"Economics",
"Sociology"
],
"extfieldsofstudy": [
"Economics"
]
} |
249215609 | pes2o/s2orc | v3-fos-license | DARTBOARD BASED GROUND DETECTION ON 3D POINT CLOUD
LiDAR (Light Detection And Ranging) laser scanners acquire 3D point clouds of real environments. The process consists in sampling the scene with laser beams rotating around an axis. By construction, the point density decreases with the distance to the scanner. This density heterogeneity is a major issue, in particular for mobile systems in the context of autonomous driving, as usually a single scan is processed at a time (unlike mapping applications, which can integrate several scans and thus reduce the density heterogeneity). We propose a dartboard grid with cell size increasing radially in order to adapt the grid size to the point density. The effectiveness of this strategy is demonstrated by means of a ground detection task, a fundamental step in many workflows of analysis of 3D point clouds.
INTRODUCTION
Ground detection is a fundamental problem to solve for several applications, such as 3D modeling and mobile robot navigation. In autonomous-driving applications, the interest in ground detection is twofold. The first reason is to narrow down the zone of the scene through which the vehicle can navigate. The second is the fact that, once the ground is removed from the set, other objects can be identified as isolated components. Indeed, this strategy is employed in many object detection and object classification algorithms.
Some approaches project the 3D point cloud into 2D images, highly reducing the computational complexity. However, the uniform grid usually used for this purpose is not adapted to the acquisition configuration because the point density decreases with the distance to the scanner. In this paper we propose a dartboard grid that fits the sampling scheme of the scanner. The aim is to avoid the over-segmentation of far away objects if the cell size is too small or the loss of details of close objects if the cell size is too large. This paper is organised as follows. Section 2 reviews previous works on ground detection tasks. Section 3 introduces our proposal, including a dartboard representation that simulates the acquisition system configuration, resulting in an ideal representation for further processing. Then, section 4 demonstrates the effectiveness of our method and compares it to several state of the art methods like RANSAC, λ-flat zones, and CNN-based methods on Semantic KITTI dataset. Finally, section 5 concludes this work and provides some perspectives.
PREVIOUS WORKS
Many 3D object detection and classification approaches start with a ground detection step (Serna and Marcotegui, 2013, Serna and Marcotegui, 2014, Roynard et al., 2016). The idea is motivated by the fact that, once the ground is removed from the scene, all the other objects appear as distinct connected components. The pipeline is illustrated in Figure 1. Among the approaches proposed in the literature for ground detection, many solutions rely on geometrical intuitions. A simple attempt to solve this problem is to model the ground as a flat surface and carry out a planar approximation using the RANSAC paradigm introduced by (Fischler and Bolles, 1981). Examples of RANSAC-based approaches are (Oniga et al., 2007, Schnabel et al., 2007, Gallo et al., 2011). Although those methods are robust to outliers, the assumption of the ground as a unique plane is not realistic, even in the urban context. To solve this problem, (Hernández and Marcotegui, 2009, Serna and Marcotegui, 2013) proposed to use λ-flat zones to detect the ground in dense point clouds. The method projects the point cloud on a regular grid parallel to the xy plane placed at the lowest value of the z coordinate, and stores for each grid cell the value of the minimal elevation among all points projected on the same pixel. This is called the Bird's Eye View (BEV). Once the projected BEV images are obtained, the segmentation via λ-flat zones is carried out to obtain the ground. Similarly, (Roynard et al., 2016) project points on a discrete horizontal grid, and the z value with the highest count in the histogram is selected as the ground seed. Then a region growing approach is used to detect the ground. Both methods are very similar: the λ-flat zone and region growing approaches rely on the same hypothesis of smooth height variation; the only difference is the initialization step.
These methods were proposed for a mobile mapping application with a relative density homogeneity. Due to their sensitivity to the grid resolution, the authors suggest carefully picking a value that allows both to get at least one point per pixel in the average case and to obtain connected profiles in the projected image. Unfortunately, this assumption does not hold for standard autonomous driving applications. In that case, the scanner is mounted on top of the car and the axis of the scanner is orthogonal to the ground; the resulting point density decreases with the distance from the scanner. In this kind of scenario, it is not possible to find a good resolution value: on one hand, a sufficiently high resolution (small pixels) disconnects far-away objects in the projected image; on the other hand, a low resolution (large pixels) accumulates too many points in pixels closer to the scanner, where the point cloud density is high, and the corresponding information aggregation may disturb the detection of small objects.
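A quick back-of-the-envelope computation makes the dilemma concrete: for a scanner spinning about a vertical axis, the spacing between consecutive returns of one laser layer grows linearly with range, so the expected number of returns per fixed-size BEV cell drops accordingly. In the sketch below, the 0.2° azimuthal step is an assumed, typical value rather than a figure from the paper.

```python
import math

# Spacing of consecutive returns of one laser layer on flat ground at
# range r is roughly r * d_theta; the 0.2 deg azimuthal step is assumed.
d_theta = math.radians(0.2)
cell_side = 0.20  # 20 cm BEV pixels, the finer resolution discussed above

for r in (5.0, 20.0, 50.0):
    spacing = r * d_theta
    hits_per_cell = cell_side / spacing  # expected hits of one layer per cell
    print(f"r = {r:4.0f} m: spacing = {spacing:.3f} m, "
          f"~{hits_per_cell:.1f} returns per 20 cm cell")
```

At 5 m a 20 cm cell catches about a dozen returns of a single layer, while at 50 m it catches barely one, which is exactly why distant cells of a uniform grid end up empty.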
A more recent method has been introduced by (Zhang et al., 2016). Their idea is to turn the point cloud upside down and let a cloth drop onto the inverted surface from above. The ground is then detected by analysing the intersections between the nodes of the cloth and the inverted point cloud. Finally, in recent years several CNN-based methods have been introduced for the more general problem of semantic segmentation of a 3D point cloud (Landrieu and Simonovsky, 2018, Thomas et al., 2019, Hu et al., 2020). Concerning the ground detection task, (Velas et al., 2018) propose to project the point cloud using a spherical view and to generate 2D images containing range, z and laser intensity values. The resulting images are then used to train a fully convolutional neural network (FCNN) in order to obtain a binary segmentation. Finally, the labels are back projected to the 3D points. This kind of approach has also been used by other authors to carry out a semantic segmentation of the scene.
Polarnet (Zhang et al., 2020) is another interesting recent work that introduces an improved BEV image representation. The proposed grid contains two axes, radius and azimuth angle, with the grid connected at both ends of the azimuth axis. Polarnet demonstrates a more homogeneous distribution of points in the new grid representation compared to the Cartesian grid and achieves improved results compared to the state of the art. However, it does not consider different ring thicknesses that would account for the higher sparsity in distant areas. Moreover, the larger size of distant sectors (due to the longer perimeter of the azimuthal circular sectors) is not taken into account in their polar representation. A similar idea is proposed in (Zhu et al., 2021), where a cylindrical partition, with a polar pattern in the horizontal plane, is proposed. As in (Zhang et al., 2020), the ring thickness is uniform along the radial axis.
In this paper, we go further in the BEV grid definition. We propose a dartboard shaped grid that better adapts to the scanner configuration. The resulting improved BEV representation is a better starting point for any analysis approach. We demonstrate the effectiveness of the method on a simple ground detection scheme that does not require an annotated database or a learning procedure. The resulting approach outperforms other classical techniques.
GROUND DETECTION ON POINT CLOUDS WITH HETEROGENEOUS DENSITY
Assuming that the height variations in the ground area are smooth, (Hernández and Marcotegui, 2009) propose to detect the ground as the largest quasi-flat zone (Meyer, 2001, Soille, 2008) of the BEV range image. A quasi-flat zone, also named λ-flat zone (λ-FZ), is defined as follows: Definition 1. λ-flat zone: two neighboring pixels p, q belong to the same λ-flat zone of a function f if their difference |fp − fq| is smaller than or equal to a given λ value. Let us now present our proposed method. It uses λ-flat zones to detect the ground on point clouds and aims to solve the problems deriving from the high variation of point density in the scene. By construction, the point density of 3D point clouds decreases with the distance to the scanner. This means that, projecting the points on a squared grid defined over the xy plane, the pixels far from the scanner have a higher probability to be empty. This causes a connectivity problem between peripheral pixels. Figure 2 illustrates this problem: the left column shows the BEV of ground pixels of a SemanticKITTI frame and the right column the corresponding ground detection based on the largest quasi-flat zone (Hernández and Marcotegui, 2009). The resolution of the images in the first row is 1 m side; 15% of the 3D ground points are missing in this detection. The problem worsens with higher resolutions: the resolution of the images in the second row is 20 cm side, where 22% of the 3D ground points are missing. Figure 2. Ground disconnection caused by peripheral lower density. Ground detected (right) by the method described in (Hernández and Marcotegui, 2009) and its corresponding ground truth (left). Two different resolutions: pixels of 1 m side (first row) and 20 cm side (second row).
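Definition 1 translates directly into a flood-fill labeling of the BEV image. The following sketch is one possible implementation of this labeling (plain BFS over 4-connected neighbors); it is not the authors' code, and a production version would typically rely on union-find instead.

```python
import numpy as np
from collections import deque

def lambda_flat_zones(img, lam, empty=0):
    """Label the 4-connected lambda-flat zones of a 2D image.

    Two neighbouring pixels belong to the same zone when the absolute
    difference of their values is <= lam; empty pixels are skipped.
    """
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(img != empty)):
        if labels[seed]:
            continue
        current += 1
        labels[seed] = current
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                        and not labels[ny, nx] and img[ny, nx] != empty
                        and abs(float(img[ny, nx]) - float(img[y, x])) <= lam):
                    labels[ny, nx] = current
                    queue.append((ny, nx))
    return labels
```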
Our method consists in splitting the xy grid as a particular polar grid that we will introduce in section 3.2. In order to adapt the grid cells to the point density, their size increases with the distance to the scanner. Then, the BEV image is interpolated in each circular sector. The method uses the Imin, Imax and Iacc BEV images, which store respectively the minimal elevation, the maximal elevation and the number of points projected in each pixel. To obtain these images we use a resolution of 5 pixels/m for the xy grid, that is, the size of the pixel side is 20 cm. The BEV images are 8-bit encoded, and the resolution used for the elevation is 10 gray levels/m.
Differently from the original work, we interpolate and segment the Imax image for a higher confidence in the detected ground. The method can be divided into the following steps:
1. identify the ground around the scanner;
2. build a polar grid and interpolate values;
3. compute λ-flat zones and extract the ground on the BEV image;
4. back project the ground label from the BEV image to the 3D points.
Figure 3. A zoom of the image Imax: in a narrow street, the road in front of the car is disconnected from the rear. In red, pixels in the closest ring around the scanner are detected as ground.
Identify the ground around the scanner
The first step is to retrieve the part of the ground closest to the car. The goal is to reconnect the road in front of the car with the one behind. In the original method, the ground is identified as the biggest λ-flat zone found after segmenting the image. In situations where the car is navigating through narrow streets, this assumption may not be verified, simply because the ground in front of the car may not be connected with the ground in the rear, as shown in Figure 3. In the proposed example, pixels at the sides of the car represent either a wall or other cars, and the ground in the front is disconnected from the one behind. To solve this problem, we detect the ground among the pixels in the closest ring around the car. These pixels will be used later on as markers to detect which λ-flat zones belong to the ground and merge them together. We start by identifying the circle made of void pixels around the scanner using a morphological reconstruction by dilation. We use as marker image f the image that is non-zero only at the scanner position, f(x, y) = 255 if (x, y) = (x0, y0) and 0 otherwise, where (x0, y0) is the pixel corresponding to the position of the scanner in the image. Furthermore, we use as mask image g the indicator of void pixels: g(x, y) = 255 if no point is projected onto (x, y), and 0 otherwise. Thus, the image Ic containing the identified circle is obtained as Ic = Rδg(f). Then, we detect among the points in the closest ring around the car those belonging to the ground. To achieve this, we first locate the ring R around the car applying a morphological external gradient defined as Ir = δB(Ic) − Ic, where B is a structuring element of size 1 m². The ring is the set R = {(x, y) | Ir(x, y) = 255}. Then we compute the smallest z value of Imax on the set R. Finally, we assign as ground only the pixels (x, y) in the ring R such that |Imax(x, y) − z| < 0.5 m. Figure 3 illustrates in red the resulting detected ground.
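The marker/mask construction and the external gradient above can be sketched with standard morphological operators, for instance those of scikit-image. The function below is illustrative only: the structuring element size, the grayscale conventions, and the assumption that the scanner pixel itself is void are all simplifications of the procedure described in the text.

```python
import numpy as np
from skimage.morphology import reconstruction, dilation, square

def ground_marker_ring(i_acc, i_max, x0, y0, z_tol=0.5):
    """Sketch of section 3.1: ground marker on the ring around the scanner.

    Assumes the scanner pixel (x0, y0) itself is void and that i_max is
    expressed in the same unit as z_tol (metres here).
    """
    mask = (i_acc == 0).astype(float) * 255.0   # indicator of void pixels
    marker = np.zeros_like(mask)
    marker[y0, x0] = 255.0                      # non-zero only at the scanner
    # Circle of void pixels connected to the scanner position.
    i_c = reconstruction(marker, mask, method='dilation')
    # External morphological gradient locates the ring around that circle
    # (the 3x3 structuring element is an assumed size).
    ring = dilation(i_c, square(3)) - i_c
    ring_px = ring > 0
    z_min = i_max[ring_px].min()
    return ring_px & (np.abs(i_max - z_min) < z_tol)
```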
Build Dartboard and Interpolate image
In the second step, we interpolate the information contained in the Imax image. This is a necessary step in the method because it fills information on void pixels. Namely, we define a polar grid and then map the pixels of the Imax image onto its elements. To better explain our choice, let us analyze how points are spatially located in an ideal environment where the ground is a plane orthogonal to the axis of the scanner. The scanner spins around its vertical axis. Looking at the points for a fixed yaw angle, as in Figure 4, we can see that the distance of the points from the scanner grows with the tangent of φi. Thus, in this context, a polar grid on the xy plane, where the length of the intervals in the radial axis increases with a tangential trend, would be better suited than the Euclidean grid to prevent disconnections. To define the intervals in the radial axis, let us first consider the layers l1, . . . , ln of the scanner, and let 0 ≤ φ1 ≤ . . . ≤ φn ≤ π/2 be their respective inclination angles. Furthermore, let h be the distance between the scanner and the ground. In the hypothesis of an ideal environment, we can estimate the radial distances ri of the scanned points as ri = h · tan(φi), for all i = 1, . . . , n.
Hence, we split the radial axis with intervals [ri, ri+1), for i = 0, . . . , n, where r0 = 0 and rn+1 = ∞. In this way, the profile of the ground in the grid remains connected, because at least one point falls in each cell. Differently from the radial axis, we choose angles 0 ≤ θ1 ≤ . . . ≤ θm ≤ 2π to evenly split the angular axis. Thus, each element S of the dartboard is defined as the set of pixels in the product [ri, ri+1) × [θj, θj+1). Figure 5 shows the dartboard obtained for the KITTI Benchmark (Geiger et al., 2012) (scanner height h = 1.73 m).
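The ring boundaries follow directly from ri = h · tan(φi). The sketch below uses the KITTI scanner height together with a handful of made-up inclination angles (the real values come from the sensor calibration) to show how the ring thickness grows with distance.

```python
import math

h = 1.73  # scanner height used for the KITTI benchmark, in metres
# Made-up inclination angles (measured from the vertical axis); the real
# values come from the scanner calibration.
phis = [math.radians(a) for a in (45, 60, 70, 78, 84, 88)]

radii = [0.0] + [h * math.tan(phi) for phi in phis] + [float('inf')]
# The intervals [r_i, r_{i+1}) define the rings; their thickness grows.
for r0, r1 in zip(radii, radii[1:]):
    print(f"[{r0:6.2f}, {r1:6.2f})")
```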
Once the dartboard has been generated, it is used to interpolate the information on the void pixels of the Imax image and obtain the interpolated image Îmax. For each pixel (x, y) in the Euclidean grid we compute its polar coordinates (r, θ) as r = √((x − x0)² + (y − y0)²) and θ = atan2(y − y0, x − x0), where (x0, y0) is the scanner location. The coordinates (r, θ) determine a circular sector S in the polar grid. In this way, we map each element in the Imax domain to an element of the dartboard. Now, let us define the image Î obtained by assigning to each pixel the minimum of its dartboard circular sector: Î(x, y) = min{Imax(u, v) | (u, v) ∈ Sk}, where Sk is the circular sector containing the pixel (x, y). Only the null (empty) pixels of Imax are interpolated. Thus, the final interpolated image Îmax is computed as the maximum between Imax and Î: Îmax(x, y) = max(Imax(x, y), Î(x, y)). Figure 6 illustrates an interpolation example: observe how peripheral pixels that were disconnected in the Imax image (Figure 6 (a)) are reconnected in Îmax (Figure 6 (b)).
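The sector mapping and the min-based interpolation can be expressed compactly with NumPy. The function below is a sketch of this step under the stated conventions (empty pixels encoded as 0, elevations non-negative); it is not the authors' implementation.

```python
import numpy as np

def interpolate_dartboard(i_max, radii, n_theta, x0, y0, empty=0):
    """Fill the empty BEV pixels with the minimum of their dartboard sector.

    radii are the ring boundaries r_0 < r_1 < ... and n_theta the number of
    evenly spaced angular sectors; empty pixels are assumed encoded as 0.
    """
    ys, xs = np.indices(i_max.shape)
    r = np.hypot(xs - x0, ys - y0)
    theta = np.arctan2(ys - y0, xs - x0) % (2 * np.pi)
    ring = np.searchsorted(np.asarray(radii), r, side='right') - 1
    wedge = (theta / (2 * np.pi) * n_theta).astype(int) % n_theta
    out = i_max.copy()
    for k in np.unique(ring):
        for j in range(n_theta):
            sector = (ring == k) & (wedge == j)
            vals = i_max[sector & (i_max != empty)]
            if vals.size:                       # sector minimum, if any point
                fill = sector & (i_max == empty)
                out[fill] = vals.min()          # only empty pixels are filled
    return out
```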
Compute λ-flat zones and extract ground on BEV image
λ-flat zones are computed on the Îmax image. Similarly to (Hernández and Marcotegui, 2009), we use λ = 0.20 m. An example of the obtained λ-flat zones is illustrated in Figure 6. Note that in this example the car is navigating through a narrow street and the road is divided into two main λ-flat zones. To merge the two connected components, the marker extracted in section 3.1 is used. Figure 6 (d) shows a zoom of the obtained segmentation around the car: the red ring in the center of the image is the ground marker detected in section 3.1. The detected ground is composed of the union of the λ-flat zones whose intersection with the detected ground marker is not empty. At the end of this step, the method returns a binary label BEV image Ig, whose non-zero pixels represent the detected ground. (d) Zoom around the car. Red pixels represent the ground detected in section 3.1. Figure 6. Example of analysis of the interpolated image obtained on frame 3721 of sequence 08 of the SemanticKITTI dataset.
Back project labels
Finally, the detected ground pixels must be projected back to the 3D point cloud. As said before, the ground is detected on the Imax image instead of the Imin image proposed in (Hernández and Marcotegui, 2009). Even though the two approaches seem similar, there is a significant difference between them that has to be considered before the back projection of the labels. Using the Imax image, ground points close to objects or under them (such as the ground under a tree) are not detected as ground. The reason for this problem is that Imax refers to the object elevation, and the λ-flat zone propagation cannot reach it from the ground. An example of this issue is illustrated in Figure 7: red points represent true positive ground points, blue points represent true negatives, and white points represent false negatives, i.e., ground points close to objects. This effect is stronger for lower resolutions. To solve the problem, ground pixels are extended over the Imin λ-flat zones. First of all, we compute the λ-flat zones on the Imin image. Then, we define the extended ground image Īg by propagating the ground labels over the λ-flat zones computed on the Imin image: each quasi-flat zone C obtained from Imin is entirely marked as ground as soon as it intersects the detected ground Ig. In this way, we assign as ground pixels containing both ground points and points belonging to objects. Hence, during the back projection, we need to separate this last group of points. To achieve this, let us consider F = {(x, y) | Ig(x, y) = 1}, the subset of the image domain made of pixels initially marked as ground, and F̄ = {(x, y) | Ig(x, y) = 0 ∧ Īg(x, y) = 1}, the subset made of pixels where the ground label has been extended. Let p be a point and let (xp, yp) be the pixel of the image domain where p is projected. If (xp, yp) ∈ F ∪ F̄ then the point p may belong to the ground. To decide if p belongs to the ground or not, we consider the difference between its elevation and the corresponding value in the Imin image, i.e. |pz − Imin(xp, yp)|. Two different threshold values, respectively δF = 20 cm and δF̄ = 5 cm, are used according to whether the pixel belongs to F or to F̄; in the latter case, where an object stands above the ground, the tolerance is lower in order to prevent the inclusion of the lower part of the object into the ground. The label l(p) assigned to the point p is ground if (xp, yp) ∈ F and |pz − Imin(xp, yp)| ≤ δF, or if (xp, yp) ∈ F̄ and |pz − Imin(xp, yp)| ≤ δF̄; otherwise p is labeled as non-ground.
(b) Results obtained while including the ground expansion on I min before the back projection.
EXPERIMENTS ON GROUND DETECTION
The proposed method is compared against two state-of-the-art algorithms and a naive RANSAC method on the SemanticKITTI dataset. We include RANSAC in the analysis as a baseline benchmark. The Cloth Simulation Filter (CSF) (Zhang et al., 2016) has proved great adaptability to a wide range of different environments, both urban and rural. In addition, we use an FCNN method similar to the one proposed in (Velas et al., 2018). The main difference with the original is the network used: instead of employing the architecture proposed by the authors, we use a U-Net architecture (Ronneberger et al., 2015) for its great versatility across applications. In order to train and validate the U-Net model, we select one scan out of every ten in the sequences from 0 to 10, except for sequence 08, which we adopt as the test set for all the methods. The split between training and test has been done following the directives provided with the dataset.
Since the dataset does not contain an explicit ground class, we derived it by aggregating multiple classes (Road, Parking, Sidewalk, Other-Ground, Lane-Marking and Terrain). Furthermore, to get an overview of the classification errors made in the predictions, we grouped all classes into a total of eight categories: Ground, Building, Vehicles, Cycles, Person, Vegetation, Fixed-Objects, and Moving Objects.
We use the following metrics to benchmark our experiments: precision P = TP/(TP + FP), recall R = TP/(TP + FN), accuracy A = (TP + TN)/(TP + TN + FP + FN), and Intersection over Union (IoU), also called the Jaccard index, IoU(A, B) = |A ∩ B| / |A ∪ B|, where TP, TN, FP, FN indicate respectively the number of true positives, true negatives, false positives and false negatives, and A, B are any two sets. The sets used to compute the Jaccard index are the set of predictions and the ground truth. In all cases the scores have been measured using the predictions on the 3D point clouds. Table 1 lists the scores obtained by the methods. The table is divided in two parts: the first lists the unsupervised methods and the second the supervised approach. All the analysed methods achieve good performances, and the FCNN achieves the highest score in almost all the metrics. Note that our proposed BEV λ-FZ method shows a good trade-off between precision and recall, and among the unsupervised methods it is the one with the highest Jaccard index. Moreover, this method needs just a few parameters to work, which makes it much easier to explain why it fails compared to the FCNN. Along with these metrics, we analyze the confusion matrix in order to evaluate which categories are confused with the ground. Thus, Figure 8 shows the confusion matrices. Vegetation is the category with the highest rate of points misclassified as ground: this category contains low plants, and separating them from terrain with propagation approaches is cumbersome. Table 1. Scores obtained by the compared methods: RANSAC, CSF (Zhang et al., 2016), FCNN (Velas et al., 2018) and BEV λ-FZ, the method proposed in this paper.
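For completeness, the four scores reduce to simple ratios of the four counts; note in particular that, for binary masks, the Jaccard index equals TP/(TP + FP + FN). A minimal sketch with made-up counts:

```python
def scores(tp, tn, fp, fn):
    """Precision, recall, accuracy and Jaccard index (IoU) from the counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    iou = tp / (tp + fp + fn)  # |A intersect B| / |A union B| for binary masks
    return precision, recall, accuracy, iou

# Illustrative counts only, not results from the paper.
print(scores(tp=9000, tn=800, fp=300, fn=400))
```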
Let us now analyse the results qualitatively and see some examples in which our proposed method fails. To visualize the predictions in the following figures we use the following color code: green for TP, red for FP, blue for FN and gray for TN. Figure 9 shows stairs classified as ground by our method (the λ value is larger than the step height); the CSF and FCNN methods do not completely prevent this error either. In Figure 10, the detection of a garden is missed because it lies behind a bush: the bush prevents the garden λ-FZ from reaching the ground marker around the car. From an autonomous driving perspective this is not an issue; it can even be seen as an advantage, as the garden behind the bush is not reachable by the car. Moreover, this kind of missed detection happens only in isolated zones of the scene that cannot be easily reached, which is confirmed by the high recall rate of the method. Finally, Figure 11 shows an example in which our method detects a terrain zone while the FCNN method misses it.
CONCLUSIONS
In this paper we propose a BEV grid in the form of a dartboard, with radial sectors whose size increases with the distance from the scanner. This grid fits the acquisition system, taking into account the height of the scanner with respect to the ground as well as the number of laser layers and their corresponding inclination angles. The resulting representation is ideal for further processing, avoiding the splitting of faraway objects due to low resolution as well as the loss of detail for nearby objects. The improved BEV representation is a better starting point for any BEV analysis approach. We revisit a simple ground detection method based on the assumption of small elevation variations, introduce the dartboard grid, and demonstrate good performance. We have compared our method with two state-of-the-art methods (CSF and FCNN) and a naive RANSAC on the SemanticKITTI dataset. Results show that our method is comparable with other state-of-the-art algorithms, even though FCNN is more precise. In our opinion, the few parameters used and the greater explainability in case of error compared to FCNN make our algorithm a good candidate for practical applications. Moreover, the proposed BEV representation can also be used in other learning strategies. | 2022-06-01T15:26:30.019Z | 2022-05-30T00:00:00.000 | {
"year": 2022,
"sha1": "ef24e75e82ce59802b4cf6561436e4b4d13a1163",
"oa_license": "CCBY",
"oa_url": "https://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XLIII-B2-2022/185/2022/isprs-archives-XLIII-B2-2022-185-2022.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "c0f58dc06ad05a8b77e7af4ed4e283e317849141",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": []
} |
267383708 | pes2o/s2orc | v3-fos-license | Simplifying Multimodal Clinical Research Data Management: Introducing an Integrated and User-friendly Database Concept
Background Clinical research, particularly where scientific data are concerned, grapples with the efficient management of multimodal and longitudinal clinical data. Especially in neuroscience, the volume of heterogeneous longitudinal data challenges researchers. While current research data management systems offer rich functionality, they suffer from architectural complexity that makes them difficult to install and maintain and requires extensive user training. Objectives The focus is the development and presentation of a data management approach specifically tailored for clinical researchers involved in active patient care, especially in the neuroscientific environment of German university hospitals. Our design considers the implementation of FAIR (Findable, Accessible, Interoperable, and Reusable) principles and the secure handling of sensitive data in compliance with the General Data Protection Regulation. Methods We introduce a streamlined database concept, featuring an intuitive graphical interface built on Hypertext Markup Language revision 5 (HTML5)/Cascading Style Sheets (CSS) technology. The system can be effortlessly deployed within local networks, for example, in Microsoft Windows 10 environments. Our design incorporates FAIR principles for effective data management. Moreover, we have streamlined data interchange through established standards like HL7 Clinical Document Architecture (CDA). To ensure data integrity, we have integrated real-time validation mechanisms that cover data type, plausibility, and Clinical Quality Language logic during data import and entry. Results We have developed and evaluated our concept with clinicians using a sample dataset of subjects who visited our memory clinic over a 3-year period and for which several multimodal clinical parameters were collected. A notable advantage is the unified data matrix, which simplifies data aggregation, anonymization, and export. This streamlines data exchange and enhances database integration with platforms like Konstanz Information Miner (KNIME). Conclusion Our approach offers a significant advancement for capturing and managing clinical research data, specifically tailored for small-scale initiatives operating within limited information technology (IT) infrastructures. It is designed for immediate, hassle-free deployment by clinicians and researchers. The database template and precompiled versions of the user interface are available at: https://github.com/stebro01/research_database_sqlite_i2b2.git .
Background and Significance
Thus, efficient research data management (RDM) is essential in modern medicine and research,5 and provides researchers with access to routinely collected data. It provides the basis for scientific research, promotes scientific data publication, and increases reproducibility.1,5,6 In general, RDM describes the organization, storage, preservation, and sharing of scientific data.
Especially in neuroscience, RDM is indispensable. The complexity and volume of heterogeneous (multimodal) neuroscience data (e.g., laboratory, microscopy, imaging, clinical examination results and scores, electrophysiological data, etc.) require good documentation, processing, and standardization. Many neurological diseases, especially neurodegenerative diseases, need follow-up observations. In the most common neurodegenerative disease, dementia, the diagnosis is made from multimodal data including neuropsychological testing, imaging, functional imaging, cerebrospinal fluid levels, etc.7 Patients are evaluated annually, and documenting the large amount of longitudinal and multimodal data generated is challenging. With optimal RDM, the collected data are prepared for further relevant analysis methods and can be efficiently used for retrospective studies.2 Unfortunately, there is a large gap in RDM in practice, especially in the neurosciences. A recent online survey by the National Research Data Infrastructure (NFDI-Neuro) on the state of RDM clearly shows the existing problems.1 Routinely collected clinical and scientific data are usually incomplete or not retrievable.2 Knowledge of and adherence to data and metadata standards are often limited.6 Many researchers and clinicians lack the time to invest in standards-compliant data processing and management. In addition, there is a lack of secure data management in terms of privacy and secure data sharing.1

How can the Gaps in Research Data Management be Closed to Ensure Optimal Data Storage, Processing, and Standardization?
Recognizing the urgent need for efficient and user-friendly data management systems tailored for clinicians working with multimodal and longitudinal data, our initial research aimed to assess the suitability of existing platforms. However, a gap immediately became apparent: the needs of clinical researchers were not being adequately met by current solutions. For example, while platforms such as the Longitudinal Online Research and Imaging System (LORIS) offer rich feature sets, their complexity and steep learning curve make them unsuitable for smaller projects or for clinicians with limited technical skills. This discrepancy is exacerbated by the unstructured and haphazard data management methods commonly used in clinical settings, often relying on rudimentary folder structures and Excel spreadsheets. Furthermore, existing platforms rarely address the specific challenges posed by GDPR regulations around sensitive data, or the ethical imperatives that require localized data storage and restricted access. Finally, the constraints of local IT infrastructures designed with data security in mind can make server-based solutions impractical for routine operations. These multiple, intertwined challenges served as the primary catalyst for our project, highlighting the urgent need for a system that reconciles usability with the complex ethical and regulatory landscape of clinical RDM.8

Objectives

Initially, we researched the most appropriate RDM solution for clinical researchers, particularly in neuroscience, where the need to capture a variety of data types and manage their longitudinal nature is paramount. However, we pinpointed a gap in existing solutions: they frequently fall short in catering to small-scale projects and offline capabilities, while also failing to balance ease of implementation with adherence to Findable, Accessible, Interoperable, and Reusable (FAIR) principles5 and General Data Protection Regulation (GDPR) regulations.9 This prompted us to develop a specialized solution aimed at
• storing diverse, longitudinal clinical data along with metadata;
• prioritizing user-friendly design and intuitive data capture;
• ensuring offline functionality and seamless IT integration;
• focusing on small-scale projects while maintaining data interchangeability with other applications.
Methods
In the following section, we describe the steps for concept development, as shown in ►Fig. 1.
Step 1: Analysis of User Demands and Definition of Minimal Technical Requirements for Research Data Management
As a first step, we defined the requirements for the target group of clinical researchers (users) working in patient care on a daily basis, based on a set of minimum technical requirements as follows.
User Demands
To define the requirements for a clinical research data storage system from a user perspective, we conducted a standardized expert telephone interview with a total of 41 (open and closed) questions (duration: approximately 35-45 min).We conducted a total of 22 expert interviews, 20 of which were finalized and analyzed for our study.Of the final 20 participants, 11 were aged between 25 and 34 years, 8 were between 35 and 44 years, and 1 participant was over 45 years.
In terms of their professions, 14 were physicians, 2 were psychologists, and 4 were researchers; the interview is available in ►Supplementary Table S1 (available in the online version only).
Minimal Technical Requirements
Following the German Research Foundation guidelines, we established minimum requirements for an optimal RDM system.10,11 In establishing specific criteria, we focused on elements such as FAIR principles, data security, validation, HL7 integration, and offline functionality. To bolster data interoperability and sustainability, we mandated the incorporation of extensive metadata, persistent identifiers, and standardized data formats, such as HL7.12 Additionally, we aligned with clinical classification systems like Logical Observation Identifiers Names and Codes (LOINC) and the Systematized Nomenclature of Medicine and Clinical Terms (SNOMED-CT). For data security, we added role-based access control and potential pseudonymization techniques to the criteria. The specific criteria are listed in ►Table 1 under the category of "Technical requirements."

Step 2: Use Case: Clinical Research Question with Multimodal Neuroscientific Data

Based on the defined user needs and technical requirements for RDM, we aimed to investigate how these criteria apply to a research question involving a large amount of multimodal data. We decided to use the (retrospective) clinical course data of patients who visited our memory clinic at the Department of Neurology during the period 2014 to 2022 with a diagnosis of mild cognitive impairment (MCI). For illustration, please refer to ►Supplementary Fig. S1 (available in the online version only).
Step 3: Literature Research and Search for Suitable Existing Research Data Management Solutions

In our study, we conducted a literature search following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement,13 focusing on "database solutions for RDM." We included 25 tools, categorized into Electronic Data Capture (EDC), clinical trial data management systems (CTMS), specific data management systems, and Electronic Laboratory Notebooks (ELN). Categorization is challenging because there is no universally accepted nomenclature, leading to the use of different terminologies in the field. We also distinguished between well-known and lesser-known EDC and CTMS. Our research included an extensive literature review to ensure that we covered prominent systems and presented a holistic view of these essential tools. ►Table 2 lists the available tools with their technical requirements and a brief functional description.
Step 4: Applying Research Data Management Criteria and Evaluation: Towards a Custom Solution

We conducted an internal review of various existing solutions to assess their suitability for our specific needs. While many of these platforms offer a broad range of features and applications, they frequently fell short in several key areas. These included data storage options, either cloud-based or requiring a local server, as well as ease of implementation and administration. Even seemingly straightforward solutions like DataLad14 or Research Electronic Data Capture (REDCap)15 come with their own challenges, such as the need for administrative rights and complex setup processes. Additionally, cost considerations were a factor. We also noted that many of these tools lack support for clinical classification systems, which would enable the use of international standards for data capture, despite the feasibility of implementing such features. A detailed rationale for the selection of the various tools under consideration is elaborated upon in the "Results" section.
This outcome prompted the development of our custom solution, tailored to meet our defined requirements. SQLite was chosen as a serverless and self-contained database management system. It is well suited for small applications and research projects due to its lightweight and portable design. The entire database is stored in a single file, making it easy to manage, back up, and share. SQLite, compatible across platforms and adhering to the atomicity, consistency, isolation, and durability (ACID) standard,16 is a continuously evolving open-source project supported by the community.17 Leveraging HTML5/CSS for front-end development, we utilized Electron to create a cross-platform desktop application, ensuring a consistent user experience and simplifying development and maintenance due to the extensive use and documentation of these web technologies.

Step 5: Implementation of the Database Design

We have based our design on the i2b2 Common Data Model (CDM)18 star schema, prevalent for its efficient querying and analysis of clinical research data. This model enhances our database by offering scalability to accommodate growing clinical research datasets. Despite relying on join operations, its normalized structure ensures data consistency, efficient storage, and easier updates, thereby preventing data redundancy. Crucially, its global, unified representation accelerates data access and expands analytical capabilities across multiple subject areas by eliminating the need for separate tables.
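To make the star-schema idea concrete, the following is a hypothetical sketch of such a central fact table in SQLite; the column names echo the i2b2 conventions mentioned in the text, but the authors' actual template may differ.

```python
import sqlite3

conn = sqlite3.connect("research.db")
# Central fact table of the star schema: one row per observation,
# keyed by subject, coded concept and time.
conn.execute("""
CREATE TABLE IF NOT EXISTS OBSERVATION_FACT (
    PATIENT_NUM INTEGER NOT NULL,
    CONCEPT_CD  TEXT    NOT NULL,  -- e.g. a LOINC or SNOMED-CT code
    START_DATE  TEXT    NOT NULL,  -- visit/observation timestamp
    TVAL_CHAR   TEXT,              -- textual value, if any
    NVAL_NUM    REAL,              -- numeric value, if any
    IMPORT_DATE TEXT,              -- auditing extension described above
    UPLOAD_ID   INTEGER            -- creator's ID, described above
)
""")
# "LOINC:12345-6" is a placeholder concept code, not a real assignment.
conn.execute(
    "INSERT INTO OBSERVATION_FACT VALUES (?, ?, ?, ?, ?, datetime('now'), ?)",
    (1, "LOINC:12345-6", "2022-05-01", None, 27.0, 42),
)
conn.commit()
```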
Step 6: Implementation of Data Security

GDPR compliance was a key focus during development, necessitating data security at two levels. First, access to the SQLite database (DB) is controlled by assigned read/write permissions within the local or network drive. Second, the user interface (UI) utilizes a JavaScript class to dynamically generate user-specific views and handle "Create, Read, Update, Delete" operations, thereby ensuring secure management of database entries. To comply with GDPR regulations, all data and data operations are tagged with a specific date, ensuring that research data are only retained for as long as is necessary to fulfill their intended purpose. At the end of this period, or at the data subject's request, the data can be securely deleted, in line with the GDPR's "right to be forgotten" provision.
Step 7: Implementation of Data Quality Control

To reduce erroneous data entry and improve data quality, we implemented type validation and duplicate detection methods directly in the input and import functions within the UI.
In addition, we implemented logic-based rules based on Clinical Quality Language (CQL) 19 that provide direct user feedback on any rule violations during data entry and import.
Integrating CQL using the CQL Framework (GitHub repository: https://github.com/cqframework) into our research database involves a two-step process. Firstly, CQL statements are transformed into JSON ELM (JSON expression logical model) representations. Secondly, the converted rule is interpreted and executed through JavaScript code. ►Supplementary Fig. S2 (available in the online version only) provides a graphical illustration of the process.
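As a rough illustration of the second step, the sketch below mimics how a declarative plausibility rule might be interpreted at entry time. The rule format is invented for illustration and is not the JSON ELM schema; the paper's actual interpreter runs in JavaScript.

```python
# A toy rule: an "age" value must lie between 0 and 120 to be plausible.
rule = {"concept": "age", "op": "between", "low": 0, "high": 120}

def check(rule: dict, value: float) -> bool:
    """Return True if the entered value satisfies the plausibility rule."""
    if rule["op"] == "between":
        return rule["low"] <= value <= rule["high"]
    raise ValueError(f"unsupported operator: {rule['op']}")

assert check(rule, 67)          # accepted entry
assert not check(rule, 300)     # would trigger direct user feedback in the UI
```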
Step 8: Designing the Standard Views for the Graphical User Interface (Front-end)

During development, we delineated a data pathway comprising (1) new subject entry, (2) new visit creation, (3) individual visit observations, or (4) fixed observations within a visit, resembling a Clinical Research Form (CRF). ►Fig. 2 provides an illustration of this data path alongside the final UI.

Step 9: Validation of Data Consistency and User Satisfaction Survey

For data validation during import and manual data entry, we utilized the data collected as part of our use case (Step 2). We progressively compared the representation of the data in the database with the collected data in terms of concept, data type, values, and temporal alignment, and made step-by-step adjustments. A detailed description of this can be found in the "Results" section and in ►Fig. 3.
To evaluate user experience, we conducted a streamlined questionnaire-based survey focusing on our DB front-end (see ►Supplementary Table S2 [available in the online version only]). After a 30-minute introduction and an hour of independent work with the UI, 10 participants responded to 18 questions on a 1-to-5 agreement scale.
Results
This project involved developing an SQLite-based research database template and a user-friendly front-end for data entry. The database template, along with precompiled front-ends for Windows, macOS, and Linux, is available at the GitHub repository: https://github.com/stebro01/research_database_sqlite_i2b2.git.
In this section, we present (1) expert interview findings on on-demand analysis, (2) the DB structure, (3) the implementation of data validation tools using real-time clinical data, (4) user feedback on our solution, and finally, (5) specific use cases for our solution.
Expert Interview Findings on On-demand Analysis
All respondents were directly involved in the design and conduct of experimental and clinical trials. Their level of experience was subjectively considered to be at least intermediate. The essential requirements identified by the respondents are presented in ►Table 1. A key finding from the interviews was that none of the respondents used a standardized method or system for managing clinical research data. All participants expressed a strong interest in a software solution easily implemented within their existing IT infrastructure.
Evaluation of Existing Tools and Rationale for Selection
To assess existing solutions, we conducted comprehensive research, primarily focusing on EDC tools; the findings are summarized in ►Table 2. While we identified several potential candidates, the following outlines our reasoning against their selection.
REDCap,15 for instance, is a renowned tool for capturing and managing data, particularly in large clinical studies. However, it falls short on several of our criteria. Firstly, REDCap operates as a server-based solution with cloud storage, which contradicts our requirement for offline capabilities. Secondly, any changes to the data structure require approval from the REDCap team, limiting flexibility. Lastly, it lacks built-in support for clinical classification systems like SNOMED-CT and LOINC and has a steep learning curve. DataLad,14 on the other hand, is a free, open-source platform allowing users to manage data on their local machines. While it offers advanced features like data versioning and structured storage, its technical demands and command-line-based features make it less user-friendly. Moreover, it lacks specialized support for clinical classification systems. OpenClinica20 is another notable tool that offers both free and paid versions. We ruled it out primarily due to the lack of explicit support for clinical classification systems and its IT requirements.
DB Structure
Built upon the i2b2 CDM18 star schema, our database structure, detailed in ►Supplementary Table S3 (available in the online version only), integrates specific tables, views, and triggers. A comprehensive technical description is available in both the ►Supplementary Materials and the associated GitHub repository.
Utilizing the i2b2 schema, our data structure centers around the OBSERVATION_FACT table, with extensions for auditing purposes, such as IMPORT_DATE, DOWNLOAD_DATE, UPDATE_DATE, and UPLOAD_ID. In the current release, auditing primarily involves timestamping new data (via IMPORT_DATE) and tagging it with the creator's ID (UPLOAD_ID), as well as logging any data modifications (UPDATE_DATE). We have designed a table called "NOTE_FACT" intended for more comprehensive auditing, which would capture differential data changes, data deletions and restorations, and database query access. This feature is currently inactive in the UI but is on the roadmap for future releases.

Fig. 3 Step-by-step import process demonstrating the reduction in error rates throughout each stage of the data import procedure. CQL, Clinical Quality Language.
Data Concepts
Data points are standardized in the CONCEPT_DIMENSION table, utilizing globally recognized classifications like the International Statistical Classification of Diseases and Related Health Problems (ICD-10), LOINC, and SNOMED-CT for interoperability, promoting data consistency and exchangeability across health care systems.21
User Management
We manage user access via the USER_MANAGEMENT and USER_PATIENT_LOOKUP tables, controlling data access and permissions, ensuring data are only accessible to authorized users.
Clinical Quality Language
To facilitate quality management, a dedicated CQL_FACT table stores CQL19 rules and their JSON ELM representations. The CONCEPT_CQL_LOOKUP table links these rules to clinical concepts in the CONCEPT_DIMENSION.
User Feedback
Finally, the NOTE_FACT table collects user feedback and allows note creation for individual subjects, paving the way for future enhancements like reminder or appointment management systems or a more extensive auditing system.
Front-end (Graphical User Interface)
We have developed a UI that simplifies data input through comma-separated values (CSV) imports or a CRF-like interface, with the graphical user interface (GUI) designed to focus on individual clinical visits, as demonstrated in ►Fig. 2. Further, the GUI grants administrative control over the user, provider, and location tables, in addition to managing CONCEPTS, inclusive of a SNOMED application programming interface (API) link. Our UI also streamlines data exchange processes by facilitating the import/export of all nonobservational data in JSON format.
Observational data can be imported and exported via a standardized CSV file. Additionally, we have incorporated HL7 JSON support for interchange. In the current version, we specifically focus on the HL7-CDA (version 2.0.1) standard, limiting our support to the "Composition" resource type. Within this resource, we employ properties such as "subject," "event," and "section" to encapsulate relevant patient and observational data.
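Purely for illustration, a "Composition"-style JSON envelope using only the property names mentioned above ("subject," "event," "section") might be assembled as follows; a real HL7-CDA payload is considerably richer, and the exact field layout here is an assumption.

```python
import json

composition = {
    "resourceType": "Composition",                    # resource type named in the text
    "subject": {"reference": "Patient/PSN-0001"},     # pseudonymized subject
    "event": [{"period": {"start": "2022-05-01"}}],   # the clinical visit
    "section": [
        {"title": "Observations",
         "entry": [{"concept": "LOINC:12345-6", "value": 27.0}]},
    ],
}
print(json.dumps(composition, indent=2))
```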
By default, observational data are exported in a pseudonymized manner (creating a UID for each exported object).
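A minimal sketch of such export-time pseudonymization, assuming one random UID per internal patient number; the mapping function and how it would be persisted are our assumptions.

```python
import uuid

_pseudonyms: dict = {}  # internal patient number -> exported UID

def pseudonymize(patient_num: int) -> str:
    """Return a stable random UID for a given internal patient number."""
    if patient_num not in _pseudonyms:
        _pseudonyms[patient_num] = str(uuid.uuid4())
    return _pseudonyms[patient_num]

print(pseudonymize(1))  # the export carries this UID, never the internal number
```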
Versatile Data Capture for Clinical and Neuroscience Research
Our RDM system has been designed with a strong focus on versatility, particularly for the collection of diverse clinical and neuroscientific data types. The database accepts standard text, numeric, and date formats, providing a foundation for the collection of behavioral data, psychometric assessments, and clinical findings. It is also equipped to handle raw data types, facilitating the storage of images, PDF documents, and specialized reports. Our data schema covers a wide range of research needs, including laboratory values and subject-specific documents such as privacy statements. Using SQLite's capabilities, the database is theoretically capable of storing more complex datasets such as neuroimaging and genetic information. However, it is important to note that we have not yet validated the system's performance with these larger datasets.
Implementation and Application of Data Validation to Real-world Clinical Data (Use Case)
We have introduced real-time data validation, incorporating type checking, CQL rule enforcement, and duplicate checks to ensure data integrity directly in the data entry and import routines. In the current version, the class responsible for adding data to the database performs double-entry checks. If a given subject already has a specific concept with the same value, the user is notified within the UI. At this point, the user has the option to either skip the entry or proceed with adding the data. However, since the data are stored in an SQLite database, double entries can also be managed directly through structured query language (SQL) statements.
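The SQL route could look like the following sketch, reusing the hypothetical OBSERVATION_FACT layout from the earlier example; it is illustrative, not the shipped implementation.

```python
import sqlite3

def is_duplicate(conn: sqlite3.Connection,
                 patient: int, concept: str, value: float) -> bool:
    """True if this subject already has this concept with the same value."""
    row = conn.execute(
        "SELECT 1 FROM OBSERVATION_FACT "
        "WHERE PATIENT_NUM = ? AND CONCEPT_CD = ? AND NVAL_NUM = ? LIMIT 1",
        (patient, concept, value),
    ).fetchone()
    return row is not None
```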
We implemented these features incrementally using real clinical data from our use case. In total, we used a data matrix with 8,985 data entries from 56 subjects, consisting of 82 different types (concepts). The iterative development of the import function, illustrated in ►Fig. 3, systematically addressed data types, concept-conforming data, and erroneous data like invalid character sequences or unsupported special characters.
We logged errors and their frequency at each optimization stage, only modifying the initial data matrix when necessary (e.g., incorrect data type or significant misspelling of concepts/answers). Alterations to the original Excel spreadsheet were successively stored and documented via the KNIME analytics platform.22
User Feedback on Our Solution
Overall, the survey results (►Supplementary Table S4, available in the online version only) indicated a high level of user satisfaction. Most respondents found the system easy to use, and the majority expressed confidence in their ability to use the system without technical support. The integration of different functions within the system was well received, and users reported that they could learn to use the system quickly. Users found the process of entering new subjects, visits, and observations simple and straightforward. The layout was considered user-friendly and provided a good overview of relevant data. Users also found it easy to export data from the system.
Specific Use Cases for Our Solution
Herein, we delineate potential application scenarios of our database concept for clinical research queries, with the decision-making process illustrated in ►Supplementary Fig. S1 (available in the online version only). First, a data scheme (like a CRF) is created, akin to designing an Excel sheet. More concepts can be added to the database, currently comprising over 800 in the CONCEPT_DIMENSIONS table, if required. The researcher can then store data in the database through the front-end, with three primary constellations:
1. Single user: Ideal for clinical researchers with limited subjects; the user creates a CRF, stores data in their local SQLite database, and directly manages the data with our solution's database structure and front-end.
2. Multiple users: A shared central data repository stored on a network drive, where users only access their created subjects, while an administrator can collate and evaluate all data. Role-specific rights control data access, and our solution also facilitates data merging from multiple users for joint analysis.
3. Multiple users/separate DB: Each user processes different subjects in a local DB version, with data later combined into a main database using the HL7 export and import function. Suitable for users at disparate locations needing later database consolidation, our solution provides an HL7 export function for this purpose.
Challenges in scenarios 2 and 3 may involve ensuring data consistency and integrity when merging data from multiple users and databases, necessitating clear data management and validation rules. In scenarios where multiple users are working on the same database, there is the potential for data conflicts when modifying, adding, or deleting the same data. A possible solution to this problem is to implement a locking mechanism for individual subjects or visits. When a user is editing, the data would be set to "read-only" for other users; this feature is currently under development. In addition, the current version lacks an "undo" function, which will be included in future updates. Merging data from different sources also presents challenges, particularly when dealing with new data, matching data with identical identifiers, or managing different versions of the same data. Currently, the import process is straightforward: new data are added if no matching subject exists, and existing data are appended if it does. If conflicting data with the same creation date are detected, the version with the most recent update is used. However, this approach may not be suitable for all cases, so users are encouraged to implement custom SQL statements or code as needed.
Finally, as the database can be stored on a local or network drive, it can be accessed remotely by various means, including remote desktop or network drive access, making it suitable for home office setups.
Discussion
Our work addresses the pressing need for a comprehensive data platform that simplifies the management of clinical research data, with a special emphasis on the needs of clinicians involved in research. Our SQLite-based database solution emerges as a user-friendly, secure, and efficient platform that particularly shines in scenarios where sophisticated server architectures or complex infrastructures are not readily available.
At the inception of our project, we embarked on a comprehensive needs analysis and rigorous internet research to identify the specific requirements and shortcomings of the existing alternatives. This inquiry revealed a pressing demand for a straightforward and intuitive system for data collection and management. The prevalent solutions often suffered from complex installation processes and operational challenges, making them ill-suited for small- and medium-sized projects. They frequently turned out to be excessively complicated and cumbersome, thereby diminishing their appeal for projects of smaller scale.
Acknowledging this gap, our aim was to develop a simpler, user-friendly alternative that offered ease of installation and manageability. Our concept, tested and validated in a clinical use-case scenario, was designed to fulfill the exigencies of real-world practice. To fine-tune our system and cater to user needs effectively, we conducted a user satisfaction survey at the culmination of the project. The invaluable feedback thus gathered will aid us in further refining and optimizing our system.
What do Researchers Need?
Scientific researchers often play multiple roles, including data collector, manager, and analyst.23 However, researchers usually lack proficiency in data management, even though it is crucial in the latter part of the research workflow.4 Although researchers are experts in their respective fields, their grasp of RDM is usually subpar, as revealed by studies and interviews. Many are unaware of standardized RDM tools and methods. Our survey and another study1 revealed a need for defined data and metadata standards, set workflows, and a reliable RDM infrastructure. The lack of standardization and of automated data exporting and anonymizing often hinders public data sharing. For instance, less than 40% of functional neuroimaging data are openly shared.23 Implementing RDM standards can enhance the quality and volume of scientific publications.24 Nonetheless, many researchers see themselves as key to improving RDM and implementing open science practices.25 Efforts are underway in Germany, like the establishment of an NFDI, targeting fields such as clinical neuroscience to ensure GDPR-compliant research.2 The complex process of adopting an RDM culture integrates technological, economic, and political aspects. The increasing relevance of the topic is evidenced by a surge in scientific publications since 2016.26 Our local project aims to connect to evolving data infrastructure interfaces, emphasizing the use of standard classifications and established representation models for integration.
Our expert interviews with clinical scientists engaged in daily patient care highlighted a persistent need for research tools that seamlessly integrate into existing workplace infrastructures. These tools must effectively compete with ubiquitous applications like Excel. For many researchers, transitioning between their professional and research workspace is not as straightforward as one might assume. Consequently, there is a high demand for a bespoke database solution that functions as a "one-click" alternative to Excel, simplifying the user experience while facilitating scientific inquiry.
Research Data Management in Scientific Practice
In a review from 2016, Perrier and colleagues27 analyzed a total of 301 articles that examined RDM procedures in academic institutions. They were able to show that most of the work deals with the creation and initial storage of data (creating data and processing data as described, for example, in the Research Data Lifecycle Framework of the UK Data Archive). It is precisely this aspect of the scientific workflow that we are trying to improve here.
When it comes to managing clinical research data, researchers have a range of options at their disposal. These range from simple spreadsheet-based solutions to more complex database systems that demand advanced technical expertise to operate. Excel spreadsheets are among the solutions most used by clinical researchers.28 However, despite being a popular tool for data analysis, Excel lacks the structure and functionality required to manage complex clinical research data. Large datasets can quickly render Excel sheets unwieldy and error-prone, with error rates reported to be as high as 7 to 80%.29,30 Moreover, spreadsheets do not provide the security and backup functionality needed for sensitive clinical data.
To address these limitations, several popular web-based solutions are available, such as REDCap,15 OpenClinica,20 and LORIS.8 ►Table 2 in the methodological section provides a systematic overview of the most common solutions.
Implementing Findable, Accessible, Interoperable, and Reusable Principles and Data Sharing
When collecting clinical research data, researchers must adhere to well-defined standards and rules. The use of different International System of Units (SI) units alone presents a common, nontrivial challenge. Only when a high standard is established during data collection can general concepts such as the FAIR principles5 be effectively implemented.31 Our SQLite-based database solution uniquely addresses the challenges of clinical RDM by integrating user needs, technical requirements, and industry standards. It prioritizes a clear storage structure, ease of access, data security, and rights management, ensuring an uncomplicated and efficient RDM process.
The solution aligns with the FAIR principles, employing standardized data and metadata formats, facilitating data sharing and integration, and securing role-based access control. Each data observation is linked directly to a clinical concept and a visit, enriching it with valuable metadata, thus aligning with the researchers' "minimum requirement" approach for metadata entry.27 Our choice to integrate HL7, particularly HL7/JSON, into our SQLite database improves functionality and adherence to FAIR principles. This globally recognized standard ensures seamless data exchange with other systems, facilitating efficient data sharing and collaboration.32 The HL7/JSON integration also enhances data accessibility, allowing data to be easily parsed across platforms and applications due to its human-readable format.
Our SQLite database system offers structured storage for diverse clinical data types, enabling efficient search and filtering. This minimizes errors due to manual data manipulation. The user-friendly interface allows easy data management without demanding extensive technical knowledge, while the provision for CSV export supports further analysis.
Security and backup features safeguard sensitive clinical data against unauthorized access and loss. The database is stored on a network drive with defined user access, enabling secure, easy access from researchers' clinical workplaces. This is especially beneficial for research in resource-limited settings where advanced server infrastructure may not be available.
The system is also designed for straightforward adoption. By simply customizing the SQLite DB template and installing the front-end, researchers can start data collection and always maintain direct control over their data, making our solution an effective tool in current scientific practice.2 Moreover, the system's flexible design can easily adapt to future requirements and be repurposed for subsequent studies. Its standardized data template is not only conducive to data exchange but also effectively addresses the crucial need for interoperability.
It is important to note that the current version of the database solution is a small, local project. The current iteration can be seen as a feasibility study designed to directly investigate its implementation, technical architecture, and user satisfaction. While it is possible to use this solution in larger clinical trials with a multicenter approach, for example via shared network drives, it is not currently designed for this purpose. If the project is accepted and actively utilized by our local research groups, we would like to expand it further and implement new features. However, it is crucial to acknowledge that in the context of regulated clinical trials, an extensive certification process for the application would be required.
Meeting a Real-world Scenario: Use Case with Longitudinal Data from Mild Cognitive Impairment Patients
To demonstrate the applicability of our database solution, we selected a use case involving the longitudinal tracking of data from patients with MCI. This real-world scenario includes a variety of data types, including sociodemographic, clinical, laboratory, and neuropsychological data, as well as scores and parameterized neuroimaging data from MRI, CT, and electrophysiological studies such as EEG. We aimed to accommodate multiple classification standards, such as ICD-10, LOINC, and SNOMED-CT, and to manage data collected at different points in time.
While the SQLite database is theoretically capable of storing large files, our initial implementation focuses on storing structured reports of processed neuroimaging data. Specifically, we are focusing on VBM, SBM, or functional magnetic resonance imaging (fMRI) analyses presented in JavaScript Object Notation (JSON) or Extensible Markup Language (XML) formats. These structured reports can be fully integrated into the database using custom concept definitions.
Future Directions and Improvements
Looking to the future, there are several potential enhancements and improvements that could be made to the proposed database solution. One possible avenue for development is direct integration with open science data hubs, allowing for seamless data sharing and collaboration across multiple institutions. This would enable researchers to easily contribute and access data from a centralized repository, promoting greater transparency and reproducibility in research.
In addition to open science integration, another potential area for improvement is the export functionality of the database solution. Currently, the solution supports CSV and HL7 JSON export formats, but there is room for expansion to other popular analysis tools such as SPSS, Python, and R. By expanding the range of supported export formats, users will have greater flexibility in conducting further analysis on the collected data.
Exploring the implementation of a Fast Healthcare Interoperability Resources (FHIR) interface could be very beneficial. As an established standard for electronic health information exchange, FHIR could significantly improve the interoperability of our database. This would streamline the exchange and integration of data across different health care systems and platforms. In this context, the Translational Research Informatics and Data-management grid's focus on service-oriented architecture and proven interoperability strategies offers valuable insights.33 Another potential area for improvement is the implementation of automatic lifecycle management and backup routines. Currently, these tasks are performed manually by the database administrator, which may be time-consuming and prone to human error. By automating these tasks, the database solution can ensure greater data consistency and reliability.
One significant limitation of our database solution is that our primary consultation was with clinical scientists who have limited experience in RDM. However, the inclusion of many senior scientists in our consultations lends credence to the expressed need for a solution that is easy to implement. This would enable them to integrate RDM practices seamlessly into their existing workflows. Additionally, the transparency of data storage in a local database file may alleviate concerns around data privacy and ownership, potentially making our system more readily acceptable than preexisting, highly integrated alternatives.
Conclusion
In this study, we have developed and evaluated a user-friendly SQLite database with a front-end for streamlined clinical RDM.
While we do not consider our database solution to be superior to existing systems that address FAIR principles, it does offer distinct advantages in certain contexts. Our system is designed as a "one-click" implementation with a locally stored SQLite database, offering a straightforward setup for clinical scientists lacking database management expertise. By lowering the entry barriers in this way, our solution serves as a catalyst for establishing RDM practices in labs that might otherwise be hindered by technical complexity or budget constraints. Consequently, our system adds a layer of FAIR compliance to research environments that may currently lack it, enhancing the overall FAIR landscape.
A user-satisfaction survey confirmed high acceptance among our target group. However, further refinement and evaluation are needed to optimize performance, usability, and data security across varied research applications.
Clinical Relevance Statement
• Our SQLite-based database delivers a user-friendly, effortlessly installable solution for research data storage, processing, standardization, and sharing.
• This database proficiently manages multimodal and longitudinal data, and seamlessly integrates clinical classification systems like SNOMED-CT and LOINC.
• By adhering to FAIR principles and implementing standards for data storage and exchange like HL7-CDA JSON, our database not only streamlines RDM but also enhances the reproducibility and publication of scientific research.

Multiple Choice Questions

Question 2: Which of the following statements best represents one of the FAIR principles for data management?
a. The "Findable" principle emphasizes the use of persistent identifiers (e.g., DOIs) and rich metadata to ensure that datasets are uniquely identifiable and easily discoverable by both humans and machines.
b. The "Secure" principle focuses on safeguarding data integrity, confidentiality, and availability.
c. The "Exclusive" principle emphasizes controlled access to data, allowing only a select group of individuals or organizations to access and use the data.
d. The "Closed" principle refers to restricting data access to a specific group or organization, thereby limiting its availability and preventing broader reuse.
Answer: a.
Protection of Human and Animal Subjects
The study adhered to the World Medical Association Declaration of Helsinki's ethical guidelines for research involving human subjects and received approval from our local Ethics board (Reference: 2022-2658-Daten).
Fig. 1
Fig. 1 Flow chart illustrating the steps from the assessment of user and technical requirements, the definition of a concrete use case, the literature research, and the development and implementation of a solution concept to the validation of the result. RDM, research data management.
Fig. 2
Fig. 2 Illustration of the main views for entering clinical research data into the database using the UI. UI, user interface.
Question 1: Which statement accurately reflects the use and limitations of Excel spreadsheets in managing clinical research data?
a. Excel spreadsheets are ideal for managing complex clinical research data due to their advanced functionality and structure.
b. Large datasets can make Excel sheets prone to high error rates, ranging from 7 to 80%.
c. Spreadsheet-based solutions provide the necessary security and backup functionality for sensitive clinical data.
d. Clinical researchers seldom use Excel spreadsheets for data management.
Answer: b.
Table 1
User demands and technical requirements for the data management solution
Table 2
Overview of existing data management solutions based on literature research:
• Cloud-based or on-site installation
• Clinical data management software
• Data collection and management for research studies
• Drag-and-drop form design
• Support for web-based interfaces
• Supports multiple field types
• Clinical research form (eCRF) generation
Table 2 (Continued)
Abbreviations: CTMS, clinical trial data management systems; eCRF, electronic case report form; EDC, electronic data capture; ELN, electronic laboratory notebooks; GUI, graphical user interface. | 2024-02-03T06:17:44.990Z | 2023-07-14T00:00:00.000 | {
"year": 2024,
"sha1": "cefc000e0024404d2a937aab4d840156a8e8258d",
"oa_license": "CCBYNCND",
"oa_url": "http://www.thieme-connect.de/products/ejournals/pdf/10.1055/a-2259-0008.pdf",
"oa_status": "HYBRID",
"pdf_src": "Thieme",
"pdf_hash": "21310781c22fb8121d3df85ac19b8a9522bc26ac",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
244797519 | pes2o/s2orc | v3-fos-license | Markov Decision Process approach in the estimation of raw material quality in incoming inspection process
The incoming inspection process in any manufacturing plant aims to control quality, reduce manufacturing costs, and eliminate scrap and process failure downtime due to defective raw materials. Prediction of the raw material acceptance rate can guide raw material supplier selection and improve the manufacturing process by filtering out non-conformities. This paper presents a raw material acceptance prediction model (RMAP) developed based on Markov analysis. RFID tags are used to track the parts throughout the process, and a secondary dataset can be derived from the raw RFID data. In this study, a dataset is simulated to reflect a typical incoming inspection process; substations including Packaging Inspection, Visual Inspection, Gauge Inspection, Rework 1, and Rework 2 are considered. The accepted parts are forwarded to the Pack and Store station and stored in the warehouse, while the non-conforming parts are returned to the supplier. The proposed RMAP model estimates the probability of the raw material being accepted or rejected at each inspection station. The proposed model is evaluated using three test cases: case A (lower conformities), case B (higher conformities), and case C (equal chances of being accepted and rejected). Based on the outcome of the limiting matrix for the three test cases, the results are discussed. The steady-state matrix forecasts the probability of the raw material being in a random state. This prediction and forecasting ability of the proposed model enables industries to save time and cost.
Introduction
Materials purchased from providers and then utilized as inputs to the manufacturing process are referred to as raw materials in the inventory. Raw material management reduces production stoppages by having the appropriate quantity of material placed at the proper location in the raw material inventory, at the right time, and at a reasonable cost. With proper management, these expenses can be kept as low as feasible, and material flow can be linked with production [1].
Companies use inventory management to maintain a competitive advantage, stay in business, and grow their market share. It has been determined that inventory management accounts for 30-35 % of the material value [2].
Inventory, according to Taiichi Ohno, is one of the seven wastes that should be avoided. As the most visible component of a company's overall assets, it accounts for 5-30 % of total assets [3]. It is important to develop a model that fulfils companies' varied material management demands cost-effectively.
One of the most significant wastes in lean manufacturing is the defect. Product faults can be caused both by a poor procedure and by defective raw material. Addressing raw material faults early in the manufacturing process can reduce takt time and enable lean manufacturing.
Quality assurance includes the inbound inspection procedure, which verifies that the raw material quality meets the requirements agreed upon between the vendor and the buyer. This human-centric process has been automated in modern times employing various competitive technologies, with Radio Frequency Identification (RFID) deployment being one of the first. RFID tags are a significant advancement in the field of automated identification systems. Intelligent bar codes, or RFID tags, have a chip that records information about the goods. An RFID reader is used to read these tags and thus enables the tracking of objects. This technology was originally designed to track cattle, but its applications have subsequently broadened towards vehicle tracking [4], pets [5], and items in the manufacturing process [6]. Modelling manufacturing processes that include humans can result in erroneous findings. When examining a highly manual process, a study by [7] investigated reducing the inaccuracies in discrete event modelling. Decision models use the data log from these RFID readers to better understand the process parameters. The Markov Decision Process is one such model.
Markov chains and Markov processes model stochastic processes and are classified as discrete-time and continuous-time processes. They mathematically depict a process by showing its likelihood of moving between phases. This study's goal was to help assure right-first-time production by relating discrete event simulations to human performance models. Each station contains RFID readers, and the raw materials passing through them carry RFID tags.
The remainder of the paper is organized as follows. Section 2.1 describes the incoming inspection case study, and the substation decomposition is explained in Section 2.2. The Markov chain, the Markov process, and the stochastic process are detailed in Section 2.3. The estimation of the transition matrix is elaborated in Section 2.4. The proposed raw material acceptance prediction model (RMAP), used to estimate the probabilities of acceptance and rejection of the raw materials, is detailed in Section 2.5. Section 2.6 examines forecasting the state of the raw material at an arbitrary step, Section 3 discusses the proposed model's outcomes, and the findings are concluded in Section 5.
Model Development
A quality prediction model is developed based on the RFID data captured in an automotive manufacturing pipeline. In a typical automotive manufacturing plant, each substation is installed with an RFID reader, which logs the parts traversing through it. The data from the RFID readers are taken as the raw dataset. The estimation of the transition weights for the Markov analysis from daily performance is detailed in Section 2.6.
Incoming Inspection Case Study
The process flow diagram of the incoming inspection station of an automotive supply chain environment is shown in Figure 1. The workstation contains seven sub-sections: packaging inspection, visual inspection, gauge inspection, rework station 1, rework station 2, return, and pack and store. The material received from the supplier is expected to meet the specified material standards and undergoes a series of visual and dimensional checks for qualification.
The functions of the seven substations and their processes are explained visually in Figure 1. For minor changes, the materials are subject to corrections at the rework stations. Upon correction at Rework 1 and Rework 2, the materials are sent for re-inspection at the visual inspection and gauge inspection stations, respectively. The minor defects are addressed and re-inspected at the respective substations. When the defects are irreparable, the materials are returned to the vendor through the return gateway substation. The raw materials are unloaded at the incoming bay. The package is checked for its adherence to the specified packaging standards against a signed Packaging and Delivery Standard (PDS) document by experts from both parties. Packages that do not abide by the standards are returned to the vendor, and the inventory is updated accordingly. The accepted packages are sent for visual inspection.
At visual inspection, a skilled professional unboxes the material from the package and checks its colour, texture, and appearance against the master sample. Materials with minor deformities are sent for rework, and the reworked materials are re-inspected. Upon qualifying, they are sent to the next stage, where the dimensions are checked. Materials with irreparable defects after rework are returned to the vendor, as are materials with major deformations. The reworked and returned materials are updated in the inventory accordingly.
The visually qualified materials are passed to gauges to check whether they meet the specified dimensions within tolerance. This process is necessary to address fit-and-finish issues with mating parts. The OK parts are packed and stored at the warehouse for future use, whereas the major non-conformities are returned and the minor non-conformities are reworked and rechecked for acceptance. The reworked, returned, and OK conditions of the materials are updated accordingly in the inventory.
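For intuition, a part's path through this process can be simulated as a random walk on the state graph of Figure 1; the sketch below uses placeholder transition probabilities, not the values assumed later in the paper.

```python
import random

# Placeholder probabilities: each station forwards, reworks or returns a part.
TRANSITIONS = {
    "PI":  [("VI", 0.8), ("Return", 0.2)],
    "VI":  [("GI", 0.6), ("RW1", 0.2), ("Return", 0.2)],
    "GI":  [("PackStore", 0.6), ("RW2", 0.2), ("Return", 0.2)],
    "RW1": [("VI", 1.0)],   # reworked parts are visually re-inspected
    "RW2": [("GI", 1.0)],   # reworked parts are gauge re-checked
}

def simulate_part() -> str:
    state = "PI"  # every part enters at packaging inspection
    while state not in ("Return", "PackStore"):
        states, probs = zip(*TRANSITIONS[state])
        state = random.choices(states, weights=probs)[0]
    return state

# Empirical acceptance rate over many simulated parts.
print(sum(simulate_part() == "PackStore" for _ in range(10_000)) / 10_000)
```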
Finite State Machine Decomposition
A finite state machine (FSM) diagram decomposes the seven substations of the incoming inspection. Each substation is a state and is represented in a state machine diagram, as shown in Figure 1. The FSM model developed contains seven states. The transition of the states from one state to another is represented using arrows. During the transition to the next state, only the current state and the input are considered; the previous state is not considered. The complete transition of each substate, along with its description, is tabulated in Table 1. The outcome of each transition is expressed in the output column.
Markov Process / Markov Chain for Incoming Inspection
The functions in the substations are explained using a finite state machine diagram. Further, the functions are converted into a Markov chain. A discrete-valued Markov process is known as a Markov chain. The state-space of potential Markov chain values is finite or countable when the chain is discrete-valued. A Markov process is a stochastic process in which the process's history is irrelevant if the system's present state is known. The current state has all the knowledge about the past and present that may predict the future [8-13]. The Markov model is thus the one in which the present state is solely dependent on the preceding state: P(X_{n+1} = j | X_n = i, X_{n-1} = i_{n-1}, ..., X_0 = i_0) = P(X_{n+1} = j | X_n = i).
Stochastic Process and Markov chain
The study of how a random variable changes over time is known as a stochastic process. A stochastic process in which the system's state may be monitored at discrete points in time is referred to as a discrete-time stochastic process. In a continuous-time stochastic process, the system's state may be observed at any moment.
The transition probabilities are represented by an s × s matrix.
Initial probability distribution
The raw material enters the factory after quality inspection. At the inspection bay, every material is checked for its packaging standards as per the PDS. The state flow diagram with transition probabilities is depicted in Figure 2. Only if the material conforms with the PDS is it taken to the next step of quality checking. Since the only entry point for the raw material is Packaging Inspection (PI), the initial probability distribution P0 is given by:
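The distribution vector itself did not survive extraction; since all of the initial probability mass sits on the Packaging Inspection state, a plausible reconstruction (assuming the state order PI, VI, GI, RW1, RW2, Return, Pack and Store) is:

$$
P_0 = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 \end{pmatrix}.
$$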
Transition Matrix
Markov models are frequently used in engineering, operations research and time series analysis. In this work, a Markov model is used to describe the transitions between states at the incoming inspection station of an automotive assembly supply chain. A transition matrix (P) is an n × n matrix, where n is the number of states, built from the transition probabilities of the Markov process. Each element P_ij is the probability of transitioning from state i to state j, so 0 ≤ P_ij ≤ 1 must hold for all i and j. Consider the state machine diagram shown in Figure 2. The entries in a row aggregate the transition probabilities from one state to all other states; as a result, the values in each row must sum to 1. Such a matrix is referred to as a stochastic matrix. The developed model is computed to estimate the following results (a code sketch of these computations is given below): a. the probability of the raw material being accepted or rejected; b. a forecast of the probability of the raw material being at an arbitrary state; c. the steady-state probability for the given transition matrix.
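The three computations above reduce to matrix powers of P. The sketch below illustrates them with a hypothetical transition matrix; the actual probabilities used in this work are those of Table 2, which are not reproduced here:

```python
import numpy as np

# State order: PI, VI, GI, RW1, RW2, Return, PackAndStore.
# Illustrative placeholder probabilities only (real values are in Table 2).
P = np.array([
    [0.0, 0.8, 0.0, 0.0, 0.0, 0.2, 0.0],  # PI
    [0.0, 0.0, 0.6, 0.2, 0.0, 0.2, 0.0],  # VI
    [0.0, 0.0, 0.0, 0.0, 0.2, 0.2, 0.6],  # GI
    [0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0],  # RW1 -> re-inspected at VI
    [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0],  # RW2 -> re-inspected at GI
    [0.0, 0.0, 0.0, 0.0, 0.0, 1.0, 0.0],  # Return (absorbing)
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 1.0],  # PackAndStore (absorbing)
])
p0 = np.array([1.0, 0, 0, 0, 0, 0, 0])   # all initial mass on PI

# b. n-step probability: distribution of the material after n transitions.
print(p0 @ np.linalg.matrix_power(P, 5))

# a./c. Limiting (steady-state) matrix: a high matrix power converges to the
# long-run probabilities of ending in the absorbing states Return/PackAndStore.
print(p0 @ np.linalg.matrix_power(P, 1024))
```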
Probability of the raw material being accepted or rejected at each inspection state.
Three test cases were considered to evaluate the proposed model, and the weights are tabulated in Table 2. The three test cases are Case A (lower conformities), Case B (higher conformities) and Case C (equal chances of being accepted and rejected). The following states from the incoming inspection case study are considered: PI, VI, GI, RW1 and RW2.
During the initial stages, when the process has just begun, the measurables (attribute and variable data) may differ considerably from the expected values, and hence the error value may be large. Moving forward, the differences would be resolved and the error would come down to a minimum. For each of the above test cases, the limiting matrix is calculated, and the results are discussed in the following section.
Results and Discussion
This section presents the results of the model for the three scenarios: the limiting probability, used to estimate the probability of the raw material being accepted or rejected at each inspection state; the n-step probability, used to forecast the probability of the material being at an arbitrary state; and the steady-state probability for the given transition matrix. The probabilities of the raw materials in each state moving to Return and to Pack and Store are depicted in Figure 3. In the long run, 77% of the raw materials would be returned to the supplier owing to poor quality and packaging; thereby, only 23% of the raw materials would reach Pack and Store. This procedure is intended to drastically improve the packaging quality of the raw material. At the visual inspection stage, the chances of the raw material being accepted and rejected are 55% and 45%, respectively. Once the material reaches gauge inspection, there is a 64% chance that it would get through this checkpoint and reach the desired destination, Pack and Store. Interestingly, after rework there are almost equal chances of the raw material being accepted and rejected (46% accepted); thus, 54% of the parts have major non-conformities. Table 3 lists the state transitions for the three assumed scenarios.
Conclusion
This paper presented a raw material acceptance prediction (RMAP) model developed based on Markov analysis. The proposed RMAP model estimates the probability of the raw material being accepted or rejected at each inspection station (Packaging Inspection, Visual Inspection, Gauge Inspection, Rework1, and Rework2). The accepted parts are forwarded to the Pack and Store station and stored in the warehouse, while the non-conforming parts are returned to the supplier. The incoming inspection process in any manufacturing plant aims to control quality, reduce manufacturing costs, eliminate scrap, and avoid process-failure downtimes caused by non-conforming raw materials. Prediction of the raw material acceptance rate can inform raw material supplier selection and improve the manufacturing process by filtering out non-conformities. Beyond the limiting matrix estimation, the trajectory of every accepted raw material flowing from Packaging Inspection to Pack and Store can be used to rank the material into subcategories, and this ranking of the materials can enable quality output. | 2021-12-02T20:06:46.883Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "90a62f0962f8c40bbfe95b4fed3fb0e7d05556d9",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/2107/1/012025",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "90a62f0962f8c40bbfe95b4fed3fb0e7d05556d9",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Physics"
]
} |
225648882 | pes2o/s2orc | v3-fos-license | Fate of Diclofenac and Its Transformation and Inorganic By-Products in Different Water Matrices during Electrochemical Advanced Oxidation Process Using a Boron-Doped Diamond Electrode
The focus of this study was to investigate the efficacy of applying boron-doped diamond (BDD) electrodes in an electrochemical advanced oxidation process for the removal of the target compound diclofenac (DCF) in different water matrices. The reduction of DCF, and at the same time the formation of transformation products (TPs) and inorganic by-products, was investigated as a function of electrode settings and the duration of treatment. Kinetic assessments of DCF and possible TPs derived from data from the literature were performed, based on a serial chromatographic separation with reversed-phase liquid chromatography followed by hydrophilic interaction liquid chromatography (RPLC-HILIC system) coupled to ESI-TOF mass spectrometry. The application of the BDD electrode resulted in the complete removal of DCF in deionized water, drinking water and wastewater effluents spiked with DCF. As a function of the applied current density, a variety of TPs appeared, including early stage products, structures after ring opening and highly oxidized small molecules. Both the complexity of the water matrix and the electrode settings had a noticeable influence on the treatment process's efficacy. In order to achieve effective removal of the target compound under economic conditions, and at the same time minimize by-product formation, it is recommended to operate the electrode at a moderate current density and reduce the extent of the treatment.
Introduction
The growing consumption of pharmaceuticals worldwide is resulting in an increasing threat to human health and the aquatic environment, as many of these polar and semi-polar micropollutants are not removed, or only incompletely removed, by traditional biological treatment technologies in wastewater treatment plants [1]. Advances made in environmental analytical chemistry have made it possible to monitor such persistent pollutants in water bodies, such as surface and groundwater, in wastewater effluents, and in drinking water at very low levels [2][3][4][5].
Among the large variety of pharmaceutical compounds present in the environment, the nonsteroidal anti-inflammatory drug diclofenac (DCF) is one of the most-consumed substances; the annual intake worldwide is estimated to exceed 1000 tons [6]. The annual consumption of DCF per capita varies between 195 and 940 mg [5]. DCF is known to be highly soluble in water. Its log Kow value of 4.5-4.8 suggests a potential for adsorption into mixed liquor during wastewater treatment. However, at a pH value above its pKa of 4.0-4.5, the carboxyl group of DCF dissociates and the molecule becomes negatively charged, exhibiting a lower tendency to adsorb into sludge (log kD: 1.2-2.1). Additionally, DCF is only moderately biodegradable under suboxic and anoxic conditions (kbiol ≤ 0.1 L/g·day) [5][6][7]. Thus, conventional wastewater treatment, as summarized by Lonappan et al., is limited due to these properties [6]. Other technologies, such as ozonation, membrane filtration methods or adsorption onto activated carbon, are highly effective against DCF, but are also cost-intensive [3,8].
DCF has been detected in effluents of wastewater treatment plants (WWTPs) in concentrations of up to 10 µg/L, in surface (up to 15 µg/L) and in groundwater (up to 0.15 µg/L), and in drinking water in the low ng/L range [5,[8][9][10][11]. In combination with other drugs present in the water, the toxicity of DCF was reported to increase considerably [10][11][12][13]. Due to the demonstrated risk to the aquatic environment, the European Union (EU) considered emerging contaminants, and put DCF on the first watchlist for monitoring [14]. In addition, environmental quality standard values of 0.1 µg/L for inland waters and 0.010 µg/L for coastal waters were proposed for DCF [14,15].
Nowadays, researchers are seeking new and efficient technologies to improve the situation, resulting in innovative and low-cost alternatives for the treatment of drinking water and the post-treatment of wastewater effluents. In recent years, advanced oxidation processes (AOPs) have gained attention as very effective technologies in the oxidation of numerous organic micropollutants [16][17][18]. AOPs are based on the generation of free radicals, mainly the hydroxyl radical with high oxidizing power, which can successfully attack most organic molecules with elevated reaction constants ranging from 10^6 to 10^10 M^−1 s^−1 (e.g., [19,20]). Regarding the method for generating hydroxyl radicals, AOPs are divided into four groups: chemical (e.g., O3/H2O2; H2O2/Fe2+/3+ (Fenton)), sonochemical (e.g., H2O2/ultrasonic), photochemical (e.g., H2O2/UV; H2O2/UV/Fe2+/3+ (photo-Fenton)) and electrochemical [e.g., boron-doped diamond (BDD) electrodes] [18,[21][22][23][24]. A variety of advanced treatment methods, such as the application of UV or UV/H2O2, radiation, ozonation, sonolysis and electrochemical treatment, has been discussed by Schröder et al. regarding their efficiency in degrading DCF in different water sources [5]. Most of these technologies require cost-intensive chemicals, have to overcome environmental and safety issues, or have to apply energy-rich filtration or irradiation. The advantages of electrochemical AOPs are based on their in situ generation of a variety of oxidants under economically and environmentally friendly conditions [5].
The application of BDD in electrochemical AOPs has been successful in the removal of a variety of organic pollutants from water and wastewater. Several studies relate to the effectiveness of DCF treatment using BDD electrodes [11,[25][26][27][28]. Due to the inertness of the diamond, its chemical stability and the high oxygen evolution overpotential, these electrodes show high treatment efficiencies compared to other materials [17,19,[25][26][27][28][29][30][31][32]. Depending on the water composition, various oxidative species, such as hydroxyl radicals, hydrogen peroxide, ozone and eventually chlorine-based compounds, are responsible for the transformation of the target compounds using BDD [28,[32][33][34][35]. In general, there is no mineralization during oxidative treatment under realistic conditions [36]. However, oxidants can also carry out reactions with other organic water constituents, which result in the formation of sometimes-toxic by-products, such as bromate and chlorate [37]. In addition, transformation products with a higher toxicity than the parent compound can be generated during the oxidative treatment of micropollutants, as observed in the photocatalytic degradation of DCF [38] as well as for N-oxides via treatment with ozone [9,39,40]. The degradation of DCF and the production of possible transformation products (TPs), as well as of unwanted inorganic by-products, may therefore be quite heterogeneous, due to competitive reactions with organic and inorganic constituents present at high concentrations in drinking water and WWTP effluents [23,41].
The aim of this study was to characterize the oxidative degradation process of DCF as a model target, including assessment of the kinetics, the formation of organic TPs and inorganic by-products, and the energy consumption in water matrices with increasing complexity. This approach allows for the first time a comprehensive examination of DCF degradation, as an exemplary persistent pollutant in electrochemical water treatment, linking DCF transformation with by-product formation and energy expenditure. For this purpose, DCF was spiked into deionized water, synthetic hard drinking water and a real wastewater effluent, and treated using a BDD electrode under varying electrode settings. Based on the generated data, the extent of oxidation and possible optimization steps of the process, with respect to minimizing potentially unwanted by-products and the specific energy demand, are topics of discussion.
Water Matrices
Experiments were conducted in three water matrices: (a) deionized water (deionizer "Milli-Q Plus 185"); (b) synthetic hard drinking water; and (c) a wastewater effluent collected from the municipal WWTP Garching, Germany (31,000 population equivalents, two-step aerobic biological treatment). Concentrations of organic and inorganic water constituents and other relevant parameters are summarized in Table 1. Each water matrix was spiked with DCF to reach a final concentration of 50 µM.
Experimental Setup and Sampling
Experiments were conducted with a CONDIAPURE® test system (CONDIAS GmbH, Itzehoe, Germany) using a DIACHEM® electrode stack (CONDIAS, Itzehoe, Germany). The stack consisted of a single anode/cathode pair of niobium, coated with a 5- to 7-µm-thick boron-doped diamond layer with an electrode surface of 24 × 50 mm² per electrode. A Nafion cation exchange membrane was placed in direct contact with the cathode and anode in order to create a gap-free sandwich structure and thus enhance the current density locally, which promoted ozone formation [19]. The electrode stack was integrated into an optically accessible glass reactor (Esau & Hueber GmbH, Schrobenhausen, Germany). The experimental setup has been described elsewhere [31,42]. Three liters of DCF solution of each water matrix were prepared in a glass vessel (tempered to 20 °C ± 1 °C), which was connected to the operation unit with an inlet and an outlet tube. The reaction solution was circulated through the glass reactor by a centrifugal pump (PY-2071, Speck Pumpen, Hilpoltstein, Germany) at a flow rate of 4 L/min. Electrolytic treatment was carried out for an overall time of 30 min for deionized water, and 60 min for drinking water and wastewater, using current densities of j = 42, 167 and 292 mA/cm². Electricity consumption was monitored using the Energy Monitor 3000 (Voltcraft, Hirschau, Germany).
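As a cross-check of the cumulative charge-per-volume (Q/V) values used throughout the Results, Q/V can be computed directly from the settings above (12 cm² electrode area, 3 L batch volume); a minimal sketch, with the function name being illustrative:

```python
# Cumulative charge per volume, Q/V (mAh/L), from the settings given above.
ELECTRODE_AREA_CM2 = 2.4 * 5.0  # 24 mm x 50 mm per electrode

def charge_per_volume(j_mA_per_cm2: float, t_min: float, volume_L: float = 3.0) -> float:
    """Q/V in mAh/L for current density j applied for t minutes."""
    current_mA = j_mA_per_cm2 * ELECTRODE_AREA_CM2
    return current_mA * (t_min / 60.0) / volume_L

# 167 mA/cm^2 for 30 min in 3 L gives ~334 mAh/L, consistent with the
# ~350 mAh/L quoted below for complete DCF removal in deionized water.
print(round(charge_per_volume(167, 30)))
```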
Samples (2 mL each) were collected directly from the glass vessel after a period of 0, 1, 2, 3, 5, 7.5, 10, 15, 20, 30 and 60 min. The oxidation reaction in the samples was terminated by adding 20 µL of 10 mM Na2SO3 directly after sampling. Samples were subsequently filtered through 0.22 µm polyvinylidene fluoride (PVDF) Pleomax filters before the analysis of DCF and TPs.
Analysis
DCF and TPs were analyzed with two-dimensional liquid chromatography (LC) by serial coupling of reversed-phase liquid chromatography with hydrophilic interaction liquid chromatography (RPLC-HILIC) and an electrospray ionization time-of-flight mass spectrometer (ESI-TOF-MS).
Oxidation mixtures were injected on a serial RPLC-HILIC-ESI-TOF-MS system containing two Agilent HPLC systems series 1260 Infinity (Waldbronn, Germany) coupled to an Agilent TOF-MS system series 6230 with Jet Stream ESI interface (Agilent Technologies, Santa Clara, CA, USA). Chromatographic separation was performed by serial coupling of a Poroshell 120 EC-C18 column (50.0 mm × 3.0 mm, 2.7 µm) (Agilent Technologies, Santa Clara, CA, USA) and a ZIC ® -HILIC column (150 mm × 2.1 mm, 5 µm, 200 Å) (Merck Sequant, Umeå, Sweden) based on a method described by Greco et al. [43]. Injection volume was 10 µL. The RPLC mobile phase was a mixture of ammonium acetate 10 mM/acetonitrile (90:10, v/v) and ammonium acetate 10 mM/acetonitrile (10:90, v/v) at a flow rate of 0.1 mL/min. The HILIC mobile phase was a composition of acetonitrile, water and the solvent from the RP column at a flow rate of 0.4 mL/min. The Jet Stream ESI source was used in negative mode with the conditions described elsewhere [23]. The HPLC systems, the ESI interface and the mass spectrometric detector were controlled and data were acquired and processed by MassHunter software (Agilent Technologies, Waldbronn, Germany) using the extraction ion chromatogram (EIC) technique within a maximum mass tolerance of 10 ppm. The accurate mass data of the TPs were processed as previously described by Stadlmair et al. [44].
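As an illustration of the extracted ion chromatogram approach, a 10 ppm mass tolerance corresponds to keeping all scans whose measured m/z falls within a narrow window around the theoretical value; a minimal sketch, where the deprotonated-DCF mass is a standard monoisotopic literature value rather than a figure from this paper:

```python
def ppm_window(mz_theoretical: float, tol_ppm: float = 10.0):
    """Return the (low, high) m/z bounds for a given ppm mass tolerance."""
    delta = mz_theoretical * tol_ppm * 1e-6
    return mz_theoretical - delta, mz_theoretical + delta

# [M-H]- of diclofenac, monoisotopic (standard literature value).
low, high = ppm_window(294.0094)
print(f"{low:.4f} - {high:.4f}")  # ~294.0065 - 294.0123
```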
The pH value was controlled based on Standard Method 4500-H+ [45]. Ozone concentrations were quantified as residual ozone by photometric measurement at 610 nm after decolourization of indigo bisulfonate, based on the descriptions given by Bader and Hoigné [46] with modifications described before [47]. Instead of phosphate buffer, phosphoric acid was employed. Concentrations of the inorganic ions chloride (Cl−) and bromide (Br−), as well as chlorate (ClO3−) and perchlorate (ClO4−), were analyzed by ion chromatography with a DIONEX ICS-1000 device (Thermo Scientific Dionex, Sunnyvale, CA, USA) according to Standard Method 4110 [45]. Detection limits for Br−, Cl− and ClO3− were 0.05 mg/L in deionized water, and 0.5 mg/L in drinking water and wastewater effluent, respectively. ClO4− could be detected down to 0.1 mg/L in deionized water and 1 mg/L in drinking water and wastewater effluent. Additionally, the formation of ClO3− and ClO4−, as well as of the inorganic by-products bromate (BrO3−) and perbromate (BrO4−), was analyzed semi-quantitatively with RPLC-HILIC-ESI-TOF-MS as described above.
Removal of Diclofenac
BDD electrodes generate hydroxyl radicals at the anode surface from water molecules.
In subsequent reactions, other oxidants, such as ozone, hydrogen peroxide and further radical species, are formed [48,49]. In complex water matrices containing chlorine-containing molecules, a variety of reactive species, including active chlorine species, can also arise. At the BDD electrode, the amounts of oxidants increase when applying higher current densities [49,50].
The electrochemical oxidation of DCF with a BDD electrode was investigated, following procedures reported in a previous study [23]. For this purpose, DCF was spiked at a concentration of 50 µM into the three different water matrices of deionized water, drinking water and wastewater effluent. These water matrices were treated with current densities j of 42, 167 and 292 mA/cm². The degradation of DCF over the applied cumulative current per volume, expressed as %-variation of the peak area of the respective EIC, is displayed for the three water matrices in Figure 1. The degradation of DCF in deionized water (Figure 1a) was almost identical for the three applied current densities, with a relatively quick removal of the parent compound. Complete degradation of DCF could be achieved within approximately 350 mAh/L (cumulative charge input at a certain reaction time, depending on the respective reactor volume) at 167 mA/cm², which corresponds to a treatment time of 30 min. Applying 292 mA/cm², around 310 mAh/L was required for complete elimination of the target compound. At the lowest current density, the maximum removal level was around 40%, as the cumulative current per volume (93 mAh/L) was not sufficient for complete elimination of DCF. Thus, the degradation did not depend on the applied current density, but required a total charge per volume of approximately 310 mAh/L, which was not reached at the lowest current density within the applied treatment time.
In drinking water (Figure 1b), complete degradation could be observed after approximately 450 mAh/L for both higher current densities, whereas using 42 mA/cm² resulted in a significantly slower DCF degradation, with a maximum removal rate of 30% at 195 mAh/L. In wastewater effluent (Figure 1c), application of 292 mA/cm² resulted in the highest slope of all degradation experiments, but the results did not vary much from those at 167 mA/cm². Complete removal of the compound required approximately 670 mAh/L, which could be achieved at 292 mA/cm². The variation between the curves at 292 mA/cm² and 167 mA/cm² from 400 mAh/L onwards might derive from the fact that, at different current densities, samples were not taken at the same Q/V values, as sampling was time-dependent. Thus, the progress of the curves from 400 mAh/L until complete degradation of DCF can only be estimated. Degradation experiments at the lowest current density in wastewater effluent resulted in a significantly lower degradation rate of only up to 25%, compared to the higher current densities. Therefore, the removal of DCF in wastewater effluents at 42 mA/cm² is not feasible.
The observed variations in DCF removal between the different water matrices are most likely caused by the presence of competing inorganic and organic compounds in the complex water matrices, which generally reduce the availability of hydroxyl radicals and ozone, as those compounds scavenge the oxidizing agents and therefore lower the degradation rates of the target compound [2,37,51]. In the case of deionized water, there is no competition between target molecules and other inorganic and organic components, leading to a maximum efficiency of DCF removal. The drinking water contained inorganic ions in relatively high concentrations, which extended the overall treatment process. The additional presence of organic compounds in the wastewater effluent (at lower concentration of inorganic compounds compared to the drinking water) further prolonged the electrochemical oxidation process until complete removal of DCF. Overall, the impact of inorganic constituents, in particular chloride, nitrate and sulfate, on DCF transformation during BDD treatment was more noticeable than the competition between the target compound and organic matter, which strongly depends on the respective concentrations in the water matrix.
DCF reacts with both ozone and hydroxyl radicals, as both can be formed by the BDD electrode used in this study [31]. The reaction constant for the oxidation of DCF with non-selective hydroxyl radicals (kOH = (7.5 ± 1.5) × 10^9 M^−1 s^−1) is higher than that with ozone (kO3 ≈ 1 × 10^6 M^−1 s^−1) [2,37]. As the compound shows a good, selective reactivity towards ozone [41], it is possible to monitor the matrix-driven depletion of oxidants through the measurement of dissolved ozone in the different water matrices. The maximum ozone values in drinking water were 0.068 mg/L at 42 mA/cm², 0.354 mg/L at 167 mA/cm², and 1.07 mg/L at 292 mA/cm². In wastewater effluent, the highest values were significantly lower, at 0.022 mg/L for 42 mA/cm², 0.185 mg/L for 167 mA/cm² and 0.553 mg/L for 292 mA/cm².
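To put these rate constants in perspective, the contribution of each oxidant to DCF removal scales as the pseudo-first-order rate k × [oxidant]. The sketch below uses the rate constants quoted above together with a measured ozone value; the steady-state hydroxyl-radical concentration is an illustrative assumption, as radical concentrations were not measured in this study:

```python
# Pseudo-first-order rate comparison, r_i = k_i * [oxidant_i] (in s^-1).
K_OH = 7.5e9               # M^-1 s^-1, hydroxyl radical (from the text)
K_O3 = 1.0e6               # M^-1 s^-1, ozone (from the text)

OH_CONC = 1e-12            # M, assumed steady-state value (illustrative only)
O3_CONC = 0.354e-3 / 48.0  # M, from 0.354 mg/L dissolved ozone (M = 48 g/mol)

print(f"OH pathway: {K_OH * OH_CONC:.1e} s^-1")  # ~7.5e-03 s^-1
print(f"O3 pathway: {K_O3 * O3_CONC:.1e} s^-1")  # ~7.4e+00 s^-1
```

Under these assumptions, the ozone pathway dominates, which is consistent with the selective reactivity of DCF towards ozone noted above.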
Non-selective oxidation occurs through hydroxyl radicals, and selective oxidation through ozone and, in complex water matrices, chlorine species. In complex water matrices containing chloride ions, the formation of hypochlorite, chlorine dioxide and other chlorine-containing reactive oxygen species is also possible. Since active chlorine species are highly reactive (although less effective compared to ozone) towards DCF (reaction constant kClO2 = 1.05 × 10^4 M^−1 s^−1), the electrochemical treatment of DCF can be highly effective in water with high chloride concentrations, through the release of chlorine compounds resulting from the oxidation of DCF. Oxidant concentrations increase with the applied current density. However, other critical inorganic by-products, such as chlorate and perchlorate, arise during the process [2,[51][52][53], which is discussed later in this study. Since electro-oxidation with BDD electrodes produces a mixture of oxidants, including ozone and hydroxyl radicals, it is possible to combine the selective and non-selective degradation of the parent compound and the generated TPs. For DCF, with its high reactivity towards ozone, this combination seems ideal for achieving complete removal of the target compound. Vogna et al. [54] reported a complete decomposition of DCF (initial concentration 1 mM) after 8 min of treatment, by introducing an initial gas-phase ozone concentration of 48 mg/L into deionized water. In the present study, concentrations of residual dissolved ozone in deionized water were measured at a maximum of 1.8 mg/L, using the highest current density of 292 mA/cm². Initial ozone concentrations, however, could not be examined during the experiment.
In the present study, residual ozone concentrations in the chloride-containing drinking water were significantly lower than those in deionized water. DCF degradation, however, appeared comparable within those two matrices. The initial degradation of DCF was faster in drinking water, most likely caused by oxidants arising from the high chloride concentrations. In wastewater effluent, other organic compounds could also react with active chlorine species, resulting in a prolonged DCF degradation time.
Apparently, the effect of oxidant depletion through the high concentrations of the inorganic and organic constituents of wastewater effluent is predominant at the lowest current density. In this case, the formation of ozone can be neglected, and oxidation through hydroxyl radicals is instead driven by mass transport [35], leading to competitive reactions between the target molecule and other water constituents. This effect is represented by the low degradation rate at 42 mA/cm 2 , which differs significantly from the higher current densities. In drinking water with a certain content of inorganic constituents, the ratio of hydroxyl radicals and ozone depleted through water matrix components was less noticeable. Thus, the organic matter fraction of wastewater effluent is likely to be the major hydroxyl radical scavenger, which further represses ozone formation at low current densities. Lee et al. [51] reported that hydroxyl radicals are likely to be consumed by any water component in the wastewater matrix.
Formation and Fate of Transformation Products
DCF removal will also result in the simultaneous generation of DCF TPs. These compounds can arise through reactions of the parent compound and other TPs with oxidants, or through chemical reactions such as de-chlorination, de-carboxylation and the cleavage of molecules. During electrolysis in deionized water under the present electrode conditions, TPs from reactions with the main oxidants, hydroxyl radicals and ozone, can be expected.
In more complex water matrices, other oxidative species formed during BDD treatment, such as chlorine and hypochlorite, might play a major role in the generation of TPs. Thus, possible TPs were derived from literature data based on the oxidation reactions of DCF during ozonation [9,12], as well as on its degradation through hydroxyl radicals using photo-Fenton [13,54], through photocatalysis/TiO2 [38], through BDD treatment [11,25,55] and through chlorination [53]. The list derived from this literature search included 47 known compounds (see [23]) with chemical structures comprising two aromatic ring systems, one aromatic ring or linear structures. The formation and degradation of these organic oxidation by-products was compared for the different water matrices. Selected identified TPs are summarized in a degradation scheme in Figure 2.
Overall, five TPs were identified in all three water matrices at a current density of 292 mA/cm². These include three structures with two aromatic rings, derived from hydroxylation or decarboxylation of the original compound DCF. Further oxidation leads to the formation of 2,5-dihydroxybenzaldehyde (TP_34), a one-ring molecule derived from cleavage of the carbon-nitrogen (C-N) bond. This TP could be identified in deionized and drinking water, but not in the wastewater effluent. Two linear structures were also found in all three water matrices: a mixture of maleic acid and fumaric acid (TP_41), and one product identified as oxalic acid (TP_46). Thus, DCF was almost degraded to the point of mineralization. For further details, see Supplemental Table S1. For all water matrices, early-stage TPs containing two aromatic rings appeared in the initial phase of the treatment process. In deionized water, the primary TP 5-hydroxy-DCF (TP_4) was formed directly after initiation of the oxidation, with a maximum yield after approximately 100 mAh/L and complete degradation after 420 mAh/L (Figure 3a), whereas in drinking water and wastewater effluent it took almost 200 mAh/L to reach the maximum yield (Figures 3b and 3c).
In wastewater effluent, small amounts of 5-hydroxy-DCF could still be detected even after oxidation to 1,400 mAh/L. For all water matrices, the increase and decline in the formation of the two other two-ring TPs was slightly delayed compared to 5-hydroxy-DCF. This might be due to the decarboxylation step required in addition to the hydroxylation.
The one-ring structure 2,5-dihydroxybenzaldehyde was detected at its maximum ratio at 200 mAh/L in deionized water, and was almost fully degraded at 640 mAh/L. In drinking water, the maximum of the curve was reached after 637 mAh/L, and only slightly declined when the oxidation was terminated after 1,285 mAh/L. This TP could not be detected in the wastewater effluent. Thus, in the more complex water matrix (wastewater effluent), under the applied conditions, the formation of early-stage DCF TPs with two aromatic rings might be preferred.
During extended oxidation, further ring opening resulted in the formation of the small and highly oxidized molecules fumaric acid/maleic acid and oxalic acid (TP_41 and TP_46). In deionized water, the linear TP oxalic acid appeared after 200 mAh/L, and its signal was still increasing when the reactor was turned off after 644 mAh/L. A similar behavior could be observed in the more complex water matrices, where small amounts of oxalic acid were already noticed after brief oxidation, and a significant and consistent increase was noticed after 200-300 mAh/L. Even after 1,400 mAh/L, the maximum yield was not reached for this molecule. Oxalic acid is reported to be the final product in ozone oxidation, as its reaction with ozone is very slow, whereas the rate constant of its oxidation with hydroxyl radicals is in the region of 10^7 M^−1 s^−1 [56]. The presence of this compound even after a longer treatment time indicates that the predominant oxidant present in the solution might be ozone rather than hydroxyl radicals. Thus, under the applied electrode conditions, the target compound DCF was almost mineralized. The second linear product, maleic acid/fumaric acid, could be successfully degraded in all water matrices (for further details, see Supplemental Figure S1). Determination of the ozone values during the oxidation process supported these findings, since the ozone values increased exponentially, up to 1.81 mg/L in deionized water, after the majority of the TPs were degraded (after 400 mAh/L). After the electrode was turned off, still no plateau value, and therefore no saturation, was reached for ozone.
The formation and degradation of the five DCF TPs common to the three water matrices was further investigated by varying the electrode settings (j = 42, 167 and 292 mA/cm²). The formation and degradation of the detected TPs were comparable for the applied current densities, with a lower maximum charge-per-volume input for 42 and 167 mA/cm² compared to 292 mA/cm². In general, application of the lowest current density resulted in the detection of only a few TPs in each water matrix, as a result of low oxidant production rates. In deionized water, the charge input during the oxidation process generated three primary TPs with two-ring structures. By increasing the current density, the overall number of detected structures, as well as their oxidation state, increased, and degradation of the TPs could also be observed.
As structures with higher oxidation states, such as small linear molecules, require a certain charge input, such TPs were only formed at the higher current densities. In deionized water, fumaric acid/maleic acid and oxalic acid could only be detected at 292 mA/cm². Application of 42 mA/cm² in the wastewater effluent resulted in a single TP, whereas the same electrode condition in drinking water generated four molecules, including hydroxybenzoic acid and fumaric acid/maleic acid. In wastewater effluent, high concentrations of bicarbonate preferentially scavenged hydroxyl radicals, and organic compounds also reacted with the radicals. In contrast, oxalic acid could not be detected at 42 and 167 mA/cm² in any of the treated water matrices. As reported before, these short-chain organic acids can be easily biodegraded, in contrast with the original compound [12].
Inorganic By-Product Formation
Another aspect of electrochemical water treatment is the formation of inorganic by-products, resulting from oxidation reactions in the presence of chloride and bromide. These oxidation products, such as bromate, perbromate, chlorate or perchlorate, carry a potential health risk [50,[57][58][59].
The experiments focused on the time-dependent formation of the oxohalogenides bromate and chlorate, as well as their peroxo-compounds, in the three water matrices containing varying concentrations of chloride and bromide. Quantitative analysis of bromide and chloride, and of the oxidation products chlorate and perchlorate, was performed using ion chromatography (Table 2). The detection limits for the inorganic by-products were ten-fold higher in the more complex water matrices containing high chloride concentrations than in deionized water. Bromate and perbromate could not be detected with IC.
In order to obtain more robust data regarding these by-products, a semi-quantitative analysis of the formation of bromate and perbromate, as well as chlorate and perchlorate, was performed by RPLC-HILIC-MS; this also covered those cases where the respective concentrations of chlorate and perchlorate were below the detection limit of the IC analysis. The formation of inorganic by-products is presented in Figure 4, and the chromatograms of bromate, perbromate, chlorate and perchlorate obtained from the drinking water matrix can be found in Supplemental Figure S2.
RPLC-HILIC-MS data revealed that chlorate was quickly formed in the initial phase, whereas perchlorate increased only slightly (Figure 4a). After approximately 60 mAh/L, chlorate formation reached an almost steady state, which was maintained until the complete degradation of DCF. Perchlorate formation strongly increased after the initial phase, resulting in a higher formation rate for perchlorate compared to chlorate. The highest concentrations were reached after 644 mAh/L, with a maximum of 0.108 mg/L for chlorate and 0.921 mg/L for perchlorate, as derived from the IC data.
Chlorate and perchlorate formation strongly depends on the presence of hydroxyl radicals [60,61]. Thus, compared to the more complex water matrices, relatively high amounts of perchlorate can originate in a water matrix with fewer radical-scavenging molecules. Similarly, in the absence of bromide, formation of the oxidation products bromate and perbromate could not be observed in deionized water. In drinking water, brominated by-products were also expected. However, these compounds could not be detected quantitatively with IC analysis. With the RPLC-HILIC-MS analysis, those oxidation products could be identified as well, and the signal of chlorate and perchlorate was about 1000-fold higher than that of bromate and perbromate (Figure S2). Initially, there was a strong increase in the bromate curve, which flattened after approximately 60 mAh/L. Chlorate values increased constantly, similarly to perbromate, and were always lower than those of bromate and perchlorate (Figure 4b).
A reduction of perchlorate was observed when the current density was decreased (Table 1). As the stepwise oxidation of chloride to perchlorate is driven by hydroxyl radicals, this effect can be explained by the increase of hydroxyl radicals in direct proximity to the electrode at higher current densities, which favors the formation of perchlorate as the highest oxidation state of chloride [59][60][61]. Other data from the literature also confirm these findings [50,57,62].
In wastewater effluent, the formation of chlorate was comparable to that in drinking water. The ratio of perchlorate to chlorate was remarkably lower, probably resulting from the presence of radical-scavenging organic molecules in the wastewater effluent. Even though no bromide and no brominated oxidation products could be detected in the IC analysis, RPLC-HILIC-MS showed that bromate formed up to about 80% of its maximum amount immediately upon initiation of the treatment (Figure 4c). Calculated from the peak areas, the estimated amount of bromate was about 1000 times lower than that of chlorate. Perbromate could be detected in traces after 300 mAh/L, with a steady increase afterwards. The formation of brominated oxidation products can occur in the presence of ozone and chlorine in solution [59,63]. Another study reported an inhibition of bromate formation due to high chloride concentrations in water [64]. This tendency can be reproduced by comparing the results of the drinking water, with the highest chloride content, with those of the other water matrices. Overall, the concentrations of the oxidants in wastewater effluent were not sufficient to fully oxidize bromide to perbromate, especially under the condition of high chloride concentrations in the water matrix.
In wastewater effluent, the maximum amounts of chlorate and perchlorate, derived from the IC data, are lower than those in drinking water, considering their concentrations in relation to the treatment time. The measured maximum values are indeed extremely high; however, they result from the long treatment times after the complete removal of DCF and the majority of the early-stage TPs.
Due to their carcinogenic potential, oxidation by-products such as perchlorate and bromate must be kept at minimal levels. By legislation, bromate concentrations are limited to a threshold value of 10 µg/L in drinking water according to WHO recommendations [57,63]. Perchlorate discharge is limited to 15 µg/L according to the US Environmental Protection Agency [65]. In addition, it has to be stressed that in the present study, by adding high concentrations of DCF to the water matrices, the scenario was designed to represent the conditions of industrial water treatment.
Recommended Operating Conditions
The aim of the present study was to investigate the efficiency of BDD in a worst-case scenario (highly contaminated water matrices with increasing complexity), in order to highlight the effects of varied electrode conditions and water composition on the degradation of DCF, and at the same time the formation of TPs as well as inorganic by-products. For this purpose, the concentration of 50 µM DCF was selected based on a comparison with data from the literature [11,66]. The successful removal of persistent pollutants potentially contributes to the elimination of critical molecules from the aquatic environment. However, toxicological and economical aspects have to be considered as well in order to meet the requirements. Thus, the findings may give indications for the optimization of operating conditions when using BDD electrodes for the removal of persistent chemicals, such as DCF, from a certain water matrix. As a consequence, electrode settings and the extent of treatment should be discussed to achieve optimum conditions.
Through the formation of TPs and oxidation by-products, an increase in the overall toxicity can certainly not be excluded, and this might limit the application of BDD itself. It has been reported in the literature that metabolites of the initial stage seem to be even more toxic than DCF; continued oxidation of those compounds, however, quickly reduced this toxicity, especially after the complete degradation of the original compound [11,[66][67][68]. Thus, electrochemical treatment should be performed at least to an extent where less toxic products and more biodegradable compounds are formed [69], provided no excessive amounts of oxidation by-products are generated. In this instance, the treatment could be executed until the point where about 70% of the DCF is degraded. At this stage, the parent compound is removed to a high degree, and at the same time the degradation of early-stage TPs is already in progress. Cleavage of the aromatic ring clearly enhances biodegradability; some studies even report a positive effect of hydroxylated compounds compared to their unsubstituted counterparts [70].
On the other hand, at this stage the amount of inorganic by-products in all water matrices is still very high. Thus, the degradation of DCF is recommended only under conditions where by-product formation can be kept at low levels, which requires short treatment times and, at the same time, low current densities. To meet the requirements for sufficient DCF removal, the recommendation is to operate the electrode at a moderate current density (j = 167 mA/cm²), which provides sufficient removal efficiency. This is in accordance with data from the literature, which show a clear reduction in bromate, chlorate and perchlorate concentrations when using lower current densities during oxidative water treatment with BDD electrodes [52,57,58,62].
In order to link the data derived from oxidative degradation with economic considerations, the energy expenditure was monitored over the entire treatment process. The total energy demand E was calculated based on the following Equation (6), reconstructed here from the variable definitions: E = (Ecell × I × T) / V, where Ecell is the cell voltage (V), I is the applied current (A), T is the treatment time (h) and V is the volume of the treated water (m³). The total energy demand of the oxidation reactions during treatment with the BDD electrode in the different water matrices is displayed in Figure 5. The highest energy consumption in all three water matrices appeared at 42 mA/cm², as the largest energy demand of the process resulted from the pumping and the control unit. In deionized water, the most advantageous electrode settings appeared at 167 mA/cm² (Figure 5a), whereas in drinking water (Figure 5b) and wastewater effluent (Figure 5c), the highest current density resulted in the lowest energy demand for the treatment process, when considering the total energy.
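A minimal sketch of Equation (6) in code follows; the input values are illustrative placeholders (cell voltages were not reported above), not measurements from this study:

```python
def energy_demand_kwh_per_m3(e_cell_V: float, current_A: float,
                             time_h: float, volume_m3: float) -> float:
    """Specific energy demand E = Ecell * I * T / V (Equation 6), in kWh/m^3."""
    return e_cell_V * current_A * time_h / (volume_m3 * 1000.0)  # Wh -> kWh

# Illustrative example: 15 V cell voltage, 2 A (~167 mA/cm^2 on a 12 cm^2
# electrode), 30 min treatment, 3 L batch volume.
print(energy_demand_kwh_per_m3(15.0, 2.0, 0.5, 0.003))  # ~5 kWh/m^3
```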
An explanation can be given by the high content of organic and inorganic compounds in the complex water matrices, resulting in a greater energy demand for the successful oxidation of the target molecule. In this case, the application of lower current densities requires prolonged treatment times and a high energetic input. Only a fraction of the total energy was consumed by the electrode alone [62]. Thus, electrochemical oxidation using the diamond electrode remains within an economically competitive range, even for complex water matrices. Decreasing the current densities would also allow a reduction of the overall energy input for the process. However, the applied conditions, and their impact on the process, strongly depend on the water matrix. An optimization of the reactor geometry and the electrode surface can also contribute to a significant reduction of undesired side effects [31]. It also has to be mentioned that the experiments were performed in batch flow, in a small, laboratory-scale reactor. The results of flow-through experiments in a reactor with a higher throughput will certainly differ considerably from the present data, particularly with regard to the formation of undesired by-products. Further, different cell types for BDD electrodes have been shown to influence the overall performance, and thus also by-product formation [61].
Furthermore, it has to be mentioned that the BDD electrode should rather be considered as an additional treatment step, instead of a stand-alone technology for oxidative water treatment. In this context, it is sufficient to use the electrode up until the initial toxicity is decreased, and the amount of non-degradable compounds is reduced. In subsequent treatment steps, such as bioreactors, sand filters, etc., residual TPs are more accessible to biodegradation, compared to the original compound with relatively low biodegradability [71].
Conclusions
Results from the present study revealed that the application of the BDD electrode is effective in the removal of DCF from water matrices of different compositions. As the efficiency of the process depends on the presence of oxidative species, such as ozone and hydroxyl radicals, scavengers for those oxidants present in more complex water matrices, including inorganic and organic constituents, reduce the degradation efficiency. Independent of the water matrix, some early-stage TPs were formed and later degraded again during the treatment process. In addition, the subsequent formation of TPs resulting from cleavage of the aromatic ring systems, as well as small highly oxidized structures, could be identified.
The aim of this study was to highlight the effects of various electrode settings and the influence of water composition on the degradation process of DCF. Both factors had a major impact on the removal of the original compound, but also on the appearance of TPs as well as inorganic by-products. Thus, it is necessary to find a balance between successful treatment and toxicological and economic aspects, and to optimize the process depending on the composition of the water. Especially in less complex water matrices, the application of relatively low current densities can be sufficient for effective removal of DCF and its TPs. Thereby, by-product formation can clearly be reduced, and at the same time the energy demand can be kept relatively low. Besides the investigation of inorganic by-products and the energy demand, toxicological assessments of the oxidative treatment should also be carried out in the future.
Supplementary Materials: The following are available online at www.mdpi.com/2073-4441/12/6/1686/s1, Figure S1: Formation and degradation curves of DCF TPs expressed as %-variation of the peak area of the respective EIC over the applied charge per volume (Q/V) in deionized water (a), drinking water (b) and wastewater effluent (c) at 292 mA/cm². Figure S2: Chromatograms of bromate, perbromate, chlorate and perchlorate obtained from the drinking water matrix.
Acknowledgments: G. Greco is grateful to the Foundation BLANCEFLOR Boncompagni-Ludovisi, née Bildt, for financial support. The authors thank Merck Sequant for the column as a gift, and JAS for the HPLC as a loan.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-06-18T09:08:31.464Z | 2020-06-12T00:00:00.000 | {
"year": 2020,
"sha1": "86a11a55320ddf09efb66d9bc5def8641639d637",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/12/6/1686/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "f33e7cadeb273ae05cf77fb605b2c4155edd1be6",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
4125617 | pes2o/s2orc | v3-fos-license | Curious creatures: a multi-taxa investigation of responses to novelty in a zoo environment
The personality trait of curiosity has been shown to increase welfare in humans. If this positive welfare effect is also true for non-humans, animals with high levels of curiosity may be able to cope better with stressful situations than their conspecifics. Before discoveries can be made regarding the effect of curiosity on an animal's ability to cope in its environment, a way of measuring curiosity across species in different environments must be created to standardise testing. To determine the suitability of novel objects in testing curiosity, species from different evolutionary backgrounds with sufficient sample sizes were chosen. Barbary sheep (Ammotragus lervia) n = 12, little penguins (Eudyptula minor) n = 10, ringtail lemurs (Lemur catta) n = 8, red tailed black cockatoos (Calyptorhynchus banksia) n = 7, Indian star tortoises (Geochelone elegans) n = 5 and red kangaroos (Macropus rufus) n = 5 were presented with a stationary object, a moving object and a mirror. Having objects with different characteristics increased the likelihood that individuals would find at least one motivating. Conspecifics were all assessed simultaneously for time to first orientate towards the object (s), latency to make contact (s), frequency of interactions, and total duration of interaction (s). Differences in curiosity were recorded in four of the six species; the Barbary sheep and red tailed black cockatoos did not interact with the novel objects, suggesting either a low level of curiosity or that the objects were not motivating for these animals. Variation in curiosity was seen between and within species in terms of which objects they interacted with and how long they spent with them, as determined by the speed with which they interacted and the duration of their interest. By using the measure of curiosity towards novel objects with varying characteristics across a range of zoo species, we can see evidence of evolutionary, husbandry and individual influences on their responses. Further work to obtain data on multiple captive populations of a single species using a standardised method could uncover factors that nurture the development of curiosity. In doing so, it would be possible to isolate and modify sub-optimal husbandry practices to improve welfare in the zoo environment.
INTRODUCTION
Individuals in a single population will often have different behavioural responses when faced with the same conditions (Coleman, 2012; Mehta & Gosling, 2008). Individual variation has been flagged as an important factor for captive animal welfare, as success both in captivity and in the wild comes down to individual coping styles (Koolhaas et al., 1999; McDougall et al., 2006; Watters & Powell, 2012). Determining the traits that allow animals to adapt to life in captivity is difficult, as multiple factors at both an evolutionary level and an individual level are at play. Innate drives common to all within a species can be difficult to accommodate in captivity, and the inability to express this normal behaviour can lead to increased stress (Mason, 2010). Variation in individual traits, such as boldness and previous experience, further influences how an individual can manage challenges faced in captivity (Franks et al., 2013; Stoinski, Jaicks & Drayton, 2012; Tetley & O'Hara, 2012). To be able to assess how different species manage life in captivity, it is important to identify traits that are shared between animals that thrive in a captive setting and then create an efficient and effective way to detect these traits both between and within species. Parallels between coping with captive life and "human-induced rapid environmental change" in the wild have already been made, suggesting that species which exhibit high behavioural plasticity in the wild are also able to cope well with captive housing, with the inverse also being true (Mason et al., 2013).
Curiosity has been described as a driving force behind increased interaction with one's environment (Lilley, Kuczaj & Yeater, 2017). In humans, curiosity has been linked to an increased perception of positive outcomes (Maner & Gerend, 2007), a decrease in depression (Kaczmarek et al., 2014), improved psychological and emotional wellbeing (Wang & Li, 2015) and an enhanced ability to deal with distress (Denneson et al., 2017). It can be argued that, as many species are known to share neurological similarities with humans, such as in the case of chimpanzees and their resting brain activity (Rilling et al., 2007) and vocalization interpretation in dogs (Andics et al., 2014), similar functions could also be shared with other species. Within the published literature on animal behaviour, curiosity is often referred to as "exploratory behaviour" or "coping style" (Dingemanse et al., 2002; Murphy, 1978). Multiple species have been successfully assessed for curiosity using their behavioural responses to novel objects (Coleman, Tully & McMillan, 2005; Fox & Millam, 2007; Frost et al., 2007; Glickman & Sroges, 1966; Powell & Svoke, 2008; Stöwe et al., 2006; Svartberg, 2005). As curiosity is a trait found in many species, the relationship between high levels of curiosity and positive welfare might be found in species other than humans. Some evidence exists in the field of animal behaviour to suggest that a positive relationship exists between the bold-shy continuum and the stress axis in zebra fish (Danio rerio) (Oswald et al., 2012), further supporting this idea.
Powell & Svoke (2008) suggested that curiosity towards novel objects could be used as a personality assessment tool, a technique that has been shown to be successful in dogs (Svartberg, 2005) and in horses using open field tests (Napolitano et al., 2008). Curiosity can be assessed by observing an animal's aversion or attraction to a novel sound, smell or object, as well as assessing other behaviour alterations displayed while exposed to the novel stimulus (Powell & Svoke, 2008). For example, animals that are fearful, shy or not inquisitive will often show an increased latency to approach a new object (Forkman et al., 2007). For information on curiosity to be obtained, novelty tests need to be designed with the specific species in mind. Modifying objects to better suit a species' proclivities improves the chances that the animals will be motivated to interact with it and reduces potential negative responses such as anxiety (Heyser & Chemero, 2012). To do this, the anatomical, behavioural, social and physiological traits of a species should be considered when choosing novel objects (Goodrick, 1973;Heyser & Chemero, 2012). Differences in species perceptions can cause a seemingly non-threatening novel object to be fear-provoking to some species (Gray, 1987), therefore not giving accurate measures of curiosity. Events that are unexpected, such as objects that move and situations that are unpredictable, have been shown to elicit fear in some animals (Boissy & Bouissou, 1995) and may not always be approached if the level of fear outweighs the desire to investigate. Similarly, when used in animals without self-recognition, mirrors can provoke aggressive responses (Balzarini et al., 2014).
The aim of this study was to assess which types of novelty tests would be suitable for assessing curiosity in a variety of zoo species. Here we use the following variables to determine the level of curiosity: latency to first orient towards the object, latency to make contact, frequency of interaction, and total duration of interaction with novel objects (a sketch of how such observations could be recorded is given below). Barbary sheep (Ammotragus lervia), little penguins (Eudyptula minor), ringtail lemurs (Lemur catta), red tailed black cockatoos (Calyptorhynchus banksia), Indian star tortoises (Geochelone elegans) and red kangaroos (Macropus rufus) were selected as there is currently a lack of research on curiosity involving these species, and there were sufficient sample sizes available for testing within the zoo. Having such a diverse range of species also ensures representation of animals with an assortment of sensory and behavioural differences.
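For illustration, the four curiosity measures could be recorded per individual and per object in a simple record structure; a minimal sketch, with all field names being illustrative rather than taken from the study:

```python
from dataclasses import dataclass

@dataclass
class NoveltyTrial:
    """One novelty trial: the four curiosity measures used in this study."""
    individual_id: str
    object_type: str            # "stationary", "moving" or "mirror"
    latency_orient_s: float     # time to first orient towards the object (s)
    latency_contact_s: float    # latency to make contact (s)
    n_interactions: int         # frequency of interactions
    total_interaction_s: float  # total duration of interaction (s)

trial = NoveltyTrial("lemur_03", "mirror", 12.0, 45.5, 4, 88.0)
print(trial.total_interaction_s / max(trial.n_interactions, 1))  # mean bout length (s)
```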
METHODS
This study took place from October 2015-January 2016 at Taronga Zoo in Sydney, Australia. A total of 44 individuals from six species were observed in this study. Details on the species and individuals are presented in Table 1.
Animals were assessed for their response to novelty at the individual level; however, in order to minimise disturbance, they were kept in their usual social groups and presented with the objects as a group within their usual enclosures. Individual Barbary sheep and little penguins were identified using existing visual tags; red kangaroos, ringtail lemurs and red tailed black cockatoos were identified by discernible physical features; and star tortoises were identified using small coloured marks on their carapace. Because three individuals were moulting during the experimental period, data were collected from only seven of the 10 penguins housed in the enclosure.
This study was approved by the Taronga Conservation Society and was conducted in accordance with the Exhibited Animals Protection Act 1986.
[Table 1 notes] (a) Closed enclosure: animals are enclosed and have no close contact with visitors. (b) Visitor interactions: visitors can interact with the animals for short periods while supervised by keepers. (c) Walk-through: animals are kept in an enclosure with airlocks, allowing visitors to walk along a path through the enclosure; interactions between animals and visitors are always monitored.
MATERIALS
Characteristics of an effective novel object test were identified as: (1) the ability to identify individual differences within the group; (2) objects that appeal to the physical capabilities of a range of species; (3) the generation of a novel response, without eliciting high levels of visible stress (fleeing, erratic behaviour, aggression) or disrupting routines. Three different novel objects were used to fit these criteria: a stationary object, a moving object and a mirror. Three objects with different physical characteristics were chosen to increase the chances that animals with different sensitivities and interests would be motivated to investigate at least one of them. All objects were visual in nature, as responses in other sensory modalities would have been difficult to measure. The sizes of the stationary and moving objects were scaled to approximately one third of the average height of the species to control for size differences between the species. The mirror was scaled down for the tortoises only, as the larger mirror could not fit in the enclosure. The stationary objects, made by the primary researcher, were solid fluorescent orange rectangular prisms made from non-toxic recycled cork, which was soft enough not to damage either the animals or the enclosure. The softness of the object was a consideration for future work where particularly strong animals, like great apes, may throw it. The colour was chosen because it is not routinely found in the natural environment and, for dichromats, the fluorescence would stand out from the surrounding environment. It has been shown that horses (known dichromats) are able to discern differences in colour based on brightness (Geisbauer et al., 2004). Objects in four different sizes were created, with dimensions of 33 × 33 × 9 cm (Barbary sheep, red kangaroos), 14.5 × 14.5 × 9 cm (ringtail lemurs, red tailed black cockatoos), 8.5 × 8.5 × 4.5 cm (little penguins) and 4.5 × 4.5 × 2.2 cm (star tortoises) (Fig. 1). This object was designed to be the most benign, as it was scaled to be smaller than the animals and stationary.
The moving objects, made by the primary researcher, were black and white striped stepped toroidal shapes made by layering expanded polypropylene foam (non-toxic and soft). As with fluorescence, sharp recurring lines are not often found in the natural environment, and the appearance of black and white should not be altered too much in the event of differing visual capabilities. A rope through a hole in the centre of the object was used to move it. The moving object was intended to attract animals that are more stimulated by movement or that are bold enough to overcome the uncertainty of the movement to investigate. Objects in four different sizes were created, with the objects measuring 33 × 33 × 33 cm (Barbary sheep, red kangaroos), 14.5 × 14.5 × 14.5 cm (ringtail lemurs, red tailed black cockatoos), 8.5 × 8.5 × 8.5 cm (little penguins) and 4.5 × 4.5 × 4.5 cm (star tortoises) (Fig. 2).
A 1 × 1.5 m silver acrylic mirror was used for the Barbary sheep, little penguins, ringtail lemurs, red tailed black cockatoos and red kangaroos. Two small circular mirrors of 11.5 cm diameter were used for the star tortoises. The reflective surface of the mirror allows for a feedback element that the other two objects lacked, adding a different characteristic to the test, one possibly attractive to social animals.
PROCEDURE
Novel object testing was performed over three consecutive days with a different object being tested each day. The order in which the objects were presented was the same for each species (stationary object, moving object, mirror). On each day, the object was placed in the enclosure within sight of the normal feeding area by a keeper or experimenter depending on availability. Testing was performed before the zoo opened to ensure that responses were not influenced by the presence of visitors. As this is a feeding time, the animals were all present except for one case. Due to large enclosure size, only three of the five kangaroos were present for the first stationary object test. To rectify this problem, footage from the first test was used for the three individuals present, and the stationary object was then re-presented on a separate day when the two missing individuals were present and the others were not. Reactions were recorded for the two missing individuals during the second test. Testing began once the keeper had left the enclosure, and ran for 15 min before the objects were removed. The moving object was moved manually by the experimenter from outside the enclosure. The object was moved approximately 5 cm every 60 s by pulling the rope attached to it. Each object was presented only once to control for the possibility of habituation.
Conspecifics were assessed at the same time (with the exception of the stationary object in the red kangaroos) and behaviours of all individuals were digitally recorded for 15 min and analysed for the following: time to orient towards object (s), latency to make contact (s), frequency of interaction and total duration of interaction (s). In cases where an individual did not make physical contact with the object, a minimum distance in body lengths was estimated. The maximum estimate for this was four body lengths. Interaction with both the stationary and moving object was defined as when the animal made physical contact with the object, whereas interaction with the mirror included contact as well as the animal following its own movements in the mirror without contact.
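For concreteness, the four measures can be derived from a timestamped event log of the coded footage. The sketch below is illustrative only; the file name, column names, and event labels are our assumptions, not the coding scheme actually used in this study.

```python
# Illustrative sketch (assumed format): one row per coded event, with columns
# individual, event, time_s, where event is "orient", "contact_start" or
# "contact_end", recorded within the 15-min (900 s) trial.
import pandas as pd

log = pd.read_csv("events.csv")  # hypothetical coded-event file

def measures(ind: pd.DataFrame) -> pd.Series:
    orient = ind.loc[ind.event == "orient", "time_s"]
    starts = ind.loc[ind.event == "contact_start", "time_s"].to_numpy()
    ends = ind.loc[ind.event == "contact_end", "time_s"].to_numpy()
    return pd.Series({
        "time_to_orient_s": orient.min() if len(orient) else None,
        "latency_to_contact_s": starts.min() if len(starts) else None,
        "n_interactions": len(starts),
        "total_duration_s": float((ends - starts).sum()) if len(starts) else 0.0,
    })

per_individual = log.groupby("individual").apply(measures)
```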
DATA ANALYSIS
Means and standard deviations were calculated for latency to contact (s), time to orient (s) and duration of interaction (s) for each species during each of the three novel object tests. Graphical representations of the 'time to orient', 'latency to contact' and 'duration of interaction' were used to compare differences between species and the tests. Qualitative descriptions of the observations and comparisons between species were the primary reporting method used in this study and the graphical representation with summary statistics helps support these.
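As a minimal sketch of these descriptive statistics, assuming the per-individual measures are stored in a long-format table (the file and column names are hypothetical):

```python
import pandas as pd

obs = pd.read_csv("novel_object_measures.csv")  # assumed columns: species, test,
                                                # orient_s, contact_s, duration_s

# Mean and standard deviation per species for each of the three novel object tests
summary = (obs.groupby(["species", "test"])[["orient_s", "contact_s", "duration_s"]]
              .agg(["mean", "std"]))
print(summary.round(1))
```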
RESULTS
The results of the novel object tests for all six species are listed in Table 2 and are presented in Figs. 3 and 4. The objects' effectiveness, in accordance with the characteristics of an effective test identified in the methodology, is displayed for each species in Table 1.
Stationary object
The stationary object elicited a range of responses between species. Neither the Barbary sheep nor the red tailed black cockatoos approached or interacted with the stationary object, and variation in individual responses was negligible (Fig. 3). The time it took for the Indian star tortoises to orient towards the stationary object varied, but no contact was made (Figs. 3 and 4), and of the three penguins that orientated towards the stationary object, two made contact for a brief period. In contrast, the red kangaroos varied between individuals in their response to the stationary object, differing in both time to orient and time to make contact (Figs. 3 and 4). On average the ring-tailed lemurs approached the stationary object the fastest out of the three objects; however, as with the kangaroos, individuals within the group varied in their response time (Fig. 3).
Moving object
Barbary sheep were slower to orient towards both the stationary object and mirror compared to the moving object (Fig. 4), though no physical contact was attempted (Fig. 3). The red tailed black cockatoos also displayed a faster orientation time to the moving object compared to the other objects. Still, none of the cockatoos made contact with the object (Fig. 3). The tortoises showed individual variation in orientation time towards the moving object (Fig. 4), with only one individual moving towards the object; however, no contact was made. The other tortoises made no movement towards the moving object from their original positions. The moving object caused the penguins to remain in the water making individual identification and orientation impossible to measure. No penguins approached and all maintained a large distance from the object. Both the kangaroos and lemurs showed individual variation in their interaction and orientation towards the moving object (Figs. 3 and 4).
Mirror
As with the stationary object, the Barbary sheep and the red tailed black cockatoos showed no approach or interaction with the mirror (Fig. 3). The time it took the tortoises to orientate towards the mirror varied (Fig. 4). Three out of five of the tortoises contacted the mirror, but none engaged with the other two novel objects. Compared to the other two objects, the mirror elicited a faster orientation for the little penguins (Fig. 4), a shorter latency to approach, and a longer duration of interaction. For the kangaroos, the mirror test yielded longer interaction times compared to the other two objects (Table 2). The ring-tailed lemurs, however, were slower to orientate to the mirror compared to the other two objects, and only two of the eight lemurs interacted with the mirror. Despite this, the duration of interaction with the mirror was much longer than with the stationary or moving objects (Table 2), even though only one lemur made physical contact with the mirror surface.
DISCUSSION
Using the three novel objects, the individual behavioural responses of six different species were evaluated to ascertain if it was possible to: (1) identify individual differences within the group; (2) motivate animals to notice and possibly interact with the objects; (3) suit the physical capabilities of a range of species without eliciting high levels of visible stress (fleeing, erratic behaviour, aggression) or disrupting routines. By addressing these three criteria, we increase the likelihood that an animal's failure to interact with any of the objects reflects a low level of curiosity rather than the objects being irrelevant to the species or fear-provoking. Using the three objects, individual variation in response within the normal enclosure was seen for four of the six species tested (little penguins, ring-tailed lemurs, Indian star tortoises and red kangaroos). For the red tailed black cockatoos and Barbary sheep, none of the novel objects elicited any interaction, so individual variation could not be assessed. The variation in response between and within species shows how important it is to have multiple objects with varied characteristics to be able to correctly identify curiosity across different species.
In studies involving animal motivation and response such as this one, evolutionary and ecological variation is important. The Barbary sheep and the red tailed black cockatoos were the two species that displayed avoidance behaviours with all three objects. While these species are very different, they share a few similar ecological traits, which may have contributed to the similarity in responses. In wild conditions Barbary sheep have large home ranges (Hampy, 1978), as do red tailed black cockatoos (Garnett & Franklin, 2014); however, in most zoos the size of their enclosure is a fraction of the natural range. Ungulates have been found to exhibit increased vigilance and fear towards external stimuli when habitat cover is decreased (Underwood, 1982a), when herd sizes are small (Underwood, 1982b) and when experiencing chronic stress such as repeated adverse handling (Destrez et al., 2013). Studies of exploration and neophobia in parrots show that seed-eating parrots have longer latencies to explore novel objects than bud-eating parrots, suggesting that evolving to suit an ecological niche has trade-offs in how animals perceive their environment (Mettke-Hofmann, Winkler & Leisler, 2002). Other potential ecological pressures were also seen with the little penguins. Penguins displayed land-avoidance when presented with the moving object, but not with the stationary object, suggesting that the moving object was fear-provoking for the penguins. The moving object was seen only from a distance while the penguins were floating on the water's surface, so it is possible that its form could not be interpreted correctly, as penguins are said to be near-sighted on land and far-sighted in water (Sivak & Millodot, 1977). However, it would serve the penguins well to be cautious of moving objects on land as they have native land predators such as sea birds and reptiles, as well as introduced species such as cats, dogs and foxes (Stahel, Gales & Burrell, 1987). Both the lemurs and kangaroos showed quick interactions with all three objects. These species both have an ability to adapt to harsh environmental conditions in the wild (Gould, Sussman & Sauther, 1999; Sharman & Pilton, 1964), so curiosity towards novel food items and environments would be an advantage for both species.
The sociability of a species may also influence how an animal responds to a novel object. Social interactions could affect an animal's response to novel objects in more complex ways than just the presence or absence of other individuals alone. As most penguins in the current study only interacted with the mirror, it seems likely that the reflective properties of the object appealed to them, and this characteristic would be the most likely to elicit a curious response from the species. The mirror's appeal may hinge on the fact that little penguins are a highly social species in which individuals travel between colonies and often interact with unfamiliar conspecifics (Reilly & Cullen, 1982). It is not possible to ascertain whether social interaction factored into their interest, as social behaviours towards the reflected images were not assessed. Further research into this possibility would shed light on what properties of the mirror were most appealing to the penguins. The ring-tailed lemurs, also social animals (Nakamichi & Koyama, 1997), did not show the same level of interest in the mirror as the penguins. The lack of interest seen in this study contradicts previous research, which found that ring-tailed lemurs prefer mirrored surfaces to standard surfaces even after repeated exposure (Fornasieri, Roeder & Anderson, 1990). As no evidence of self-recognition has been reported in ring-tailed lemurs (Inoue-Nakamura, 1997), reflected images would likely be perceived as a foreign individual. Interestingly, there were no signs of aggression towards the image, which would be expected when males encounter males from other groups (Nakamichi & Koyama, 1997). Perhaps because of a lack of other sensory indicators such as scent, sound and touch, the reflected image was perceived as more perplexing than threatening. In contrast, the star tortoises, which showed a preference for the mirrors, are not known for their social skills. Not only did they prefer the mirrors, but once one tortoise approached the mirror, others came to join from areas of the enclosure that were not in direct line of sight. It is difficult to tell whether the first individual to interact with the mirrors affected the response of the others. There is evidence of some social learning among red-footed tortoises (Chelonoidis carbonaria) (Wilkinson et al., 2010a), as well as an ability to follow the gaze direction of conspecifics (Wilkinson et al., 2010b), so a social effect might have been involved in the widespread response observed in this study.
In addition to ecological factors, previous life experience is also likely to influence an animal's interactions with its environment. The cockatoos used in this study are provided with minimal "unnatural" materials in terms of housing, substrates or enrichment. This lack of previous experience with novelty and variation could explain the heightened neophobic reactions observed, as environmental enrichment has been suggested to manage neophobia in parrots (Fox & Millam, 2004). Similarly, negative past experiences have been found to contribute to increased fear in sheep (Dwyer, 2004). The lemurs and kangaroos used in this study are both housed in "walk-through" enclosures, which afford them more novel stimulation through indirect interactions with visitors. From the results of this study alone, it is not possible to say whether the species' propensity for curiosity is nurtured by this type of housing or whether evolutionary history predisposes the ring-tailed lemurs and red kangaroos to be more curious and less fearful. Reduced life experience may also influence curiosity, as shown by the increased time the juvenile kangaroo spent with the mirror object compared to the adults in the group. Juveniles have been found to show increased curiosity towards novelty compared to adults in rats Rattus norvegicus (Douglas, Varlinskaya & Spear, 2003), neotropical raptors Milvago chimango (Biondi, Bó & Vassallo, 2010) and vervet monkeys Chlorocebus pygerythrus (Fairbanks, 1993). Increased curiosity during early life may be an important aspect of learning, as critical learning periods occur during this time (Knudsen, 2004). Further investigation involving different populations of the same species, with individuals of various ages housed in different enclosure conditions, would help determine whether enclosure design and exposure to varied enrichment protocols promote curious behaviour and reduce hesitation to interact with novel objects, or whether evolutionary history is most important.
Due to the myriad of potential factors contributing to an animal's behavioural response, it is easy to see how limited knowledge of a species makes creating appropriate objects for testing curiosity difficult. For example, reptiles have historically been deemed unintelligent. Even though recent studies have shed light on reptile navigation skills (Day, Ismail & Wilczynski, 2003; Wilkinson, Chan & Hall, 2007), social learning (Wilkinson et al., 2010a) and cognitive mapping (Lopez et al., 2001), we still know relatively little about them (Burghardt, 2013). It is probable that we are not appealing to the tortoises' propensities with our current novel objects. However, the way the star tortoises in this study actively sought out and interacted with the mirrors adds to growing evidence that reptiles have more cognitive needs than they are often given credit for. In understanding a species' umwelt, we not only increase the chance of selecting motivating objects but also increase the accuracy of our interpretation. For example, the fast orientation that the Barbary sheep showed towards the novel objects, especially the moving object, is a trait shared by other ungulates and domesticated ovids (Désiré et al., 2004). However, orientation may not be the best indication of attention in this species: domestic sheep have a 290° field of vision (Kendrick, 2008), making it possible that the objects were sighted in the Barbary sheep's peripheral vision before they orientated their bodies towards the objects, rendering orientation times for this species inaccurate. Behavioural responses may also be affected by the location of the object in penguins: because their vision differs on land and in water, the same objects placed under the water or on land may produce different results.
LIMITATIONS
There were two main limitations in this study, the first being small sample sizes and gender bias. Due to the nature of captive animals, many of the groups are small and of a single gender because of housing restrictions. Repeated studies using different populations would assist in reducing this issue; however, in the zoo environment this is unavoidable. It must also be mentioned that there are many limitations when testing a group of animals together, such as a lack of experimental control, differential access to the objects, reduced accuracy in large groups (Cronin et al., 2017) and social influences (Arakawa, 2006; Stöwe et al., 2006; Frost et al., 2007). However, evaluating animals in their everyday environment can reduce external modifiers such as unfamiliar environments, which can reduce the validity of the results (Cronin et al., 2017; Richter, Garner & Würbel, 2009). Cronin and colleagues (2017) identified the use of multiple identical objects placed around the testing area as a way to mitigate some of the dominance and vicinity issues faced in this study (as occurred in the stationary object tests with the red kangaroos and star tortoises), and this practice may be useful in further studies in this area. Additional information may also be obtained in future research by recording how many ways an individual tried to interact with the object; this may require extending the duration of the experiment.
CONCLUSION
This study has demonstrated that behavioural responses to novelty differed within and between species depending on the characteristics of the objects themselves. This is an important preliminary step in developing tests for curiosity across species, as it shows that there is a need for novel objects with a range of characteristics to allow for accurate assessment of curiosity. Even if one object promotes a curious response in all individuals within a population (such as the mirror with the penguins), additional objects that cause variation in individuals may show the effect of personal experience or personality on behaviour. Before implementing a test for curiosity, it would be advisable to identify multiple motivating stimuli within the species being studied to increase the chances of accurately capturing curiosity. This can be done by combining knowledge of both the sensory limitations of the species and the ecological niche it occupies in the wild, together with studies like this one to identify motivating characteristics of objects. By identifying common factors, such as husbandry practices or ecological similarities, shared by individuals and species which display curious and well-adjusted behaviours towards novelty, we may be able to modify management practices to improve the lives of captive animals.
ADDITIONAL INFORMATION AND DECLARATIONS Funding
This work was partly funded by an ARC collaborative grant with the University of Melbourne, Zoos Victoria and the Taronga Conservation Society (LP14100373). The remainder of the funding was obtained through the University of Melbourne, Faculty of Agriculture and Veterinary Science, as part of its master's program. There was no additional external funding received for this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
Grant Disclosures
The following grant information was disclosed by the authors: University of Melbourne, Zoos Victoria. Taronga Conservation Society: LP14100373.
"year": 2018,
"sha1": "81d1b8bb03fcac57b9d918ba36100ee86b31c187",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.4454",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "81d1b8bb03fcac57b9d918ba36100ee86b31c187",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
} |
In the Eye of the Beholder: How Self-Other Agreements Influence Leadership Training Outcomes as Perceived by Leaders and Their Followers
Based on Yammarino and Atwater's self-other agreement typology of leaders, we explored whether leaders' and followers' agreement influenced their ratings of leadership behaviors after training in which leaders received multi-source feedback to stimulate behavior change. We used a prospective study design including 68 leaders and 237 followers from a Swedish forest industry company. Leaders underwent training to increase their transformational leadership and contingent reward styles and reduce management-by-exception passive and laissez-faire leadership. We found that self-other agreement influences whether followers and leaders report changes in leadership styles. We also found that although some leader types were perceived to improve their leadership behaviors, leaders and followers reported differential patterns in which types of leaders improved the most. Our results have important implications for how feedback should be used to support training to achieve changes in leadership styles.
Introduction
Approximately 34% of leadership training programs do not achieve their intended outcomes (Avolio, Reichard, Hannah, Walumbwa, & Chan, 2009). Effective leadership training requires not only passive learning, but changes in leadership behaviors (Kirkpatrick, 1994;Nielsen, Randall, & Christensen, 2017). Leadership training often relies on leaders receiving feedback on their leadership style as a means to raise their self-awareness and promote leadership behavioral change (Slater & Coyle, 2014), and often in the form of multi-source feedback (Lacerenza, Reyes, Marlow, Joseph, & Salas, 2017). It is widely acknowledged that leaders and followers do not always agree on their leaders' behavior (for a meta-analysis, see Lee & Carpenter, 2018). A critical issue in multi-source feedback research is the extent to which leaders and followers agree on the leader's behaviors; this is also known as self-other agreement (SOA) (Fleenor, Smither, Atwater, Braddy, & Sturm, 2010).
The extent to which leaders and their followers agree on the leaders' behaviors prior to training may have implications for leaders' motivation to change their behaviors as a result of training and their followers' acknowledgement of any attempts to change behaviors. In the present study, we aim to understand how pre-training SOA influences the extent to which leaders change their leadership styles as a result of training when 180-degree feedback (leaders' own ratings and followers' ratings) is integrated into training. To the best of our knowledge, there have been no studies on how SOA affects perceptions of leaders' leadership style as a result of training. Understanding the links between feedback as input during leadership training and how different types of leaders develop in response to such feedback may help us understand differential changes in leadership styles. We focused on understanding how different SOA leader types predict changes in leadership style. We developed hypotheses as to which leader types may improve the most, as perceived either by themselves or by their followers. Such understanding may help predictions of how leaders react to training and feedback and provides valuable insights into how leadership training including feedback may be designed and what supportive interventions may need to be developed to ensure a successful training outcome.
The present study extends and contributes to existing theory and research in four ways: First, we explore the links between pre-training SOA and leaders' and followers' ratings of leadership styles post-training. Despite feedback being strongly related to changes in leadership behaviors, no differences between single-source feedback and multi-source feedback have been identified (Lacerenza et al., 2017). We propose that the impact of multi-source feedback may depend on self-other agreement. Lee and Carpenter (2018) highlighted that previous research on SOA has been mostly cross-sectional, with limited attention to how SOA may influence leadership-training outcomes. A few studies have focused on the effects of SOA feedback. Atwater and Brett (2005) found that favorable attitudes toward multi-source feedback led to higher levels of motivation after receiving feedback. Mackie (2015) found that over-estimators rated themselves lower on transformational leadership after receiving coaching. It should be noted that coaching may help leaders develop a more realistic self-image and become more self-aware, whereas leadership training focuses on changing leadership behaviors. To the best of our knowledge, how the integration of SOA feedback into leadership training impacts changes in leadership behaviors has yet to be explored.
Second, the study contributes to our understanding of how SOA impacts changes in leadership style across a range of full-range leadership styles (Avolio, 2011). Full-range leadership includes both desirable, constructive behaviors, such as transformational leadership and contingent reward that have a positive impact on follower's performance and well-being and undesirable, passive leadership styles, such as laissez-faire leadership and management-by-exception-passive (MBEP), which can have a negative impact on followers' performance and well-being (Arnold, 2017;Bass & Riggio, 2006;Harms, Credé, Tynan, Leon, & Jeung, 2017;Hoch, Bommer, Dulebohn, & Wu, 2018;Inceoglu, Thomas, Chu, Plans, & Gerbasi, 2018;Skakon, Nielsen, Borg, & Guzman, 2010;Wang, Oh, Courtright, & Colbert, 2011). Previous leadership training has focused on whether certain leadership behaviors can be improved (Lacerenza et al., 2017); however, it is an equally interesting question whether undesirable behaviors can be reduced. Leaders exert both transformational and passive leadership at different times (Mullen, Kelloway, & Teed, 2011), and so increasing transformational leadership does not guarantee that passive leadership behaviors such as laissez faire leadership reduce. In the present study, the training also aimed at reducing passive leadership behaviors, and we therefore studied whether laissez faire leadership and MBEP behaviors can be reduced.
Third, Yammarino and Atwater (1997) suggested that leaders who either agree or disagree with their followers on their leadership style are characterized by different traits, and these traits influence how they respond to feedback. They suggested four categories of SOA leaders: over-estimators; under-estimators; in-agreement, good leaders; and in-agreement, poor leaders. We use the Yammarino and Atwater (1997) categorization and suggest that in-agreement, good leaders rate themselves high on transformational leadership and contingent reward and low on undesirable leadership styles such as laissez faire and MBEP leadership, and their followers agree. In-agreement, poor leaders rate themselves low on transformational leadership and contingent reward and high on laissez faire and MBEP leadership, and their followers agree. Over-estimators (of their leadership competence) rate themselves higher on transformational leadership and contingent reward and lower on laissez faire and MBEP leadership than their followers do, and finally, under-estimators (of their leadership competence) rate themselves lower on transformational leadership and contingent reward and higher on laissez faire and MBEP leadership than their followers do. Previous SOA studies have primarily focused on whether leaders and followers agree rather than whether they agree the leader is "good" or "poor" (Amundsen & Martinsen, 2014; Lee & Carpenter, 2018). We extend this literature to understand how positive and negative in-agreement influence changes in leadership styles post-training.
Fourth, previous studies have primarily focused on whether leadership training can lead to changes in leadership styles as rated by followers (Barling, Weber, & Kelloway, 1996;Parry & Sinha, 2005). As followers and leaders do not always agree on the leadership style pre-training, leaders and followers may also differ in their perceptions of changes in leadership style post-training (Sosik, 2001;Tekleab, Sims Jr., Yun, Tesluk, & Cox, 2008).
Hypothesis Development
The context triggers behaviors (Tett & Guterman, 2000). Feedback at the beginning of a training program allows for meta-cognitive activities such as reflecting on which changes in leadership behaviors are required and which elements of training may be particularly useful for leaders to alter their behaviors (Ford, Smith, Weissbein, Gully, & Salas, 1998).
Feedback helps leaders gain insights into their current leadership styles and identify any discrepancies between actual and intended behaviors (Maurer, 2002).
We base our study on Yammarino and Atwater's (1997) model of SOA, which proposes that underlying traits of overestimators; under-estimators; in-agreement, good leaders; and in-agreement, poor leaders influence how they react to feedback. Since the seminal paper of Yammarino and Atwater (1997), a body of research has explored the links between personality and SOA (e.g., Bergner, Davda, Culpin, & Rybnicek, 2016) and how SOA influences organizational outcomes (e.g., Amundsen & Martinsen, 2014). We build our hypotheses not only on the predictions of Yammarino and Atwater (1997) but also on this more recent research.
Leaders' Self-ratings of Leadership Behaviors Post-training
In-agreement, good leaders are likely to respond positively to feedback as they recognize and accept followers' ratings (Lee & Carpenter, 2018; Yammarino & Atwater, 1997). Leaders who are in agreement with their followers about their leadership behaviors are believed to be able to identify their own strengths and weaknesses and set appropriate goals for themselves as to how to improve their behaviors; they have high levels of self-awareness (Fleenor et al., 2010; Lee & Carpenter, 2018). In-agreement, good leaders may feel encouraged to increase their transformational leadership and contingent reward and reduce laissez faire and MBEP leadership post-training because they receive positive feedback during training. Positive feedback is more accurately recalled than negative feedback, acts as a reinforcer of behaviors, and encourages recipients to set specific goals (Ilgen, Fisher, & Taylor, 1979). As these leaders possess high levels of self-efficacy (Yammarino & Atwater, 1997), they are likely to take on board any tools and methods to change their behaviors learned during training, and they may set realistic goals as to what behaviors need to change and how. In-agreement, good leaders are intuitive and may better understand what changes are needed (Bergner et al., 2016). Atwater and Brett (2005) found that self-efficacious leaders were more likely to engage in follow-up activities after receiving 360-degree feedback. High internal locus of control (Fleenor et al., 2010; Whetten & Cameron, 2007) means these leaders feel in control over their behaviors toward followers. Individuals engage in behaviors that reinforce their positive image (Korman, 1976). As in-agreement, good leaders receive positive feedback, they may continually seek feedback from followers, and as a result, they are likely to rate themselves higher on transformational leadership and contingent reward and lower on laissez faire and MBEP leadership post-training because training provides them with input on how to change their behaviors.
Over-estimators may perceive they successfully change their leadership styles. Over-estimators show little concern for others' perceptions (Moshavi, Brown, & Dodd, 2003) and may be less inclined to seek consultation from followers on what might be desirable leadership behaviors (Berson & Sosik, 2007). Over-estimators are high in self-monitoring and rely little on others as a source of feedback (Miller & Cardy, 2000). Over-estimators are believed to have an exaggerated sense of self-grandeur and independence, and thus, they may be less likely to take followers' feedback on board (Berson & Sosik, 2007). Their grandiose self-perception is not likely to be corrected as they will ignore the feedback given during training.
In a field experiment, Atwater, Waldman, Atwater, and Cartier (2000) found that over-estimators reduced their commitment to followers, and Brett and Atwater (2001) found that such leaders reacted to negative feedback with anger. When leaders do not believe their followers have a realistic view of their behaviors, critical evaluations are less likely to influence their behaviors (Bernardin, Dahmus, & Redmon, 1993). Over-estimators may perceive they are successful in increasing their transformational and contingent reward leadership behaviors and reducing their laissez faire and MBEP leadership behaviors when given advice on how to do so, disregarding the feedback of followers. Over-estimators possess narcissistic traits (Fleenor et al., 2010), and in an experiment, Robins and John (1997) found that narcissists rated themselves even higher after viewing their performance on video. Over-estimators' self-grandeur prevents them from seeking consistency with followers (Korman, 1976), but they may feel that training has made them even better leaders (Robins & John, 1997). Over-estimators are less likely to report changes than in-agreement, good leaders as they see less need for development (Yammarino & Atwater, 1997).
Under-estimators may perceive they benefit from training. Bratton, Dodd, and Brown (2011) found that under-estimators score higher on adaptability, and thus, they may be more willing to learn from training. The Pygmalion effect (Rosenthal & Jacobson, 1968) suggests that followers' positive expectations of the leader may have a positive influence. During training, leaders are given the tools and methods to change leadership behaviors, and under-estimators may be motivated to become better leaders. Amundsen and Martinsen (2014) suggested that feedback builds these leaders' self-confidence and helps them understand their position better. Training may build their confidence to try these behaviors at work (Tekleab et al., 2008). Under-estimators may thus increase their ratings post-training as they become more self-aware of how to improve their transformational leadership and contingent reward behaviors and gain confidence that they can reduce their laissez faire and MBEP behaviors after having been presented with the methods and tools to do so. As a result of the unexpected positive feedback, they may increasingly seek feedback from followers, which may help them improve their behaviors. Under-estimators' self-beliefs about their ability to be good leaders may increase; however, they are likely still to suffer from self-doubts (Yammarino & Atwater, 1997) and lack the confidence to fully engage with training, which may limit development in their self-ratings post-training.
Finally, in-agreement, poor leaders may not perceive they benefit from training. These leaders' self-confidence is low, and they rate themselves negatively (Tekleab et al., 2008). The Golem effect (Babad, Inbar, & Rosenthal, 1982) would suggest that followers' poor ratings only confirm the leaders' own negative self-view. Underachievers possess poor self-concepts and readily accept negative feedback (Ilgen et al., 1979). Negative feedback from followers that they are poor leaders may take an additional toll on leaders' self-confidence, and they continue to view themselves as poor leaders (Fleenor et al., 2010; Tekleab et al., 2008). In-agreement, poor leaders may suffer from learned helplessness (Amundsen & Martinsen, 2014). Although they do recognize themselves as weak leaders, they are unable to change due to low self-efficacy and poor self-confidence (Fleenor et al., 2010; Tekleab et al., 2008).
Despite training, in-agreement, poor leaders may be less likely to attempt to change their behaviors as they withdraw from their followers (Amundsen & Martinsen, 2014). Exploring performance as the outcome, Smither et al. (1995) found that leaders did not improve their performance after receiving feedback from their followers when they agreed with this feedback, even when this feedback was poor. We therefore hypothesize the following:

Hypothesis 1: In-agreement, good leaders will report the greatest increases in transformational leadership post-training; over-estimators will report the second highest increases; under-estimators will report the third highest increases; and in-agreement, poor leaders will report the least increases in transformational leadership.

Hypothesis 2: In-agreement, good leaders will report the greatest increases in contingent reward post-training; over-estimators will report the second highest increases; under-estimators will report the third highest increases; and in-agreement, poor leaders will report the least increases in contingent reward.

Hypothesis 3: The greatest reductions in management-by-exception-passive leadership post-training will be reported by in-agreement, good leaders; the second greatest reductions by over-estimators; the third greatest reductions by under-estimators; and in-agreement, poor leaders will report the least reductions in MBEP leadership.
Hypothesis 4: The greatest reductions in laissez faire leadership post-training will be reported by in-agreement, good leaders; the second greatest reductions by over-estimators; the third greatest reductions by under-estimators; and in-agreement, poor leaders will report the least reductions in laissez faire leadership.
Followers' Ratings of Leadership Behaviors Post-training
According to social cognition theory, people develop schemas to mentally organize and store information (Fiske, 1992). Followers observe their leader's style and store this information in cognitive schemas that they use to predict and judge the leadership behavior of the leader (Ilgen & Feldman, 1983). In other words, followers' ratings of their leaders pre-training predict how they view their leaders post-training. If followers have positive perceptions of their leaders pre-training, they are more likely to react favorably to any attempts to change leadership behaviors (Fleenor et al., 2010). For over-estimators and in-agreement, poor leaders, followers' negative schemas will only be revised if leaders make major changes to their behaviors (Fiske, 1992; Ilgen & Feldman, 1983). We propose followers of in-agreement, good leaders will report increased transformational leadership and contingent reward and reduced laissez faire and MBEP leadership. In-agreement, good leaders typically have a good relationship with and are valued by their followers (Berson & Sosik, 2007), and followers of in-agreement, good leaders feel supported by their leader (Sosik & Godshalk, 2000). As followers have positive schemas of their in-agreement, good leader, they are likely to react positively to their leader's attempts to increase transformational leadership and contingent reward and reduce their laissez faire leadership and MBEP. As followers know their leaders have been on training, they may additionally appreciate their leaders' commitment to improve, and this could enhance their positive leadership schemas. They will thus agree with their leaders that the in-agreement, good leaders have improved.
Under-estimators may be perceived as benefitting from training by their followers. As under-estimators are receptive to feedback from followers (Amundsen & Martinsen, 2014), they react to feedback and strive to increase their transformational leadership and contingent reward behaviors and reduce their laissez faire and MBEP leadership behaviors, especially as such leaders tend not to become complacent or overconfident (Amundsen & Martinsen, 2014). Moshavi et al. (2003) found that followers of under-estimators were satisfied with their leaders.
Under-estimators are humble toward the opinions of followers (Aarons, Ehrhart, Farahnak, Sklar, & Horowitz, 2017; Amundsen & Martinsen, 2014). Humility is related to agreeableness, which has been found to be related to transformational leadership and contingent reward (Bono & Judge, 2004). Humility makes under-estimators likeable (Amundsen & Martinsen, 2014). McKee, Lee, Atwater, and Antonakis (2018) found that followers of agreeable leaders over-rated their leader on instrumental leadership. Under-estimators' willingness for self-improvement and desire to meet followers' expectations (Yammarino & Atwater, 1997) may invoke followers' positive appraisals post-training. As a result of followers' positive schemas of their leader, they welcome any attempts to increase transformational leadership and contingent reward and reduce laissez faire leadership and MBEP, and they thus develop even more positive cognitive schemas of their leader post-training. Bratton et al. (2011) found that followers of under-estimators rated their leader higher on transformational leadership than over-estimators and in-agreement, poor leaders. Under-estimators are less likely to be rated higher than in-agreement, good leaders as they lack confidence in trying out new, challenging behaviors.
In-agreement, poor leaders who score themselves low on transformational leadership and contingent reward and high on laissez faire and MBEP leadership, and whose followers agree, may not be perceived to change much by their followers. Followers have negative schemas of these leaders and may not appreciate any attempts to change leadership styles. Although followers may be aware that their leaders have been on training, their negative schemas of their leader mean they anticipate that leaders are unable to benefit from training. Unless very drastic changes in leadership styles can be observed, followers are unlikely to change their ratings of their leader. Such dramatic changes are unlikely to happen as these leaders have poor self-efficacy (Amundsen & Martinsen, 2014). Followers will thus agree with these leaders that little progress has been made.
Followers are unlikely to report any changes to the leadership styles of the over-estimator leader. Bashshur, Hernández, and González-Romá (2011) suggested that when leaders' ratings are higher than those of their followers, the leaders enact passive leadership as they fail to understand the needs of their followers. It has been suggested that such leaders may be hostile towards their followers (Yammarino & Atwater, 1997). Sosik and Godshalk (2000) found that followers whose leaders overestimated themselves on transformational leadership reported low levels of support. If followers have in the past perceived their leader as arrogant and unapproachable (Fleenor et al., 2010), they are likely to continue to view their leader in this way. Atwater and Yammarino (1992) found a negative correlation between over-estimators and follower ratings of their transformational leadership behavior.
Followers see over-estimators as uncaring and self-centered (Sosik, 2001) and less emotionally competent (Wang, Wilhite, & Martino, 2016). Followers of over-estimators are the least satisfied with their leaders (Moshavi et al., 2003). Even if these leaders do make attempts to increase their transformational leadership and contingent reward and reduce their laissez faire and MBEP leadership, followers are unlikely to react positively to these attempts, as their schemas categorize their leader as uncaring and self-centered, and major changes in behaviors may be required to change these schemas. They are thus likely to disagree with their over-estimator leaders that they have changed their behaviors. We thus developed the following hypotheses:

Hypothesis 5: Followers will report the greatest increases in transformational leadership post-training if their leader is an under-estimator; the second highest increases if the leader is an in-agreement, good leader; the third highest if the leader is an in-agreement, poor leader; and the least increases in transformational leadership will be reported by followers whose leader is an over-estimator.

Hypothesis 6: Followers will report the greatest increases in contingent reward post-training if their leader is an under-estimator; the second highest increases if the leader is an in-agreement, good leader; the third highest if the leader is an in-agreement, poor leader; and the least increases in contingent reward will be reported by followers whose leader is an over-estimator.

Hypothesis 7: Followers will report the greatest reductions in management-by-exception-passive leadership if their leader is an under-estimator; the second greatest reductions will be reported by followers of in-agreement, good leaders, and the third greatest reductions by followers of in-agreement, poor leaders. Followers of over-estimators will report the least reductions in MBEP.

Hypothesis 8: Followers will report the greatest reductions in laissez faire leadership if their leader is an under-estimator; the second greatest reductions will be reported by followers of in-agreement, good leaders, and the third greatest reductions by followers of in-agreement, poor leaders. Followers of over-estimators will report the least reductions in laissez faire leadership.
Setting, Sample, and Procedure
The present study took place in the forest industry (Tafvelin, Hasson, Holmström, & von Thiele Schwarz, 2019) using a prospective study design. All leaders in the organization (N = 101) participated. Leaders were asked to invite five followers to provide ratings of their leadership behaviors, resulting in an average of 3.5 followers (range 1 to 5) responding to the questionnaires. Leaders were asked to select both followers they felt close to and had regular contact with and followers with whom they had less frequent contact. Leaders and their followers were sent an e-mail with a personal link to an online questionnaire together with information about the training and its evaluation. The study was approved by the local ethics review board.
For the purpose of the present study, we only included data from respondents who consented to their data being used in research. In total, 68 leaders and their 237 corresponding followers were included, yielding a response rate of 77% for leaders and 82% for followers (237 out of 290 invited followers). The majority of participants were men (leaders 74%; followers 85%), and the average age was 46 years for both leaders and followers (leaders SD 8.70, followers SD 8.87). Leaders had an average tenure of 5 years (SD 5.52) and followers an average tenure of 23 years (SD 9.9 years).
The Training
The leadership training was initiated by the organization and formed part of an overall cultural change initiative in the company. Management wanted to foster an organizational culture focused on learning where followers took responsibility for the success of the company rather than merely operating their machines. Management saw full-range leadership as an important component in this change. The training was delivered by organizational psychologists from an occupational health company. It was on the recommendation of the occupational health company that the organization invited the research team to evaluate the training. Leaders received 20 days of training over a period of 16 months. Training was extensive as it was developed to support the wider organizational cultural change.
The training was conducted in cross-departmental groups with 20 leaders in each group. The exception was a 30-min individual session where each leader was provided with an individual feedback report on their leadership styles and offered the opportunity to discuss it with one of the consultants. The report included graphs showing the means and variation of all rating sources and norms for the scales. Leaders were not provided information on which group they belonged to, but information was provided on how they compared with the MLQ norms (3 or higher on transformational leadership, between 2 and 3 on contingent reward, and between 0 and 1 on MBEP and laissez faire) to give them an idea of what they might want to develop. This individual session was offered after the first training session, where a theoretical explanation of the full-range leadership model was presented. All leaders participated in the individual feedback session. The feedback session was provided to give room for individual reflection and questions and aimed to build motivation for the action plans developed during leadership training.
Multiple training methodologies were employed in line with recommendations for effective leadership training (Cacioppe, 1998; Lacerenza et al., 2017). The training was based on full-range leadership theory (Avolio, 2011) combined with a functional perspective emphasizing how leader behaviors influence follower motivation and behaviors, based on operant psychology (Skinner, 1963). This approach was chosen to align with the organization's leadership strategy. The training started with a theoretical block (14 days), followed by a practical block for the remaining time (6 days). The theoretical block included didactic sessions focusing on full-range leadership, organizational change, and follower motivational processes, thus relating leadership to follower motivation, particularly during organizational change. The practical block focused on leaders developing action plans and following up on their implementation. Leaders were expected to apply learning on leadership, organizational change, and motivational processes working through a practical case together with their followers. Action plans focused on topics such as facilitating collaboration and improving information exchange.
Measures
Leaders and followers completed the questionnaire 1 month before training and immediately after the leadership training was completed.
Leadership behaviors were measured using the Multifactor Leadership Questionnaire (MLQ-Form 5X) (Avolio & Bass, 2004). Transformational leadership was assessed with 20 items covering the four subcomponents of transformational leadership: idealized influence, inspirational motivation, intellectual stimulation, and individualized consideration. In line with the original MLQ scale, the contingent reward, MBEP, and laissez-faire leadership dimensions were each measured with 4-item scales. For all scales, ratings were made on a 5-point response scale ranging from 1 (never) to 5 (frequently, if not always), where leaders rated how often they engaged in each behavior, and followers rated how often the manager they were rating engaged in each behavior. The internal consistency of the MLQ subscales at baseline was as follows: transformational leadership 0.91 (followers) and 0.84 (leaders), contingent reward 0.80 (followers) and 0.52 (leaders), MBEP 0.58 (followers) and 0.62 (leaders), and laissez-faire 0.78 (followers) and 0.59 (leaders).
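The internal consistencies reported above are Cronbach's alpha coefficients. A minimal sketch of the computation, assuming `items` is an n-respondents-by-k-items array of ratings for one MLQ subscale:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of scale totals)
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of the k item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars / total_var)
```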
Analyses
To test how SOA influences leaders' and followers' perceptions of changes in leadership behaviors post-training, we used polynomial regression with response surface analysis (Edwards, 1994; Humberg, Nestler, & Back, 2019). This analysis enables us to examine the combined impact of two variables on a third while retaining information about the differences between the variables. It is the recommended analysis when SOA is the independent variable (Edwards, 2002), as it keeps measures of self- and other-ratings separate while also incorporating higher-order terms, such as squared and interaction terms, which enable tests of more elaborate effects (Humberg et al., 2019). We followed the three-step procedure outlined by Shanock, Baran, Gentry, Pattison, and Heggestad (2010), also recommended by Gibson, Cooper, and Conger (2009).
First, agreement and disagreement between leaders and followers was investigated to confirm whether the level of disagreement was sufficient to warrant further analysis. At least 10% of the leaders need to disagree with their followers, with disagreement defined as a difference of at least 0.5 SD in the standardized mean scores on the two predictors, to make further analyses meaningful (Fleenor & Prince, 1997). To classify the leaders into the four categories, we therefore standardized the scores for self and followers, and a leader with a standardized self-rated score half a standard deviation above their followers' scores was categorized as an over-estimator. A leader with a standardized self-rated score half a standard deviation below their followers' scores was categorized as an under-estimator. Leaders within these limits were categorized as in agreement with followers (Fleenor, McCauley, & Brutus, 1996). Leaders who were in agreement and rated by their team above the mean score were classified as in-agreement, good, whereas leaders who were in agreement with their followers but rated by the team below the sample mean were classified as in-agreement, poor. This classification, based on theory, is only used for descriptive purposes and to aid interpretation of the response surface analysis; it is not used in the polynomial regressions, where continuous variables are used.
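A sketch of this classification rule for one (desirable) leadership scale follows; the function and variable names are ours, and for the undesirable styles (MBEP and laissez faire) the polarity of "good" versus "poor" would plausibly be reversed, which the text does not spell out.

```python
import pandas as pd

def classify_soa(self_rating: pd.Series, follower_rating: pd.Series) -> pd.Series:
    # Standardize both rating sources across the leader sample
    zs = (self_rating - self_rating.mean()) / self_rating.std(ddof=1)
    zf = (follower_rating - follower_rating.mean()) / follower_rating.std(ddof=1)
    diff = zs - zf
    out = pd.Series(index=self_rating.index, dtype=object)
    out[diff > 0.5] = "over-estimator"             # self exceeds followers by 0.5 SD
    out[diff < -0.5] = "under-estimator"           # self falls 0.5 SD below followers
    agree = diff.abs() <= 0.5
    out[agree & (zf >= 0)] = "in-agreement, good"  # team rating above sample mean
    out[agree & (zf < 0)] = "in-agreement, poor"   # team rating below sample mean
    return out

# e.g., classify_soa(self_tfl, follower_tfl).value_counts(normalize=True)
# yields proportions comparable to those reported in the Results section.
```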
Second, polynomial regression analyses were conducted, one for each of the four leadership styles, for both follower and leader ratings. These analyses were performed on scale-centered variables to aid interpretation of the findings (Edwards, 1994). Time 2 ratings of leadership behavior were regressed on leaders' ratings, followers' ratings, the cross-product of leaders' and followers' ratings, and the squares of leaders' and followers' ratings of the leaders' leadership behaviors at Time 1. If the predictors explain significant variance in the outcome variable (i.e., the R² of the polynomial regression is significant), further analyses are justified, which include calculating the four surface test values a1, a2, a3, and a4 based on the unstandardized regression coefficients (Atwater, Ostroff, Yammarino, & Fleenor, 1998). Tables 4 and 5 show the equations for the surface tests and the results.
In the third step, the surface test values were plotted in graphs. The four surface test values represent the slopes and curvatures of two lines. The first line, the "line of perfect agreement," runs diagonally from the nearest to the farthest corner of the graph. a1 is the slope along the line of perfect agreement and represents how agreement between the predictors relates to the outcome. a2 is the curvature along the line of perfect agreement and shows whether this relationship (between agreement and outcome) is linear or non-linear, that is, whether outcomes differ depending on whether the ratings are high and in agreement or low and in agreement. The second line, called the "line of incongruence," runs diagonally from the left to the right corner. The slope along the line of incongruence is reflected by a3 and the curvature by a4. As with the line of perfect agreement, the curvature (a4) shows how disagreement between the predictors relates to the outcome, and the slope (a3) shows whether the direction of disagreement matters.
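To make the procedure concrete, the following Python sketch estimates the polynomial regression and derives the four surface test values from its unstandardized coefficients, following the standard formulas in Shanock et al. (2010). This is an illustration under assumptions (statsmodels in place of the software actually used; centering at the midpoint of the 1-5 scale), not the study's own code:

import numpy as np
import statsmodels.api as sm

def surface_values(self_t1, follower_t1, outcome_t2):
    # scale-center the Time 1 predictors (midpoint of a 1-5 scale)
    S = np.asarray(self_t1, float) - 3.0
    O = np.asarray(follower_t1, float) - 3.0
    X = sm.add_constant(np.column_stack([S, O, S**2, S * O, O**2]))
    fit = sm.OLS(np.asarray(outcome_t2, float), X).fit()
    b = fit.params                  # b0, b1 (self), b2 (other), b3, b4, b5
    a1 = b[1] + b[2]                # slope along line of perfect agreement
    a2 = b[3] + b[4] + b[5]         # curvature along line of perfect agreement
    a3 = b[1] - b[2]                # slope along line of incongruence
    a4 = b[3] - b[4] + b[5]         # curvature along line of incongruence
    return fit, {"a1": a1, "a2": a2, "a3": a3, "a4": a4}

The significance of the model R² can be read from the fit object, and the returned values correspond to the a1-a4 statistics reported in Tables 4 and 5.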
To justify aggregation of the follower data to the team level, intraclass correlation coefficients (ICC(1)s) and within-group agreement (rWG(j)) statistics were calculated. As presented in Table 1, the ICCs were all positive, and all except one were significant. The mean rWG(j)s were above 0.80 for all scales. Overall, the analyses support the aggregation of follower ratings.
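As a rough illustration of these aggregation statistics, a Python sketch is given below. It assumes a uniform null distribution for rWG(j) (variance (A² - 1)/12 for A response options) and a one-way ANOVA estimator for ICC(1); using the mean group size for unbalanced teams is an approximation:

import numpy as np

def rwg_j(ratings, n_options=5):
    # ratings: array of shape (n_raters, n_items) for one team
    sigma_eu = (n_options**2 - 1) / 12.0
    mean_var = np.var(ratings, axis=0, ddof=1).mean()
    j = ratings.shape[1]
    num = j * (1 - mean_var / sigma_eu)
    return num / (num + mean_var / sigma_eu)

def icc1(groups):
    # groups: list of 1-D arrays of follower ratings, one array per team
    k = np.mean([len(g) for g in groups])            # mean team size
    grand = np.concatenate(groups).mean()
    msb = sum(len(g) * (g.mean() - grand) ** 2 for g in groups) / (len(groups) - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (
        sum(len(g) for g in groups) - len(groups))
    return (msb - msw) / (msb + (k - 1) * msw)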
Results
Descriptive statistics and correlations between study variables are presented in Tables 2 and 3. Table 3 shows that the correlations between leader and follower ratings of the four leadership behaviors at Time 1 are all non-significant, indicating that variation exists between the ratings of leaders and followers and that perceptual distance analyses are warranted.
In line with the procedure outlined by Shanock et al. (2010), we first analyzed the level of agreement between leaders' and followers' perceptions of leadership before training. For transformational leadership, 19% of leaders fell in the in-agreement, good leader category, whereas 11% were in-agreement, poor leaders; 33% were over-estimators; and 37% were under-estimators. For contingent reward, 17% were in-agreement, good leaders; 11% were in-agreement, poor leaders; 37% were over-estimators; and 35% were under-estimators. For MBEP leadership, 16% were in-agreement, good leaders; 15% were in-agreement, poor leaders; 32% were over-estimators; and 37% were under-estimators. Finally, for laissez-faire leadership, 20% were in-agreement, good leaders; 9% were in-agreement, poor leaders; 36% were over-estimators; and 36% were under-estimators. Thus, the discrepancies in leader and follower ratings were larger than 10% (Fleenor & Prince, 1997), warranting polynomial regression analyses.
We analyzed the impact of SOA pre-training on leaders' self-ratings of leadership styles after training. Four polynomial regressions were performed, one for each leadership style (see Table 3). Self-other agreement on transformational leadership before training explained significant variance in leaders' self-ratings of transformational leadership after training. The same was true for all other leadership styles. We therefore calculated surface test values, a1-a4, for all four leadership styles at Time 2, which are presented in Table 4. The surface test values were then used to graph and interpret the results from the polynomial regressions.
Hypothesis 1 proposed that the highest increases in transformational leadership post-training would be reported by in-agreement, good leaders; second highest for over-estimators; third highest for under-estimators; and the lowest for in-agreement, poor leaders, as rated by leaders themselves. Testing Hypothesis 1, the slope along the line of perfect agreement was positive and significant (a1 = 1.27, t = 5.41, p < .001), suggesting that in-agreement, good leaders reported that they increased more than in-agreement, poor leaders. The slope along the line of incongruence was positive and significant (a3 = 1.09, t = 5.25, p < .001), indicating that over-estimators perceived that they increased their transformational leadership more than under-estimators post-training. As seen in Fig. 1, we were unable to differentiate between in-agreement, good leaders and over-estimators, or between in-agreement, poor leaders and under-estimators. Thus, the pattern of results partially supports Hypothesis 1: in-agreement, good leaders rated that they increased their transformational leadership more than in-agreement, poor leaders, and over-estimators reported greater increases than under-estimators; however, in-agreement, good leaders and over-estimators increased equally, as did in-agreement, poor leaders and under-estimators.

A similar pattern was found for Hypothesis 2, testing self-rated increases in contingent reward. The slope along the line of perfect agreement was positive and significant (a1 = 0.92, t = 4.31, p ≤ .001), suggesting that in-agreement, good leaders reported that they increased their contingent reward behaviors more than in-agreement, poor leaders. The slope along the line of incongruence was positive and significant (a3 = 1.09, t = 5.25, p < .001), indicating that over-estimators perceived that they increased more than under-estimators post-training. Again, as seen in Fig. 2, we were unable to differentiate between in-agreement, good leaders and over-estimators, or between in-agreement, poor leaders and under-estimators. Thus, Hypothesis 2 was partially supported.

Testing Hypothesis 3, for self-rated development of MBEP, the slope along the line of perfect agreement was non-significant (a1 = 0.01, t = 0.01, p = .991), and we thereby failed to detect any differences between in-agreement, good and in-agreement, poor leaders. In addition, the slope along the line of incongruence was non-significant (a3 = 0.71, t = 1.36, p = .177), suggesting no difference between over-estimators and under-estimators of passive leadership post-training. Therefore, no support was established for Hypothesis 3.
The results for Hypothesis 4 and self-rated laissez-faire leadership were similar to those for Hypothesis 3. The slopes along both the line of perfect agreement (a1 = 0.42, t = 0.65, p = .516) and the line of incongruence (a3 = 0.15, t = 0.34, p = .736) were non-significant, leading to the rejection of Hypothesis 4.
To test Hypotheses 5-8, we analyzed the impact of SOA before training on followers' ratings of leadership post-training. Again, four polynomial regressions were performed, one for each leadership style (see Table 4). The predictors explained significant variance post-training. We therefore calculated surface test values, a1-a4, for all four leadership behaviors post-training, which are presented in Table 5.
Hypothesis 5 proposed that followers would report the greatest increases in transformational leadership for under-estimators; second highest for in-agreement, good leaders; third highest for in-agreement, poor leaders; and the lowest increases for over-estimators. Testing Hypothesis 5, the slope along the line of perfect agreement was positive and significant (a1 = 0.75, t = 2.66, p = .001), suggesting that in-agreement, good leaders increased their transformational leadership more than in-agreement, poor leaders. The slope along the line of incongruence was negative and significant (a3 = -0.73, t = -3.01, p = .004), indicating that under-estimators increased more than over-estimators post-training. As seen in Fig. 3, under-estimators increased their transformational leadership more than in-agreement, good leaders. Contrary to expectations, over-estimators were perceived by their followers to increase more than in-agreement, poor leaders. The pattern of results for transformational leadership only partially supports Hypothesis 5.
The results for Hypothesis 6, regarding followers' ratings of improvements in contingent reward, revealed that the slope along the line of perfect agreement was positive and significant (a1 = 0.84, t = 4.02, p < .001), suggesting that in-agreement, good leaders increased their contingent reward leadership more than in-agreement, poor leaders. Contrary to the findings for transformational leadership, the slope along the line of incongruence was non-significant (a3 = -0.41, t = -1.95, p = .054), and we therefore failed to detect any significant differences between over-estimators and under-estimators. As seen in Fig. 4, contingent reward post-training was highest for in-agreement, good leaders, followed by under-estimators and over-estimators, and lowest for in-agreement, poor leaders as rated by followers. Hypothesis 6 was therefore partially supported.
Testing Hypothesis 7 and reductions in MBEP, the slope along the line of perfect agreement was positive and significant (a1 = 1.07, t = 2.13, p = .037), suggesting that in-agreement, good leaders (i.e., those with low levels of MBEP) reduced their MBEP more than in-agreement, poor leaders (i.e., those with high levels of MBEP) as rated by their followers. The slope along the line of incongruence was negative and significant (a3 = -0.98, t = -2.72, p = .008), indicating that under-estimators reduced their MBEP leadership behaviors more than over-estimators according to followers. As seen in Fig. 5, the lowest ratings of MBEP after training were found for in-agreement, good leaders, followed by under-estimators. In addition, it was not possible to detect any differences between over-estimators and in-agreement, poor leaders in post-training MBEP leadership behaviors. Our pattern of findings therefore partially supports Hypothesis 7.
Finally, testing Hypothesis 8 and reductions in laissez-faire leadership, the slope along the line of perfect agreement was non-significant (a1 = -0.48, t = -0.75, p = .455), suggesting no difference between in-agreement, good or in-agreement, poor leaders. The slope along the line of incongruence was negative and significant (a3 = -1.00, t = -2.29, p = .025), indicating that under-estimators reduced their passive leadership behaviors more than over-estimators. As presented in Fig. 6, the highest ratings of laissez-faire leadership after training were found for over-estimators, whereas lower ratings were found for under-estimators and for in-agreement, good and in-agreement, poor leaders. Thus, Hypothesis 8 was partially supported.
Discussion
In the present paper, we aimed to extend current theory and research on how SOA may influence the effects of leadership training when multi-source feedback is integrated into training. We explored these effects in a leadership training program aimed at increasing transformational leadership and contingent reward and reducing management-by-exception passive and laissez-faire leadership. Based on the seminal paper of Yammarino and Atwater (1997) and more recent empirical research (for reviews, see Fleenor et al., 2010; Lee & Carpenter, 2018) suggesting that over-estimators; under-estimators; in-agreement, good leaders; and in-agreement, poor leaders possess different traits, we hypothesized that there are differences in the extent to which leaders change their leadership behaviors post-training, as rated by leaders themselves and their followers, depending on their SOA type. Our first to fourth hypotheses stated that leaders would rate themselves differently post-training based on their SOA. We suggested that in-agreement, good leaders would rate the highest improvements, followed by over-estimators, under-estimators, and in-agreement, poor leaders. These results lend overall support to the Yammarino and Atwater (1997) assumption that in-agreement, good leaders improve the most according to themselves; however, they do not support the claim that over-estimators have the worst outcomes. We found support that in-agreement, good leaders reported the greatest increases in transformational leadership and contingent reward compared with in-agreement, poor leaders. As suggested by previous research (Lee & Carpenter, 2018), these leaders may feel comfortable with receiving feedback and may actively use this feedback to develop action plans during training that help them develop their transformational and contingent reward leadership.
The Golem effect (Babad et al., 1982) seemed to be at play when considering leaders' own ratings. When leaders and followers both rated the leader negatively, leaders did not report improvements in their leadership behaviors. Receiving feedback that followers agree with oneself did not induce the same change when leaders and followers agreed that the leader was poor as when they agreed that the leader was good. One possible explanation may be that in-agreement, poor leaders distrust their ability to change. Receiving training in how to change behaviors may not necessarily increase in-agreement, poor leaders' confidence that they are capable of change, suggesting that these leaders do not benefit much from receiving feedback during leadership training. Other explanations may be that these leaders are not motivated to change (Smither et al., 1995), or that they simply do not have the ability to change despite training (Yammarino & Atwater, 1997). The prediction by Yammarino and Atwater (1997) concerning the negative outcomes of in-agreement, poor leaders was confirmed. We found no support that leaders rated themselves differently post-training on laissez-faire and MBEP leadership. It is possible that leaders do not perceive that they enact such undesirable behaviors or that training did not focus sufficiently on how to reduce these behaviors. The leaders in the organization in the present study were mostly externally recruited and were university graduates rather than having been promoted through the ranks. This may mean that they had a greater awareness of the importance of active leadership prior to training.
Our fifth to eighth hypotheses suggested that (a) followers would perceive transformational leadership and contingent reward to have increased the most, and (b) laissez-faire and MBEP leadership to have been reduced the most, for under-estimators, followed by in-agreement, good leaders and then over-estimators; finally, in-agreement, poor leaders would be perceived by followers to improve the least. We found that, with regard to transformational leadership, followers rated under-estimators as improving their behaviors to a higher extent than in-agreement, good leaders (Hypothesis 5). This finding lends support to the Pygmalion effect (Rosenthal & Jacobson, 1968); under-estimators take on board the positive feedback from their followers, and this motivates them to improve their leadership behaviors, and followers appreciate these changes. A possible explanation may be that these leaders experienced a confidence boost by receiving positive feedback from their followers. This, in combination with having received the tools and methods needed to change style and the requirement to develop action plans during training, increased under-estimators' transformational leadership behaviors as acknowledged by their followers, who already held positive schemas of their behaviors. Yammarino and Atwater (1997) predicted mixed results for these leaders, and it would appear that in our case, followers appreciated their attempts to change, even if leaders themselves did not perceive they changed much.
In-agreement, good leaders were perceived by followers to improve more on contingent reward compared with under-estimators (Hypothesis 6), but not on transformational leadership (Hypothesis 5). It is possible that transformational leadership is a less concrete style compared with contingent reward, and therefore, under-estimators may benefit more from followers' feedback enabling them to know how to increase transformational leadership behaviors. Contrary to expectations, we found that over-estimators were rated by followers to increase their transformational and contingent reward leadership more than in-agreement, poor leaders. These results suggest that self-confidence plays an important role; over-estimators have faith in their own leadership ability, and negative feedback from followers during training may inspire them to improve their behaviors, despite the prediction from Yammarino and Atwater (1997) that these leaders have the most negative outcomes. Our results suggest that these leaders may not be perceived by their followers as being as narcissistic and uncaring as suggested by Fleenor et al. (2010) and that followers do perceive and rate attempts of over-estimators to change leadership styles post-training; followers at least partly change their negative schemas of these leaders.
Followers rated in-agreement, good leaders as reducing their MBEP leadership the most, followed by under-estimators. These findings support Yammarino and Atwater (1997) and suggest that although leaders do not perceive that they reduce these behaviors, changes are observed by their followers. Testing laissez-faire leadership (Hypothesis 8), we found that under-estimators reduced their laissez-faire leadership more than over-estimators. It is possible that the negative feedback resulted in over-estimators becoming withdrawn, and thus, any changes to laissez-faire leadership are unlikely to be observed.
Implications for Research
Overall, we found support for the assumption that leaders react differentially to training depending on their levels of agreement with their followers (Yammarino & Atwater, 1997). The underlying individual and personality traits of SOA leader types may explain whether leaders changed their behaviors post-training as perceived by both followers and leaders themselves. Our results suggest that the degree to which leaders are able to identify and react to followers' feedback may have an impact on whether they change leadership styles as a result of training, as perceived by themselves and their followers. Our results were not straightforward. For some leadership behaviors, e.g., contingent reward, in-agreement, good leaders were found to improve the most, but for transformational leadership, followers reported more improvements in under-estimators than in in-agreement, good leaders. Although progress has been made in understanding the underlying traits of SOA leaders, we need to understand more about the traits of the different SOA leader types before we can make reliable predictions about how and why training outcomes differ.
We found that leaders and followers may have differential views on which type of SOA leader changed the most. In-agreement, good leaders reported the greatest changes both for transformational leadership and contingent reward. Followers concurred only for contingent reward; they perceived under-estimators to increase their transformational leadership behaviors more, as we hypothesized. For MBEP and laissez-faire leadership, the different types of leaders (e.g., in-agreement, good leaders) did not differ in their self-ratings, but followers rated the greatest decreases in laissez-faire leadership for under-estimators and the greatest decreases in MBEP for in-agreement, good leaders. Our results suggest that the Yammarino and Atwater (1997) typology of SOA leaders needs to be refined, as outcomes of SOA depend on the eyes of the beholder, be it leaders themselves or followers.
Implications for Practice
Our study has important implications for practice. Lacerenza et al. (2017) found no difference between leaders who received single-source feedback and multi-source feedback; however, our results suggest that the impact of multi-source feedback depends on the extent to which leaders are in agreement with their followers and whether such agreement is positive or negative. Feedback on leadership styles should include information about the level of agreement, including whether followers rate leaders higher or lower than they rate themselves or agree with them, rather than just presenting ratings and leaving leaders in training to draw their own conclusions about agreement. Our results suggest that in-agreement, poor leaders find it difficult to change their style, implying that providing tools and methods for change during training is insufficient if there is a lack of confidence, motivation, and/or ability to make changes. Additional training aimed at increasing self-confidence and self-efficacy may be needed, for example through web-based training where leaders learn about self-efficacy and complete tasks on how they can act differently in challenging situations (Luthans, Avey, & Patera, 2008). Training these leaders in self-efficacy prior to leadership training may help ensure a successful outcome of such training. Leader development programs could include educational components focused upon the utility of multi-source feedback to encourage positive attitudes. Furthermore, our results indicate that over-estimators continue to over-rate themselves post-training. Alternative ways of supporting leader change may be required for over-estimators, e.g., integration into key performance indicators and additional efforts to motivate leadership style change. Over-estimators who have received unexpected negative feedback from their followers during training may need support to take on board rather than reject feedback. Feedback sessions during training may be structured so that over-estimators receive feedback in a group of leaders with the same profile. If other leaders receive similar patterns of feedback, it may seem less threatening and a group feeling of "we can change" may occur. If feedback is given in one-to-one sessions, as in our study, additional support to motivate over-estimators should be provided.
In addition to tailoring training to the four types, steps could be taken to create alignment between leaders' and their followers' perceptions. Ostroff, Atwater, and Feinberg (2004) also found that more experienced leaders tended to overrate themselves, and thus, ongoing sense checks should be put in place, where feedback is provided to leaders about their leadership styles. Adjustments to leaders' self-ratings may be supported through regular discussions and tools supporting reflective work practices, where leaders and followers jointly reflect on leaders' behaviors and what changes need to be made. As found by Mackie (2015), coaching may help over-estimators develop a more realistic self-image.
Many companies conduct annual attitude surveys on working conditions, and often include leadership items in these. Compiling and feeding back SOA on the results of the survey may facilitate reflection in leaders. Feeding back SOA on working conditions, rather than just leadership, may feel less threatening to leaders, and they may be more open to exploring disagreements with their followers. Ostroff et al. (2004) found span of control to be an antecedent of disagreement, i.e., the more followers a leader had, the greater the disagreement. Organizing work in smaller teams may thus be one way of increasing SOA; however, care must also be taken to ensure that leaders and followers interact and engage in reflections on their working conditions and leadership practices.
Strengths and Limitations
A clear strength of our study is the prospective study design, allowing us to evaluate the effects of multi-source feedback integrated into leadership training. The training was designed according to best practices: training needs analysis, face-to-face training, spacing of sessions over time, external instructors delivering the training, and practice-based learning methods (Lacerenza et al., 2017). The multi-source and prospective nature of our data means common method bias is unlikely to pose a threat to our results (Podsakoff, Mackenzie, & Podsakoff, 2012). Furthermore, the multilevel data (Hox, 2010) and the polynomial regressions, which include interactions (Siemsen, Roth, & Oliveira, 2010), also reduce this threat.
To the best of our knowledge, this is the first study to explore the impact of multi-source feedback pre-training on changes in leadership post-training, thus providing valuable insights into how the social context influences training outcomes; however, our study is not without limitations. First, the study took place in one organization, which limits generalizability. Future research should explore how our results translate to other settings.
Second, a limitation which this study shares with other studies on SOA is the statistical calculation of agreement and disagreement (Fleenor et al., 2010;Lee & Carpenter, 2018). Although this allows for an objective calculation, it fails to capture whether leaders and followers themselves experience disagreement or alignment of their perceptions of constructive and passive leadership. It may be worthwhile including such subjective measures to test whether they explain outcomes of training above and beyond these objective calculations.
Third, we measured leadership styles before and after training. SOA before training was found to be related to changes in leadership styles post-training; however, we were unable to make observations during the training as to how feedback influenced leaders' attitudes and behaviors. We are thus unable to establish causality and the mechanisms by which SOA influences leaders' reactions to training. Future studies should supplement before and after measurements with observations of how leaders react to feedback and how they subsequently engage with training activities.
Fourth, our outcome measure was leadership as rated by leaders and followers. This outcome is appropriate given the aim of this study (exploring how perceptions of leadership change); adding objective measures of leadership effectiveness would add another dimension to the study.
Fifth, we asked leaders to select which of their followers should participate in the study. This choice potentially leads to bias; however, an indication that leaders were not biased in selecting which followers to invite is the fact that followers did disagree with their leaders on their leadership styles.
Sixth, the relatively small sample size (68 leaders and 237 followers) potentially contributed to low statistical power in our analyses. Therefore, our results should be interpreted in light of the possibility of a Type-II error. A sensitivity analysis revealed that this sample, with a power of 0.80, can detect a medium-sized (f² = 0.17) change in R² going from a two-main-effects model to a polynomial model, in terms of adding the interaction and two quadratic terms (Faul, Erdfelder, Buchner, & Lang, 2009); a computational sketch of this calculation is given below.

Finally, we focused on full-range leadership. In recent years, transformational leadership has been criticized for lacking a clear conceptual definition, being confused/confounded with charismatic leadership, being defined in terms of its outcomes of effectiveness, and using problematic measures to capture the concept (Antonakis, Bastardoz, Jacquart, & Shamir, 2016; van Knippenberg & Sitkin, 2013). More recently, it has been argued that rather than abandoning transformational leadership altogether, the theory itself holds value (Siangchokyoo, Klinger, & Campion, 2020); however, the application of the theory has suffered as it became dominant in the leadership field too early in its development, and it is necessary to go back to the drawing board and address some of the issues. One of the strengths of transformational leadership is the strong evidence linking transformational leadership to follower well-being (Arnold, 2017; Harms et al., 2017; Inceoglu et al., 2018; Skakon et al., 2010), suggesting it does have a positive impact on outcomes not inherent in its conceptualization, i.e., performance. Furthermore, transformational leadership theory does offer explanatory value in relation to effectiveness above newer leadership constructs (Hoch et al., 2018).
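The sensitivity analysis referred to above can be approximated computationally. The Python sketch below solves for the smallest f² detectable at 80% power for the R² increase when the interaction and two quadratic terms are added to a two-main-effects model (n = 68 leaders, five predictors in the full model). The noncentrality convention λ = f²(df1 + df2 + 1) is one common choice and, like the use of SciPy rather than G*Power, is an assumption here:

from scipy.stats import f as f_dist, ncf
from scipy.optimize import brentq

def power_r2_change(f2, n, k_full, k_added, alpha=0.05):
    # power of the F test for adding k_added terms to a model with k_full predictors
    df1, df2 = k_added, n - k_full - 1
    nc = f2 * (df1 + df2 + 1)
    crit = f_dist.ppf(1 - alpha, df1, df2)
    return 1 - ncf.cdf(crit, df1, df2, nc)

# smallest detectable effect; this comes out near the reported f² = 0.17
f2 = brentq(lambda x: power_r2_change(x, 68, 5, 3) - 0.80, 1e-6, 1.0)
print(round(f2, 3))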
In our study, we found low reliability for some of the MLQ leader subscales. This low reliability is somewhat unexpected given the wide usage of the scale (Judge & Piccolo, 2004); however, it links to the criticisms of van Knippenberg and Sitkin (2013). Given that reliability is dependent on the sample size and the number of items included in a scale (Raykov & Marcoulides, 2011), the combination of the low number of leaders and few items (4 items per subscale) may explain the low reliability in our data. Low reliability may attenuate relationships between variables (Schmitt, 1996), which suggests that some relationships in our study may have been underestimated. The low reliability may also have increased the discrepancy in our data, increasing the percentage of leaders and followers who were not in agreement. Siangchokyoo et al. (2020) argued that new measures to capture transformational leadership should be developed. It was not feasible to develop and validate new scales in the present study, but we urge scholars to develop and validate scales in future studies.
The original full-range theory focused on leaders' transformation of followers (Siangchokyoo et al., 2020). We focused on full-range leadership as the organization under study had identified leaders as the vehicle for transforming their followers and wanted to change leadership according to the principles of this type of leadership. Based on the analysis of what the organization wished to achieve, we focused on the dimensions of idealized influence, intellectual stimulation, inspirational motivation, and individualized consideration (Bass & Riggio, 2006). Siangchokyoo et al. (2020) argued for taking a step back and conducting quasi-experimental studies to develop the original transformational leadership theory. In the present study, we addressed this call by examining in which circumstances full-range leadership can be trained.
Conclusion
The main contributions of the present study are threefold. First, we extended the existing leadership training literature by exploring a range of leadership styles and how these may be impacted by training. We tested the extent to which it is possible to reduce undesirable, passive leadership behaviors; followers reported this to be the case. Second, we contribute to the leadership training literature by exploring the social context within which leaders successfully change their leadership styles as a result of training that includes feedback. We found that self-other agreement plays an important role, and these results call for future research and practice to rethink how multi-source feedback is used in training situations. Third, we extend the current literature on the outcomes of self-other agreement. We tested outcomes rated by both leaders and followers and found that, depending on the source, differences were found in how effective training integrated with multi-source feedback was in changing leadership styles.
Funding This research was funded by AFA Insurance.
Compliance with ethical standards
Disclaimer The research fund had no involvement in data collection, analysis and interpretation, nor in the decision for submitting this work.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
"year": 2021,
"sha1": "14631a3d768f3b07feb4ea21fefb75847ee817d2",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s10869-020-09730-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "14631a3d768f3b07feb4ea21fefb75847ee817d2",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
Drug Repositioning Inferred from E2F1-Coregulator Interactions Studies for the Prevention and Treatment of Metastatic Cancers
Metastasis management remains a long-standing challenge. High abundance of E2F1 triggers tumor progression by developing protein-protein interactions (PPI) with coregulators that enhance its potential to activate a network of prometastatic transcriptional targets. Methods: To identify E2F1 coregulators, we integrated high-throughput co-immunoprecipitation (Co-IP)/mass spectrometry, GST-pull-down assays, and structure modeling. Potential inhibitors of the discovered PPIs were identified by bioinformatics-based pharmacophore modeling, and transcriptome profiling was conducted to screen for coregulated downstream targets. Expression and target gene regulation were validated using qRT-PCR, immunoblotting, chromatin IP, and luciferase assays. Finally, the impact of the E2F1-coregulator complex and its inhibiting drug on metastasis was investigated in vitro in different cancer entities and in two mouse metastasis models. Results: We unveiled that E2F1 forms coactivator complexes with metastasis-associated protein 1 (MTA1) which, in turn, is directly upregulated by E2F1. The E2F1:MTA1 complex potentiates hyaluronan synthase 2 (HAS2) expression, increases hyaluronan production, and promotes cell motility. Disruption of this prometastatic E2F1:MTA1 interaction reduces hyaluronan synthesis and infiltration of tumor-associated macrophages in the tumor microenvironment, thereby suppressing metastasis. We further demonstrate that E2F1:MTA1 assembly is abrogated by small-molecule, FDA-approved drugs. Treatment of E2F1/MTA1-positive, highly aggressive, circulating melanoma cells and orthotopic pancreatic tumors with argatroban prevents metastasis and cancer relapses in vivo through perturbation of the E2F1:MTA1/HAS2 axis. Conclusion: Our results propose argatroban as an innovative, E2F-coregulator-based, antimetastatic drug. Cancer patients with the infaust E2F1/MTA1/HAS2 signature will likely benefit from drug repositioning.
Introduction
Occurrence of metastasis indicates the terminal, incurable stage of cancer disease. Over 90% of cancer-related deaths are attributed to metastasis, rendering its management a still-unmet, major challenge in cancer therapeutics. Although a variety of effective agents for tumor growth control have been developed, there is no established antimetastatic drug [1]. A major reason for this shortfall is that, thus far, secondary tumors have been tackled similarly to primary tumors. It was not until recently that we came to realize that distinct gene expression patterns and niches of the metastatic tumors, neoangiogenesis, and changes in the tumor microenvironment (TME) can severely affect their drug responsiveness [1]. As long as the arsenal of antimetastatic drugs remains poor, metastases tend to be treated like a primary tumor, providing rather limited survival benefit [2]. Understanding the specific mechanisms governing disease progression will promote the development of therapies for metastasis prevention and management [3,4].
The major transcription factor (TF) E2F1 is a critically important regulator of key events in the metastatic cascade across several cancer types [5,6]. E2F1 is the archetypal member of the E2F family and a pivotal regulator of genes required for cell cycle progression, proliferation, and differentiation [7,8]. Although, at the disease onset, E2F1 acts as a tumor suppressor and promotes apoptosis following DNA damage to block malignant transformation [9,10], it switches to facilitate cancer progression at late stages [11]. We established that E2F1 can engage cells to a metastatic fate by inducing chemoresistance, angiogenesis, secondary-site extravasation, and epithelial-mesenchymal transition (EMT) [10,12-17]. High E2F1 levels correlate frequently with tumor aggressiveness and poor patient outcomes in several cancer types, including melanoma, bladder, breast, prostate, and small-cell lung carcinomas [18-22].
One possible explanation for this change in the behavior of E2F1 might, at least in part, lie in the proposed concept of coregulators of TFs. According to this concept, DNA-binding TFs recruit coactivators and corepressors to enhance or reduce their transcriptional activity on target gene promoters. Levels of these coregulators are critical for TF function, since they can amplify or attenuate a TF's effect on target genes. Thus, these interactions serve as molecular switches that link upstream signaling events to the downstream transcriptional programs [23]. During disease progression, levels of coregulators that are able to form complexes with several TFs are altered, thereby modulating their transcriptional activity towards an invasive outcome [23-25]. In line with this, it was recently shown that coactivators overexpressed in invasive cancer cells can direct E2F1 to enhance the transcription of metastasis-inducing genes [14,16,17]. Since disruption of these malignant associations might restore E2F1's 'bright side' towards an anti-invasive outcome, characterization of the E2F1 coregulome emerges as a need in terms of developing novel therapies for targeted antimetastatic strategies against E2F1-driven aggressive tumors.
Following this concept, we employed a Co-IP-mass spectrometry (Co-IP/MS) approach to screen, in a high-throughput manner, for E2F1-coregulator interactions in metastatic cancer cells with translational value as antimetastatic therapies. We identified the hitherto unknown interaction partner MTA1, which, in E2F1-positive cancers, forms a complex that synergistically potentiates expression of prometastatic targets such as HAS2, thereby promoting an aggressive TME. Using a structure-based pharmacology approach and pharmacophore models, we found that argatroban, a small-molecule, FDA-approved drug currently used against heparin-induced thrombocytopenia, disrupts the E2F1:MTA1 interaction. Perturbation of the E2F1:MTA1 regulatory network by argatroban suffices to inhibit metastasis in vitro and in clinically relevant mouse models of metastasis. Based on this newly identified function, drug repositioning of argatroban offers new therapeutic applications for the prevention and treatment of metastatic cancers.
Co-immunoprecipitation and mass spectrometry (Co-IP/MS)
For Co-IP, cells were prepared using Protein G Immunoprecipitation Kit (Roche, Basel, Switzerland). Cells were lysed and total cell lysates were incubated for 1 h with 4 µg of anti-Flag antibody (M2, Sigma-Aldrich, Saint Louis, MO, USA), anti-E2F1 (KH-95, Santa Cruz Biotechnology), or anti-mouse IgG antibody (Santa Cruz Biotechnology, Dallas, TX, USA). Protein G-Agarose beads were added and the immune complexes were precipitated overnight at 4 °C, under rotation. Beads were washed extensively with a washing buffer, boiled in SDS sample buffer, fractionated by SDS-PAGE, and immunoblotted using MTA1 (A-11, Santa Cruz Biotechnologies, Dallas, TX, USA), E2F1, and Flag antibodies. For UPLC-MS/MS analysis of potential E2F1 binding proteins, eluted Co-IP samples were resolved on SDS-PAGE (4-12% NuPAGE, Life Technologies, Carlsbad, CA, USA) and stained with colloidal coomassie. Gel sample lanes were cut into defined pieces, de-stained, and trypsinized. The resulting peptide solutions were extracted, subjected to UPLC-MS/MS (nano ACQUITY/SYNAPTG2 HDMSe, Waters, Milford, MA, USA), and analyzed using the PLGS software (ProteinLynx Global SERVER, Waters, Milford, MA, USA).
GST-pull-down
Experiments were performed as previously described [17]. Beads coated with GST or GST-E2F1 fusion proteins were incubated with equivalent amounts of lysates from MTA1-transfected cells, followed by IB with an anti-MTA1 antibody.
3D structure modeling, in silico protein-protein interaction and computational site-directed mutagenesis
Three-dimensional (3D) structure of MTA1 protein sequence (NCBI accession no.: NP_004680.2) was generated using iterative threading assembly refinement server (I-TASSER, Ann Arbor, MI, USA) [29,30] by utilizing spatial information for ELM2-SANT domains from PDB ID: 3BKX [31] and, for the regions between amino acid residues 656 to 711, from PDB IDs: 4PBY and 4PC0 [32]. The best model predicted by I-TASSER server was further optimized for loops and side chains using Looper and ChiRotor tools [33,34] in Biovia Discovery Studio 4.0 software suite (BIOVIA, San Diego, CA, USA) after assigning CHARMm force field. To remove any steric overlap in the model, a Smart Minimizer algorithm was used, which combines Steepest Descent methods followed by the Conjugate Gradient Method available in Biovia Discovery Studio 4.0. Potential interaction sites between E2F1, E2F2 and MTA1 proteins were predicted and refined using the Dock Protein (ZDOCK) and Refined protein (RDOCK) protocols available in Biovia Discovery Studio 4.0. For this purpose, the top 2,000 poses based on ZDOCK score were analyzed and clustered using all-against-all RMSD with an interface cut-off of 10 Å. From the top 100 clusters based on cluster density, interaction poses with the highest ZDOCK score were selected from each cluster. Further, the RDOCK protocol was used to refine these interaction poses by removing clashes and optimizing polar and charge interactions [35,36]. Based on the RDOCK score, which represents the sum of ACE desolvation energy of the protein complex and the electrostatic energy after the second CHARMm minimization, a final list was obtained with a rearrangement of the previously selected 100 poses. The top 10 poses from this list were subsequently analyzed for amino acid residues that are involved in the E2F1:MTA1 interaction. To compare the affinity of MTA1 with E2F family members, the best models based on RDOCK scores for E2F1 and E2F2 were selected and analyzed using the PDBePISA web server (http://www.ebi.ac.uk/msdsrv/prot_int/pistart.html) [37], which assesses the macromolecular interfaces using structural and chemical properties of interfacing residues.
To investigate if the interacting amino acid residues between E2F1 and MTA1 have a role in complex stabilization, we performed computational site-directed mutagenesis experiments using the 'Calculate Mutation Energy (Binding)' protocol available in Biovia Discovery Studio 4.0. For this, we mutated amino acid residues one-by-one into alanine to estimate the impact of each mutation on the complex binding. The mutation binding energy is calculated as

ΔΔG_mut = ΔΔG_bind(mutant) - ΔΔG_bind(wild-type),

where ΔΔG_mut is the mutation energy and ΔΔG_bind is the difference in the free energy between the complex and the unbound state. Mutations were characterized as destabilizing (ΔΔG_mut > 0.5 kcal/mol), stabilizing (ΔΔG_mut < -0.5 kcal/mol), and neutral (-0.5 ≤ ΔΔG_mut ≤ 0.5 kcal/mol).
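The classification rule can be expressed in a few lines of code. The Python sketch below is illustrative only; the residue labels and ΔΔG values are hypothetical, chosen solely to exercise the cut-offs:

def classify_mutation(ddg_mut):
    # ddg_mut: mutation energy in kcal/mol from the alanine scan
    if ddg_mut > 0.5:
        return "destabilizing"
    if ddg_mut < -0.5:
        return "stabilizing"
    return "neutral"

# hypothetical alanine-scan results, for illustration only
for residue, ddg in {"R166A": 1.8, "L249A": -0.9, "S301A": 0.1}.items():
    print(residue, classify_mutation(ddg))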
Pharmacophore modeling and in silico screening of drug library
The Structure-Based Pharmacophore (SBP) method is based on the selection of chemical features present in the active site of a protein to screen compounds from chemical libraries that are likely to bind within that site [38,39]. We used the 'Common Feature Pharmacophore Generation' protocol of Biovia Discovery Studio 4.0 to generate 3D pharmacophore models by considering crucial interaction patterns between MTA1 and E2F1 amino acid residues for the top 10 interaction poses, which were selected based on RDOCK score. The best pharmacophore model selected in this way contains six pharmacophore features with two hydrogen bond acceptors, two hydrogen bond donors, and two hydrophobic groups. In order to increase the selectivity, we also added 'excluded volume constraints' to the best selected pharmacophore model to highlight potentially forbidden sites for the drug molecules during the screening process. We were interested in identifying potential disruptors that prevent MTA1 from interacting with E2F1. For this, we used the ZINC database subset 'Zdd', which is a collection of 2,924 commercially available, FDA-approved drugs/nutraceuticals in use for humans, for the construction of the virtual ligand library. The screened ligands were arranged in decreasing order of their FIT score, which is a measure of how well the ligand fits the pharmacophore. In order to further confirm the interactions of screened drugs/nutraceuticals with MTA1, we performed controlled molecular docking studies of demeclocycline and argatroban within the potential binding site of MTA1 associated in the interaction with E2F1. For this, we used the CDOCKER protocol of Biovia Discovery Studio 4.0, which is a grid-based molecular docking method, to dock ligands into the receptor active site [40].
Invasion and migration assay
Cell invasion and motility assays were conducted as described previously [16].
Cell viability assay
For XTT assays (Trevigen Inc., Gaithersburg, MD), 1×10^5 SK-Mel-147 and 8×10^4 PC-3 cells were seeded in 12-well plates and supplemented with different concentrations of demeclocycline, argatroban, or silibinin. Cells were incubated with TACS XTT labeling mixture for 2 h and supernatants of the cells were pipetted into 96-well plates. The conversion of XTT to formazan was quantified by measuring the spectral absorbance at 490 nm. Cell viability was measured every 24 h for 2 days.
Survival studies
Overall survival curves were analyzed by SigmaPlot (Systat Software, San Jose, CA, USA) according to the Kaplan-Meier method using the log-rank test. Prostate, pancreas, melanoma, and Pan-Cancer data were retrieved from The Cancer Genome Atlas (TCGA) database and separated according to the median values of either E2F1, MTA1, and HAS2 mRNA levels or combinations thereof, using the Xena cancer browser (https://xena.ucsc.edu/).
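The median-split survival comparison can be sketched with the Python lifelines package as follows. This is an illustration under assumptions, not the SigmaPlot workflow used here; the column names os_days and os_event are hypothetical:

import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def km_by_median(df, gene="E2F1"):
    # split patients at the median expression of `gene`
    high = df[gene] >= df[gene].median()
    km = KaplanMeierFitter()
    for label, grp in [("high", df[high]), ("low", df[~high])]:
        km.fit(grp["os_days"], grp["os_event"], label=f"{gene} {label}")
        km.plot_survival_function()
    # log-rank test comparing the two groups
    return logrank_test(df.loc[high, "os_days"], df.loc[~high, "os_days"],
                        df.loc[high, "os_event"], df.loc[~high, "os_event"])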
Microarrays
Cells stably expressing sh.ctrl or sh.E2F1 and sh.MTA1 after lentiviral transduction were harvested 96 h post-transduction, while PC-3 cells treated with either 100 µM argatroban or DMSO were harvested after 24 h. Following RNA isolation, equal RNA amounts were analyzed using Affymetrix GeneChip Human Transcriptome 2.0 Arrays (Affymetrix, Santa Clara, CA, USA) in duplicate for each sample. Background-corrected signal intensities were determined, processed, and normalized using the Transcriptome Analysis Console and SST and RMA algorithms (TAC, Affymetrix, Santa Clara, CA, USA). Gene transcripts not detected in any samples were excluded from statistical analysis. Genes differentially regulated by knockdown of E2F1 or MTA1 were isolated. Downregulated (≤ -2 fold) E2F1 and (≤ -1.5 fold) MTA1 targets were further analyzed using the DAVID Tool enrichment analysis (https://david.ncifcrf.gov/content.jsp?file=citation.htm). Target genes with putative E2F1 binding sites were considered for further analysis. These genes were ranked based on the weighted sum of their fold changes, using the ratio of median log2 fold change of the targets in the sh.E2F1 versus sh.MTA1 array, and were functionally characterized according to Gene Ontology (GO) terms.
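The filtering and ranking step can be outlined in Python/pandas. The exact weighting is only paraphrased in the text, so the sketch below reflects one plausible reading (downregulation coded as negative linear fold changes; weight equal to the ratio of median absolute log2 fold changes between the two arrays), with hypothetical column names:

import numpy as np
import pandas as pd

def rank_common_targets(fc, e2f1_cut=-2.0, mta1_cut=-1.5):
    # fc: DataFrame indexed by gene with columns "shE2F1" and "shMTA1"
    # holding linear fold changes versus sh.ctrl (negative = downregulated)
    hits = fc[(fc["shE2F1"] <= e2f1_cut) & (fc["shMTA1"] <= mta1_cut)].copy()
    l2 = np.log2(hits[["shE2F1", "shMTA1"]].abs())
    w = l2["shE2F1"].median() / l2["shMTA1"].median()
    hits["score"] = l2["shE2F1"] + w * l2["shMTA1"]
    return hits.sort_values("score", ascending=False)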
ELISA
1×10^5 cells were seeded in 48-well plates and incubated for 24 h under the appropriate treatments (argatroban addition or sh.E2F1, sh.MTA1 or sh.ctrl). Culture supernatants were collected and quantification of hyaluronic acid was performed by the enzyme-linked sandwich assay Hyaluronan DuoSet ELISA (R&D Systems, Minneapolis, MN, USA) following the manufacturer's instructions.
Animal studies
To analyze drug efficiency in a melanoma metastasis model in vivo, 3×10^6 tumor cells, stably transduced with sh.ctrl, sh.E2F1, sh.MTA1 or pretreated with 100 µM argatroban (for 24 h), were injected intravenously (i.v.) into the tail vein of 6-week-old male athymic NMRI nude mice (Charles River, Sulzfeld, Germany). Argatroban was administered intraperitoneally (i.p.) at a dose of 9 mg/kg body weight every other day over 4 weeks. Finally, lung tissue was surgically excised, fixed in 4% paraformaldehyde, paraffin-embedded, and processed for histological analysis with hematoxylin and eosin (H&E) staining. For quantification of pulmonary metastasis, the relative area of tumors was expressed as a percentage of the total lung area and calculated using the ImageJ program.
For the clinically adapted orthotopic pancreatic ductal adenocarcinoma (PDAC) xenotransplantation model, four-week-old female SCID beige mice were obtained from Charles River and two experimental settings were employed. In the first approach, 1×10^6 PancTuI cells, stably transfected with control sh.RNA, sh.RNA against E2F1, or sh.RNA against MTA1, were inoculated orthotopically into the pancreas. All mice (12 animals per group) developed primary tumors and were subjected to re-laparotomy fifteen days later with subtotal resection of the tumor-bearing pancreas, as described previously [42,43]. Two mice, one from the sh.E2F1 group and one from the sh.MTA1 group, died due to complications after tumor resection, while the remaining animals recovered well. On day 31 post tumor-cell inoculation, all mice were sacrificed and organs as well as tumors were preserved and examined. In the second round, 1×10^6 wild-type PancTuI cells were injected orthotopically and the primary tumors were resected thirteen days later as described above. Three days post-resection, mice were randomly assigned into two groups (n = 11 each). Mice of one group were treated i.p. with argatroban (9 mg/kg body weight/day), while control mice received 0.9% saline (125 µl). Animals were sacrificed 31 days post-inoculation; organs and tumors were preserved and examined. All animal experiments were performed according to ethical standards, in compliance with the local authorities (V312-7224.121 (75-5/12)).
Statistical analysis
All quantitative values were expressed as mean ± standard deviation (SD). For in vitro assays, SigmaPlot (Systat Software, San Jose, CA, USA) was used to determine statistics, performing 2-tailed Student's t-test. For Kaplan-Meier analyses, significance was estimated using the log-rank test. For the melanoma mouse model and the orthotopic mouse model, statistical significance was evaluated with 2-tailed Student's t-test and Mann-Whitney test, respectively. P-values less than 0.05 were considered significant.
E2F1 physically interacts with MTA1
To identify the complexome of E2F1, we prepared cellular extracts from melanoma cells expressing endogenously high amounts of E2F1 [11], and performed immunoprecipitation with anti-E2F1 or control IgG antibodies (Figure 1A). Precipitates were separated, in-gel digested, and subjected to liquid chromatography followed by mass spectrometry. Unspecific IgG precipitates were excluded from further analysis. In the group of proteins co-purified with E2F1, we found classical interacting partners such as RB and DP-1, together with MTA1, which, thus far, has never been described as an E2F1 interacting partner (Figure 1A, right). MTA1 is upregulated in most malignant cancers and induces cell transformation, DNA repair, and EMT, acting either as corepressor or coactivator of TFs [44]. As shown in Figure 1B, MTA1 co-localizes with E2F1 in the nucleus of the SK-Mel-147 cell line. The in vivo interaction of both proteins was validated in tumor cells overexpressing E2F1-Flag by co-immunoprecipitation with Flag-antibody and subsequent immunoblotting (IB) using an antibody against MTA1 (Figure 1C), as well as through GST-pull-down experiments with whole-cell lysates from SK-Mel-147 and PC-3 cells (Figure 1D).
Next, the interaction sites between E2F1 and MTA1 were identified through computational protein-protein interaction (PPI) analysis. For this purpose, we used our previously designed three-dimensional (3D) model of E2F1 [16] and a newly designed, optimized 3D model of MTA1 using the iterative threading assembly refinement (I-TASSER) server [29,30]. PPIs between E2F1 and MTA1 were predicted using the ZDOCK and RDOCK algorithms [35,36], with the best interacting pose demonstrated in Figure 1E (amino acid residues involved in the PPI are shown in Table S1). MTA1 is predicted to interact with E2F1 through a vast portion of its BAH, ELM2, and SANT domains. Moreover, the dimerization and transactivation domains of E2F1 are very important for the MTA1 interaction. Additionally, to evaluate if there is a preference of MTA1 for E2F1, we estimated the binding affinity of MTA1 for other members of the E2F family. For this purpose, we generated the E2F2 structure and predicted the best interaction pose with MTA1 (Figure S1). Solvation energies from the complex formation (ΔG_s) indicated a higher binding affinity of MTA1 for E2F1 than E2F2 (E2F1: ΔG_s = -7.0 kcal/mol versus E2F2: ΔG_s = +1.1 kcal/mol). Hence, we focused on the E2F1:MTA1 interaction. The binding interfaces between E2F1 and MTA1 were further confirmed via computational, site-directed mutagenesis experiments. Interacting residues at the best binding pose of E2F1 and MTA1 (Table S1) were mutated one-by-one into alanine. After each mutation, we calculated the mutation binding energy of the complex (Tables S2-S3). All residues that, upon mutation, either destabilize or stabilize the complex were considered active players in the E2F1:MTA1 interaction.
MTA1 is a direct E2F1 transcriptional target
Binding partners of E2F1 can, at the same time, be E2F1 transcriptional targets [14,16,17]. With this in mind, we examined if E2F1 regulates MTA1 expression. Depletion of E2F1 across several E2F1-expressing cancer cell lines using specific shRNA severely impaired both transcript and protein levels of MTA1 (Figure 2A). Conversely, MTA1 expression was substantially upregulated upon E2F1 overexpression or 4-OHT-mediated activation in stable ER-E2F1 cell lines (Figure 2B). Additionally, immunofluorescence of E2F1 and MTA1 in H1299.ER-E2F1 and PC-3.ER-E2F1 cells following 4-OHT treatment showed enhanced MTA1 staining and nuclear co-localization (Figure 2C, left panels). The protein levels of MTA1 increased in the nuclear fraction of E2F1-activated cells (Figure 2C, right panels).
Since MTA1 expression is E2F1-dependent, we examined if MTA1 is a direct transcriptional target of E2F1. Bioinformatic analysis of the MTA1 promoter predicted one putative binding motif for E2F1. ChIP assays revealed that E2F1 is recruited to this promoter region (-270 to -116 bps) of MTA1 upon E2F1 expression (Figure 2D). The MTA1 promoter region comprising the E2F1-binding site was cloned into a pGL3-luciferase reporter construct. Co-transfection of this construct with expression plasmids encoding either wild-type E2F1 or an E2F1 mutant deficient for DNA-binding (E132) in H1299, SK-Mel-29, and LNCaP cells demonstrated that the MTA1 promoter is activated through E2F1 in a dose-dependent manner. In contrast, no significant promoter upregulation was noticed in response to the E2F1 mutant or empty vector (ctrl) (Figure 2E).
The E2F1:MTA1 complex promotes cancer cell invasion
The fact that E2F1 and MTA1 are co-elevated in invasive cancer cell lines versus non-invasive ones (Figure S2) provides hints that the E2F1:MTA1 complex might mediate the invasive potential of tumor cells. To monitor the effect of this complex on cancer cell motility, we separately depleted E2F1 or MTA1 in highly invasive PC-3, SK-Mel-147, and PancTuI cells by transducing them with a lentiviral vector expressing sh.E2F1 or sh.MTA1, and performed Matrigel assays. Knockdown of E2F1 led to a decrease of MTA1 in these cell lines and significantly reduced their migratory/invasive capacity (Figure 3A). Cell invasiveness was also severely impaired when MTA1 was completely abrogated, while high E2F1 levels remained unchanged (Figure 3B). Notably, neither E2F1 nor MTA1 knockdown affected proliferation in the aforementioned cells (Figure S3). In addition, invasive growth induced by E2F1 overexpression in initially less aggressive cells such as LNCaP was strongly reduced upon MTA1 ablation (Figure 3C). These results demonstrate that the pro-invasive potential of E2F1 greatly depends on its physical interaction with MTA1. To investigate the clinical relevance of this finding, we performed Kaplan-Meier analyses of RNA-Seq data from TCGA cohorts in patients with malignant melanoma, prostate, and pancreatic carcinoma, as well as the Pan-Cancer data, which include 36 different cancer subtypes. High E2F1 levels alone are consistently associated with poor outcomes, while MTA1 levels alone are associated with survival in a cancer type-dependent manner (Figure S4). Nevertheless, high co-expression levels of E2F1 and MTA1 significantly and consistently correlate with poor overall survival across all cancer types tested (Figure 3D-E and Figure S4). In summary, our data show that E2F1 abundance is a prerequisite for a negative prognostic value of MTA1, and the latter facilitates E2F1's oncogenic function, indicating its ability to cooperate with E2F1 towards favoring tumor invasion.
MTA1:E2F1 complex mediates invasion via transcriptional coregulation of hyaluronan synthase 2
E2F1 has been shown to transactivate coregulators, which in turn physically associate with E2F1 to synergistically regulate genes essential to angiogenesis, extracellular matrix (ECM) remodeling, tumor cell survival, and interactions with vascular endothelial cells [14,16,17], thereby creating feedforward loops among E2F1, the coregulator, and common downstream targets. Therefore, we examined whether MTA1 and E2F1 follow this pattern, also creating feedforward loops with prometastatic genes. In search of putative genes that may be coregulated by an E2F1:MTA1 complex, we performed whole-transcriptome analysis in E2F1- and MTA1-depleted PC-3 cells to screen for commonly affected transcripts.
Differentially expressed transcripts affected by both proteins could putatively be subject to E2F1:MTA1 coregulation (Figure 4A, left). Using DAVID enrichment analysis, the list of targets commonly downregulated by E2F1 and MTA1 was further narrowed down to 24 candidates that have putative E2F1 binding sites. GO analysis indicated that these candidates are, among others, involved in pathways of cell migration (Figure 4A, right). Among these candidates, HAS2 showed one of the highest fold changes when the candidates were ranked by a weighted sum of their fold changes, with the weight given by the ratio of the median log2 fold changes of the targets in the sh.E2F1 versus sh.MTA1 microarrays. HAS2 also clustered with potential E2F1/MTA1-coregulated targets implicated in extracellular matrix organization and cell invasion (Figure 4A, right, Table S4). In support, HAS2 levels were co-elevated along with E2F1 and MTA1 levels in metastatic melanoma [45], prostate [46], and pancreatic tumors [47] versus their respective primary tumors, as revealed by Oncomine™ platform analysis (Figure 4B). In agreement, an increment of HAS2 mRNA transcripts was also significantly correlated with high co-expression of E2F1 and MTA1 in the Pan-Cancer cohort (p = 2.296 × 10^-26), implying that the correlation of high E2F1:MTA1 levels with elevated HAS2 levels is a common denominator across several cancer types. Notably, HAS2 is known to promote tumor progression [48][49][50] and also to regulate production of hyaluronic acid (or hyaluronan/HA), a main component of the ECM which accumulates in the TME of many cancers [51,52]. Indeed, Pan-Cancer data analysis confirmed that HAS2 is an indicator of poor prognosis across a wide range of cancer types (Figure S5A). Based on these data, HAS2 emerged as a representative target for evaluating the transcriptional activity of the prometastatic E2F1:MTA1 complex. As shown in Figure 4B, HAS2 mRNA considerably decreased in PC-3 cells upon E2F1 or MTA1 inhibition. Conversely, overexpression of E2F1 resulted in the upregulation of HAS2 transcripts (Figure 4C, left panel). Importantly, this increase of HAS2 in response to E2F1 addition was seen only in the presence of MTA1 and was abolished after treatment with sh.MTA1 (Figure 4C, right panel). Notably, HAS3 expression was altered neither by knockdown of the individual complex proteins nor by overexpression of E2F1 (Figure 4B-C), confirming the specificity of the E2F1:MTA1 complex for HAS2.
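To make the ranking step concrete, the sketch below shows one way such a weighted fold-change score could be computed. It is an illustration under stated assumptions, not the authors' pipeline: the column names, the exact weighting rule, and the example values are all hypothetical.

```python
# Illustrative sketch (not the authors' pipeline): rank candidate
# coregulated targets by a weighted sum of log2 fold changes from the
# sh.E2F1 and sh.MTA1 microarrays. Column names are hypothetical.
import pandas as pd

def rank_candidates(df: pd.DataFrame) -> pd.DataFrame:
    """df: one row per gene, columns 'lfc_shE2F1' and 'lfc_shMTA1'
    holding log2 fold changes (knockdown vs. control)."""
    # Weight by the ratio of the median |log2FC| of each screen so the
    # two arrays contribute on a comparable scale.
    ratio = df["lfc_shE2F1"].abs().median() / df["lfc_shMTA1"].abs().median()
    out = df.copy()
    out["score"] = out["lfc_shE2F1"] + ratio * out["lfc_shMTA1"]
    # Genes downregulated by both knockdowns sort to the top.
    return out.sort_values("score")

genes = pd.DataFrame(
    {"lfc_shE2F1": [-2.1, -0.3, -1.8], "lfc_shMTA1": [-1.6, -0.2, -1.9]},
    index=["HAS2", "GENE_X", "GENE_Y"],  # GENE_X/GENE_Y are placeholders
)
print(rank_candidates(genes))
```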
HAS2 promoter analysis in the UCSC genome browser predicted a putative E2F1 binding site at position -593 to -582 bps upstream of the transcription start site. ChIP experiments showed a strong binding of E2F1 to this HAS2 promoter region upon 4-OHT addition to inducible PC-3.ER-E2F1 cells that led to the upregulation of HAS2 levels (Figure 4D). The HAS2 promoter, containing the E2F1 binding site, was cloned into the pGL3-luciferase reporter construct and transiently co-transfected with expression plasmids for E2F1, MTA1 or both. Although E2F1 or MTA1 expression significantly increased luciferase activity in PC-3 cells, a much stronger upregulation of reporter activity, up to 45-fold, was observed when both interacting proteins were co-expressed (Figure 4E, left panel). Similar results were obtained for SK-Mel-29 cells (Figure 4E, right panel), demonstrating a cooperative effect of MTA1 and E2F1 on the activation of the HAS2 promoter in a cell-context-independent manner. Conversely, when endogenous levels of E2F1, MTA1 or both were depleted through transfection of SK-Mel-147 cells with sh.E2F1- and sh.MTA1-expressing plasmids, HAS2 levels significantly decreased, with the most potent reduction achieved upon simultaneous knockdown of both proteins (Figure 4F, right panel). Accordingly, this was accompanied by a drastic reduction of their invasive capacity, with the strongest decline of invasiveness and loss of HAS2 expression observed in cells lacking the transcription factor plus its coregulator (Figure 4F, left panel). Consistent with HAS2 downregulation, the levels of HA released by melanoma cells stably expressing sh.E2F1 and sh.MTA1 were reduced, as measured by ELISA in conditioned media (Figure 4G). Together, these results demonstrate that MTA1 cooperates with E2F1 to potentiate transcriptional activity on the HAS2 target gene promoter, thereby leading to increased HA production and increased invasiveness. Moreover, high co-expression of E2F1, MTA1, and HAS2 is associated with poor survival in melanoma patients (Figure 4I). Due to the small number of patients for prostate and pancreatic cancer when grouping together all three factors, we were unable to calculate a statistically significant prognosis (Figure S5B). Analyzing the Pan-Cancer cohort, we bypassed this limitation and observed that, compared to high E2F1 and MTA1 alone (Figure 4J, black), increased HAS2 expression worsened the prognosis (Figure 4J, red), whereas patients with low HAS2 levels showed higher survival rates (Figure 4J, cyan). Additionally, the expression levels of E2F1, MTA1, and HAS2 correlate with each other in the Pan-Cancer cohort (E2F1:MTA1: ρ = 0.2745, E2F1:HAS2: ρ = 0.1651, MTA1:HAS2: ρ = 0.07137). In summary, E2F1, MTA1, and HAS2 create an axis that promotes aggressiveness through HA upregulation. In line with our expression profiling data, a high E2F1/MTA1/HAS2 signature is prognostic for poor overall survival across a wide range of cancer types.
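For readers who want to reproduce this kind of stratified survival comparison on public TCGA-style data, the hedged sketch below groups patients by combined expression and applies a log-rank test. It assumes the third-party lifelines package; the median-split rule and column names are assumptions, not the authors' exact procedure.

```python
# Sketch of a Kaplan-Meier comparison between "triple-high" patients
# (E2F1, MTA1, and HAS2 all above the cohort median) and the rest.
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

def compare_groups(df: pd.DataFrame) -> float:
    """df: one row per patient with columns 'time' (follow-up, days),
    'event' (1 = death observed), and expression columns
    'E2F1', 'MTA1', 'HAS2'. Returns the log-rank p-value."""
    expr = df[["E2F1", "MTA1", "HAS2"]]
    high = (expr > expr.median()).all(axis=1)  # assumed median split

    km = KaplanMeierFitter()
    km.fit(df.loc[high, "time"], df.loc[high, "event"], label="triple-high")
    km.fit(df.loc[~high, "time"], df.loc[~high, "event"], label="rest")

    res = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                       event_observed_A=df.loc[high, "event"],
                       event_observed_B=df.loc[~high, "event"])
    return res.p_value
```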
Disruption of the E2F1:MTA1/HAS2 circuit reduces metastasis by altering TME
To assess whether the E2F1:MTA1/HAS2 interaction can be targeted towards an antimetastatic outcome, we evaluated the in vivo metastatic potential of cancer cells where formation of the E2F1:MTA1 complex is abolished by knockdown of either E2F1 or MTA1. First, SK-Mel-147 cells stably expressing shRNA directed against MTA1 or E2F1 (Figure 5A, left panel, also used in vitro) were delivered i.v. into nude mice. The metastatic potential of circulating tumor cells was determined by quantitatively analyzing areas of metastatic tissues versus total lung area in histological sections. Macroscopic and microscopic examination of the lungs showed massive metastases mainly in animals injected with control cells (ranging from 27% to 54%), whereas knockdown of either E2F1 (0% to 7.6%) or, to a greater extent, MTA1 (0% to 3%) markedly suppressed the formation of pulmonary nodules (Figure 5A, center and right panels). As determined by immunohistochemistry (IHC), tumors originating from unmodified cells exhibited higher HAS2 levels compared to E2F1- or MTA1-knockdown cells (Figure 5B). HAS2, in turn, activates M2-type tumor-associated macrophages (TAMs) via HA production, thereby generating a prometastatic tumor environment [48,50]. In line with this, HAS2 downregulation in tumors derived from E2F1- or MTA1-depleted cells was accompanied by a reduction in the proportion of TAMs, as estimated by immunofluorescence detection of TAMs using the M2 TAM marker CD206 [48] (green fluorescence staining, Figure 5C). Overall, disruption of the E2F1:MTA1 complex efficiently impairs the establishment of a prometastatic TME. This is achieved, at least in part, by reducing HAS2/HA production.
We further confirmed the antimetastatic effect after perturbation of the E2F1:MTA1 complex in a pancreatic cancer model. SCID beige mice were orthotopically xenotransplanted with PancTuI cells stably expressing shRNA against E2F1 or MTA1 and compared to sh.ctrl. (Figure 5D). All animals developed primary tumors. Fifteen days after cell inoculation, re-laparotomy was performed and the tumor-bearing pancreata were carefully mobilized and resected by subtotal pancreatectomy. Assessment of mice on day 31 revealed that inhibition of E2F1 as well as MTA1 strongly attenuates recurrent tumor occurrence and formation of liver metastases ( Figure 5E). Moreover, mice bearing PancTuI tumors originating from sh.E2F1-or sh.MTA1-expressing cells showed a significant decrease in recurrent tumor weight after resection ( Figure 5F), but more remarkably, MTA1 depletion completely abolished metastatic dissemination of PancTuI cells into the liver after pancreatectomy ( Figure 5G).
Structural systems pharmacology-based identification of FDA-approved drugs that inhibit the E2F1:MTA1 complex
Given the antimetastatic potential of perturbation of the E2F1:MTA1 interaction, we aimed to block the formation of the E2F1:MTA1 malignant complex by selectively inactivating MTA1 while leaving E2F1 intact. To this end, we used a pharmacophore modeling approach to perform high-throughput screening of a library of FDA-approved, small-molecule drugs/nutraceuticals that are able to bind to the MTA1 surfaces mediating its PPI with E2F1. First, we identified key amino acid residues of MTA1 present at the interaction interface with E2F1 and used them for structure-based pharmacophore modeling using the 'create pharmacophores' protocol available in the Biovia Discovery Studio 4.0 software suite. From the top 10 pharmacophore models generated on the MTA1 surface, we selected the best one, in which the maximum number of pharmacophore features was associated with the key amino acid residues involved in the E2F1 interactions. The best pharmacophore model comprised six features: two hydrogen bond acceptors near MTA1:Ser319 and Leu320, two hydrogen bond donors near MTA1:Lys318 and Ser322, and two hydrophobic groups corresponding to the indole side chain of MTA1:Trp317, contributing to a pi-anion electrostatic interaction with Asp436 and a pi-sigma hydrophobic interaction with Leu435 of E2F1 at the top interaction pose. Additionally, twelve exclusion volumes were considered (Figure S6A).
This pharmacophore model was used as an input query for high-throughput screening of potential inhibitors of E2F1:MTA1 interaction from a virtual library of 2,924 FDA-approved drugs/nutraceuticals. We found a total of 16 substances predicted to interact with MTA1 residues which are involved in the interaction with E2F1 and ranked them according to their FIT scores (Table S5). These compounds include active substances for indications other than cancer, such as drugs with antibacterial, anticoagulant, antioxidant or anti-inflammatory activity. Of note, the list includes argatroban and silibinin, which both have recently been reported to exert antimetastatic effects [53][54][55][56][57]. This is also supported by our XTT assays demonstrating that neither argatroban nor silibinin affect proliferation ( Figure S7A-B). In addition, compared to, for example, demeclocycline (rank 1), argatroban and silibinin are less cytotoxic at clinically relevant doses ( Figure S7C). Our controlled molecular docking studies of argatroban in the binding cavity of MTA1 indicated that argatroban is a more effective ligand of MTA1 than demeclocycline ( Figure S6B-C).
Subsequently, we focused on evaluating these compounds' inhibitory capacity on the interaction between E2F1 and MTA1 and their ability to intercept the E2F1:MTA1/HAS2 axis. Computational models of argatroban and silibinin fitting into the binding cavity of MTA1 are shown in Figure 6A. To validate these predictions, Co-IPs using anti-E2F1 or IgG as control were conducted in whole lysates from PC-3 cells treated with argatroban, silibinin or vehicle. Figure 6B (upper panel) illustrates that the amount of MTA1 co-immunopurified with E2F1 considerably decreased upon compound addition, with the strongest effect observed when testing argatroban. In addition, the inhibitors significantly reduced E2F1 binding to the HAS2 promoter (Figure 6B, bottom). HAS2 promoter activation through E2F1 in conjunction with MTA1 was significantly impaired in response to both drugs, but argatroban displayed the most potent inhibitory effect (Figure 6C). Since argatroban inhibits E2F1:MTA1 complex formation and HAS2 promoter activation more efficiently than silibinin, we selected argatroban for further studies. To monitor the effect of argatroban on the E2F1:MTA1 complex, we evaluated HAS2 expression in aggressive PC-3 and SK-Mel-147 cells in the absence or presence of this drug. As shown in Figure 6D, drug-treated cells display a clear reduction of HAS2 transcripts. In contrast, mRNA expression of HAS3, which is not a target of the E2F1:MTA1 complex, remained unchanged. More importantly, HAS2 protein also decreased substantially upon argatroban administration in both cell lines (Figure 6E). Intriguingly, E2F1 and MTA1 levels were not affected, strongly indicating that the loss of HAS2 expression occurs via inhibition of the PPI between E2F1 and MTA1. Consistently, HA levels in conditioned media from argatroban-treated cells markedly declined relative to untreated cells, as evidenced by HA ELISA (Figure 6F). Thus, disruption of the PPIs between E2F1 and MTA1 by treatment of cells with argatroban suffices to reduce the complex's regulatory activity on HAS2, eventually leading to decreased HA levels in the conditioned medium. In order to examine whether argatroban-mediated disruption of the E2F1:MTA1 complex leads to reduced invasiveness, Matrigel assays in the presence or absence of argatroban were performed by overexpressing E2F1 or MTA1 in LNCaP cells. Indeed, exogenous addition of E2F1 or MTA1 failed to induce invasiveness in the presence of argatroban, compared to controls in the absence of the drug (Figure 6G). Further, administration of different concentrations of argatroban to PC-3 (Figure 6H) and SK-Mel-147 (Figure 6I) cells, which endogenously express high levels of both E2F1 and MTA1 (Figure S2), confirmed the inhibitory effect of the compound on the invasive and migratory traits.
The effectiveness of argatroban in fighting metastases was further confirmed in vivo in two mouse models, as shown in Figure 7. Mice were injected i.v. with argatroban-pretreated parental SK-Mel-147 cells and were subsequently treated by i.p. drug administration for four weeks. A strong decline in metastatic growth (>90%), resulting in only a few lung foci, was observed in these mice compared to untreated controls (Figure 7A). IHC analyses of lung tumors grown in argatroban-treated animals revealed a much lower HAS2 expression (Figure 7B) that was accompanied by a reduction in the proportion of TAMs (Figure 7C). Additionally, we tested the therapeutic efficacy of argatroban in pancreas-to-liver metastasis, using our clinically adapted orthotopic xenotransplantation model [42]. Primary tumors established via intrapancreatic injection of human parental PancTuI cells were resected 13 days after implantation. Three days after resection, mice were treated i.p. with argatroban for another 15 days.
Strikingly, the size of the local recurrences and the number of liver metastases severely decreased upon argatroban treatment ( Figure 7D-F). In conclusion, our data demonstrate that argatroban exhibits a strong antimetastatic effect via disrupting the PPI between MTA1 and E2F1, leading to inhibition of the HAS2/HA axis.
Discussion
Antimetastatic regimens are urgently needed, but developing New Molecular Entities (NMEs) to effectively combat cancer progression is tedious. Until recently, cancer precision medicine has relied on "lock-and-key" specificity, meaning that molecules that are newly designed to target a certain pathway are anticipated to eradicate tumors in a highly selective manner, thereby maximizing efficacy and minimizing side effects [58]. In practice, however, the majority of innovative drugs with a promising profile in preclinical settings demonstrated inadequate efficacy and/or safety in clinical subjects. These failure rates have been discouragingly high and disproportional to the costs of developing new drugs from scratch [59,60]. Nevertheless, polypharmacology [61] and drug repositioning [62] have arrived as a "deus ex machina", promoting a paradigm shift in drug research and development. On one hand, polypharmacology questions the "lock-and-key" dogma by countersuggesting that many drugs can be effective against one disease, and that one drug can show efficacy against diseases with distinct clinical manifestations [61]. On the other hand, drug repositioning refers to discovering, validating and marketing previously approved drugs for new indications. From this point of view, re-profiling of well-established drugs might be a key approach for future cancer treatment. Due to their known safety and efficacy profiles against other indications, they pose advantages for easier introduction into clinical trials, faster filing and regulatory approval procedures and significantly reduced financial costs compared to NMEs [63]. Tools and platforms are under development or already in place for supporting data-driven, rather than chance-driven, prediction of the repositioning potential of drugs [64][65][66][67].
In this study, we unveiled that in invasive cancers, increased E2F1 levels transactivate MTA1. Then, MTA1 develops PPIs with E2F1 to form coactivator complexes that potentiate expression of metastasis-related targets, such as HAS2, leading to extracellularly increased HA production and enhanced cell migratory and invasive capacity. Disruption of this prometastatic circuit by targeting the E2F1:MTA1 PPI reduces HA synthesis, as well as the infiltration of TAMs in the TME. Formation of the prometastatic complex can be targeted by inhibiting the expression of either E2F1 or MTA1 through de novo synthesized sh.RNAs. Moreover, structure-based pharmacophore modeling identified inhibitors that can perturb a particular E2F1-coregulator complex, in this case E2F1:MTA1, from a virtual library of already-marketed, small-molecule drugs. In this respect, the small-molecule compound argatroban demonstrates strong antimetastatic efficacy in vivo by specifically blocking the assembly of E2F1 and MTA1, thereby holding promise for rapid translation of antimetastasis therapy using drug repositioning. Argatroban is a reliable and predictable anticoagulant that binds reversibly and selectively to the thrombin active site and inhibits thrombin-catalyzed or -induced reactions, including fibrin formation, activation of coagulation factors V, VIII, and XIII, activation of protein C, and aggregation of platelets. The compound is currently prescribed against heparin-induced thrombocytopenia and for use in patients undergoing percutaneous coronary intervention [68]. Argatroban's antimetastatic potential has been recognized earlier [53][54][55]; however, in the absence of previous insights into the underlying molecular mechanism, it was hypothesized that this effect is mediated via its well-known mechanism of action as a competitive inhibitor of thrombin [69]. Here, we demonstrate that the metastasis-inhibitory property of argatroban relies on a distinct mechanism of action that involves disruption of the interaction of E2F1 with its newly identified coactivator MTA1, eventually leading to downregulation of metastatic targets. Disruption of this interaction suffices to prevent aggressive cancer cells from forming metastases. Of note, argatroban is efficient against metastatic pancreatic tumors. This finding might be valuable, as this tumor type is one of the most difficult-to-cure cancers [70]. Although RNAi-based therapeutics against E2F1 or MTA1 also have the potential to be developed as NMEs towards the same purpose, their high antineoplastic and antimetastatic effects in the current approach are mainly connected with the stable modulation of injected cancer cells in both metastasis models. In our clinically adapted in vivo models, argatroban's significant antimetastatic properties were demonstrated by intraperitoneal administration of low, non-toxic doses. This suggests that argatroban could present a genuine therapeutic solution that can be moved to the bedside faster than shRNAs, whose greatest challenge is delivery. Moreover, drug safety data from phase 1 and 2 clinical trials of argatroban are already in place [71].
Our study further underscores that E2F1's aggressive behavior largely depends on the spatiotemporal availability of its coregulators. E2F1 is dragged into metastasis-supporting processes once a malignant fate is established [5,72]. Rewiring of E2F1 to malignant networks is mediated by an increasing repertoire of E2F1 coregulators that enhance E2F1's transcription programs to favor expression of genes underlying invasiveness [16,17,24,73,74]. MTA1, the newly identified member of this prometastatic E2F1 coregulome, is frequently upregulated in metastatic cancers and is causatively associated with cell transformation, DNA repair, and EMT. It participates in the nucleosome remodeling and deacetylase (NuRD) complex and contributes to its stabilization and assembly [75]. The malignant E2F1:MTA1 complex is predicted to be formed via the BAH- and SANT-domains, which, in several proteins, are involved in transcriptional regulation, as well as its ELM2 domain, which is significant for the recruitment of histone deacetylases [75]. Clinically, the E2F1/MTA1 signature is translated into poor prognosis.
Argatroban is predicted to inhibit the complex formation by binding to the above-mentioned interacting surfaces of MTA1. Thus, E2F1 transcribes MTA1, but argatroban binds to the newly synthesized MTA1 protein molecules via these surfaces, rendering MTA1 essentially unable to develop malignant PPIs. Considering the growing evidence for the critical significance of coregulators as rheostats of E2F1-mediated aggressiveness, this is the first time that targeting of a prometastatic E2F1-coregulator interaction towards inhibiting metastasis was achieved through the use of already-marketed drugs.
HAS2, the representative transcriptional target of the metastatic E2F1:MTA1 complex, is a hyaluronan synthase that catalyzes the synthesis of HA, a glycosaminoglycan polymer which is both a key structural component of the ECM and a signaling molecule involved in inflammatory response and immunomodulation [76,77]. HA is also critical for TME architecture and tumor-stromal cell interactions [78]. Upon increased HAS2 activity, long HA molecules are produced and extruded into the extracellular space, where they can bind directly to matrix components and cell surface receptors, collectively referred to as hyaladherins. HA cross-links with matrix components such as versican, aggrecan, tumor necrosis factor-inducible gene 6 (TSG-6), and serum-derived hyaluronan-associated protein (SHAP) to form a pericellular meshwork that defines the mechanical properties of tumors. Moreover, it interacts with typical HA receptors, such as cluster of differentiation 44 (CD44) or receptor for hyaluronan-mediated motility (RHAMM), to trigger phosphoinositide 3-kinase/serine-threonine kinase 1 (PI3K/AKT), mitogen-activated protein kinase (MAPK), and extracellular signal-regulated kinase (ERK) signaling cascades, thereby enhancing cell survival, drug resistance, EMT, and the migratory capacity of tumor cells [79]. In addition, HA can interact with immune cells through CD44, the only receptor that has been demonstrated to bind HA on immune cells [77]. It can also modulate toll-like receptor 2 and 4 (TLR2/4) downstream signaling, which reprograms inflammatory cells towards creating a tumor-permissive environment via immunosuppression and neutrophil transformation [80]. Since the TME contains several types of immune cells which, depending on their type, tend to occupy specific locations [81], the HA-immune cell interactions emerge as potential effectors of the so-called immune contexture (i.e., the density, functional orientation, and spatial organization of the immune infiltrate) [81].
Targeting HA-turnover pathways is, thus, an appealing therapeutic strategy since HA depletion could manage lesions in a "two-birds-with-one-stone" manner by simultaneously modulating both the tumor and the surrounding microenvironment that supports it, including the infiltrated immune cells. Hyaluronidases, the degrading enzymes of HA which, together with HAS2, regulate HA turnover, have been suggested to alter tumor properties and increase the penetration and uptake of chemotherapeutic drugs [52]. Hyaluronidase, mainly of bovine origin, could, however, yield serious adverse events, especially if administered systemically, due to its tendency to induce allergic reactions as well as an increased risk of inflammation and joint pain in non-malignant tissues, where HA is also present [52]. Consequently, argatroban may present an appealing alternative to HA-targeting strategies since it precisely intervenes with the HAS2/HA axis only at sites with increased E2F1:MTA1 levels, i.e., the progressing malignant tissues.
Last but certainly not least, several hyaladherins (RHAMM, CD44, and versican) are part of a recently unveiled E2F1 interaction map that underlies EMT and metastasis [72]. Given the critical role of hyaladherins and HA binding for shaping both the TME [79] and immune responses [77], this observation provides insights into a possible association of an E2F1-regulated interactome with immunological aspects of tumors. E2F1 might orchestrate alterations in the TME and immune contexture in favor of metastasis via exerting a broader effect on HA-binding molecules, in addition to, or in support of, the E2F1:MTA1/HAS2 axis. It currently remains unknown whether E2F1 affects the ability of tumors to evade immune surveillance, which immune components might be E2F1-susceptible and which are the underlying mechanisms. For several years, we used to think of E2F1 as an indispensable component of the DNA damage response and repair (DDR/R) signaling network and, therefore, its oncogenic behavior seemed rather paradoxical. Importantly, it was recently suggested that DDR/R crosstalks with immune response (ImmR) signaling networks and that disequilibrium in the DDR/R-ImmR axis opens the "bag of Aeolus" in terms of disease progression [82]. While DDR/R-ImmR crosstalk prevents oncogenesis at early stages, it passes to the dark side to support disease progression at later stages [82]. In this respect, the E2F1 paradox might be explained if it is hypothesized that E2F1 stands at the crossroads between DDR/R and ImmR, mediating their interactions. The E2F1 role in onco-immunology emerges as a subject of fruitful future research, which might open new avenues for next-generation therapeutics.
Conclusions
Uncovering the metastasis-promoting E2F1:MTA1/HAS2 network and using structure-based pharmacophore modeling, we propose argatroban as an innovative, E2F1-coregulator-based antimetastatic drug. Treatment of high E2F1/MTA1-expressing tumors with argatroban in clinically safe doses disrupts this complex, modulates the TME and prevents metastasis and cancer relapses.
"year": 2019,
"sha1": "cdc43862352506fd8b06106e5b2f49ed80262cae",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.7150/thno.29546",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "cdc43862352506fd8b06106e5b2f49ed80262cae",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
Effect of CO2 Concentration on Uptake and Assimilation of Inorganic Carbon in the Extreme Acidophile Acidithiobacillus ferrooxidans
This study was motivated by surprising gaps in the current knowledge of microbial inorganic carbon (Ci) uptake and assimilation at acidic pH values (pH < 3). Particularly striking is the limited understanding of the differences between Ci uptake mechanisms in acidic versus circumneutral environments, where the Ci predominantly occurs either as a dissolved gas (CO2) or as bicarbonate (HCO3−), respectively. In order to gain initial traction on the problem, the relative abundance of transcripts encoding proteins involved in Ci uptake and assimilation was studied in the autotrophic, polyextreme acidophile Acidithiobacillus ferrooxidans, whose optimum pH for growth is 2.5 when using ferrous iron as an energy source, although it is able to grow at pH 5 when using sulfur as an energy source. The relative abundance of transcripts of five operons (cbb1-5) and one gene cluster (can-sulP) was monitored by RT-qPCR and, in selected cases, at the protein level by Western blotting, when cells were grown on elemental sulfur under different regimens of CO2 concentration. Of particular note was the absence of a classical bicarbonate uptake system in A. ferrooxidans. However, bioinformatic approaches predict that sulP, previously annotated as a sulfate transporter, encodes a novel type of bicarbonate transporter. A conceptual model of CO2 fixation was constructed from combined bioinformatic and experimental approaches that suggests strategies for providing ecological flexibility under changing concentrations of CO2 and provides a portal to elucidating Ci uptake and regulation in acidic conditions. The results could advance the understanding of industrial bioleaching processes to recover metals such as copper at acidic pH. In addition, they may also shed light on how chemolithoautotrophic acidophiles influence the nutrient and energy balance in naturally occurring low pH environments.
INTRODUCTION
Acidithiobacillus ferrooxidans is a polyextremophile inhabiting very acidic (pH < 3) and often metal-laden environments that belongs to the Acidithiobacillia class within the Proteobacteria (Williams and Kelly, 2013). It is an obligate chemolithoautotrophic, mesophilic microorganism that gains energy and reducing power by the aerobic oxidation of hydrogen, inorganic sulfur compounds, and ferrous iron (Bonnefoy and Holmes, 2012; Dopson and Johnson, 2012) and anaerobically via sulfur or formate oxidation coupled to reduction of ferric iron (Pronk et al., 1991; Hedrich and Johnson, 2013; Osorio et al., 2013).
A. ferrooxidans is one of the most abundant microorganisms found at ambient temperatures in industrial bioleaching heaps used for the recovery of, e.g., copper (Soto et al., 2013; Vera et al., 2013; Zhang et al., 2016). It also forms an integral part of naturally occurring acidic ecosystems such as the Rio Tinto and the deep subsurface in the Iberian pyrite belt (Amils et al., 2014), acidic springs, cave systems plus volcanic soils (reviewed in Johnson, 2012; Hedrich and Schippers, 2016), and acid mine drainage (AMD) (Chen et al., 2015; Teng et al., 2017). A. ferrooxidans is considered a model species for understanding genetic and metabolic functions (reviewed in Cardenas et al., 2016) and survival mechanisms at extremely low pH (Chao et al., 2008; reviewed in Slonczewski et al., 2009). It has also provided useful information for understanding how microorganisms can contribute to the nutrient and energy balance in bioleaching heaps (Valdes et al., 2008; Valdés et al., 2010).
The dominant source of available inorganic carbon (Ci) in circumneutral and slightly alkaline environments such as terrestrial fresh water and oceans is bicarbonate (HCO3−), with lower concentrations of dissolved CO2 (Mangan et al., 2016). The majority of models for prokaryotic Ci uptake and assimilation have been elucidated for organisms, such as cyanobacteria, that inhabit these environments (Burnap et al., 2015; Klanchui et al., 2017). Cyanobacteria fix carbon via the Calvin-Benson-Bassham (CBB) cycle and use a variety of carbon concentration mechanisms (CCMs) to take up CO2 or bicarbonate and provide CO2 to the carbon fixation enzyme, ribulose bisphosphate carboxylase-oxygenase (RubisCO). Five Ci uptake systems have been reported, including three bicarbonate transporters (BCT1, SbtA, and BicA) that vary in affinity and flux for bicarbonate, and two intracellular CO2 "uptake" systems that convert CO2, passively diffusing into the cell, into bicarbonate (Burnap et al., 2015; Klanchui et al., 2017). The transporters vary in affinity and flux for bicarbonate, providing a selective advantage to organisms in environments with a wide dynamic range of HCO3− availability. For example, freshwater β-cyanobacteria that live at about pH 7 not only use the high-affinity SbtA transporter and the low-affinity, high-flux BicA transporter but also the medium-affinity BCT1, an inducible bicarbonate transporter under limited Ci conditions (Sandrini et al., 2014, 2015; Klanchui et al., 2017). Alkaline lake β-cyanobacteria tend to have just BicA, and it is hypothesized that the high-affinity SbtA is not necessary in environments rich in HCO3− (Klanchui et al., 2017).
In contrast, less is known about Ci uptake and assimilation in extremely acidic environments where the dominant source of Ci is the dissolved gas CO2 (Carroll and Mather, 1992; Cardenas et al., 2010; Valdés et al., 2010; Mangan et al., 2016; WikiVividly, 2018). A. ferrooxidans fixes carbon by the CBB cycle. Bioinformatic analyses, EMSA assays, and complementation of mutants in the surrogate host Cupriavidus necator (formerly Ralstonia eutropha) have demonstrated the presence of four operons (cbb1-4) of CBB cycle genes in A. ferrooxidans that are involved in Ci uptake and assimilation. Operons cbb1-3 were shown experimentally to be regulated by CbbR, a LysR-family transcriptional regulator (Esparza et al., 2009, 2015). In the present study, RNA transcript and protein abundance profiles were determined for genes present in A. ferrooxidans operons cbb1-4 under different CO2 concentrations. In addition, a fifth cbb operon (cbb5) and a gene cluster predicted to encode a bicarbonate uptake transporter and a carbonic anhydrase were detected and were also evaluated for expression under different CO2 concentration regimes. Acquiring this knowledge is important considering the central roles that the CCM and CBB cycle genes play in the determination of CO2 fixation and biomass formation in extremely acidic environments.
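As a quick sanity check on this speciation claim, the standard carbonate equilibrium makes the contrast quantitative. The worked relation below assumes the first dissociation constant of carbonic acid, pKa1 of approximately 6.35 at 25°C, and ideal dilute conditions (activity corrections neglected).

```latex
% Carbonate speciation from the first dissociation equilibrium
% (pK_{a1} \approx 6.35 at 25 C; activity corrections neglected):
\frac{[\mathrm{HCO_3^-}]}{[\mathrm{CO_2(aq)}]} = 10^{\,\mathrm{pH}-\mathrm{p}K_{a1}}
\approx
\begin{cases}
10^{7.0-6.35} \approx 4.5 & \text{at pH 7 (bicarbonate dominates)}\\[4pt]
10^{2.5-6.35} \approx 1.4\times 10^{-4} & \text{at pH 2.5 (dissolved CO$_2$ dominates)}
\end{cases}
```

Thus, at the growth pH values used in this study, essentially all available Ci is dissolved CO2 rather than bicarbonate, which frames the question of why A. ferrooxidans would retain any bicarbonate-handling machinery at all.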
Bacterial Strains and Culture Conditions
A. ferrooxidans ATCC 23270 was cultured in 9K medium (Quatrini et al., 2007) adjusted to pH 3.5 with H2SO4 and containing 5 g/L elemental sulfur at 30°C under aerobic conditions (0.036% CO2). Increased concentrations of CO2 were obtained by sparging with a mixture of CO2 and air, changing the ratio of CO2 in the gas mixture. A. ferrooxidans cultures were grown to mid-log phase (Guacucano et al., 2000) as measured by cell counts using a Neubauer chamber. Cells were rapidly cooled on ice and then centrifuged at 800 × g for 5 min at 4°C to remove solid sulfur particles, followed by cell capture by centrifugation at 8,000 × g for 10 min at 4°C. The cell pellet was re-suspended in ice-cold 9K salt solution for further washing. Total RNA was prepared immediately after cell harvesting.
Isolation of RNA and Real-Time Quantitative PCR (RT-qPCR) Assays
Total RNA was isolated from A. ferrooxidans cells as described previously (Guacucano et al., 2000). The RNA preparations were treated with DNase I (Fermentas) before proceeding with the cDNA synthesis step. One microgram of total cellular RNA was used for each reaction. Real-time quantitative RT-PCR (RT-qPCR) was performed using RevertAid H Minus Reverse Transcriptase (Fermentas). The sequences of the qPCR primers for genes involved in CO2 assimilation are provided in Table 1. Control reactions performed using RNA but lacking reverse transcriptase, to assess genomic DNA contamination, did not produce any bands after gel electrophoresis (data not shown). RT-qPCR assays were carried out in a 25 µL PCR mixture containing 12.5 µL 2× SYBR Green Supermix (Bio-Rad). The RT-qPCR was performed on an iCycler iQ real-time PCR detection system (Bio-Rad Laboratories, United States) with iQ SYBR Green Supermix (Bio-Rad) as described previously (Liu et al., 2011). Quantification of target gene expression was performed using iCycler iQ5™ software with the normalized expression analysis method described by the manufacturer. Relative quantifications were performed from duplicate biological replicates using expression of recA as a control, as described previously (Esparza et al., 2015). PCR primers were designed as described (Thornton and Basu, 2011) and the results were analyzed using the Bio-Rad iQ5 RT-qPCR software and Excel. Statistical significance was assessed by ANOVA (Kotz et al., 2014) followed by the Tukey test (Tukey, 1949), with a significance threshold of P < 0.05.
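To illustrate the "normalized expression" calculation in a concrete form, the snippet below implements the standard 2^-ΔΔCt relative quantification against the recA reference gene. This is a minimal sketch of the generic method, not the vendor's exact algorithm, and the Ct values are invented for illustration.

```python
# Minimal sketch of relative RT-qPCR quantification (2^-ddCt style),
# normalizing a target gene against the recA reference gene.
def relative_expression(ct_target, ct_recA, ct_target_ref, ct_recA_ref):
    """Fold change of a target gene in a test condition relative to the
    reference condition (here, growth in 0.036% CO2), each condition
    normalized to recA."""
    d_ct_test = ct_target - ct_recA          # normalize test condition
    d_ct_ref = ct_target_ref - ct_recA_ref   # normalize reference condition
    ddct = d_ct_test - d_ct_ref
    return 2 ** (-ddct)

# Example: the target amplifies 3 cycles earlier at 2.5% CO2 than in air,
# with recA unchanged -> roughly 8-fold upregulation.
print(relative_expression(ct_target=22.0, ct_recA=18.0,
                          ct_target_ref=25.0, ct_recA_ref=18.0))  # 8.0
```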
A list of A. ferrooxidans genes used in this study, their predicted functions and GenBank locus tags is provided in Table 2. The table has been updated from Esparza et al. (2010).
Growth of A. ferrooxidans in Varying CO2 Concentrations
In order to evaluate the effect of CO2 on the growth of A. ferrooxidans, cells were cultivated in 9K medium, pH 3.5, containing 5 g/L elemental sulfur at 30°C (Quatrini et al., 2007) with increasing concentrations of CO2 from 0.036% (air) to 20%. The maximum growth rate occurred in 2.5% CO2, with decreasing growth rates in 5, 0.036, 10 and 20% CO2, respectively (Figure 1). However, the maximum cell concentration (cells/mL) was unaffected by increasing CO2.
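As a side note on how a specific growth rate can be extracted from such cell-count series, the sketch below fits ln(cell count) against time over the exponential phase. The counts and time points are invented for illustration and are not the data behind Figure 1.

```python
# Sketch (assumed method): estimate the specific growth rate mu from
# Neubauer-chamber counts by linear regression of ln(N) vs. time; the
# slope of the fit is mu (1/h). Example values are hypothetical.
import numpy as np

t_h = np.array([0.0, 24.0, 48.0, 72.0])        # sampling times (h)
counts = np.array([1e6, 4e6, 1.6e7, 6.3e7])    # cells/mL during log phase
mu, ln_n0 = np.polyfit(t_h, np.log(counts), 1) # slope = mu, intercept = ln(N0)
doubling_time = np.log(2) / mu
print(f"mu = {mu:.3f} 1/h, doubling time = {doubling_time:.1f} h")
```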
Transcriptional Response of CBB Genes to Cellular Growth in Different CO2 Concentrations
Having established that CO2 concentration impacts cell growth rate, we wished to examine the effect of CO2 concentration on the expression of genes involved in the CBB and CCM pathways. Levels of RNA transcripts were assayed by RT-qPCR for one or more representative genes of each of the five cbb operons, isolated from cells grown under different regimens of CO2 concentration from 0.036% (natural CO2 concentration in air) up to 20% (Figure 2). Transcript numbers of each tested gene are reported relative to the level of RNA during growth at 0.036% CO2, normalized to one. The relative levels of transcripts of cbbR, encoding the CbbR transcriptional regulator, increased 3.4 ± 0.6-fold at a concentration of 10% CO2. A further increase to 20% CO2 did not result in any additional changes in RNA expression (Figure 2). In contrast, levels of RNA expression decreased with increasing CO2 concentrations for genes in the cbb1 operon, including RubisCO Form IAc, associated carboxysome genes (including can1, encoding a carboxysome-associated ε-type carbonic anhydrase), and the RubisCO activase genes cbbQ1 plus cbbO1. The expression of RNA from the cbb2 operon, encoding RubisCO Form IAq and the RubisCO activase genes cbbQ2 and cbbO2, also decreased with increasing CO2, but the decrease was more abrupt than that for RubisCO Form IAc, suggesting that its expression was more sensitive to increasing CO2. RNA transcripts for hyp3 (unknown function) and cbbG (encoding glyceraldehyde-3-phosphate dehydrogenase) in the cbb3 operon increased 30- and 20-fold, respectively, when the CO2 concentration was raised from air to 2.5% CO2, followed by a subsequent decrease in transcript numbers in 5, 10, and 20% CO2, although transcripts in 20% CO2 were still higher than in air. The operon cbb3 encodes enzymes in the Calvin cycle together with phosphoglycolate phosphatase (cbbZ), which is involved in the detoxification of 2-phosphoglycolate produced by the reaction of RubisCO with oxygen (Ogren and Bowes, 1971), and part of the Trp operon that is involved in pyruvate formation and tryptophan biosynthesis. Transcripts for cbbP, encoding phosphoribulokinase (PRK), in the cbb4 operon increased about 70-fold when cells were grown in 2.5% CO2, with a further increase in 5% CO2 to approximately 100-fold. Although the fold difference increased further in 10 and 20% CO2, the increases were not statistically significant (ANOVA, Kotz et al., 2014, followed by the Tukey test, Tukey, 1949; P < 0.05). PRK catalyzes the ATP-dependent phosphorylation of ribulose 5-phosphate (RuP) into ribulose 1,5-bisphosphate (RuBP), which is the substrate for RubisCO. RNA transcript abundance for cbbM (cbb operon 5), encoding RubisCO Form II, increased about two-fold in 2% CO2 with further increases to about three-fold in 5-20% CO2.
Protein Response of CBB Genes to Cellular Growth in Different CO2 Concentrations

RNA transcript abundance, as measured by RT-qPCR, does not always correspond to the level of the corresponding protein (Rocca et al., 2015). In order to evaluate whether protein concentrations exhibited similar trends to the RNA levels, proteins encoded by selected cbb operon genes were assayed by Western blotting when cells were grown in increasing concentrations of CO2 (Figure 3). CbbR concentrations increased with increasing CO2 concentrations, mimicking transcript changes. The levels of CbbS1 and/or CbbS2 (the antibody cannot distinguish between the two forms of CbbS) decreased when cells were grown in increasing concentrations of CO2. Levels of CbbP increased until a concentration of 10% CO2 was reached, with a subsequent slight decrease at 20% CO2. These data matched the changes in levels of RNA abundance in all three cases. However, absolute levels of protein abundance do not match transcript abundance, perhaps because of additional levels of post-transcriptional and post-translational regulation of the proteins and because Western blotting is at best semi-quantitative (Gassmann et al., 2009).

FIGURE 2 | RNA transcript levels for genes and corresponding cbb operons involved in A. ferrooxidans CO2 assimilation under different CO2 concentrations, relative to the transcript levels in air (0.036% CO2) normalized to one. mRNA abundance was determined by RT-qPCR (n = 4) for the following genes: cbbR, encoding the transcriptional regulator CbbR (cbb1); cbbS1, small subunit of RubisCO Form IAc (cbb1); cbbS2, small subunit of RubisCO Form IAq (cbb2); hyp (hypothetical) and cbbG, glyceraldehyde-3-phosphate dehydrogenase (cbb3); cbbP, phosphoribulokinase (cbb4); and cbbM, RubisCO Form II (cbb5). The genes assayed in each operon are highlighted in orange. The cbbR-responsive promoters are indicated with a DNA symbol and a cartoon of the two subunits of CbbR1 (gray ellipses). A full list of genes in the operons is provided in Table 1.
Ci Uptake
High-affinity NDH-I3 and low-affinity NDH-I4 CO2 uptake systems have been described in cyanobacteria (reviewed in Klanchui et al., 2017). However, bioinformatic examination of the genome of A. ferrooxidans failed to reveal gene candidates for the critical cupA in the NDH-I3 system or cupB in the NDH-I4 system, suggesting that A. ferrooxidans does not use these systems. Instead, we propose that CO2 passively diffuses into A. ferrooxidans, as has been shown in other organisms (Gutknecht et al., 1977).
SulP Is Predicted to Be a Bicarbonate Uptake/Efflux Pump
In order to investigate the possibility that sulP in A. ferrooxidans encodes a bicarbonate transporter, a detailed bioinformatic examination of the gene/protein was undertaken. SulP is predicted to be an inner membrane protein with eleven transmembrane regions and a topology similar to the experimentally verified BicA from M. tuberculosis H37Rv (Supplementary Figure S1). Particularly significant is that sulP is juxtaposed to can2 in A. ferrooxidans. can2 is strongly predicted to encode a cytoplasmic carbonic anhydrase of the β-class, clade B (Valdes et al., 2008). Carbonic anhydrases (EC 4.2.1.1) are metalloenzymes that catalyze the reversible hydration of CO2 to HCO3− (Frost and McKenna, 2014). The juxtaposition of sulP and can2 suggests a functional relationship involving the uptake (or export) of HCO3− by SulP and the interconversion of HCO3− and CO2 by Can2 inside the cell. This hypothesis is strongly supported by the discovery, using the STRING database (Szklarczyk et al., 2017), of multiple examples of conserved microsynteny between sulP and can, including gene fusions, in many different organisms (Supplementary Figure S2).
Motivated by the mounting evidence that SulP is a bicarbonate transporter, the functional relationship between SulP and experimentally validated sulfate or bicarbonate transporters was explored using phylogenomic approaches. SulP sequences chosen for comparison included an experimentally validated sulfate transporter from M. tuberculosis H37Rv and experimentally validated bicarbonate transporters from E. coli APEC O1, E. coli O157:H7 str. Sakai, M. tuberculosis H37Rv, and Synechococcus sp. PCC 7002, as specified in the Section "Materials and Methods." Additional SulP protein sequences with predicted sulfate or bicarbonate transport functions were obtained from multiple phylogenetically distinct Bacteria and added to the analysis. A multiple sequence alignment was constructed using the MAFFT and ClustalW alignment programs. The resultant alignments were used to construct a maximum-likelihood, unrooted phylogenetic tree that was visualized and annotated in FigTree (Figure 4).
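For orientation, a minimal version of such a tree-building pipeline could look like the sketch below. The choice of IQ-TREE 2 for the maximum-likelihood step is an assumption, since the text does not name the ML program used, and the file names are placeholders.

```python
# Hedged sketch of an align-then-infer pipeline for the SulP homologs.
import subprocess

# Align homologs with MAFFT (--auto lets MAFFT choose a strategy).
with open("sulP_aligned.fasta", "w") as aln:
    subprocess.run(["mafft", "--auto", "sulP_homologs.fasta"],
                   stdout=aln, check=True)

# Infer a maximum-likelihood tree. IQ-TREE 2 is an assumed stand-in for
# whatever ML program the authors used; -m LG+G sets a common protein
# model, -B 1000 requests 1000 ultrafast bootstrap replicates.
subprocess.run(["iqtree2", "-s", "sulP_aligned.fasta",
                "-m", "LG+G", "-B", "1000"], check=True)

# The resulting .treefile can then be opened in FigTree for
# visualization and clade annotation.
```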
Five phylogenetically distinct clades were detected (labeled A to E in Figure 4). In clade A, sequences cluster with the experimentally verified sulfate transporter Rv1739c from M. tuberculosis H37Rv (Marietou et al., 2018). Clade B contains no sequences with experimentally validated functions and remains of unknown function. Clade C is associated with the experimentally validated bicarbonate transporter Rv3273 from M. tuberculosis H37Rv (Felce and Saier, 2004). SulP from A. ferrooxidans ATCC 23270 clusters in this clade, strongly supporting the contention that it is a bicarbonate and not a sulfate transporter. SulP sequences from A. ferrooxidans ATCC 53993 and Acidithiobacillus ferrivorans SS3 plus CF27 also cluster in this clade, suggesting that they are also bicarbonate transporters. Microsynteny examination of clade C indicated that sulP is always juxtaposed to can2 and that in some instances they are fused, providing additional support for the idea that the two genes are functionally related. Bicarbonate transporters in other systems use either Na+ or H+ as the counter-ion for the importation of HCO3− (Saier et al., 2016). The counter-ion used by A. ferrooxidans remains unknown. Clade D includes sequences that cluster with the experimentally verified bicarbonate transporter BicA of Synechococcus PCC 7002 (Price et al., 2004). In clade E, sequences cluster with the experimentally verified bicarbonate transporters YchM of E. coli APEC O1 and E. coli O157:H7 str. Sakai (Moraes and Reithmeier, 2012). In contrast to clade C, SulP in all other clades is not associated with Can2; rather, it is fused to a STAS domain (sulfate transporter/anti-sigma factor antagonist) that is thought to be involved in regulation or targeting (Shibagaki and Grossman, 2006).
Transcriptional Response of the can2-sulP Gene Cluster to Cellular Growth in Different CO2 Concentrations
Given the multiple lines of bioinformatic evidence suggesting that sulP encodes a bicarbonate transporter and that it is functionally related to the adjacent carbonic anhydrase gene can2, transcript abundance of can2 was assayed by RT-qPCR when cells were grown in increasing concentrations of CO2 (Figure 5). RNA transcript abundance in 2% CO2 decreased to less than one-half of that determined in 0.036% CO2, with further decreases to 0.1% in 20% CO2. The can2-sulP gene cluster has not been experimentally demonstrated to be an operon, but the phylogenetically conserved juxtaposition of the two genes and their close proximity, separated by only nine nucleotides, suggest that they are co-transcribed.
Additional Discussion and Model
This study advances our understanding of the mechanisms employed by A. ferrooxidans to take up and concentrate Ci and to incorporate CO2 into fixed carbon via the CBB cycle. A model is presented that builds upon prior investigations (Appia-Ayme et al., 2006; Esparza et al., 2009, 2010, 2015) and provides a preliminary framework for understanding carbon fixation at extremely acidic pH under different regimes of CO2 concentration (Figure 6). Though much remains to be done to validate aspects of the model, this work is an important step toward identifying the components, pathways, and regulation of carbon sequestration in A. ferrooxidans. It generates a more accurate and perceptive starting point to characterize the genetics and physiology of carbon sequestration in other extreme acidophiles. In addition, the model reveals a potentially flexible metabolic repertoire mediating carbon sequestration in different environments that can guide future research. Finally, it serves as a portal for deducing aspects of the CCM and CBB pathways in metagenomes from low pH environments (Guo et al., 2013).
Model
The maximum rate of A. ferrooxidans growth in media containing elemental S as an energy source was obtained in the presence of 2.5% CO2 (Figure 1). This tendency can be explained, at least partially, by the expression of the five cbb operons as determined by changes in RNA transcript abundance (Figure 2) supported by protein abundance profiling (Figure 3). Two representative genes of the cbb3 operon, hyp3 and cbbG, attain maximum transcript abundance in 2.5% CO2, which subsequently diminishes as the CO2 concentration is increased to 20% CO2. These genes are part of an operon coding for enzymes that pass the carbon from 3-PGA (3-phosphoglycerate), generated by RubisCO, through the pentose phosphate and glycolysis pathways to pyruvate and to pathways for glycogen metabolism. The cbb3 operon also encodes cbbZ, involved in 2-phosphoglycolate detoxification, a by-product of the reaction of RubisCO with O2 (Esparza et al., 2009, 2015). Assuming that a higher level of transcript abundance results in a concomitant increase in the levels of the respective encoded enzymes, growth in 2.5% CO2 could result in an increase in fixed carbon compounds and increased protection from O2 damage compared to growth in air. This, in turn, could contribute to more rapid growth in 2.5% CO2. In order to achieve this increase in growth, the CBB cycle needs to provide more 3-PGA as a starting material to feed into the sugar transformation pathway. 3-PGA is the primary product of RubisCO, and in 2.5% CO2 there is an increase in the abundance of transcripts for RubisCO Form II, encoded by cbbM of the cbb5 operon (Figure 2), that could account in part for an increase in 3-PGA production. In other organisms, Form II RubisCO has poor affinity for CO2 and a low discrimination against O2 as an alternative substrate, suggesting that the enzyme is adapted to functioning in low-O2 and high-CO2 environments (Dobrinski et al., 2005; Badger and Bek, 2008). The observed increase in transcript abundance for RubisCO Form II at higher concentrations of CO2 (Figure 2) is consistent with this view.
One of the products of the enzymes encoded by the cbb3 operon is ribulose-5-P, a precursor of ribulose-1,5-P, the substrate for RubisCO (Figure 6). The conversion of ribulose-5-P to ribulose-1,5-P is carried out by phosphoribulokinase (PRK), encoded by cbbP of the cbb4 operon. PRK catalyzes the ATP-dependent phosphorylation of ribulose 5-phosphate (RuP) into ribulose 1,5-bisphosphate (RuBP), both intermediates in the CBB cycle. Together with RubisCO, PRK is unique to this cycle. There is a 65-fold increase in transcript abundance for cbbP in 2.5% CO2 compared to air (Figure 2) that could be responsible for an increase in ribulose-1,5-P. RNA transcript abundance for cbbP continues to rise in 10% CO2, but this is not accompanied by a concomitant increase in growth rate (Figure 1). Clearly, there are other factors limiting the growth rate at concentrations of CO2 above 2.5%. One possible explanation is the observed decrease in transcript abundance of genes in the cbb3 operon (encoding many genes involved in sugar interconversions) in 5-20% CO2 (Figure 2) that would potentially limit growth by diminished enzyme availability for various sugar conversions (Figure 6).

FIGURE 5 | RNA transcript abundance (assayed by RT-qPCR) of can2 when cells were grown in different concentrations of CO2 from 0.036% CO2 (air) to 20% CO2. The orange arrow indicates the gene assayed for transcript abundance.
In summary, the model suggests that A. ferrooxidans grows fastest in 2.5% CO2 due to an increase in transcript abundance for sugar transformation pathway genes, for cbbP, which feeds RuBP into the CBB cycle, and for genes encoding RubisCO Form II, which is postulated to be the RubisCO used at higher CO2 concentrations.
Carbon Concentration Mechanisms: Carboxysomes With β-Type Carbonic Anhydrases
Another important consideration is how A. ferrooxidans CCM genes involved in the uptake and concentration of Ci respond to changes in CO2 concentration. A. ferrooxidans has evolved efficient CCMs to transport and accumulate Ci. First, it encodes the formation of α-carboxysomes, bacterial micro-compartments that provide elevated concentrations of CO2 to the main CO2-fixing enzyme RubisCO and reduce its reaction with oxygen (reviewed in Rae et al., 2013). A. ferrooxidans has multiple forms of RubisCO, including two copies of Form I and one copy of Form II (Heinhorst et al., 2002; Levican et al., 2008). Using protein similarity analysis, we now assign the two forms of Form I RubisCO to sub-types IAc and IAq, which consist of the large and small subunits of RubisCO as has been observed in other organisms (Badger and Bek, 2008). RubisCO Form II consists only of a large subunit with little sequence or structural similarity to the large subunit of Forms IAc and IAq (Tabita et al., 2008; Bohnke and Perner, 2017). Genes encoding RubisCO Form IAc lie in the cbb1 operon and co-occur with carboxysome formation genes, and the enzyme is probably encapsulated within the carboxysome, as has been found in other organisms (Tabita et al., 2008). In addition, csoS3 is also present in the cbb1 operon and encodes a β-type carbonic anhydrase (Can1) (Sawaya et al., 2006). CsoS3 is located in the carboxysome shell, is responsible for the conversion of bicarbonate to CO2, and is an important contributor to the CCM (So et al., 2004). Thus, cbb1 encodes the major components of the CCM carboxysome that encapsulates RubisCO in A. ferrooxidans.
Under conditions of low CO2 concentration, carboxysome formation genes are upregulated in other microorganisms (Orus et al., 2001). This has been confirmed in A. ferrooxidans, where increased transcript abundance was observed for cbb1 operon genes under anaerobic conditions with low CO2 concentrations (Osorio et al., 2013). On the other hand, genes encoding RubisCO Form IAq, located in the cbb2 operon, are not closely linked in the genome to carboxysome genes. In addition, despite considerable sequence similarity between both large and small subunits of Form IAq and Form IAc, a major difference is the presence in many bacteria, including A. ferrooxidans, of a six-amino-acid insertion in the small subunit of Form IAq not found in Form IAc (Supplementary Figure S4). A crystal structure of the small subunit of Form IAc RubisCO from Halothiobacillus neapolitanus [1SVD; the structure of H. neapolitanus RubisCO (Kerfield, C. A. et al., 2005, unpublished)] provides evidence that this insertion impedes its interaction with carboxysome proteins (Badger and Bek, 2008), suggesting that Form IAq is not associated with carboxysomes in the cell. An examination of its kinetic properties suggests that Form IAq is adapted to environments with medium to high CO2 concentrations with oxygen present, whereas Form IAc is more adapted to low-CO2 and low- to high-O2 environments (Badger and Bek, 2008). RNA transcript abundance of both RubisCO Form IAc and Form IAq diminishes at CO2 concentrations above that of air. However, as both expression levels are low, it is not possible to discern whether there are statistically significant differences in the rate of decrease of transcript abundance between the two RubisCO forms as CO2 concentrations are increased.
Carbon Concentration Mechanisms: Bicarbonate Uptake and Cytoplasmic β-Carbonic Anhydrase
In addition to the proposed involvement of a carboxysome β-type carbonic anhydrase, a second potential CCM mechanism is the presence of a cytoplasmically located β-carbonic anhydrase (Can2) that is genetically linked to a predicted bicarbonate transporter, SulP. Carbonic anhydrases catalyze the proton-mediated reversible hydration of CO2 to HCO3− (Smith and Ferry, 2000), equilibrating the reaction between CO2, bicarbonate, and protons, and play important roles in ion transport, acid-base regulation, gas exchange, and CO2 fixation in many organisms (Lotlikar et al., 2013; Aggarwal and McKenna, 2015). Although the function of Can2 in A. ferrooxidans remains to be experimentally validated, the model suggests that it is involved in the reversible hydration of CO2 (that has entered the cell by diffusion) to HCO3−, as found in other organisms (Smith and Ferry, 2000). The genomic co-localization of can2 with the predicted bicarbonate transporter sulP suggests that they work together, perhaps in pH regulation, as has been found in other organisms (Lotlikar et al., 2013; Aggarwal and McKenna, 2015). In this model, the importation of bicarbonate into the cell by SulP would be accompanied by the expulsion of protons. Subsequently, Can2 could convert the bicarbonate to CO2, accompanied by the conversion of cellular protons to water and the diffusion of the CO2 outside the cell, as shown in Figure 6. Thus, protons are both exported and consumed, and this may be an important mechanism for pH regulation in extremely acidic conditions. Alternatively, and not mutually exclusively, there is the possibility that Can2 works in the reverse direction and converts CO2 to bicarbonate that is subsequently taken into the carboxysome by Can1. This could improve the efficiency of carbon fixation under limiting conditions of external CO2, as has been observed in the facilitation of growth of other microbes at low partial pressures of CO2 (Kusian et al., 2002; Merlin et al., 2003; Mitsuhashi et al., 2004; Burghout et al., 2010). Low partial pressures of CO2 in bioleaching heaps have been observed due to the decreased solubility of CO2 at low pH, especially when the temperature of the heap rises (further lowering the solubility of CO2) as a result of chemical and biochemical exothermic reactions, including the conversion of pyrite to oxidized sulfur compounds (Valdés et al., 2010). If Can2 is involved in improving uptake of CO2 at low partial pressures of CO2, then it might explain why can2 exhibits a decrease in transcript abundance at increasing concentrations of CO2 (Figure 2).
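For reference, the proton-consuming step proposed for Can2 above is the standard carbonic anhydrase dehydration reaction:

HCO3− + H+ ⇌ CO2 + H2O

In the proposed model, each bicarbonate imported by SulP and dehydrated by Can2 therefore removes one cytoplasmic proton, which is the basis of the pH-homeostasis role suggested above.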
In summary, A. ferrooxidans is predicted to have a second carbonic anhydrase, encoded by can2, located in the cytoplasm, that functions in the reversible hydration of CO2. Juxtaposed is a gene (sulP) predicted to encode a membrane-associated bicarbonate transporter. It is predicted that sulP/can2 constitute an operon. The abundance of transcripts for sulP/can2 decreases with increasing CO2 concentrations. It is hypothesized that SulP/Can2 function as a bicarbonate uptake system, but they may also serve as an intracellular proton concentration homeostatic mechanism.
Regulation
Regulation of Ci uptake and assimilation is very complex and is dependent on transcriptional regulators that act in concert with small molecular effectors that are well-known metabolites. In addition, it has recently been discovered that numerous small RNA molecules act as antisense regulators (Burnap et al., 2015). Although there are many studies of the regulation of Ci uptake and assimilation in autotrophs, principally in photoautotrophs (Kusian and Bowien, 1997), there have been only limited insights into their regulation in extremely acidophilic chemolithoautotrophs. It was suggested in Esparza et al. (2010) that the regulation of cbb operons 1-4 of A. ferrooxidans involved the action of the master regulator CbbR, as has been observed in many microorganisms (Badger and Bek, 2008). The evidence included: (i) the presence of a CbbR binding site upstream of cbbR leading to autoregulation of cbbR (Esparza et al., 2015); (ii) the presence of CbbR binding sites upstream of operons cbb1-3; and (iii) the activity of A. ferrooxidans cbbR promoters when cloned into the surrogate host C. necator (formerly R. eutropha) (Esparza et al., 2015), including the detection of promoter activity upstream of cbb4 even in the absence of an experimentally validated CbbR binding site in this operon. The observed transcript profiles of operons cbb1-4 can be explained on the basis of the activity of CbbR. Increased CbbR down-regulates the expression of the cbb2 operon and up-regulates the cbb3 and cbb4 operons (Figure 4). That CbbR can act as both a positive and a negative regulator has been observed in other organisms (Viale et al., 1991). However, what controls the up-regulation of CbbR in A. ferrooxidans in response to increasing CO2 concentrations is unknown. One possibility is that it involves the interaction of the regulator RegA, which responds to the redox state of the cell, with CbbR (Dangel and Tabita, 2015). Alternatively, it could involve the binding of possible effectors such as ATP, NADPH, RuBP, and fructose-1,6-bisphosphate to CbbR, many of which are metabolites of the CBB cycle involved in feedback regulation (Joshi et al., 2012), as discussed further below. An important observation is the increase in transcripts of RubisCO Form II at high CO2 concentrations (Figure 2). In other organisms it has been shown that RubisCO Form II is controlled by the transcriptional regulator CbbRm (Bohnke and Perner, 2017), and that CbbRm plus RubisCO Form II expression levels increase at CO2 concentrations above 2%. The molecular mechanisms underlying the regulation of RubisCO Form II are only beginning to be understood (Dubbs et al., 2004; Toyoda et al., 2005; Tsai et al., 2015). It has been suggested that RubisCO Form II evolved at least 2.7 billion years ago, when atmospheric CO2 levels were one to three orders of magnitude higher than today (Raven, 1991; Rye et al., 1995; Tortell, 2000; Kaufman and Xiao, 2003; Dobrinski et al., 2005; Griffiths et al., 2017). At that time, CCMs were perhaps not required, which is consistent with the observation that RubisCO Form II in A. ferrooxidans is not associated with the CCM carboxysome formation genes.
Regulation of expression of the CCM and CBB cycle genes in other organisms is also known to be mediated by small effector molecules (Dubbs et al., 2004; Tamoi et al., 2005). These include CO2 (Shimizu et al., 2015), α-ketoglutarate and the oxidized form of nicotinamide adenine dinucleotide phosphate (NADP+) (Daley et al., 2012), ATP, fructose 1,6-bisphosphate, and NADPH (Joshi et al., 2012), as well as several compounds of the CBB reductive pentose phosphate pathway, several of whose enzymes are encoded by the cbb3 operon of A. ferrooxidans (Figures 3, 6) (Dubbs et al., 2004). The role of these effectors has not been tested in A. ferrooxidans, and it will be a considerable challenge to elucidate the manifold dependencies and interconnections between the diverse cellular processes that together facilitate the regulation of the CCM and CBB pathways in this organism. The use of the surrogate host C. necator provides an opportunity to experimentally test the role of metabolic effectors in A. ferrooxidans (Esparza et al., 2015).
In summary, CbbR has been shown to regulate the expression of cbb operons 1-4. Its increase in expression in higher CO2 concentrations is consistent with previous observations that it can serve as both a negative regulator (cbb operons 1 and 2) and a positive regulator (cbb4 operon). In the case of operon cbb3, an initial increase in expression is observed when the CO2 concentration is increased to 2.5%, suggesting that CbbR acts as a positive regulator, but this is followed by subsequent decreases in transcript abundance as CO2 levels are increased beyond 2.5%, indicating that other factors are involved in the regulation of cbb3. These factors are unknown but could include interactions with small metabolites and with the redox-sensing RegAB system.
High Level Network Interconnections
Network analyses of the multiple levels of CCM and CBB regulation, including the regulation of bicarbonate uptake by a CbbR-like transcription factor (Omata et al., 2001), the interconnection between carbon and nitrogen metabolism (Wheatley et al., 2016) and with oxidative stress as sensed by the redox-sensitive two-component global regulator system RegAB (Romagnoli and Tabita, 2009), and other multilayered connections (Eisenhut et al., 2007; McClure et al., 2016; Westermark and Steuer, 2016), have been carried out principally in photoautotrophs. Less is known about the potential high-level regulatory networks involved in Ci uptake and assimilation in extremely acidic chemolithoautotrophs (Campodonico et al., 2016). Of particular relevance to the present study was the discovery that transcripts were more abundant for the glycogen biosynthetic pathway genes (glyB, EC 2.4.1.18; glyC, EC 2.7.7.27; amy and malQ, EC 2.4.1.25) when A. ferrooxidans was cultivated on sulfur versus ferrous iron, and that this coincided with increased expression of CBB genes (Appia-Ayme et al., 2006). Glycogen biosynthesis/degradation has been shown to be interconnected with glycolysis and the pentose phosphate pathway in A. ferrooxidans (Mamani et al., 2016), supporting the idea that there is a direct connection between CBB cycle genes and the biosynthesis of glycogen. It is postulated that more energy is available when sulfur is used as an energy source, and we propose that this is used as an opportunity to synthesize glycogen as a stored energy source, as has been proposed in other organisms (Goh and Klaenhammer, 2013; Preiss, 2014).
Ecological Considerations
The availability of Ci depends, in part, on the pH of the environment. At high pH values (> pH 9), it occurs principally as carbonate/bicarbonate (HCO3−/CO32−). At circumneutral pH values, it is mainly available as bicarbonate, whereas in very low pH environments (< pH 4), Ci occurs principally as dissolved hydrated CO2 gas (H2CO3). In addition to the different chemical forms of Ci, their concentrations can vary over a wide range in different environments (Sandrini et al., 2014, 2015; Klanchui et al., 2017).
Whereas Ci uptake has been studied extensively in cyanobacteria, to the best of our knowledge, there are no studies on the uptake of bicarbonate in very low pH environments. In initial studies using BlastP with an acceptance cut-off of 1e-06 to probe the genome of A. ferrooxidans, we were unable to detect any of the known bicarbonate transporters. However, weak sequence similarity of SulP with the bicarbonate transporter BicA was observed, and additional phylogenomic and gene microsynteny studies supported the prediction that SulP was a bicarbonate transporter rather than the original prediction that it was a sulfate transporter (Figure 4). A. ferrooxidans grows optimally at pH 2.4 when ferrous iron is used as an energy source and would be expected to rely principally on the free diffusion of hydrated CO2 gas (H2CO3) through the membrane as its source of Ci. So why does A. ferrooxidans have a predicted bicarbonate transporter?
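As an aside, the kind of homology screen described in the preceding paragraph can be sketched with the NCBI BLAST+ command-line tools driven from Python. The FASTA file names below are hypothetical placeholders, while the flags (-evalue, -outfmt 6) are standard BLAST+ options.

import subprocess

# Build a protein database from the predicted A. ferrooxidans proteome
# (hypothetical file name).
subprocess.run(["makeblastdb", "-in", "af_proteome.faa", "-dbtype", "prot"],
               check=True)

# Search known bicarbonate transporter sequences against it with the
# 1e-06 acceptance cut-off used in the study.
subprocess.run([
    "blastp",
    "-query", "known_bicarbonate_transporters.faa",
    "-db", "af_proteome.faa",
    "-evalue", "1e-6",
    "-outfmt", "6",   # tabular output: query, subject, %identity, ..., E-value
    "-out", "hits.tsv",
], check=True)

# An empty hits.tsv corresponds to the "no known transporter detected"
# outcome reported above; weak hits (such as SulP vs. BicA) would then
# motivate follow-up phylogenomic and microsynteny analyses.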
Although A. ferrooxidans grows at a pH optimum of 2.5 when grown on ferrous iron medium, it can also grow at pH 5 when elemental sulfur is used as an energy source (Mcgoran et al., 1969). At this pH, and in an environment with a temperature of 25 °C and a salinity of 5,000 ppm, up to 10% of the dissolved Ci could be in the form of bicarbonate (WikiVividly, 2018), and having a bicarbonate transporter would allow A. ferrooxidans to use this source of Ci. This would permit A. ferrooxidans to exploit a wide range of HCO3− availability, providing potential access to environments with a spectrum of pH values from, e.g., pH 1 to at least pH 5. Thus A. ferrooxidans would be considered a "generalist" rather than a "specialist" (Baronchelli et al., 2013) in a dynamic environment such as a bioleaching heap, where initial pHs are around 5-6 at a time when added acid is consumed by, e.g., silica minerals (Dopson et al., 2008), and before sulfur compound oxidation to sulfuric acid has lowered the pH to the A. ferrooxidans optimum.
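The bicarbonate fraction quoted above follows from the first dissociation equilibrium of carbonic acid. The minimal sketch below evaluates the Henderson-Hasselbalch relation; the apparent pKa1 values are assumptions, since pKa1 shifts with temperature and salinity (roughly 6.0-6.4 under conditions like those cited).

def bicarbonate_fraction(ph: float, pka1: float) -> float:
    """Fraction of dissolved CO2 + HCO3- present as HCO3- at a given pH.

    Uses [HCO3-]/[CO2(aq)] = 10**(pH - pKa1) from the first dissociation
    of carbonic acid; carbonate (pKa2 ~ 10.3) is negligible below pH 7.
    """
    ratio = 10 ** (ph - pka1)
    return ratio / (1.0 + ratio)

for pka1 in (6.0, 6.35):  # assumed apparent pKa1 values
    print(pka1, round(100 * bicarbonate_fraction(5.0, pka1), 1), "%")
# ~9.1% at pKa1 = 6.0 and ~4.3% at pKa1 = 6.35, bracketing the "up to 10%"
# figure at pH 5, versus well under 1% at pH 2.5 where free CO2 dominates.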
DATA AVAILABILITY
The datasets generated for this study can be found in GenBank NCBI, NC_011761.
AUTHOR CONTRIBUTIONS
DH and EJ conceived the project. ME, EJ, and DH planned the experiments. ME carried out the experiments and CG helped with the bioinformatic analyses. All authors interpreted the results. DH and MD wrote the initial draft of the paper. All authors contributed to manuscript revision and approved the submitted version.
FUNDING
This work was supported by the Programa de Apoyo a Centros con Financiamiento Basal AFB 170004 to Fundación Ciencia & Vida and grants from Fondecyt 1130683 and 1181717. ME received a Deutscher Akademischer Austauschdienst (DAAD) scholarship. | 2019-04-04T13:04:18.628Z | 2019-04-04T00:00:00.000 | {
"year": 2019,
"sha1": "82c1d38bbac21c19d95277ca25efee75e0e198ad",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmicb.2019.00603/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "82c1d38bbac21c19d95277ca25efee75e0e198ad",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
54877633 | pes2o/s2orc | v3-fos-license | Principled Eclecticism: Approach and Application in Teaching Writing to ESL/EFL Students
The principal purpose of this paper is to critically examine and evaluate the efficacy of the principled eclectic approach to teaching English as a second/foreign language (ESL/EFL) writing to undergraduate students. The paper illustrates that this new method adapts mainstream writing pedagogies to the individual needs of learners of ESL/EFL in order to address students’ difficulties arising from their contact with an unfamiliar language. Such a claim is based on the researcher’s review of relevant research, the analysis and evaluation of scholarly studies on the subject by leading academics and authorities in the area, and the researcher’s practical experiences as a writing teacher in the Department of English Language and Translation (DELT), College of Languages and Translation (COLT), King Saud University (KSU). It has been generally observed that the common, time-honored, language-based, process-based, and genre-based approaches to teaching writing tend to troubleshoot only certain specific problems related to the teaching of ESL/EFL writing. This paper highlights the importance of student-centered approaches to teaching in order to achieve the goal of coherent, pluralistic language teaching. To achieve this, the discussion recommends classifying, selecting, and sequencing the activities related to teaching writing. Indeed, this is what eclecticism means. The term principled signifies coherence: a consistent focus on the same formal or functional units, sequenced so as to help learners interact and participate in writing activities that need contextualized attention. The paper concludes that the gap between eclecticism and principled eclecticism in teaching English writing must be bridged to improve ESL/EFL learners’ writing skills.
Introduction
The acquisition of English writing skills has gained tremendous importance not only as an academic skill; with the expansion of businesses worldwide and economic and cultural globalization, it has also become an important skill that translates into any career field. By virtue of English being the most important language link, nearly all professions require some form of writing on the job. Whether it is medical reports, financial reports, instructional sheets, and user manuals written by software engineers, or emails and other forms of written communication in all kinds of businesses, the role of writing as a multi-target English as a second/foreign language (ESL/EFL) tool is significant and expanding. Writing also equips us with the communication and thinking skills we need to participate effectively in any activity. Given the paramount importance of the English language, there is an overwhelming surge in learners of ESL/EFL writing seeking to acquire the necessary skills to achieve success in their area of occupation. Teaching writing skills to ESL/EFL learners has its own intrinsic problems, and various teaching methodologies and different approaches have therefore been introduced in language classrooms to teach writing to ESL/EFL learners. All these approaches have doubtlessly helped ESL/EFL learners acquire the requisite writing skills in English and display a certain command of the English language, but the teaching of writing and its acquisition are not linear processes; they require a multi-layered, recursive procedure. Considerable time must be made available for review. Teachers have to emphasize expanding and improving the learners' critical faculties so they can assess and mull over their written compositions as well as evaluate their strengths and weaknesses in relation to their work (Foster, 1992). In the same spirit, it is also imperative that writing instructors draw on learners' cultural, social, and geographical backgrounds and keep them in proper perspective while teaching writing skills in language classrooms. It is for this purpose that a realistic approach to language learning has to be introduced. The principled eclectic approach, in this respect, has emerged as the most effective way of learning and teaching writing to ESL/EFL learners. This concept is substantiated and demonstrated in the Discussion section, with attestations from experts in the field. The principled eclectic approach starts with the writing instructor's valid awareness of the student's needs with respect to his or her environment. The eclectic approach stresses using a variety of methodologies and approaches, choosing techniques from each method that the instructor deems effective and applying them according to the learning context and objectives. Larsen-Freeman (2000) devised the term principled eclecticism to demonstrate a coherent and pluralistic approach to language learning. The term principled, when applied to the eclectic approach, signifies that the use of a variety of language learning activities must be guided by giving appropriate importance to the different components of language learning rather than separating them into chunks of grammar and vocabulary. Language learners come with diverse experiences and goals; therefore, a writing course must be designed in response to their goals and objectives. This is what principled eclecticism stresses. Only after understanding the future needs of learners can instructors design a practical and functional writing course incorporating various instructional
skills. Accredited scholars and researchers on writing pedagogy have asserted that no single writing methodology can be sufficiently helpful in teaching writing to learners with diverse linguistic and cognitive abilities (e.g., Kumaravadivelu, 2006; Nunan, 1991). Single methodologies can fall short of producing the desired learning outcomes in learners with different social, cultural, and rhetorical predispositions. Thus, it is necessary for ESL/EFL writing teachers to undertake a well-rounded approach to meet the diversified needs of their learners. This is possible only through a principled eclectic approach, which combines the text, the context, and the learners (Mellow, 2002).
The current paper analytically examines the scope and utility of the principled eclectic approach to the teaching of writing in a second language. It investigates and tests its efficacy in terms of learning outcomes with regard to the learners. The discussions and conclusions herein are advanced and supported by various research findings by authorities in the field, by empirical evidence, and by an extensive and critical discussion of the research topic. The outcomes and inferences drawn in this paper are based on the assumption that the ESL/EFL teacher exploits various teaching resources that stimulate learners to compose relevant writing lessons. It is impossible to use only one method when teaching writing to students with varied backgrounds and different goals. The principled eclectic method has been regarded as the most effective way of teaching language, and it makes learning interesting and innovative due to the unique nature of the learning process (Kumar, 2013). The Discussion section of this paper shows how this approach can be used in teaching writing and delineates this phenomenon more explicitly and elaborately. This work can serve as a guideline for ESL/EFL teachers of writing and encourage them to use the methodology of principled eclecticism to motivate students to write professionally and independently.
Literature Review
With the tremendous upswing and growing interest in learning about ESL/EFL writing, researchers and ESL/EFL teachers have been busy developing an approach and employing a methodology that cater to the needs of contemporary ESL/EFL learners. Such a need is becoming increasingly urgent as more and more learners are joining ESL/EFL writing classes due to the growing demands of English writing to participate in the globalized world of trade and commerce (Leki, 2002). Moreover, traditional methodology does not present the language as a means of communication. The teaching of writing has been based on the idea of giving information to the learners about the grammar and the structures of the language through various exercises in the course books. Students were expected to learn this information, with an emphasis on intellectual rigor. Meanwhile, the pluralism associated with the idea of eclecticism carries in itself the freedom to select teaching methodologies that cohere with specific learning needs, thereby emboldening learners to communicate through their writing abilities.
According to Kumar (2013), teaching language using the eclectic method involves a rich combination of multiple activities, including participatory, communicative, and situational approaches. He observed that an ESL/EFL writing classroom includes heterogeneous students with varied levels of language intelligence, and the teacher has to use multiple methodologies of language teaching, giving particular attention to the learners' cognition and linguistic objectives. Kumar defines this as a holistic eclectic language teaching approach. Hyland (2007), on the other hand, asserts that, in order to teach ESL/EFL writing to learners from varied cultural and geographical backgrounds, it is pertinent to use a variety-based methodology, which he calls "genre-based pedagogies." He argued that this will offer a valuable resource for helping instructors and learners produce effective and relevant texts. In Hyland's (2007) view, the genre-based pedagogy provides educators a more effective tool in preparing ESL/EFL learners in different capacities, creating variety in their teaching after carefully planning, sequencing, and assessing the learning objectives and outcomes.
Recent decades have witnessed a change from traditional writing approaches to an integrated, pluralized, and eclectic approach to be implemented in writing classroom practices. Eclecticism is a pedagogical strategy that moves away from teachers following one specific methodology in order to assimilate different existing methodologies and approaches according to the learners' needs (Lazarus & Beutler, 1993). In this strategy, the teacher is left free to choose the methodologies useful for learners in a given circumstance. In a typical writing classroom, instruction is based on a combination of various approaches, such as the communicative approach, the lexical approach, and the structural-situational approach. This helps the teacher establish a clear, need-based context, with flexible writing strategies to suit learners' learning goals (Yonglin, 1995). Language teaching experts and researchers invariably uphold the effectiveness of the eclectic approach in teaching ESL/EFL writing because they believe that it liberates them from inflexible, traditional, teacher-oriented teaching methods. Lecture-based, blackboard-supplemented, teacher-centered writing lessons give no alternative to either the teachers or the learners. With the traditional approach, writing became merely a classroom activity. The eclectic approach has the potential of keeping the language teacher open to alternatives. Here, the teacher has to embrace and actively seek out new techniques, trying them out in professional practice all the time in terms of their underlying rationale (Weideman, 2001). Howard (2001) argued that the eclectic approach to teaching writing composition holds a particular fascination for learners. It enhances small-group discussion and peer response, in which students individually draft an assigned paper and then classmates respond, making suggestions for improvement. The basic thrust of the eclectic approach, which Howard termed collaborative pedagogy, is to enhance students' experience of writing, with instructors acting as motivators and evaluators. But many language teaching experts believe that eclecticism does not imply the use of unrestrained liberty in mixing arbitrarily chosen methodologies. They believe that no single method of teaching writing works well for all students. Thus, they recommend that composition teachers have a big bag of pedagogical tricks. These tricks or improvisations must adapt to the variety of learning styles that the learners bring with them into the classroom (Xiao-yun et al., 2007). Xiao-yun et al. (2007) added "principled" to the term eclecticism in order to put certain conditions on the choice and implementation of the eclectic approach to teaching writing.
According to Mellow (2002), principled eclecticism is the most appropriate, logical, and pluralistic language teaching approach; it stresses an assorted recipe of learning activities depending on learners' needs. It reinforces the employment of a wide range of language learning activities with diverse features to help teachers and learners engage in holistic composition learning. Rodgers (2001, p. 4) envisaged that principled eclecticism, which he termed "disciplined eclecticism," is expected to shape and change the teaching of second language in the future. He believed that, because traditional ESL/EFL writing methodologies are concerned only with skills dealing with language mechanics (e.g., language, text, and composition), adherence to the traditional approach would give a slanted view of writing, which is a comprehensive and complex skill involving reading and cognitive skills. Thus, it becomes inevitable to use a variety of suitable approaches to provide the scope and range that the learning of writing skills so pragmatically needs. Responding to the demand for a pluralistic and principled eclectic approach, Larsen-Freeman (2000) used the term "informed eclecticism" or "enlightened eclecticism" when discussing L2 writing pedagogy. She underscored the growing dissatisfaction among instructors of ESL/EFL writing at failing to address the learning needs of non-native learners. Nunan (1991, p. 228) stated that "it has been realized that there never was and probably never will be a method for all." In his discussion on diagnosis, treatment, and assessment with respect to ESL/EFL pedagogy, Brown (2002, p. 13) suggested the use of principled eclecticism, where teachers select the teaching methodology that syncs well with their own dynamic contexts. In this approach, teachers select the syllabus and devise the course designs and objectives with a view to the learners' specific needs in their learning contexts. Principled eclecticism, therefore, challenges language teachers to remember that any decision they make must be based on a complete awareness of the purpose and context of language learning as well as the needs of language learners. Teachers who make use of the eclectic method must be conversant with all teaching methodologies, knowing full well how language is learned, and how and what teaching is all about (Brown, 2002). In his discussion on principled eclecticism, Kumaravadivelu (2001) defined the methodology, which he also addressed as a post-method pedagogy, as a focused, context-sensitive language learning approach, congruent with an understanding of local linguistic, socio-cultural, and political exigencies, that enables teachers to construct their own theory of practice. In a review of TESOL methods, Kumaravadivelu (2006, p. 60) pointed out that the major change that has taken place in teaching English to ESL/EFL learners relates to the shift from method-based pedagogy to post-method pedagogy, in which written and oral exercises are utilized to improve learners' language accuracy, fluency, and communicative ability.
Discussion
ESL/EFL writing teachers have long used the mainstream approaches to teaching writing as a mainstay of their language teaching methodology. But many researchers feel that the traditional teaching methodologies do not address the problems that a vast variety of language learners experience in their ESL/EFL writing classes. Reflecting on the need to bring about a shift in ESL/EFL writing pedagogy, Scarcella and Oxford (1992) suggested in their book The Tapestry of Language Learning: The Individual in the Communicative Classroom that pedagogical choices must be governed by students' multicultural needs. They argued that, because individual students bring distinct learning styles to the classroom, individual and cultural differences in writing significantly affect classroom pedagogy. Based on this observation, ESL/EFL teachers have to improvise their teaching pedagogy in tune with learners' needs and not follow the beaten track that has now become defunct with the transformation in the language learning scenario.
In his research experiments, Kumaravadivelu (2006) demonstrated that ESL/EFL writing classrooms consist of such a large variety of language learners, with such divergent learning needs and aptitudes, that the teacher has to employ a teaching technique that fits all of them. In such classrooms, instruction must underscore the need to use a variety of language teaching approaches in order to connect with all the students and adhere to the "local needs, creating local pedagogies to address students' difficulties, and critically examining and evaluating extant mainstream writing practices" (Kumaravadivelu, 2006, p. 60). Language teachers must use the eclectic approach to teaching writing in a principled way, adapting it to the traditional writing methodology, so that learning becomes effective, productive, and practical: what language teaching experts call the principled eclectic approach.
The teaching of ESL/EFL writing can only be effective and functional if particular care is given to understanding the social, creative, and cognitive aspects of language learning, which necessarily involves not one particular approach but a variety of carefully selected teaching approaches. Only then can the teaching of writing achieve its underlying goal. The first step in moving toward a pluralistic writing instruction methodology is to outline the course objectives. This goal setting must depend on what learners want to learn and what their expectations are, followed by the teacher's choice of methodology based on pragmatic and instrumental grounds befitting the learners' career aspirations. For example, students of literature and linguistics need to be taught academic essay writing, so that they are competent in writing essays and compositions that pertain to a general and wide area of topics requiring creativity and independence. Writing in English is essentially an evolving process that takes into consideration the linguistic, cognitive, social, and cultural aspects, which demands a multidimensional approach (Kucer & Silva, 2006) to achieve effective written communication that meets the expectations of a certain discourse community (Swales, 1990, 2004). Writers employ certain cognitive strategies in the process of writing, like planning the essay and reviewing and proofreading, in order to construct meaning. This cannot be successfully taught without including the traditional approach, which is largely based on lexis and grammar. This approach has to include a broader spectrum of reading and writing activities, such as writing the topic sentence or thesis statement and analyzing the pragmatic functions of language, where the learners have to be trained to read like perceptive writers and write like thoughtful readers. Therefore, it is not inappropriate to suggest that, before learners embark on their first draft, teachers of ESL/EFL writing should apply mainstream teaching methodologies that include observing the grammatical rules, patterns, and structures while making sentences and collecting a repertoire of vocabulary. In the second stage, when learners actually start writing their compositions, teachers devise and improvise practical and systematic instruction based on students' needs and the exigencies of the writing goals. This is what principled eclecticism signifies. This methodology strives to teach students to integrate each writing methodology that they have learned with their desired goals and the objectives with which they are learning writing. This kind of pluralistic methodology also includes other features of language, like reading comprehension, idea generation, and rhetorical analysis, which assists ESL/EFL learners of writing in attaining the mandatory language competence and subject expertise that is indispensable for writing an original, practical composition (Kroll, 1993).
The eclectic approach can be employed by utilizing diverse language activities, like combining sentences, separating sentences, identifying pronouns and their antecedents, and teaching words (and their synonyms) and prepositional phrases, focusing on specific features of the language relevant to learners' contextual needs. As these language activities are divergent, they compel teachers to cohere them with the objectives of the group of ESL/EFL learners in writing composition. Zamel (1982) concluded that learners have to be trained to consider how to connect their language with the ideas communicated to them. This kind of linking device requires a teaching approach that stresses the creation of a linguistic structure before learners begin composing their first draft. After a series of lessons in grammar and sentence structures and drafts of their own papers, learners are then asked to examine sample essays compared to their own essays. The eclectic approach encourages input from peers and teachers in planning, drafting, and revising a composition. It is not merely a teacher-centered approach, but includes the participation of other learners. In this approach, the choice of topic is left to the learners rather than being dictated by teachers. It is a natural teaching method whereby students write about whatever interests them. To make the eclectic approach more effective and integrative, the writing project seeks to engage students with each other in specifiable writing tasks.
The ESL/EFL writing instructor has to understand that a learner's writing style reflects his or her personality type. Some learners like to work in a group while others do not. No single writing methodology can be successful in such a varied class of ESL/EFL learners. The principled eclectic method, therefore, involves a recipe of different methodologies appropriate to a given circumstance in order to make the learners active, hands-on participants in each aspect of a structured learning process. According to Myers and Myers (2010), one's personality type significantly influences life decisions, such as which career to pursue. Some students are intuitive and emotional whereas others are rational, decisive, and practical. Still others are more extraverted and practical than introspective and creative. Hence, with reference to these personality types, each composition class has to have a diversity of approaches in tandem with the writing styles of the learners and in harmony with their expectations. This can be done when the teacher focuses on a well-informed eclecticism, making the teaching more learner-centered. When using principled eclectic methodology, the teacher introduces learners to a variety of exercises to improve their facility in writing and keep them focused on their learning goals and needs. The language teacher needs to be proactive in correcting students' errors, thereby developing learners' linguistic capabilities. In eclecticism, learners are helped to master not only lexis and grammatical construction, but also the various functions of the language and its idioms. This cannot be achieved with only one language pedagogy, which is why skilled ESL/EFL instructors stick to principled eclecticism, where students are also inspired to be self-sufficient in their learning.
Writing is basically and essentially an academic skill. At its primary and core level, it starts with the building up of correct sentences with coherent syntactic and semantic structures. At succeeding levels, it develops into a craft and a cognitive process, using rules or procedures that can be applied repeatedly. Thus, the role of the teacher in an ESL/EFL writing class is to teach across the board and guide students through a mixed bag of methodologies concentrating on the learners' needs, because effective writing requires competence in a wide variety of creative, social, and cognitive processes. On the part of the instructor, principled eclecticism with regard to ESL/EFL writing entails a comprehensive knowledge of all learning theories, ranging from grammar-translation to the audio-lingual technique to the more communicative methods that are frequently exploited today, in terms of the purpose and context of language learning and the needs of the language learners. It should not merely be a simple arrangement of different teaching options. Instead, it should be a systematic instructional approach based on the specific dynamic context of the ESL/EFL learners.
Limitations
Principled eclectic methodology has its own share of shortcomings. Some ESL/EFL researchers on writing cast doubt on the efficacy of the collaborative writing approach. They argue that some learners tend to be reticent about sharing their compositions with their classmates. They distrust their peers and prefer the authority of a teacher's feedback. In certain cases, learners shower fulsome praise on each other's work (Spear, 1988). Despite this, the overall belief is that peer conferencing and collaborative learning, as an aspect of principled eclectic methodology, work well with learners by helping them write better and focus on their writing goals. Another pitfall of this method is that, as it entails maximum teacher-learner interaction, it is possible that the teacher might give learners the impression that he or she is exerting too much control over the writer's voice and usurping the composition. In this case, the approach can be counterproductive. The instructor has to deal very carefully with the learners, without imposing his or her knowledge and position, and offer advice only where the learners need it. This would give the message to the learners that their instructor is giving them space and will help them develop. The eclectic method has to be based on outcomes and centered on the learners' needs.
Although principled eclecticism relies, to a great extent, on learners collaborating with each other in revising and proofreading written drafts and interacting with the teacher whenever there is confusion, some learners may be reluctant to work with their peers or to provide honest feedback. In such cases, the teacher has to become proactive by giving learners specific guidelines for interacting with their peers at each stage of their writing process. Teacher monitoring and observation are critical aspects of principled eclecticism, specifically in teaching ESL/EFL writing skills.
Conclusion and Suggestions
There can be no single method of teaching writing. It is quite obvious that writing skill requires creative competence. Writing also needs the activation of critical skills requiring extensive cognitive activity and planning. Moreover, a written text follows certain required language structures, along with the use of adequate phrases or vocabulary congruent with the learning perspectives, objectives, and outcomes. A written composition also needs a lot of redrafting, proofreading, and editing. Furthermore, particular care has to be given to the fact that different language learners have different learning styles. All these aspects and ground realities highlight the fact that writing instruction in the classroom has to be eclectic, holistic, and need-based. The constraint here is that it is not easy to find instructors who are comprehensively competent in teaching ESL/EFL writing in a situation that is ever-changing. Therefore, the need, now more than at any other time, is to train teachers of ESL/EFL writing so that they are capable of teaching with specific and concrete instructions and of encouraging learners to learn to write well according to their local exigencies.
In an ESL/EFL writing classroom, the basic thrust of the principled eclectic approach is to reach all students despite the diversity in learning styles and intelligences inherent in most composition classes. It is a learner-centered, activity-based approach, in which teachers engage students on writing tasks both in groups and individually. Thus, the ESL/EFL writing teacher needs to develop an excellent understanding of the problems directly related to learners' needs in order to solve learners' multiple writing problems successfully. This requires the choice of an approach that addresses the issue of learners' needs and styles.
The current paper has underscored the need to employ and encourage writing tasks required in learners' real lives. As asserted herein, the aspects of written language and form must conform to learners' learning needs to generate effective writing. The layout of the composition, its paragraphing, the linking of ideas, the appropriate word choice, and the economy of phrasing should cohere with the purpose, content, and writing situation. The paper contends that principled eclecticism can be successful only when the teacher is competent in the knowledge of all the mainstream writing pedagogies and has a complete grasp of the learners and their needs. In addition, the teacher must know well enough how to improvise the methodologies, keeping a keen eye on the learning goals and outcomes in an ESL/EFL writing classroom, so that principled eclecticism can be implemented holistically and successfully. | 2018-12-11T05:57:05.211Z | 2017-01-04T00:00:00.000 | {
"year": 2017,
"sha1": "e1bd1830b0c62daf26ea72ec1f873275e74fc940",
"oa_license": "CCBY",
"oa_url": "https://www.ccsenet.org/journal/index.php/elt/article/download/65553/35421",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "5a57258b309374b619ee0464ad3d9469d85a3d1c",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Psychology"
]
} |
210157557 | pes2o/s2orc | v3-fos-license | Age-related Differences in the Morphology of the Impedance Cardiography Signal
Abstract Impedance cardiography (ICG) is a non-invasive method of hemodynamic measurement, mostly known for estimation of stroke volume and cardiac output based on characteristic features of the signal. Compared with electrocardiography, knowledge of the morphology of the ICG signal is scarce, especially with respect to age-dependent changes in ICG waveforms. Based on recordings from ten younger (20–29 years) and ten older (60–79 years) healthy human subjects after three different levels of physical activity, the typical intrabeat ICG waveforms were derived based on ensemble averages. Comparison of these waveforms between the age groups indicates the following differences: a later initial upward deflection for the younger group, an additional hump in the waveform from many older subjects not present in the younger group, and a more pronounced second wave in the younger group. The explanation for these differences is not clear, but may be related to arterial stiffness. Further studies are suggested to determine whether these morphological differences have clinical value.
Introduction
Hemodynamic measurements often include invasive catheter-based techniques like thermodilution [Nyboer, 1940] or advanced imaging modalities like ultrasound Doppler [Thiele, 2015] or Magnetic Resonance Imaging (MRI) [Chai, 2005]. The risks of catheter-based methods are well-documented; they include bloodstream infections and sepsis and are associated with increased morbidity [Bouza, 2007; Linares, 2007]. Ultrasound Doppler-based methods are dependent on precise alignment of the beam and the aortic cross-sectional area [Thiele, 2015], while MRI-based methods are time-consuming and expensive [Li, 2015; Young, 2015].
Impedance cardiography (ICG) is a non-invasive, low-cost alternative to the aforementioned methods and was described in 1940 by Nyboer et al. [1940]. Nyboer found that the beat-to-beat stroke volume (SV) is given by the segment length of the chest (L), the resistivity of blood (ρ), the maximum backward-extrapolated value of the thoracic cardiogenic impedance change (ΔZmax), and the base impedance (Z0) by the following equation:

SV = ρ · (L/Z0)² · ΔZmax

The model was further developed by Kubicek et al. [1966] to include the left ventricular ejection time (TLVE); the stroke volume is then given by the equation:

SV = ρ · (L/Z0)² · TLVE · (dZ/dt)max

Sramek et al. replaced the cylindrical model of the chest with a truncated cone and estimated a fixed chest length of 17% of the total height H [Sramek, 1983; Van De Water, 2003], a model that was refined by Bernstein by including a new parameter δ, which is the actual weight divided by the ideal weight [Bernstein, 1986]:

SV = δ · (0.17·H)³/4.25 · TLVE · (dZ/dt)max / Z0

Bernstein later extended the model with a new parameter ζ (zeta), which is an index of transthoracic aberrant conduction, in order to compensate for conduction through the very highly conductive interstitial extravascular lung water [Bernstein, 2010]. The relation between the volume of electrically participating thoracic tissue (VEPT), the intrathoracic blood volume (VITBV), and ζ is given by:

VEPT = VITBV / ζ

This ends up with the Bernstein stroke volume equation for impedance cardiography [Bernstein, 2010]:

SV = (VITBV / ζ) · √((dZ/dt)max / Z0) · TLVE

ICG has been compared to a number of different methods for measuring cardiac output. These studies are not as rigorously designed as the studies for thermodilution and Doppler-based ultrasound techniques, but the majority of the studies indicate lower accuracy for ICG than the competing methods [Thiele, 2015]. The level of adequacy of hemodynamic measurement techniques is not well-defined; it depends on the procedure and is a clinical discussion. However, describing and testing applications at the outer limit of the range will provide some important information about the limitations and possibilities of the method. Lately, hemodynamic patterns identified by ICG have been shown to predict mortality in the general population [Medina-Lezama et al., 2018].
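As a concrete illustration of the equations above, the sketch below implements the Kubicek and Sramek-Bernstein stroke volume formulas; the numerical inputs are hypothetical resting values chosen only to show the units and orders of magnitude involved.

def sv_kubicek(rho, length_cm, z0, t_lve, dzdt_max):
    """Kubicek SV in ml: rho in ohm*cm, chest segment length in cm,
    Z0 in ohm, TLVE in s, (dZ/dt)max in ohm/s."""
    return rho * (length_cm / z0) ** 2 * t_lve * dzdt_max

def sv_sramek_bernstein(delta, height_cm, z0, t_lve, dzdt_max):
    """Sramek-Bernstein SV in ml: truncated-cone chest model with
    L = 0.17 * height; delta is actual weight / ideal weight."""
    return delta * (0.17 * height_cm) ** 3 / 4.25 * t_lve * dzdt_max / z0

# Hypothetical values: rho = 135 ohm*cm, L = 30 cm, Z0 = 28 ohm,
# TLVE = 0.3 s, (dZ/dt)max = 1.5 ohm/s, height = 175 cm, delta = 1.0.
print(sv_kubicek(135, 30, 28, 0.3, 1.5))            # ~70 ml
print(sv_sramek_bernstein(1.0, 175, 28, 0.3, 1.5))  # ~100 ml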
The use of ICG for cardiac output measurement is well known, but little is known about the morphology of the ICG signal compared with signals from other cardiovascular measures such as continuous blood pressure and ECG.
A typical human ICG curve with marked characteristic points is shown in Bernstein [2009] and is reproduced below in figure 1, along with definitions of the characteristic points of the ICG curve and their physiological attribution.
Although the ICG curve is defined in the literature, the shape of representative ICG curves of apparently healthy subjects can differ substantially between publications, as can be seen by comparing figure 1 to the figures in, for example, [Cybulski, 2011; Marquez et al., 2013; Carvalho et al., 2011]. The issue of differences in ICG waveforms and complexes of characteristic points is very well highlighted in Ermishkin et al. [2012] and in Benouar et al. [2018], also demonstrating that the ICG waveform morphology is far from stationary.
Compared with ECG, knowledge of the morphology and interpretation of the ICG signal is still scarce, especially with respect to age-dependent changes in ICG waveforms. ICG measurement provides a non-invasive recording that is different from both ECG and optical plethysmographic methods, and could possibly provide physiological information that is not provided by other non-invasive methods, or improve predictions in combination with them. Aging is known to bring changes in cardiovascular physiology, such as arterial stiffness and hypertension [Sun, 2016], and it is likely that the ICG waveform is altered along with these age-related changes. The aim of this study was to derive the average intrabeat ICG waveform in two separate groups of younger and older subjects and inspect possible differences in ICG morphology between the groups. Further aims are to interpret any differences with respect to cardiovascular physiology based on the existing literature and to give suggestions for further studies. In the end, possible ICG patterns related to aging and cardiovascular risks can be considered for clinical use.
Materials and methods
Dataset
From an earlier study on ICG for estimation of pulse wave velocity [Aria et al., 2019], an anonymized dataset of ICG and electrocardiography (ECG) recordings from 20 healthy volunteers in two age groups (20-29 and 60-79 years) was used in this study. Both age groups had 10 participants with five males and five females. Briefly, ICG (dZ/dt) over the thorax and at the upper arm was recorded simultaneously with ECG for one minute following three different levels of physical activity using a stationary bike at low (phase 1), intermediate (phase 2) and high (phase 3) mechanical resistance. More details on the protocol and the ICG measurement can be found in Aria et al. [2019]. Only the ICG recordings over the thorax were used in this study, together with the ECG recordings. The ICG and ECG recordings were sampled synchronously with a rate of 2000 samples per second.
Signal processing
In order to study the ICG waveform, each heartbeat interval within every recording was identified by the location of the R peaks in the corresponding ECG recording, extracting ICG beats between all successive R peaks. The Pan-Tompkins algorithm [Pan & Tompkins, 1985] was used for automatic identification of the R peaks from the ECG recording. The ICG beats were then resampled to a length of one second (2000 samples).
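A minimal sketch of this segmentation step is given below, assuming the ICG and ECG signals are NumPy arrays sampled at 2000 Hz; scipy's find_peaks stands in for the Pan-Tompkins detector, and the peak-detection parameters are illustrative assumptions.

import numpy as np
from scipy.signal import find_peaks, resample

FS = 2000  # sampling rate (samples/s), as in the recordings

def extract_beats(icg, ecg, fs=FS, beat_len=2000):
    """Cut the ICG signal into beats between successive ECG R peaks and
    time-normalize each beat to a fixed length (here 1 s = 2000 samples).

    find_peaks is a simple stand-in for the Pan-Tompkins detector used
    in the paper; a minimum distance of 0.3 s avoids double detections.
    """
    r_peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 99),
                            distance=int(0.3 * fs))
    beats = [resample(icg[a:b], beat_len)
             for a, b in zip(r_peaks[:-1], r_peaks[1:])]
    return np.array(beats)  # shape: (n_beats, beat_len)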
Waveform analysis
For each recording, the ensemble average of all ICG beats was used to derive the typical waveform for that participant and physical activity level. The ensemble averages were then grouped into the two age groups, and a grand ensemble average was calculated as the mean of these for all three physical activity levels, enabling a comparison of the ICG morphology between the two age groups. Although the dZ/dt signal has been plotted with the largest deflection (C-point) in the downward direction in earlier publications [Van Eijnatten et al., 2014; Aria et al., 2019], the negative dZ/dt (with an upward-deflecting C-wave) has been plotted here for easier comparison with other studies. In order to investigate changes in the direction of the -dZ/dt curve, such as small humps, the second derivative of the impedance signal (-d²Z/dt²) was also calculated and presented as ensemble averages.
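Continuing the sketch above, the ensemble averaging and the second derivative can be computed as follows; the function names are our own, and the fs scaling of the derivative is nominal since each beat has been time-normalized to one second.

import numpy as np

def ensemble_average(beats):
    """Typical waveform for one recording: mean across its beats.
    beats has shape (n_beats, beat_len)."""
    return beats.mean(axis=0)

def grand_ensemble_average(subject_averages):
    """Group-level waveform: mean of the per-subject ensemble averages."""
    return np.mean(subject_averages, axis=0)

def second_derivative(neg_dzdt, fs=2000):
    """-d2Z/dt2 from a -dZ/dt waveform, used to enhance small humps;
    np.gradient keeps the array length unchanged."""
    return np.gradient(neg_dzdt) * fs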
Informed consent
Informed consent has been obtained from all individuals included in this study.
Ethical approval
The research related to human use has complied with all relevant national regulations and institutional policies, is in accordance with the tenets of the Helsinki Declaration, and has been approved by the authors' institutional review board or equivalent committee.
Results
As shown in figure 2, the shape of the ICG waveform had substantial interindividual variation. It can also be seen from the figure that some recordings had lower repeatability from beat to beat than the rest (e.g., the upper left plot in figure 2a). In most of the cases, the waveform was similar for the same individual over the three phases. One ICG recording from phase 3 was excluded from analysis due to errors in the ECG recording.
The grand ensemble averages (means of the ensemble averages) of -dZ/dt for both age groups and each phase are shown in figure 3, allowing the interpretation of waveform differences between younger and older participants based on visual inspection. The curves show a tendency toward a later initial upward deflection for the younger subjects and, for the older group, an additional hump in the following downward deflection around sample number 400 that is not present in the younger group. The second wave at around sample number 1000 is more pronounced in the younger group, compared to being almost invisible in the grand ensemble average ICG of the older group. For enhancement of bumps in the waveform, the second derivative ICG (-d²Z/dt²) is presented in figure 4. This presentation shows the bump around sample number 400 more clearly, as a small wave in the second derivative that is not present in the grand ensemble average from the younger group.
Discussion
The analysis showed a difference in the average ICG waveform between younger and older healthy individuals, indicating a possible effect of age on the morphology of the ICG. It is well known that aging affects the cardiovascular system, for instance through stiffness of the arteries [Sun, 2016]. It is known, based on the previous study by Aria et al. [2019] using the same data, that the pulse wave velocity was higher for the older subjects, indicating increased stiffness of the arterial walls, which may have influenced the ICG waveform. The earlier rise in the ICG waveform for the older subjects could be attributed to a shorter pre-ejection period, attributed to the contractility of the heart regulated by sympathetic activity [Van Eijnatten et al., 2014]. However, other researchers have reported no age-related differences in the pre-ejection period in a study on 133 healthy participants between 30 and 70 years old [Uchino et al., 1999]. Ermishkin et al. [2012] found that healthy subjects below the age of 40 had typical ICG waveforms (as depicted in frequently cited publications), while they found abnormal ICG waveforms in older subjects (above 60). This alteration is seen as a stepwise pre-ejection wave superposing the ejection wave front in the impedance signal, giving rise to double-humped dZ/dt and d²Z/dt² curves when the pre-ejection changes become especially pronounced. This is in agreement with our recordings, where many of the older subjects show a double-humped dZ/dt curve during the C-wave, while this is not clearly present in any of the younger subjects (figure 2). The examples in Ermishkin et al. [2012] also show an earlier upward deflection of the ICG signal in the older group, in agreement with our averaged ICG waveforms (figure 3). As discussed in Ermishkin et al. [2014], it is not clear whether these abnormal ICG waveforms are of pathological or physiological nature, and the underlying mechanisms of such phenomena remain unclear and require further research [Ermishkin et al., 2014].
Most other studies have used an electrode setup that is slightly different from the setup used for acquisition of the ICG recordings analyzed in this study, which could possibly cause a small difference in the ICG waveform. The differences between the setups are clearly shown in figure 1 of Mansouri [2018], where the setup in panel b was used for the recordings in this study. This analysis has several limitations and provides weak grounds for drawing general conclusions on age-related cardiovascular physiology. The sample is too small to know whether the presented average waveforms represent the typical waveform for the age groups. The average waveforms can thus not be used as reference material for ICG morphology in the two age groups. The recordings were done with an electrode configuration slightly different from the conventional one (see figure 1 in [Aria et al., 2019]), and the possibility for comparison with other studies could be limited due to this difference, as the electrode configuration may affect the ICG recording. Inspecting figure 2, it is clear that some recordings had weak or noisy ICG signals, seen by, e.g., many lines far outside the ±1 standard deviation band, indicating that technical issues could have affected the quality of the data.
However, the results indicate a possible association between ICG morphology and aging. Relating this to physiology, the most common cardiovascular change with aging is arteriosclerosis. Derived parameters from impedance cardiography have been used in the assessment of arterial compliance [Medina-Lazama et al., 2018], and a relation between arterial stiffness and ICG morphology could also be possible.
While there is an obvious need for hemodynamic monitoring of the critically ill, most of the existing methods have their limitations. Among the most frequently used methods, the pulmonary artery catheter is still widely used despite a decline in use in recent years due to the lack of improved outcome and fear of infections [Vincent, 2008; Gershengorn, 2013]. Thermodilution is still regarded as the gold standard, and other noninvasive methods are compared to this [Josteen, 2017]. The problem with the noninvasive methods has been the accuracy. For bioimpedance measurement, the percentage error in CO-measuring devices was found to be 42% in a recent meta-analysis [Josteen, 2017]. Ultrasound is a noninvasive method with few side effects, but the esophageal Doppler has limited accuracy in measurement of cardiac output, and echocardiography has limited duration of placement and is operator-dependent [Sakka, 2015]. Pulse pressure monitors and plethysmography also have shortcomings, as do bioimpedance measurements [Sakka, 2015]. An interesting question is whether ICG measurements have the potential of providing continuous measurements of the most important hemodynamics at low cost, with no invasive elements and with sufficient accuracy. There are many limitations that increase the uncertainty and reduce the reproducibility of ICG measurements; electrode placement, electrical interference from surrounding equipment and long-term electrode characteristics are some of them. However, this study suggests that the ICG morphology is related to aging, and possibly to pathology such as stiffening of arteries.
Further research should include a larger sample of younger and older subjects to check the reproducibility of these findings, including reference measurements for arterial stiffness (such as carotid-femoral pulse wave velocity) and blood pressure. As brought up in Ermishkin et al. [2012], morphological waveform variability may hamper accurate identification of characteristic points on the ICG recording, and more knowledge on natural waveform variability with age may improve ICG interpretation.
As demonstrated for the ECG signal, large amounts of labeled recordings have enabled the use of deep learning to identify patterns in the waveform related to pathology such as atrial fibrillation [Attia et al., 2019]. Machine learning approaches could also be relevant for ICG recordings given the availability of larger amounts of labeled data.
Conclusion
Although this study is small and lacks solid conclusions, the reported findings contribute to the scarce literature on age-related changes in ICG morphology. This study indicates that there is a measurable difference in ICG morphology between young and older persons, and that this difference most probably reflects physiological changes in the vascular system caused by stiffening arteries. It is unclear whether this approach could improve the utility of ICG in clinical settings, but we believe that further research on ICG morphology could possibly improve the method. | 2020-01-13T14:13:59.908Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "3c2027279e3dc8d74a3daae2e95269ef3e67de0f",
"oa_license": "CCBYNCND",
"oa_url": "https://content.sciendo.com/downloadpdf/journals/joeb/10/1/article-p139.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "858ba2a71b075ca5f72213ce8cff47d138d23a53",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
229679741 | pes2o/s2orc | v3-fos-license | An easy (horizontal) walk through fake octagons
A fake octagon is a genus two translation surface with only one singular point and the same periods as the octagon. Existence of infinitely many fakes was first established by McMullen in 2007, and more generally follows from dynamical properties of the so-called isoperiodic foliation. The purpose of this note is to build an explicit infinite family of fake octagons, constructed by means of elementary methods. We describe an easy cut-and-paste surgery and show that all iterates of that surgery produce fake octagons. We prove that distinct iterates give distinct fakes, and we show that any such fake can be approximated arbitrarily well by some other fake of the family. This note is intended to be elementary and fully accessible to non-expert readers.
Introduction
Bibliography on translation surfaces is immense, we cite here only the celebrated handbooks of dynamical systems (see for instance [5,6,8,10,11]), the nice survey [15], as well as [17] and [18], and references therein. Also, we refer to Section 2 for precise definitions, staying colloquial in this introduction.
The translation surface obtained by gluing parallel sides of a regular octagon is commonly known as "the octagon". A fake octagon is a translation surface with one singular point and the same periods as the octagon.
It is well known that periods are local coordinates for the moduli space of translation surfaces of fixed genus and singular divisor. Periods come in two flavours, absolute and relative: the former are translation vectors associated to closed loops, the latter those associated to saddle connections (i.e. paths connecting singular points). So-called isoperiodic deformations of a translation surface consist in changing relative periods without touching absolute ones. Isoperiodic loci are leaves of the isoperiodic foliation (also known as absolute period foliation or kernel foliation). Local coordinates on isoperiodic leaves are given by positions of singular points with respect to a fixed singular point, chosen as origin. As a consequence, translation surfaces of the minimal stratum (that is, with a unique singular point) cannot be continuously and isoperiodically deformed in that stratum (all periods are absolute).
A priori, it is not clear whether or not, given X in the minimal stratum, there is another translation surface, still in the minimal stratum, with the same periods as X. If they exist, such surfaces are called "fake X". In fact, the question of finding fakes of famous translation surfaces, such as the octagon, was a nice coffee-break problem at dynamical systems conferences some years ago. Nowadays, this is literature.
Fakes were introduced and studied by McMullen in [12,13] - who gave a complete and detailed description of isoperiodic leaves in genus two - and dynamical properties of the isoperiodic foliation were established in [3,7] in general (in particular ergodicity and classification of leaf-closures).
From [12,13,3,7] it follows that if periods of X are not discrete (e.g. the octagon), then X has infinitely many fakes. More precisely, the isoperiodic leaf through X intersects the minimal stratum H 2g−2 in a set whose closure has positive dimension. In particular, any such X can be approximated by fakes. Moreover, in [13] McMullen showed that, in genus two, fakes are arranged in horizontal strips, and described all fake pentagons.
The purpose of this note is to give easy proofs of such results for the particular case of the octagon by using elementary methods; where "easy" means "explicable in a conference coffee-break". The "elementary methods" we use are surgeries that are the topological viewpoint of the so-called Schiffer variations. Given the octagon, we describe a surgery (that we call "left-surgery") that produces a fake octagon and that can be iterated. We then prove that all fakes produced by iterating left-surgeries are in fact different from each other, exhibiting therefore an explicit infinite family of fake octagons. Also, we will show that any fake of the family can be arbitrarily approximated by iterations. We note that all our fakes are along a "horizontal" line of the isoperiodic leaf of the octagon: the Schiffer variations are always in the horizontal direction. In this way we describe all fakes in a horizontal strip.
Finally, we discuss ingredients needed for possible generalisations. Our main result is summarised as follows: Theorem (Theorem 4.3, Remark 4.4). Fake octagons obtained by iterated left-surgeries on the octagon are different from each other, and any such fake can be arbitrarily approximated by iterates.
Acknowledgements This work originated from the master's thesis [4] of the first named author. The second named author would like to thank the first named author for the genuine friendship born during the redaction of that thesis.
Isoperiodic foliation and fakes
Translation structures on closed, connected, oriented surfaces can be defined in many different ways, for instance:
• They can be viewed as Euclidean structures with cone-singularities of cone-angles multiple of 2π, up to isometries that read as translations in local charts. Equivalently, they are branched C-structures whose holonomy consists of translations, where "branched" means that the developing map is not just a local homeomorphism but can also be a local branched covering;
• or as pairs (X, ω) where X is a Riemann surface and ω a holomorphic 1-form, up to biholomorphisms;
• or quotients of polygons in C via gluings that identify pairs of parallel edges via translations, up to suitable "tangram" relations.
The third construction clearly produces a Euclidean structure with cone-singularities, which, by pulling back the structure of (C, dz), produces a complex structure together with a 1-form (whose zeroes correspond to cone-singularities). In fact, it turns out that all viewpoints are equivalent (we refer to [15] for more details). Any singular point has an order: if viewed as a cone-point, then it has order d if the total angle is 2π + 2πd; if viewed as a zero of ω, then it has order d if locally ω = z^d dz.
As usual, we will refer to a surface endowed with a translation structure as a translation surface. Singular points are also referred to as saddles.
If a translation surface has genus g, then by Gauss-Bonnet (or by a characteristic count) the sum of the orders of singular points is 2g − 2.
The moduli space of translation surfaces of genus g - that we denote simply by H if there is no ambiguity on the genus - is naturally stratified by the singular divisor: if κ is a partition of 2g − 2 (more precisely a list of non-increasing positive integers summing up to 2g − 2) then the stratum H(κ) consists of all translation surfaces whose singular points have orders as prescribed by κ. For example, in genus g = 2 there are only two strata: the principal, or generic, stratum H 1,1 - consisting of translation surfaces with two simple singular points (with cone-angles 4π each) - and the minimal stratum H 2 - consisting of translation surfaces having only one singular point of cone-angle 6π. It turns out that any stratum is a complex orbifold of dimension 2g + s − 1 where s = |κ| is the number of singular points.
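As a small numerical sanity check of this dimension count (a sketch; the function name is ours), one can compute the dimension directly from the partition:

def stratum_dim(kappa):
    # kappa: partition of 2g - 2, e.g. (1, 1) or (2,) in genus two
    g = (sum(kappa) + 2) // 2
    s = len(kappa)
    return 2 * g + s - 1  # complex dimension of the stratum H(kappa)

print(stratum_dim((1, 1)))  # 5: the principal stratum H 1,1
print(stratum_dim((2,)))    # 4: the minimal stratum H 2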
Apart from obvious issues due to the orbifold structure, periods give coordinates on any stratum. More precisely, if S is a translation surface with singular locus Σ = {x 1 , . . . , x s }, then we consider the relative homology H 1 (S, Σ; Z). If γ 1 , . . . , γ 2g is a basis of H 1 (S; Z) and η 2 , . . . , η s are arcs connecting x 1 to x 2 , . . . , x s , then the family γ 1 , . . . , γ 2g , η 2 , . . . , η s is a basis of H 1 (S, Σ; Z). By using the (X, ω) viewpoint of translation surface, the period map is a local chart H(κ) → C 2g+s−1 . These are the so called period coordinates. In other words, we consider [ω] ∈ H 1 (S, Σ; C). Periods of curves γ i 's are usually called absolute periods, while those of η i 's are relative periods.
There is a natural period map Per : H → C^2g = H^1(S; C) that associates to any translation surface its absolute periods. The so-called isoperiodic foliation F (also known as kernel foliation or absolute period foliation) is the foliation locally defined by the fibers of Per. Namely, two translation surfaces are in the same leaf of F if one can be continuously deformed into the other without changing absolute periods. The isoperiodic foliation is globally defined in H = ∪ κ H(κ), and its leaves have dimension 2g − 3. The isoperiodic foliation has been extensively studied, for instance in [12,13,3,7,1,9,16].
One of the problems in studying the isoperiodic foliation is to determine the foliation induced by F on each stratum. For instance, in the minimal stratum H 2g−2 there is no room for deformations: locally, any leaf of F intersects such a stratum transversely in a single point. Given X ∈ H 2g−2, a "fake X" is a translation surface, different from X, but with the same absolute periods as X (as a polarized module) and only one singular point; that is to say, if F X is the leaf of F through X, then a "fake X" is a point in F X ∩ H 2g−2.
Example 2.1. The so-called octagon is the translation surface obtained by gluing parallel sides of a regular octagon sitting in C with an edge in the segment [0, 1]. It is a genus two surface with a single singular point. A fake octagon is an intersection point of the isoperiodic leaf of the octagon with the minimal stratum H 2 , i.e. any translation surface with the same (absolute) periods as the octagon (the same area) and only one singular point.
Traveling on isoperiodic leaves by moving singular points
If X has s singular points, then there are s − 1 degrees of freedom for perturbing X without changing its absolute periods (we can change the relative periods of η 2 , . . . , η s ). It turns out that local parameters are exactly the positions of singular points; more precisely, the relative positions of x 2 , . . . , x s with respect to x 1 . So we can travel the isoperiodic leaf through X by "moving" singular points. From an analytic viewpoint such moves are known as Schiffer variations (see [14,2]). We adopt here a more topological cut-and-paste viewpoint. We briefly recall the basic construction, referring to [3,2] for a more detailed discussion.
Let x be a singular point and let γ be a segment, or more generally a path, starting at x. If x has degree d, then γ has d twins, that is to say, paths starting at x with the same developed image as γ (for simplicity we assume here that none of such twins contains a saddle in its interior). Explicitly, if γ is a segment, its twins are segments forming angles 2π, 4π, . . . , 2πd with γ. For any twin of γ we can perform a cut-and-paste surgery as follows: we cut along γ and the chosen twin, and then we glue in the unique other way coherent with orientations. This is better described in Figure 1. A first remark on that surgery is that the endpoints of γ and the twin can be both regular, both singular, or one regular and the other singular. Given the angles at endpoints, and the angle between γ and its twin, we can easily recover the angles after the surgery (see Figure 2). In Figure 2, before the surgery the full-dotted singular point has total angle θ + δ, and after it splits in two points. The two empty-dotted points paste together to form a point of total angle α + β. All α, β, θ, δ are multiples of 2π (they are 2π precisely when the corresponding point is regular).
Note that our surgeries take place locally, near a singular point. It follows that they do not affect absolute periods (while clearly they affect relative periods). It turns out that these moves are the only way to isoperiodically deform a translation surface (see [3,2]).
It may be useful to remark at this point that such surgeries may or may not preserve strata. With notations as in Figure 2, if α, β, θ, δ are all 2π, then what we are doing is moving a singular point from the starting point of γ to its endpoint (in this case the stratum does not change).
If δ, θ > 2π, and α, β = 2π, then we are splitting a singular point in two separate singular points and creating a singular point of angle 4π. (The sum of resulting degrees equals that of initial ones). So in this case we are changing stratum.
Similarly, if for instance α = 4π, and θ, δ, β = 2π, the surgery collapses together two singular points, hence again changing stratum. There are more possibilities, and other kinds of surgeries are possible (for instance by cutting and pasting along many twins simultaneously). We refer the interested reader to [2,3] for further details.
The last needed remark is that it may happen that γ is a loop, starting and ending at the same point. In this case twins of γ may or may not be loops, and conversely. Also, it can even happen that γ is embedded, but the twin is not. In such cases some topological disasters may happen (the surgery could for instance disconnect the surface) and one has to check carefully what happens.
We will use surgeries where γ is a closed saddle connection, that is to say a straight segment starting and ending at the same singular point, but we will always require that twins of γ are embedded segments. It is readily checked that in this case no disasters occur. We refer to such a cut-and-paste as a saddle connection surgery. See Figure 3.
Remark 3.1. If X is in H 2g−2, then a saddle connection surgery produces a translation surface with the same absolute periods as X. If in addition the angle between the closed saddle connection and the chosen twin is exactly 2π, then the resulting surface is in H 2g−2 (the full-dotted blue point in Figure 3 is a regular point). So, if different from X, it is a fake X. Moreover, the closed saddle connection used by the surgery remains a closed saddle connection of the same length and direction after the surgery.
Iterated surgeries on the octagon
In this section we describe a sequence of fake octagons Oct n obtained from the octagon Oct = Oct 0 via a sequence of saddle connection surgeries. In particular, each surgery will be a saddle connection surgery along a fixed closed saddle connection. We will then prove that all fakes Oct n are in fact different from each other.
We parameterise our octagon by gluing parallel sides of two polygons as in Figure 4. Edges have length one, all vertices are identified to each other and form the unique singular point.
The octagon has three horizontal (closed) saddle connections. Only one, which in the picture is BC, has length 1, and the other two AD, EF have length 1+ √ 2. This property will be preserved by all saddle connection surgeries. We therefore describe our surgeries from an intrinsic viewpoint, exploiting this property.
Let γ be the unique unitary horizontal closed saddle connection, the other two being of length 1 + √2. By definition of twin, the two twins of γ are sub-segments of those longer saddle connections. Since γ is horizontal, the end of γ forms with the start of γ an angle which is an odd multiple of π; in fact for the octagon that angle is 3π. Since the total angle around the singular point is 6π, the twins of γ form angles ±π with respect to the end of γ.
Figure 4. The octagon. The dotted line is the twin of BC that will never be used.
We orient γ from left to right, and name γ L the twin on the "left side", that is to say, the angle measured clockwise from the end of γ to γ L is π. Let γ R be the other twin. We call left surgery the saddle connection surgery along γ and γ L, and right surgery that along γ and γ R (see also Figure 5). The angle between γ L (or γ R) and γ is exactly 2π, so left and right surgeries produce elements of H 2 (see Remark 3.1). It is immediate to check that the inverse of a left surgery is a right surgery along γ −1. It will be clear from what follows that left and right surgeries preserve the two properties of having one unitary horizontal saddle connection (and two of length 1 + √2), and that the angle between the start and the end of γ is 3π. Therefore, we can iterate left and right surgeries.
Definition 4.1. For n ∈ Z we define Oct n as the translation surface obtained from the octagon Oct 0 by n left surgeries (for negative n we apply right surgeries).
Before giving a global description of Oct n, we start by looking in detail at the first steps. Coming back to pictures, left surgeries will always affect the horizontal saddle connection γ = BC and its twin on the line AD. Specifically, the twin of BC along EF will never come into play. Also, we never change the diagonal identifications AB ∼ C F, CD ∼ EB, nor the vertical one A E ∼ D F. Let's start. We cut and paste along BC and its twin on the line AD. See Figure 5. In that picture, dashed lines mean cuts, i.e. segments that were previously identified and are no longer identified. Colours visualise new identifications. Note that after the surgery, not all vertices are identified to each other. In particular, A ∼ B ∼ D ∼ D is a regular point. All other vertices are identified, giving rise to the unique singular point, and the result is indeed a fake octagon: it is our Oct 1. We will label with a full dot the singular point, and with other symbols those other vertices that are regular points (we use the same label for vertices that are identified). Also, we will use the "dot" notation for concatenation of segments, e.g. "XY · ZT" denotes the concatenation of the segment ZT after XY; clearly this makes sense only if Y is identified with Z.
When we cut the twin of BC (oriented as BC) we see two avatars of it in the picture: one with the surface on its left, and one on its right. We denote by P 1 the endpoint of the cut having the surface on its left side, and by P 1′ the other.
After the surgery, the saddle connection BC has again two twins, one emanating from P 1 along the line P 1 D and another emanating from E.
We then obtain Oct 2 via a second left surgery, cutting and pasting along BC and its twin on the line P 1 D. See Figure 6 (left side). As above, when cutting along that twin, we denote by P 2 the endpoint of the cut having the surface on its left side, and by P 2′ the other.
Figure 6. Second and third fakes Oct 2 and Oct 3.
Second surgery: on the left, the cut along BC and its twin P 1 P 2; on the right, the new identifications: (B C ∼ A P 1, P 2 D ∼ P 2 D, and) BC ∼ P 1 P 2, AP 1 ∼ P 1 P 2.
Third surgery: on the left, the cut along BC and its twin P 2 D · B P 3; on the right, the new identifications: (AP 1 ∼ P 1 P 2, P 3 P 1 ∼ P 3 C, and) BC ∼ P 2 D · B P 3, P 1 P 2 ∼ P 2 D · A P 3.
One more left surgery, along BC and its twin emanating from P 2, will produce Oct 3. See Figure 6 (right side). Again, P 3 and P 3′ are the endpoints of the cut of the twin having the surface on the left and right side respectively.
We are now ready to describe the gluing pattern of Oct n . For this purpose it is more convenient to pass to a simpler -even if less "octagonal" -viewpoint. Namely, we glue the upper quadrilateral to the bottom one, by identifying sides AB and C F . See Figure 7. (Compare also with [13, Figure 8]).
Horizontal gluings are determined once we know the positions of points P n and P n′, as follows. Since B is identified with D, the segment B D can be parameterised by a circle of length 2 + √2. Points P n−1 and P n+1 are the points of the circle B D at distance 1 from P n, respectively on the left and right side of P n. At step n, segment BC is identified with P n−1 P n - this is the unique unitary horizontal saddle connection - and segment P n D · A P n′ is identified with P n′ P n−1 (which, in Figure 7, is the concatenation of segments P n D · B P n−1), the latter being a horizontal saddle connection of length 1 + √2. The third horizontal saddle connection, namely EF, is never involved and always has length 1 + √2. The unique singular point is P n−1 ∼ P n ∼ P n′ ∼ B ∼ C ∼ E, and a quick check shows that the angle between the start and the end of the unitary closed horizontal saddle connection is 3π.
Figure 7. A less "octagonal" viewpoint. P n is identified with P n′. P n−1 P n is where BC is glued at step n, while P n P n+1 is the next twin we cut at step n + 1. Segment P n D · B P n−1 is identified with P n D · A P n′. This is the n-th fake Oct n.
The twin of BC that will be used in the next surgery is P n P n+1 (which is identified with the corresponding segment starting from P n′), and it is readily checked that a left surgery along BC and its twin P n P n+1 produces again a configuration of the same type, with different positions of P n and P n′: if we parameterise B D with [0, 2 + √2] and A D with [0, 1 + √2], we see that P 1 = 2 and P 1′ = 1, and in general P n ≡ n + 1 mod (2 + √2).
Remark 4.2. Pictures only help in calculations, but left surgeries are intrinsically defined: any of our fakes has three horizontal saddle connections, and only one of them has length one. At any step we cut and paste along that saddle connection and its left twin. This recipe is "picture free".
Theorem 4.3. Fake octagons obtained by iterated left-surgeries on the octagon are different from each other.
Proof. The invariant that distinguishes fake octagons from each other is the systole, namely the (family of) shortest saddle connection(s). As the octagon has edges of length one, the systole is always not longer than one. In fact, the shortest saddle connections of the true octagon all have length one, and because of the irrationality of √2 this never happens again. Looking at Figure 7 we see that systoles are necessarily segments connecting some avatar of the singular point (i.e. P n−1, P n, P n′, E, B, C). Point P n′ always has distance at least one from the other singular points, so no systole starts from P n′ in Figure 7. Moreover, since the quadrilateral P n−1 P n C B is a parallelogram, for n ≠ 0 we have three possible families of fake octagons, determined by the position of P n in B D = [0, 2 + √2] (see Figure 8):
(1) P n ∈ (1, 1 + √2/2): the unique systole is the segment P n B.
(2) There are two systoles: P n−1 B and P n C.
(3) The unique systole is P n−1 C.
Since 2 + √ 2 is irrational and P n ≡ n + 1 mod (2 + √ 2), the possible positions of P n on B D identified with [0, 2 + √ 2], form an infinite dense set. It follows that the set of lengths of systoles of the family {Oct n ; n ∈ Z} is an infinite set. Hence, the family of fakes {Oct n : n ∈ Z} contains infinitely many different fakes.
Figure 8. The three possible systole configurations.
Suppose now that there are n, m such that Oct n = Oct m. Then (by Remark 4.2) also Oct n+i = Oct m+i for any i, and so we would observe an (m − n)-periodic behaviour. In particular we would have only finitely many fakes among our Oct n's. But since we already proved that we have infinitely many different fakes, this cannot happen. It follows that for any n ≠ m we have Oct n ≠ Oct m.
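The counting argument in the proof is easy to verify numerically (a minimal sketch, assuming the convention P n ≡ n + 1 mod (2 + √2) used above, with B placed at coordinate 0):

import math

L = 2 + math.sqrt(2)  # length of the circle B D

def position(n):
    # position of P_n on the circle, using P_n = (n + 1) mod (2 + sqrt(2))
    return (n + 1) % L

pos = [position(n) for n in range(1, 1001)]
assert len({round(p, 12) for p in pos}) == len(pos)  # pairwise distinct positions
print(min(pos), max(pos))  # values spread over [0, L), illustrating density

The pairwise distinct positions reflect the infinitely many distinct systole lengths used in the proof.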
Remark 4.4. The fact that the possible positions of P n in [0, 2 + √2] form an infinite dense set implies in particular that all possibilities described in Theorem 4.3 actually arise. Another consequence is that we can find fakes Oct n arbitrarily close to the octagon Oct 0, and in general that for any Oct m there is a fake Oct n arbitrarily close to, but different from, Oct m. This is nothing but a manifestation of the general density phenomena described in [3] and anticipated in the Introduction. This is basically all that can happen.
Proposition 4.6. For any Oct m (with m ≠ 0) there is Oct n with the same systole length and in family (1), more precisely with P n ≡ x ∈ (1, 1 + √2/2).
Proof. For x ∈ [0, 2 + √2] let y = y(x) be its symmetric point with respect to 1 + √2/2, that is, y = (2 + √2) − x. This is the unique other point such that d(x, B) = d(y, B).
Figure 9. Positions having the same distance from B or C.
Let z = z(x) = x + 1 and t = t(x) = y(x) + 1. Those are the unique points such that d(x, B) = d(z, C) = d(t, C). Note that y ≡ −x, z ≡ x + 1 and t ≡ −x + 1 mod (2 + √2). Such congruences have integer coefficients and 2 + √2 is irrational. So, if we want to solve them in Z, they reduce to genuine equalities. Namely, if x ≡ P n ≡ n + 1 mod (2 + √2) and y ≡ P m ≡ m + 1 mod (2 + √2), then x ≡ −y mod (2 + √2) if and only if m = −n, and similarly for the points z, t.
The first consequence of this fact is that if P m is placed in (1 + √2/2, 2 + √2), then there is n such that P n is placed in x ∈ (1, 1 + √2/2) (hence Oct n is in family (1)) and P m is either a y-, z- or t-point for x. In particular, this proves the first claim.
We may therefore assume that we have Oct n in family (1) and search for all possible Oct m with the same systole length.
From the fact that the congruences above reduce to genuine identities on Z, we can now deduce the second claim.
If P n ≡ x ∈ (1, 1 + √2/2) mod (2 + √2), some possibilities disappear because in this case d(y, B) > d(y, C) and d(z, C) > d(z, B). Apart from the case Oct m = Oct n (if and only if m = n), the only possibility that remains is when Oct m belongs to family (3) and P m is the t-point of x ≡ P n mod (2 + √2), namely:
• P m−1 ≡ t ≡ −x + 1 mod (2 + √2), and this happens if and only if m = −n + 2.
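The points y, z, t used in the proof can be computed directly (a minimal sketch; placing B at coordinate 0 on the circle of length 2 + √2 is our convention):

import math

L = 2 + math.sqrt(2)

def y(x):
    # symmetric point with respect to 1 + sqrt(2)/2; equivalently y = -x mod L
    return (L - x) % L

def z(x):
    return (x + 1) % L

def t(x):
    return (y(x) + 1) % L

def circle_dist(a, b):
    d = abs(a - b) % L
    return min(d, L - d)

x = 1.2  # a sample point in family (1)
print(circle_dist(x, 0.0), circle_dist(y(x), 0.0))  # equal distances from B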
Remark 4.7 (Generalisations). The construction of the sequence (Oct n) n∈Z used only the existence of a (horizontal) saddle connection γ having an embedded twin such that:
• The angle from the start of the twin to the start of γ is 2π (so that the saddle connection surgery produces a point in the minimal stratum, see Remark 3.1).
• The angle from the end of γ to the start of the twin is π.
• If the (horizontal) continuation of the twin is a saddle connection (which is longer than γ because the twin is embedded), then the angle from its start to its end is π (hence it bounds a cylinder).
The second condition implies that the first one is preserved by the surgery; the third condition is preserved by the surgery and guarantees that the length of the twin saddle connection does not change under the surgery (to see this, just draw the twin and angles in Figure 3).
Therefore the sequence of (putative) fakes can be constructed in any such situation via left surgeries. | 2020-12-29T02:15:51.701Z | 2020-12-28T00:00:00.000 | {
"year": 2021,
"sha1": "259e9b6f7a1dc4e8c39613b0ae24a92a18946121",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2012.14157",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "259e9b6f7a1dc4e8c39613b0ae24a92a18946121",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
14206686 | pes2o/s2orc | v3-fos-license | How social cognition can inform social decision making
Social decision-making is often complex, requiring the decision-maker to make inferences of others' mental states in addition to engaging traditional decision-making processes like valuation and reward processing. A growing body of research in neuroeconomics has examined decision-making involving social and non-social stimuli to explore activity in brain regions such as the striatum and prefrontal cortex, largely ignoring the power of the social context. Perhaps more complex processes may influence decision-making in social vs. non-social contexts. Years of social psychology and social neuroscience research have documented a multitude of processes (e.g., mental state inferences, impression formation, spontaneous trait inferences) that occur upon viewing another person. These processes rely on a network of brain regions including medial prefrontal cortex (MPFC), superior temporal sulcus (STS), temporal parietal junction, and precuneus among others. Undoubtedly, these social cognition processes affect social decision-making since mental state inferences occur spontaneously and automatically. Few studies have looked at how these social inference processes affect decision-making in a social context despite the capability of these inferences to serve as predictions that can guide future decision-making. Here we review and integrate the person perception and decision-making literatures to understand how social cognition can inform the study of social decision-making in a way that is consistent with both literatures. We identify gaps in both literatures—while behavioral economics largely ignores social processes that spontaneously occur upon viewing another person, social psychology has largely failed to talk about the implications of social cognition processes in an economic decision-making context—and examine the benefits of integrating social psychological theory with behavioral economic theory.
What makes social decision-making unique and different from non-social decision-making? Humans are highly social animals; as such, researchers often take for granted the ease with which humans make social decisions. This raises the question of whether social decision-making is a simplified type of decision-making. Yet social decision-making should be a complex process-social decision-makers must engage traditional decision-making processes (e.g., learning, valuation, and feedback processing), as well as infer the mental states of another person. These two tasks have been separately studied in the fields of behavioral economics and social psychology, with behavioral economists studying decision-making in interactive economic games and social psychologists studying spontaneous inferences about other people. Each of these fields has separately made major contributions to the understanding of social behavior. However, a more cohesive theory of social decision-making results when researchers combine these literatures.
When talking about social decision-making, many different types of decisions may come to mind-decisions about other people (Is Linda a feminist bank teller?), decisions that are influenced by other people (e.g., social conformity and expert advice), as well as decisions that are interactive (e.g., two people want to go to dinner but have to decide on a restaurant). In this review, we focus on strategic interaction decisions often employed in behavioral economics games (e.g., trust game, ultimatum game, prisoner's dilemma game, etc.) that require thinking about the mental states of another person. Research shows that such decisions may differ depending on whether the interaction partner is another person or a computer agent. Here, we suggest that such differences in decision-making arise due to differences when processing human and computer agents. Specifically, viewing another person engages the social cognition brain network, allowing for mental state inferences that function as predictions during the decision phase, as well as spontaneous trait inferences that occur when viewing the other person's behavior in the feedback phase.
To understand how decision-making in a social context is different than non-social decision-making, it is first important to understand what exactly makes humans unique as social agents. Social psychological theory suggests humans differ from objects in important ways (Fiske and Taylor, 2013). First, humans are intentional agents that influence and try to control the environment for their own purposes. Computers on the other hand are non-intentional agents. The decisions made by a computer result from fixed, preprogrammed algorithms, and are usually not as flexible as human decision-making. Second, people form impressions of others at the same time others are forming impressions of them. Therefore, in a social situation people are trying to form impressions of another person at the same time they are trying to manage the impression being formed of them. In meaningful social interaction (most social interactions) the first person usually cares about the reputation the second person is forming of them, wanting them to form a largely positively valenced impression. Each interaction partner is aware that they are the target of someone's attention and may monitor or change their behavior as a result. Third, it is harder to verify the accuracy of one's cognitions about a person than they are about an object. Because things like traits, which are essential to thinking about people, are invisible features of a person and are often inferred, it is harder to verify that a person is trustworthy than it is to verify that a computer, for example, is trustworthy. This may be because the person can manipulate trait information such as trustworthiness-an immoral person can act in moral ways when desired-but a computer has no such desire. Last, and perhaps most importantly, humans possess mental states-thoughts and feelings that presumably cause behavior-that are only known to them. People automatically try to infer the mental states of others because such inferences facilitate social interactions. Computers, however, do not have mental states because they do not have minds. This important distinction-the possession of mental states-allows for the differences mentioned above in intentionality and impression management. These key differences allow us to examine what these social cognitive processes (impression management and intentionality) contribute to the uniqueness of social decisionmaking, though this discussion seems to often elude studies of social decision-making.
There are also important similarities between humans and computers that make computers the ideal comparison in social decision-making studies. With analogies comparing the human brain to a computer, it almost seems natural that many studies have turned to computers as the non-social comparison. Computers, like humans, are agents that can take actions toward a participant. Presumably a computer can "decide" to share money in a trust game as can a human partner. Additionally both humans and computers are information processing systems. Participants' decisions are presumably "registered" by both human and computer agents. Advanced computer programs can take participants' choices into account in order to "learn" to predict another person's behavior using programmed algorithms. For example, website ads learn to predict what a person may purchase based on search history. In some economic games, a computer's responses may be dependent on the participant's past decisions. These similarities allow researchers to compare decisions across agents and examine what social agents add to the decision-making process.
SOCIAL DECISION-MAKING BRAIN REGIONS
One way to understand the unique nature of social decision-making is to take a neuroscientific approach. By understanding what goes on in the brain, we can begin to dissociate social and non-social decisions. This strategy is particularly informative and useful because similar behavior is sometimes observed for social and non-social stimuli, but the neural mechanisms underlying those decisions are found to be different (e.g., Harris et al., 2005; Harris and Fiske, 2008). Below, we briefly summarize two brain networks we believe will be involved in social decision-making: the traditional decision-making brain network, and the social cognition/person perception brain network. As a caveat, the reader must remember that when discussing the unique qualities of social decision-making, we are still examining decision-making. As such, traditional decision-making processes and the brain structures underlying these processes are involved in social decision-making studies. Past studies demonstrate that the social context modulates these decision-making structures (see Engelmann and Hein, 2013 for review). However, exactly how the social context does this is not entirely understood. By looking at the social cognition/person perception brain network, researchers are beginning to explore how these functions are integrated at a neural level (e.g., Hampton et al., 2008; Yoshida et al., 2010; Suzuki et al., 2012). Next, we list brain regions implicated in decision-making and social cognition.
Past research shows decision-making brain regions are also involved in social decision-making. The medial prefrontal cortex (MPFC)-responsible for creating value signals for food, nonfood consumables, and monetary gambles (Chib et al., 2009)-is also active when creating value signals in a social context (Lin et al., 2012). These value signals can be thought of as a quantifiable signal for making predictions-those assigned a higher value predict a better outcome, and those assigned a lower value predict a worse outcome. Recently, it has been suggested that the MPFC works as an action-outcome predictor concerned with learning and predicting the likelihood of outcomes associated with actions (Alexander and Brown, 2011). Similarly, investigations of social reward processing suggest that the striatum responds to both social and monetary rewards (Izuma et al., 2008, 2010). The connections between cortical and subcortical regions and the striatum create a network of brain regions engaged during decision-making. The neurotransmitter dopamine provides a vehicle by which these brain regions communicate. Prediction error signals-the firing of dopamine neurons when observed outcomes differ from expectations (or predictions)-also occur for social stimuli in economic games (Lee, 2008; Rilling and Sanfey, 2011) as well as when social targets violate expectations (Harris and Fiske, 2010). Collectively these regions, along with other regions such as the amygdala, posterior cingulate cortex (PCC), insula, and other areas of prefrontal cortex including orbital prefrontal cortex and a more rostral region of MPFC, make up a decision-making network often engaged during economic decision-making (Knutson and Cooper, 2005; Delgado et al., 2007).
While social decision-making studies have investigated how the striatum and prefrontal cortex are modulated by the social context, another prevalent question is whether a network of brain regions established in the social neuroscience literature on social cognition and person perception is also active during social decision-making and how these brain regions interact. An important part of social cognition consists of inferring mental states, like the intentions of a social target (Frith and Frith, 2001). During tasks that involve dispositional attributions-an inference of an enduring mental state-areas such as MPFC and superior temporal sulcus (STS) are reliably activated (Harris et al., 2005). Other areas involved in person perception include temporal-parietal junction (TPJ), pregenual anterior cingulate cortex (pACC), amygdala, insula, fusiform gyrus of temporal cortex (FFA), precuneus, posterior cingulate, temporal pole, and inferior parietal cortex (IPL; Gallese et al., 2004;Haxby et al., 2004;Amodio and Frith, 2006). Together these regions represent a social cognition network that can be used to navigate the social world. This network is believed to be activated in a variety of social cognition tasks, including thinking about others' intentions and goals (i.e., theory of mental state tasks), identifying social others (i.e., faces and bodily movement), moral judgments, social scripts, and making trait inferences (see Van Overwalle, 2009, for a review). However, until recently the mention of these regions in social decision-making studies has been scarce, often being relegated to a supplemental analysis or table. Presumably these social cognitive processes are relevant for decision-making when interacting with human agents because they occur automatically and with minimal exposure to the social target (Ambady and Rosenthal, 1992;Willis and Todorov, 2006). Therefore, these automatic social processes are most likely engaged in a social decision-making context and perhaps provide the vehicle through which the social context modulates decision-making brain regions like the striatum and PFC.
DIFFERENCES IN SOCIAL AND NONSOCIAL DECISION-MAKING PROCESSES
Decision-making in its most basic form can be broken down into three key processes: (1) making predictions that guide decision-making, (2) examining the outcome of the decision, and (3) using the outcome to update predictions, a process often described as learning. Next, we discuss differences between humans and computers for each of these aspects of decision-making to understand how social decision-making is unique (see Figure 1 for a summary of these findings).
Social predictions
Predictions have received much attention when studying social decision-making. Behavioral economics games such as the trust game, ultimatum game, or the prisoner's dilemma game are often used to study social preferences for trustworthiness, fairness, or cooperation, respectively. However, each of these games requires predicting what another agent (person or computer) will do. The combination of the participant's and the partner's decisions determines the outcome. Therefore, in order to maximize payout, the participant has to predict what the partner will do and decide accordingly. What information do participants rely on when making these predictions? Social psychological theory suggests these predictions rely on trait inferences that occur when viewing the person and learning about their past behavior, while also taking the social context into account. Yet discussions of how these predictions are utilized within a decision-making context have eluded social psychology researchers in favor of understanding the processes by which such predictions are made. Below, we discuss these social cognitive processes and how they influence social decision-making in various behavioral economic paradigms involving human and computer agents.
Social decisions are not made within a vacuum; they are made in a social context. A social context involves the actual, imagined, or implied presence of another person-an intentional agent-whose behavior cannot be predicted with certainty. Although humans have developed ways to try to predict what another person will probably do, the other person has the ability to originate their own actions and only they know their true intentions. Therefore, social decision-making is complicated by the uncertainty of the other person's behavior and requires inferences about a person's mental state. Despite these uncertainties, humans are highly motivated to explain and predict others' behavior (Heider, 1958). To facilitate this process, humans have developed skills to automatically assess or infer certain types of social information about another person that will guide predictions about their behavior. The primary dimensions of person perception-trait warmth and trait competence-allow for these predictions (Asch, 1946; Rosenberg et al., 1968; Fiske et al., 2007). While trait warmth describes a person's good or bad intentions, trait competence describes the person's ability to carry out those intentions. Research suggests that although these two traits are often assessed together (Fiske et al., 2002), trait warmth carries more weight when forming impressions (Asch, 1946). As such, it is not surprising that the majority of social decision-making studies have capitalized on participants' ability to infer something about warmth-related constructs, including trustworthiness, fairness, and altruism in economic games.
But social predictions are not always formed based on trait inferences alone-social category information (e.g., age, race, gender) and physical features (e.g., facial trustworthiness, attractiveness) can guide initial impressions of a person as well (Fiske, 1998;Ito and Urland, 2003;Ito et al., 2004). Stereotypes-schemas about how people belonging to social categories behave-can act as heuristics for predicting a person's behavior based on this category information (Fiske, 1998;Frith and Frith, 2006). However, these predictions can often be misleading because they do not require mental state inferences for the individual person. Despite this, social category information such as gender and race affect social decisions in an economic context (Slonim and Guillen, 2010;Stanley et al., 2011), suggesting this social information is incorporated into the decision-making process when interacting with human agents.
The bases of these social predictions (e.g., social category information, physical features, and trait inferences) are often assessed automatically and efficiently, with only 100 ms of exposure to a person's face leading to accurate assessments (Willis and Todorov, 2006). These initial impressions may be further supported or adjusted based on the person's behavior. People spontaneously attribute traits to a person based on brief, single acts (thin slices) of behavior. When exposure time to a person's behavior is increased from 30 s to 4 to 5 min, predictions about their future behavior are just as accurate as with minimal exposure (Ambady and Rosenthal, 1992). Therefore, these automatic social processes may influence any social decision-making study that involves an actual, imagined, or implied presence of another person.
The development of attribution theory (Heider, 1958;Kelley, 1972;Jones, 1979) further suggests that people are highly motivated to predict and explain behavior and are able to do so quite efficiently. Kelley (1972) suggests only three pieces of information-what other people do (consensus), reliability of a behavior across contexts (distinctiveness), and reliability of a behavior across time (consistency)-are needed for participants to form enduring trait inferences and attribute behavior to a person rather than the situation. Specific combinations-low consensus, low distinctiveness, and high consistency-lead participants to attribute behavior to the agent (McArthur, 1972). Interestingly, research shows that this attribution process may be different for social and non-social stimuli. When this paradigm was taken to the scanner, Harris et al. (2005) showed that attributions for human agents rely on a distinct set of brain regions, including MPFC and STS. However, when the agents are anthropomorphized objects, the same combination of statistical information led to attributions (i.e., the same behavior for human and objects) but a different pattern of brain activity resulted (Harris and Fiske, 2008). Specifically attributions for objects did not engage MPFC but rather STS and bilateral amygdala. These studies, in combination with studies showing increased activity in dorsal regions of MPFC for people compared to objects (cars and computers) in an impression formation task (Mitchell et al., 2005) suggest separable brain systems for people and objects and provide a first hint toward what makes social decision-making different.
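The covariation pattern just described can be written as a simple lookup (a schematic sketch of the McArthur [1972] pattern; the function and labels are ours, not a model from the reviewed studies):

def attribution(consensus, distinctiveness, consistency):
    # each input is 'low' or 'high'
    if (consensus, distinctiveness, consistency) == ('low', 'low', 'high'):
        return 'person'    # behavior attributed to the agent: a trait inference
    if (consensus, distinctiveness, consistency) == ('high', 'high', 'high'):
        return 'stimulus'  # behavior attributed to the entity or situation
    return 'ambiguous'     # other patterns yield weaker or mixed attributions

print(attribution('low', 'low', 'high'))  # 'person'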
What does social psychology teach us about social decision-making studies? Participants use a variety of heuristics that allow them to infer traits and mental states about another person. Whether this is information about their identity (e.g., age, race, gender) or information about their past behavior, participants are constantly trying to make predictions about what other people will do (even outside of a decision-making context). As such, traits provide a concise schema suggesting how a person will behave, allowing for generalizations across contexts when making predictions about behavior. In general, if a person is thought to be trustworthy in one context, people predict that they will be trustworthy in other contexts. Whether actual consistency across contexts exists depends on the psychological viewpoint one takes-personality psychologists would suggest traits are an enduring quality that stays consistent across situations; however, social psychologists stress the importance of the situation and the interaction between person and environment (Lewin, 1951; Ross and Nisbett, 1991).
How does this contribute to our discussion of human and computer agents in an economic game? Do participants use the same brain regions when making predictions about what a human will do vs. what a computer will do? Since each type of agent recruits different brain regions, do social predictions rely on the person perception/social cognition network as we hypothesize above? Below we describe three economic games-the trust game, ultimatum game, and prisoner's dilemma game-often used in the neuroeconomics literature on social decision-making and discuss how social cognition and social psychological theory may be useful when studying these games. We also review research that will help us understand the brain regions underlying these predictions, specifically studies that use non-social agents (e.g., computers) as a control and examine activation during the decision phase when participants are making predictions about what the other agent will do (see Table 1 for list of studies).
One tool for studying social predictions is the trust game. In a typical trust game scenario, participants have the opportunity to "invest" with or give a sum of money (e.g., $10) to another person. Alternatively, participants can decide to keep the money for themselves and not invest. If the money is given to the partner, it is multiplied by some factor (e.g., tripled to $30) and the partner decides whether or not to share the profit with the investor. If the partner shares with the participant, each receives an equal payout ($15). However, if the partner decides to keep the profit ($30), the participant receives nothing. Participants must predict what the partner will do in order to maximize their payout. If they predict the partner will not share, the participants should not invest and keep the money for themselves. However, if participants predict the partner will share, the participants should invest with the partner, risking the chance that they will lose the whole amount.
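To make the incentive structure concrete, the trust game described above can be sketched with the example amounts from the text (a minimal illustration; the zero payout to the partner when the participant does not invest is our assumption):

def trust_game(invest, partner_shares, endowment=10, multiplier=3):
    # returns (participant_payout, partner_payout) in dollars
    if not invest:
        return endowment, 0          # participant keeps the endowment
    pot = endowment * multiplier     # $10 invested becomes $30
    if partner_shares:
        return pot // 2, pot // 2    # equal split: $15 each
    return 0, pot                    # partner keeps everything

print(trust_game(True, True), trust_game(True, False), trust_game(False, False))
# (15, 15) (0, 30) (10, 0)

The participant's best response thus hinges entirely on the predicted value of partner_shares.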
How do participants make these predictions if they have never interacted with their partners before? From a social cognition perspective, spontaneous mental state inferences may guide these predictions, resulting in corresponding activity in social cognition brain regions. In fact, research shows that when making such predictions for human and computer agents in a trust game social cognition brain regions including the prefrontal cortex (PFC) and inferior parietal cortex (IPL) are more active for human compared to computer partners when participants decide to invest (McCabe et al., 2001;Delgado et al., 2005). However, no differences are observed in activation when participants do not invest, suggesting that investing in the trust game requires inferring the mental states of the partner. Past behavior may also inform predictions in the trust game. Remember that people form trait inferences from brief single acts of behavior. In a trust game situation, the partner's decision will allow the participant to infer that the partner is trustworthy (or not) from a single exchange. If this behavior is repeated, the partner will build a reputation (a trait inference) for being trustworthy. When relying on reputation to predict the partner's actions, striatal activation shifts from the feedback phase when processing rewards to the decision phase when viewing pictures of previous cooperators, suggesting that participants are making predictions that previous cooperators will again cooperate in the current trial (King-Casas et al., 2005). Therefore, the striatum is also involved in forming social predictions.
Similarly, participants in the ultimatum game interact with human and computer agents that propose different ways of dividing a sum of money (e.g., $10). While some of these offers are fair ($5 each party), others are unfair ($3 for the participant and $7 for the partner). If the participant decides to accept the offer, the money is divided as proposed. However, if the participant rejects the offer, both parties receive nothing. In an economic sense, any non-zero offer should be accepted in order to maximize payout, especially if partners are not repeated throughout the experiment (one-shot games). However, research suggests that unfair offers are rejected more often when the partner is a human agent than computer agent. Why does the identity of the partner affect decisions if the same economic outcome would result? Perhaps, related to our discussion of flexibility above, participants know that humans respond to the environment and make adaptive decisions. If they see that their unfair offers are being rejected, the participant may predict that the human partner will change their behavior, offering more fair offers. However, a computer may be predicted to propose the same offer regardless of how the participant responds, in which case it would be advantageous to accept any non-zero offer because the participant does not anticipate the computer would respond to his or her rejection of the offers. Rejection may also represent a form of punishment of the partner. If the participant receives a low offer, this suggests that the partner has a negative impression of the participant or is simply a morally bad person (unfair, selfish). Punishment in this light is action against such mental states. However, since computers do not possess mental states, there is no reason to punish them for similar unfair offers.
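Both the purely economic prediction and the observed human deviation can be expressed with one decision rule (a sketch; the minimum acceptable share is a free illustrative parameter, not an estimate from the reviewed studies):

def ultimatum_response(offer, total=10, min_share=0.0):
    # a purely economic responder (min_share = 0) accepts any non-zero offer;
    # human responders to human proposers behave as if min_share is well above zero
    accept = offer > 0 and offer / total >= min_share
    return (offer, total - offer) if accept else (0, 0)

print(ultimatum_response(3, min_share=0.0))  # (3, 7): accept the unfair $3/$7 split
print(ultimatum_response(3, min_share=0.4))  # (0, 0): reject it, punishing the proposer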
Research shows that when deciding whether to accept or reject offers proposed by human and computer agents, participants show higher skin conductance responses to unfair offers made by human compared to computer agents ( Van't Wout et al., 2006), suggesting increased emotional arousal. The use of repetitive transcranial magnetic stimulation (rTMS) shows disruption of the right dorsolateral prefrontal cortex (DLPFC) leads to higher acceptance rates of unfair offers from human but not computer agents (Knoch et al., 2006). The authors of this study highlight the role of DLPFC in executive control and suggest this region is essential for overriding selfish impulses in order to reject unfair offers. When this region is disrupted, participants are more likely to act selfishly and are less able to resist the economic temptation of accepting any non-zero offer. Although the role of DLPFC in executive control is not debated, a more social psychological explanation may be useful in understanding this behavior as well. Impression management is believed to be part of executive control function (Prabhakaran and Gray, 2012). Therefore, we may ask if DLPFC is involved in overriding selfish impulses specifically or whether concerns about impression management may also be affected by the DLPFC's role in executive control. Accepting and rejecting offers in the ultimatum game communicates something to the partner about the participant-whether or not they will accept unfair treatment. In other words, the participant's behavior allows the partner to (presumably) form an impression of them. In order to manage this impression, participants may reject unfair offers as a way to communicate that he or she will not stand for being treated unfairly. Therefore, perhaps when DLPFC is disrupted with rTMS, impression management concerns are reduced and unfair offers are more often accepted. Concerns about forming a good reputation are also affected by rTMS to right DLPFC in the trust game (Knoch et al., 2009), further suggesting this region may be involved in impression management.
The prisoner's dilemma game (PDG) is another economic game exemplifying the role of predictions in social decision-making. In this game, participants must decide whether to cooperate with a partner for a mediocre reward (e.g., $5 each), or defect in order to receive a better reward at the expense of the partner (e.g., $10 for the participant, $0 for the partner).
However, risk is introduced into the game because if the partner also defects, both players end up with the worst possible outcome (e.g., $0). In this case it is important for the participant to predict what the partner will do, because the payout that each party receives depends on what both choose.
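The payoff structure just described can be captured in a few lines; the dollar amounts are those used in the text, and the matrix is a standard prisoner's dilemma rather than the exact payoffs of any cited study:

    # Prisoner's dilemma payoffs keyed by (participant, partner) choices:
    # 'C' = cooperate, 'D' = defect; values are (participant $, partner $).
    PAYOFFS = {
        ('C', 'C'): (5, 5),   # mutual cooperation: mediocre reward for both
        ('D', 'C'): (10, 0),  # unilateral defection exploits the cooperator
        ('C', 'D'): (0, 10),
        ('D', 'D'): (0, 0),   # mutual defection: the worst joint outcome
    }

    # Whatever the partner chooses, defecting never earns the participant less,
    # yet mutual defection is worse for both than mutual cooperation.
    for partner in ('C', 'D'):
        delta = PAYOFFS[('D', partner)][0] - PAYOFFS[('C', partner)][0]
        print(f"if partner plays {partner}, defecting changes my payoff by {delta}")

This is why predicting the partner's choice matters: the participant's realized outcome depends on the joint action, not on their own choice alone.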
When participants believe they are playing with human rather than computer agents, imaging results show greater activation in regions involved in social cognition, including right posterior STS, PCC, DLPFC, fusiform gyrus, and frontal pole, along with decision-making regions like the caudate (Rilling et al., 2004a). Time-course data show that, specifically within posterior STS and PCC, there is an increase in activation in response to the human partner's face that remains elevated until the outcome is revealed. This increase in activity in social cognition brain regions to human partners is further supported by a study examining PDG decisions toward agents varying in degree of human-likeness. Participants who played the PDG with a human, an anthropomorphized robot (human-like shape with human-like hands), a functional robot (machine-like shape with machine-like hands), and a computer showed a linear increase in MPFC and right TPJ activity as human-likeness increased (Krach et al., 2008).
In addition to the agent's perceived physical likeness to a human, it seems as though the intentionality of the human agents is essential for activating social cognition regions. In a study that manipulated whether human agents were able to decide freely in the PDG (intentional) vs. following a predetermined response sequence (unintentional), Singer et al. (2004) observed increased activation of posterior STS, bilateral fusiform gyrus, bilateral insula, right and left lateral OFC, and ventral striatum for cooperating intentional humans. Therefore, it is not that all humans activate social cognition regions in the PDG, but specifically intentional human agents. Together these studies suggest that activity in social cognition brain regions tracks whether the partner is a social agent and may influence social decisions.
Although these economic games are most often used to study social decision-making, other games also suggest that social cognition brain regions are essential for predicting the actions of others. For instance, when playing a game of Rock-Paper-Scissors with either a human or computer counterpart, Gallagher et al. (2002) observed bilateral activation in pACC for human compared to computer partners. More recently, the TPJ has been identified as providing unique information about decisions involving social agents. Participants playing a poker game with human and computer agents had to predict whether the agent was bluffing. Using MVPA and a social bias measure, Carter et al. (2012) showed that TPJ contains unique signals used for predicting the participant's decision specifically for socially relevant agents but not for computer agents. Lastly, research suggests there are individual differences in the extent to which people use social cognition in a decision-making context. In the beauty contest game, participants must choose a number between 0 and 100 with the aim of choosing the number closest to 2/3 times the average of all the numbers chosen by the other players. When playing this game with human and computer opponents, Coricelli and Nagel (2009) found that human opponents activated regions involved in social cognition, including MPFC, rostral ACC, STS, PCC, and bilateral TPJ. The researchers then examined individual differences in participants' ability to think about others' mental states. While low-level reasoners do not take into account the mental states of others when guessing, high-level reasoners take into account that others are themselves reasoning about others' guesses and choose accordingly. Interestingly, including this individual difference measure in the analysis showed that activity in MPFC was significant only for high-level reasoners.
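The contrast between low- and high-level reasoners maps onto the level-k framework, which can be illustrated with a short simulation; the level-0 anchor of 50 (the midpoint of the allowed range) is a common modeling convention assumed here, not a parameter reported by Coricelli and Nagel (2009):

    # Level-k reasoning in the 2/3-of-the-average beauty contest game.
    def level_k_guess(k, anchor=50.0, factor=2.0 / 3.0):
        """A level-0 player guesses the anchor; a level-k player best-responds
        to a population of level-(k-1) players by multiplying by 2/3."""
        guess = anchor
        for _ in range(k):
            guess *= factor
        return guess

    for k in range(5):
        print(f"a level-{k} reasoner guesses {level_k_guess(k):.1f}")
    # Deeper reasoning drives guesses toward 0, the game's unique equilibrium.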
Together, across different social decision-making paradigms, there seems to be increasing evidence that human and computer agents engage different brain regions when making predictions. Specifically, making predictions about human agents engages brain regions implicated in the social cognition network, including MPFC, STS, TPJ, along with decision-making regions like the striatum. Next we ask whether these social decision-making paradigms engage different brain circuitry when processing feedback from human and computer agents.
SOCIAL FEEDBACK
While many studies have suggested that social predictions rely on the social cognition brain network, other social decision-making studies have looked at how the outcome of social decision-making, or social feedback, affects traditional decision-making brain regions involved in reward processing and valuation. Initial attempts to study the uniqueness of social decision-making include examining whether social and non-social rewards are processed in the same areas of the brain, and how economic decisions are made in the context of social constructs including trustworthiness, fairness, altruism, and the like. Using the behavioral economic games described above (e.g., trust game, ultimatum game, etc.), researchers have examined the influence of positive and negative feedback on social decisions. Below, we review the results of such studies in an attempt to continue the comparison between human and computer agents in social decision-making.
Social feedback often allows people to infer something about another person as well as receive information about the impression others have formed of them. In the context of receiving direct social feedback about what other people think, research suggests that being labeled trustworthy activates the striatum in much the same way as receiving monetary rewards (Izuma et al., 2008). This concept of trust is important when making decisions in a social context because it affects existing social interactions as well as whether others will interact with you. In the economic trust game described above, feedback about whether or not the partner returns an investment allows for trait inferences about the partner based on thin slices of behavior that may guide future predictions.
When participants play the trust game with another human, reward-related regions such as the caudate nucleus are active (King-Casas et al., 2005). With repeated exposure to the partner's behavior, participants form a reputation (an inferred trait) for the partner as being trustworthy or not. When these partners are human and computer agents, participants differentiate cooperating from non-cooperating humans, investing most often with humans that returned the investment, an average amount with a neutral human, and least often with humans that did not return the investment. Investments with the computer agent were similar to those with the neutral human. Reflecting this pattern of behavior, brain activity within the left and right ventral striatum reveals increased activity to cooperating compared to non-cooperating humans, but activity to computers looks similar to neutral human partners (Phan et al., 2010). These results suggest that if a human agent provides no information that allows for a trait inference (a neutral partner is neither good nor bad), behavior and brain activity may be similar to that for a computer agent. Similar results are observed when reading descriptions of hypothetical partners' past moral behaviors. When playing the trust game with a neutral investment partner (neither good nor bad moral character), activity within the striatum for positive and negative feedback looks similar to when receiving such feedback about a non-social lottery outcome (Delgado et al., 2005). However, when the human agent is associated with a specific moral character, striatal activity for positive and negative feedback looks the same, demonstrating that prior social information can bias feedback mechanisms in the brain, but only when the social information is informative about one's traits.
In the trust game, the outcome phase has a clear start and end-participants make a decision to invest (share) with a partner and then receive feedback in the same trial about whether the investment was returned by the partner. However, in the ultimatum game, the outcome phase is less clear-participants already know the outcome of the social interaction when they decide whether to accept or reject the offer made by the agent. However, this does not make the outcome of the social interaction irrelevant. In repeated ultimatum games (when participants play multiple trials with the same partner), feedback about the participant's decision comes on the next trial when the partner proposes the next division of money. For example, if a participant rejects an unfair offer, feedback about whether that rejection was effective in influencing the partner's next proposal comes on the next trial. In other words, offers can be thought of as feedback within the context of this game. However, researchers often use single-shot ultimatum games to avoid effects of repeated interaction just described. In this case, the offers proposed by the partner allow the participant to infer traits about the partner, and their decision still communicates something to the partner, prompting participants to think about impression management.
How then do participants respond to offers made by human and computer agents in the context of the ultimatum game? Research suggests that unfair offers made by human agents activate bilateral anterior insula to a greater extent than the same unfair offers made by computer agents, suggesting that there is something about being mistreated specifically by human agents that leads to higher rejection rates (Sanfey et al., 2003). Additionally, it seems as though the balance of activity in two regions-anterior insula and DLPFC-predicts whether offers are accepted or rejected. Unfair offers that are subsequently rejected have greater anterior insula than DLPFC activation, whereas accepted offers exhibit greater DLPFC than anterior insula activation. Similarly, when viewing a human partner's offer, social cognition and decision-making regions including STS, hypothalamus/midbrain, right superior frontal gyrus (BA 8), dorsal MPFC (BA 9, 32), precuneus, and putamen are active (Rilling et al., 2004a). More recent investigations of unfair offers suggest the identity of the agent (human or computer) determines whether mood has an effect on activity in bilateral anterior insula (Harlé et al., 2012). Specifically, sad (compared to neutral) participants showed greater activity in anterior insula and ACC, as well as diminished sensitivity in ventral striatum, when viewing unfair offers from human agents, but there were no such differences for offers made by computer agents. These differences in brain activity for human and computer agents further highlight that social decision-making (compared to non-social) relies on different neural processing.
Unlike the ultimatum game, but like the trust game, the prisoner's dilemma game requires the participant and the partner to make their decisions before finding out the outcome of both parties' choices. This outcome period lets the participant know whether their predictions about the partner were correct. When participants played the prisoner's dilemma game in the scanner, Rilling et al. (2002) observed different patterns of brain activation during the outcome depending on whether the partner was a human or computer agent. Specifically, both human and computer agents activated ventromedial/orbital frontal cortex (BA 11) after a mutually cooperative outcome (both the partner and participant decided to cooperate). However, mutual cooperation with human partners additionally activated rostral anterior cingulate and anteroventral striatum. A few years later, researchers investigated whether these differential activations were limited to when partners cooperate. Comparing social to non-social loss (a human partner failing to cooperate vs. losing a monetary gamble), Rilling et al. (2008a) observed higher activation in superior temporal gyrus (BA 22), precentral gyrus, anterior insula, precuneus, lingual gyrus, and anterior cingulate for the human agent. This analysis highlights the importance of human agents' perceived intent in the prisoner's dilemma game, as it controls for differences in monetary payoff, frequency, and emotional valence that may have confounded previous comparisons of cooperation and defection. These studies suggest that processing outcomes from human and computer agents is different. Specifically, human agents engage social cognition brain regions, perhaps because outcomes lead to spontaneous trait inferences for humans and not computers. This idea is consistent with social neuroscience research showing different activity when attributing behavior to people and objects (Harris et al., 2005; Harris and Fiske, 2008).
In another study, participants played a time estimation task in which a human or computer agent delivered trial-by-trial feedback (juice reward or bitter quinine). Some brain regions, including ventral striatum and paracingulate cortex (pACC), responded more to positive vs. negative feedback irrespective of whether the agent was a human or computer (Van den Bos et al., 2007). Other brain regions, particularly bilateral temporal pole, responded more to feedback from human than computer agents, regardless of feedback valence. However, the combination of type of agent and feedback valence seems to be important within the regions of anterior VMPFC and subgenual cingulate. Interestingly, this study is one of the few comparing human and computer feedback that is relevant to the competence rather than warmth domain, but it delivers the same take-home message-some brain regions like the striatum and prefrontal cortex respond to social and non-social stimuli, but others like social cognition regions are engaged specifically by the human agent. Why are social cognition regions engaged if feedback was dependent on the participant's performance in the task and not the agents' decisions (i.e., delivered feedback did not allow for a trait inference about the agent)? It may be that participants were concerned about the impression the human agent formed of them (i.e., participants know their behavior allows for trait inferences about them in the same way they form trait inferences about others), but these concerns were not relevant for the computer agent because computers do not form impressions.
Another study examining the effects of competing against a human or computer in an auction suggests that differences in brain activity during the outcome depend on both the type of agent and the context of the outcome (Delgado et al., 2008). Participants were told that they would be bidding in an auction against another human or playing a lottery game against a computer and had the opportunity to win money or points at the end of the experiment. The points contributed to the participant's standing at the end of the experiment, in which all participants would be compared. In other words, the points represented a social reward, allowing participants to gain status when comparing themselves to other participants in the study. In both cases the goal was to choose a number higher than that chosen by the other agent. When the outcome of the bidding was revealed, the authors observed differential activity for the social and lottery trials. Specifically, losing the auction in the social condition reduced striatal activity relative to baseline and the lottery game. The authors suggest that one possible explanation for overbidding in auctions is the fear of losing a social competition, which motivates bids that are too high, independent of pure loss aversion. These differences between social and non-social loss highlight again that although the same brain regions are active, the social context modulates activity within decision-making regions.
But should we be surprised that social loss seems more salient to participants in a social competition such as the one created by the experimenters? Specifically, the experimenters told participants that final results about the participant's standing in relation to other participants would anonymously be released at the end of the study in a list of "Top 10 players." Even though there was no risk of identifying a particular participant, social concerns about impression management may have still been active. Being listed as one of the top players allows the trait inference of being very competent in the auction, a desirable trait to almost anyone. Therefore, participants may have believed that negative feedback (losing the auction trials) would lead people to infer that they were inferior or incompetent compared to other players. On the other hand, losses on the lottery trials were simply relevant to the participants and not their social standing.
Converging evidence suggests that common brain regions, particularly the striatum and VMPFC, are engaged when viewing outcomes from human and computer agents. However, the activity in these regions seems to be modulated by the social context. In addition to these decision-making regions, the ultimatum game and prisoner's dilemma game also activate regions involved in social cognition, including STS, precuneus, and TPJ. Should it be surprising that social cognition regions are also active during outcomes? Social psychology demonstrates that people infer traits from others' behavior. The outcome of a social interaction allows participants to infer these traits, and what perhaps is even more interesting is that these trait inferences are formed in single-shot games where participants do not interact with the partner again. Essentially, trait inferences in this context are superfluous because the participant will not be interacting with the partner again so there is no need to infer traits that allow for predictions. Yet these social cognition regions are still engaged.
SOCIAL LEARNING
So far we have seen that social cognition informs predictions made in social decision-making studies when interacting with human agents but not (or to a lesser extent) when interacting with computer agents. Social rewards, including being labeled trustworthy by another person (Izuma et al., 2008), gaining social approval by donating money in the presence of others (Izuma et al., 2010), and viewing smiling faces (Lin et al., 2012), engage brain regions that are common to receiving non-social rewards, such as money. However, when receiving feedback from social and non-social agents, though common brain regions including the striatum are engaged, the type of agent may modulate activity in these regions. Moreover, feedback from a social interaction also engages regions of the social cognition network. Next, we examine differences in social decision-making during the updating or learning process.
Research examining learning in a non-social context has highlighted the role of prediction error signals in learning to predict outcomes. In a now classic study, recordings from dopamine neurons show that primates learn to predict a juice reward, shifting the firing of dopamine neurons to the predictive cue rather than the reward itself. When an expected reward is not received, dopamine neurons decrease their firing (Schultz et al., 1997). Similar prediction error signals have been observed for social stimuli in both an attribution task (Harris and Fiske, 2010) and decision-making contexts (King-Casas et al., 2005; see Rilling et al., 2008b for review). In recent years, it has therefore been suggested that social learning is akin to basic reinforcement learning (i.e., social learning is similar to non-social learning). When interacting with peers, ventral striatum and OFC seem to track predictions about whether a social agent will give positive social feedback, and ACC correlates with modulation of the expected value associated with the agents (Jones et al., 2011). It has also been proposed that social information may be acquired using the same associative processes assumed to underlie reward-based learning, but in separate regions of the ACC (Behrens et al., 2008). These signals are believed to combine within MPFC when making a decision, consistent with the idea of a common valuation system (which combines social and non-social) within the brain (Montague and Berns, 2002). In fact, value signals for both social and monetary rewards have been found to rely on MPFC (Smith et al., 2010; Lin et al., 2012) and activity in this region also correlates with the subjective value of donating money to charity (Hare et al., 2010).
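The prediction-error logic summarized above is usually formalized as a delta-rule update; the sketch below is a generic Rescorla-Wagner-style learner with an arbitrary learning rate, not the exact model fit in any study cited here:

    # Delta-rule value learning: estimates move toward outcomes in proportion
    # to the prediction error, paralleling phasic dopamine responses.
    def update_value(value, reward, alpha=0.1):
        prediction_error = reward - value   # positive = better than expected
        return value + alpha * prediction_error

    value = 0.0
    for trial in range(30):
        value = update_value(value, reward=1.0)  # reward on every trial
    print(f"learned value after 30 rewarded trials: {value:.2f}")

    # Once the reward is fully predicted, omitting it produces a negative
    # error, mirroring the dip in dopamine firing to omitted rewards.
    print(f"prediction error on omission: {0.0 - value:.2f}")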
However, social learning does not inherently appear to be just another type of reinforcement learning. Social decisions often contradict economic models that attempt to predict social behavior, suggesting that simple reinforcement learning models by themselves are not sufficient to explain complex social behavior (Lee et al., 2005). Research shows that reward and value signals are modulated by the social context. For instance, reward related signals in the striatum are affected by prior social information about an investment partner (Delgado et al., 2005) as well as when sharing rewards with a friend vs. a computer (Fareri et al., 2012). Additionally, research shows that social norms can influence the value assigned to social stimuli, specifically modulating activity in nucleus accumbens and OFC (Zaki et al., 2011). Interestingly, functional connectivity analyses show that value signals in MPFC may rely on information from person perception brain regions like the anterior insula and posterior STS (Hare et al., 2010). Studies investigating how person perception brain regions affect social learning suggest that specific types of social information (warmth vs. competence) affect social learning-whereas information about a person's warmth hinders learning, information about a person's competence seems to produce similar learning rates as when interacting with computer agents (Lee and Harris, under review).
Should we be surprised by findings that social stimuli affect learning and the updating process? Social psychology suggests the answer to this question is no. Behaviorally, people have a number of biases that may affect the way information is processed and incorporated into decision-making processes. Tversky and Kahneman (1974) were perhaps the first to point out these biases and heuristics that may be used in a social decision-making context. For instance, people use probability information to judge how representative a person is of a specific category (representativeness heuristic), and recent events to assess how likely it is that something will occur (availability heuristic). When asked to give an estimate of some quantity, being given a reference point (an anchor) affects the resulting estimates. These heuristics can be applied to a social decision-making context as well. For instance, when playing the trust game, participants may use initial impressions formed about the person (based on a representativeness heuristic about what trustworthy people look like) as an anchor that affects whether or not they invest with the partner on subsequent trials. In addition to this bias, it is harder to verify cognitions about people than about objects, making it harder to accurately infer the traits of a person compared to an object (Fiske and Taylor, 2013).
In addition to the heuristics described above, people also possess a number of biases that affect how they interpret information. First, people look for information that is consistent with a preexisting belief. This confirmatory bias is evident in the stereotype literature, which demonstrates that people interpret ambiguous information as consistent with, or as a confirmation of, a stereotype about a person (Bodenhausen, 1988). This bias is relevant to the economic games employed in social decision-making studies because partners often provide probabilistic (sometimes ambiguous) feedback. Interpretation of this feedback may be influenced by prior beliefs (Delgado et al., 2005). Second, people often exhibit illusory correlations-that is, they see a relationship between two things when one does not exist (Hamilton and Gifford, 1976)-and are more likely to attribute a person's behavior to the person rather than to some situational factor (Jones and Davis, 1965; Jones and Harris, 1967; Ross, 1977; Nisbett and Ross, 1980). This again makes participants in social decision-making studies more likely to interpret a partner's decision as a signal of some underlying mental state or trait attribute rather than as positive or negative feedback in a purely reward-processing sense.
How then can we reconcile these two different literatures, one stating that social learning is similar to reinforcement learning, and another stating that social learning includes a number of biases? In more practical terms, we know that impressions of a person can guide decision-making. Previous studies have shown that facial trustworthiness affects investment amounts in the trust game (Van't Wout and Sanfey, 2008). However, first impressions are not the only influence on social decisions-if someone is perceived as trustworthy, that does not make their subsequent behavior irrelevant. Other research has shown the importance of prior behavior on trust decisions (Delgado et al., 2005; King-Casas et al., 2005). To study how the combination of impressions and behavior affects social decision-making, Chang et al. (2010) used mathematical models based on reinforcement learning to test specific hypotheses about how these two types of information guide social decisions in a repeated trust game. Specifically, the authors tested three models that suggest different ways of processing information and investigated whether reinforcement learning or social biases influence decision-making. First, an Initialization model assumes that initial impressions (implicit trustworthiness judgments) influence decision-making at the beginning of the trust game, but eventually participants learn to rely on the player's actual behavior. A Confirmation Bias model assumes that initial impressions of trustworthiness affect the way feedback is processed, the impression is updated throughout the study, and learning is biased in the direction of the initial impression. The third, a Dynamic Belief model, assumes that initial impressions are continuously updated based on the participant's experiences in the trust game and these beliefs then influence learning. In this model, equal emphasis is placed on the initial judgment and the participant's experience; that is, initial trustworthiness simultaneously influences learning and is updated by experience. Of the three models, the Dynamic Belief model fit the data best, suggesting that both social cognition processes (initial impressions) and decision-making processes (feedback processing) affect social learning in the trust game. The three update rules are sketched schematically below.
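These schematic update rules are paraphrases of the verbal model descriptions above, with hypothetical parameter choices; they are not the equations actually fit by Chang et al. (2010):

    # Three ways an initial trustworthiness impression can enter learning in a
    # repeated trust game (outcomes r are 1 = reciprocated, 0 = not; values in [0, 1]).
    def initialization_model(impression, outcomes, alpha=0.2):
        # The impression only seeds the starting value; thereafter learning
        # depends purely on the partner's observed behavior.
        v = impression
        for r in outcomes:
            v += alpha * (r - v)
        return v

    def confirmation_bias_model(impression, outcomes, alpha=0.2, bias=0.5):
        # Feedback consistent with the initial impression is weighted more
        # heavily, so learning stays biased toward the first impression.
        v = impression
        for r in outcomes:
            consistent = (r - 0.5) * (impression - 0.5) > 0
            rate = alpha * (1 + bias) if consistent else alpha * (1 - bias)
            v += rate * (r - v)
        return v

    def dynamic_belief_model(impression, outcomes, alpha=0.2, w=0.5):
        # The belief both shades how each outcome is experienced and is
        # itself continuously updated by the raw experience.
        belief, v = impression, impression
        for r in outcomes:
            v += alpha * (w * belief + (1 - w) * r - v)
            belief += alpha * (r - belief)
        return v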
More recent social decision-making studies have investigated how social processes affect learning. Researchers have proposed different strategies participants may use when learning to predict what their partner will do. One such strategy is learning to simulate other people's decisions and update those simulations once the other's choice is revealed. This process engages different regions of prefrontal cortex involved in valuation and prediction error (Suzuki et al., 2012). Another strategy is to account for the influence one's decisions have on the partner's decisions and decide accordingly. This strategy requires predicting how much influence one has on the partner and updating that influence signal when observing the partner's decision. Computational modeling suggests MPFC tracks the predicted reward given the amount of expected influence the participant's choices have on the partner, and STS activity is responsible for updating the influence signal (Hampton et al., 2008). Although these studies do not provide direct comparisons to non-social controls, they provide exciting insight into how social cognition processes affect social learning.
CONCLUSION
Is social decision-making unique? How does it differ from non-social decision-making? The answers to these questions have been of interest to researchers in a variety of fields including social psychology and behavioral economics. Combining these literatures can help us understand the answers to these questions.
Economists originally believed that social decision-making was not different from non-social decision-making and tried to model social decisions with traditional economic models. However, after the influential paper by Tversky and Kahneman (1974) demonstrating heuristics and biases affecting decision-making, it became apparent that the decision-making process is not as rational as we may have originally thought. Psychologists have long believed that social cognition is important for predicting the actions of others and that humans are different from objects in some very important ways. More recently, brain-imaging studies have highlighted these differences, with a network of brain regions responding to social stimuli and social cognitive processes that presumably affect social decision-making. Investigations of social decisions have also highlighted the effects of social information on decision-making processes within brain regions like the striatum and MPFC. Although both social and non-social agents engage these brain regions, the social context modulates this activity. The use of mathematical models suggests that both social neuroscience and neuroeconomics studies have each been tapping into different processes. Initial impressions allow for predictions that guide decision-making. These impressions then interact with feedback processing and affect how predictions are updated. In economics, behavioral game theorists recognize that people's beliefs about others matter when modeling social decisions. The models assume that players strategically choose options that maximize utility, and evaluations of payoff options often include social factors beyond pure economic payout (Camerer, 2009). These social factors may include other-regarding preferences, indicating that people care about the well-being of other players (Fehr, 2009). Whether decisions are made in order to increase the well-being of others or manage the impression formed of oneself, mental state inferences are still relevant. For instance, one may assess well-being by inferring the mental state of the person. Similarly, the extent to which one infers the mental state of a person may influence the extent to which other-regarding preferences influence decisions (e.g., do people show other-regarding preferences for traditionally dehumanized targets?).
Humans evolved in a social context in which interacting with other people was essential for survival. As such, these social cognitive processes have been evolutionarily preserved and continue to affect our decision-making in a social context. The fact that human agents engage different brain regions than computer agents should perhaps not be all that surprising. The social brain did not evolve interacting with computers or other types of machines. Therefore, we see differences not only in behavior (most of the time) but also in brain activity for these two inherently different types of agents. Here we have highlighted that these differences lie in engagement of the social cognition/person perception brain regions for human agents. But the underlying mechanisms-the social processes that engage these brain regions and how they interact with decision-making processes-are still being investigated. Social psychological theory can help answer these questions by providing a theoretical background for why humans and computers differ in the first place (e.g., mental state inferences, impression management, etc.). Keeping this fact in mind will provide future research on social decision-making with the most informed and cohesive theories.
Finally, decisions are made in a social context everyday. Whether deciding to do a favor for a friend or close a deal with a potential business partner, decisions have consequences that lead to significant rewards and punishments such as a better relationship with the friend or a poor business transaction. Therefore, it is important to understand how decisions are influenced by the presence or absence of others and how we incorporate social information into our decision-making process.
Here we have highlighted differences arising when interacting with human and computer agents and use social psychological theory to provide some explanation for why these differences arise. It is important to point out these differences in social and non-social decision-making because interactions with computers and other machines are becoming more widespread. Businesses often try to find ways to simplify transactions, often replacing human agents with automated computers. However, the decisions made with these different types of agents may affect businesses in unanticipated ways. Financial decisions (e.g., buying and selling stock) are increasingly made through the use of online computers, whereas previously investors had to interact with stockbrokers in an investment firm. Similarly people are able to bid in online auctions for a desired item rather than sitting in a room full of people holding numbered paddles. The decisions to buy and sell stock or possibly overbid in an online auction may be influenced by these different agents, as evidenced by the research described above. | 2016-05-04T20:20:58.661Z | 2013-11-21T00:00:00.000 | {
"year": 2013,
"sha1": "6eae54fd378f23c19ebeed245c393ab27e2fd4fb",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnins.2013.00259/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6eae54fd378f23c19ebeed245c393ab27e2fd4fb",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
23447145 | pes2o/s2orc | v3-fos-license | Magnetically Recyclable Fe3O4/GO-NH2/H3PMo12O40 Nanocomposite: Synthesis, Characterization, and Application in Selective Adsorption of Cationic Dyes from Water
In this study, the PMo12O40^3− polyanion was immobilized chemically on amino-functionalized magnetic graphene oxide nanosheets. The as-prepared ternary magnetic nanocomposite (Fe3O4/GO-NH2/H3PMo12O40) was characterized by powder X-ray diffraction (XRD), Fourier transform infrared spectroscopy (FTIR), Raman spectroscopy, energy dispersive spectroscopy (EDX), field emission scanning electron microscopy (FESEM), BET surface area measurements, magnetic measurements (VSM) and atomic force microscopy (AFM). The results demonstrated the successful loading of H3PMo12O40 (~36.5 wt.%) on the surface of magnetic graphene oxide. The nanocomposite showed a higher specific surface area (77.07 m2/g) than pure H3PMo12O40 (≤10 m2/g). The adsorption efficiency of this nanocomposite for removing methylene blue (MB), rhodamine B (RhB) and methyl orange (MO) from aqueous solutions was evaluated. The nanocomposite showed rapid and selective adsorption of cationic dyes from mixed dye solutions. The adsorption rate and capacity of Fe3O4/GO-NH2/H3PMo12O40 were enhanced compared with GO, GO-NH2, Fe3O4/GO-NH2, and H3PMo12O40 samples due to enhanced electrostatic attraction and hydrogen-bonding interactions. The nanocomposite can be magnetically separated and reused without any change in structure. Thus, it could be a promising green adsorbent for removing organic pollutants from water.
Introduction
Industrial activities release an increasing amount of contaminants, such as metal ions, organic dyes, and cleaning agents, which has raised public concern.1,2 Wastewater treatment has therefore attracted much attention in the past decades because of the heavy effluent discharge of organic dyes from the plating, textile, printing, paper, plastic, cosmetic, pharmaceutical, and food industries; these dyes are resistant to biological degradation, making them quite difficult to remove from wastewater.3,4,6-9 Owing to their complex aromatic molecular structures, dyes are generally stable to light, heat and oxidizing agents.10 Therefore, effective removal of dyes from dye-wastewater is essential. Among the various technologies, such as photocatalytic degradation,11 electrochemical degradation,12 and adsorption,13 adsorption is considered one of the most efficient and economical methods for water purification.14 Many polymeric and inorganic adsorbents such as carbonaceous nanomaterials,15 porous metal oxides,16 clays,17 chitosan,18 zeolites,19 and so on20,21 have been developed for removing pollutants from aqueous solutions. However, such adsorbents are associated with certain problems that limit their practical applications, such as low adsorption capacity, slow adsorption rate, and difficult separation of the adsorbents.22 Furthermore, some of them are only effective for wastewater containing low concentrations of dyes, and they are generally poor at selectively removing the targeted organic dye wastes. Hence, it is imperative to find a new, desirable adsorption material, which not only is capable of reducing the organic dyes in dye-wastewater with high efficiency and a fast adsorption rate but also can achieve selective separation and recovery of raw materials.
Polyoxometalates (POMs), as an outstanding class of anionic metal oxide clusters, have attracted great attention due to their earth-abundant sources, rich topology and versatility, controllable shape and size, oxo-enriched surfaces, high electronegativity, etc.,23 and they have various applications in many fields, such as catalysis,24 optics,25 magnetism,26 biological medicine,27 and dye adsorption.28 The strong attraction of POMs to cationic dyes suggests that they are potential and suitable adsorbents for selectively capturing cationic dyes. However, there are still obvious disadvantages of POMs as adsorbents: (i) their relatively small surface area seriously obstructs access to the active sites, and (ii) their excellent solubility in aqueous solution means that they cannot be reused and recycled in the process of wastewater treatment. Therefore, plenty of remarkable work has been done to encapsulate POMs into porous solid matrices, such as activated carbon29 and silica,30 to create composite materials. Unfortunately, these methods sometimes lead to low POM loading; it is thus of vital significance to search for an applicable solid matrix to immobilize POMs, which might greatly improve their adsorption ability for target dyes.
Graphene oxide (GO) has been widely investigated as an adsorbent for removing pollutants from water.32-39 In addition, in comparison with other carbonaceous nanomaterials, GO may be more environmentally friendly and have better biocompatibility.40 However, it is difficult to separate GO from aqueous solution because of its small particle size, and it can cause serious health and environmental problems once it is discharged into the environment.41 Centrifugation requires a very high rate, and the traditional filtration method may cause blockages of filters. Compared with traditional centrifugation and filtration methods, the magnetic separation method is considered a rapid and effective technique for separating nanomaterials from aqueous solution.42,43,46-48 On the basis of the above discussion, in this work amino-functionalized magnetic graphene oxide (Fe3O4/GO-NH2) was synthesized by a facile method and used as a novel support for immobilizing Keggin-type PMo12O40^3− anions. This magnetically recoverable ternary nanocomposite material (Fe3O4/GO-NH2/H3PMo12O40) was prepared by a simple acid-base electrostatic interaction between H3PMo12O40 and the amino groups of Fe3O4/GO-NH2. For one thing, the PMo12O40^3− anion, with its highly electronegative and hydrophilic character and structural stability, could be utilized as a potential adsorbent for removal of cationic dyes in dye-wastewater. For another, magnetic GO possesses outstanding porosity and an extremely large surface area, and it is insoluble in water, making it an appropriate solid matrix to anchor Keggin-type PMo12O40^3− anions. The combination of polyoxoanions and Fe3O4/GO-NH2 could improve the surface area and avoid the dissolution of the POM. The hybrid nanomaterial exhibited a superior adsorption rate and selective adsorption ability for cationic dyes. Remarkably, this material exhibited a large adsorption capacity of 426.7 mg/g for MB. Hence, it is a promising and environmentally friendly adsorbent for removing and separating organic pollutants in dye-wastewater.
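Adsorption capacities like the 426.7 mg/g figure quoted above are conventionally obtained from the batch mass balance qe = (C0 − Ce)·V/m; a short helper is sketched below (Python; the numeric inputs are placeholders, not data from this study):

    # Equilibrium adsorption capacity and removal efficiency for a batch test.
    def adsorption_capacity(c0_mg_per_l, ce_mg_per_l, volume_l, mass_g):
        """q_e in mg of dye per g of adsorbent: (C0 - Ce) * V / m."""
        return (c0_mg_per_l - ce_mg_per_l) * volume_l / mass_g

    def removal_efficiency(c0_mg_per_l, ce_mg_per_l):
        """Percentage of dye removed from solution."""
        return 100.0 * (c0_mg_per_l - ce_mg_per_l) / c0_mg_per_l

    # Placeholder values for illustration only.
    q_e = adsorption_capacity(c0_mg_per_l=200.0, ce_mg_per_l=20.0,
                              volume_l=0.05, mass_g=0.02)
    print(f"q_e = {q_e:.1f} mg/g; removal = {removal_efficiency(200.0, 20.0):.1f} %")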
1. Materials and Characterization Techniques
Graphite powder (C, 99.95%), 3-aminopropyltriethoxysilane (APTES, 99%), phosphomolybdic acid (H3PMo12O40, 98%), toluene, sulfuric acid (H2SO4, 98%), and potassium permanganate (KMnO4, 98%) were purchased from Merck Chemical Co. All other chemicals were commercially purchased and used without further purification. The infrared spectra were recorded at room temperature using a Shimadzu FT-IR 160 spectrophotometer in the 4000-400 cm-1 region with KBr pellets. Powder XRD patterns were recorded on a Rigaku D-max C III X-ray diffractometer using Ni-filtered Cu Kα radiation (λ = 1.54184 Å). The morphology of the samples was studied using a MIRA3 TESCAN scanning electron microscope equipped with an energy dispersive X-ray analyzer (EDX) for elemental analysis. AFM images were recorded by multi-mode atomic force microscopy (ARA-AFM, model Full Plus, ARA Research Co., Iran). Magnetic measurements were carried out at room temperature using a vibrating sample magnetometer (VSM, Magnetic Daneshpajoh Kashan Co., Iran) with a maximum magnetic field of 10 kOe. Optical absorption spectra were obtained using a Cary 100 Varian UV-Vis spectrophotometer in a wavelength range of 200-800 nm. The Brunauer-Emmett-Teller (BET) surface area was measured by N2 adsorption measurements at 77 K using a Nova 2000 instrument. The concentration of Mo in the composite was determined by inductively coupled plasma atomic emission spectroscopy (ICP-AES, model OEC-730). A controllable Serial-Ultrasonics apparatus (James 6MD, England) operating at an ultrasonic frequency of 100 kHz with a nominal output power of 50 W was used to disperse the samples.
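Given the stated Cu Kα wavelength (λ = 1.54184 Å), diffraction peak positions map to lattice spacings through Bragg's law, nλ = 2d sin θ; a minimal converter is sketched below (the example angle, near the strong (311) reflection of magnetite, is illustrative):

    import math

    CU_KALPHA_ANGSTROM = 1.54184  # Cu K-alpha wavelength used for the XRD patterns

    def d_spacing(two_theta_deg, wavelength=CU_KALPHA_ANGSTROM, order=1):
        """Lattice spacing d (angstroms) from n * lambda = 2 * d * sin(theta)."""
        theta = math.radians(two_theta_deg / 2.0)
        return order * wavelength / (2.0 * math.sin(theta))

    # The Fe3O4 (311) reflection appears near 2-theta = 35.5 degrees:
    print(f"d(311) = {d_spacing(35.5):.3f} angstrom")  # about 2.53 angstrom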
| 2018-04-03T05:33:51.217Z | 2017-12-12T00:00:00.000 | {
"year": 2017,
"sha1": "c63c998429e647e305e8520b933fb85846526e72",
"oa_license": "CCBY",
"oa_url": "https://journals.matheo.si/index.php/ACSi/article/download/3731/1541",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "0d879bd24173e2f0a61a3c40da0031e5d696908f",
"s2fieldsofstudy": [
"Materials Science",
"Engineering"
],
"extfieldsofstudy": [
"Materials Science",
"Medicine"
]
} |
33260676 | pes2o/s2orc | v3-fos-license | Toward a combined SAGE II-HALOE aerosol climatology: an evaluation of HALOE version 19 stratospheric aerosol extinction coefficient observations
Herein, the Halogen Occultation Experiment (HALOE) aerosol extinction coefficient data is evaluated in the low aerosol loading period after 1996 as the first necessary step in a process that will eventually allow the production of a combined HALOE/SAGE II (Stratospheric Aerosol and Gas Experiment) aerosol climatology of derived aerosol products including surface area density. Based on these analyses, it is demonstrated that HALOE's 3.46 μm channel is of good quality above 19 km and suitable for scientific applications above that altitude. However, it is increasingly suspect at lower altitudes and should not be used below 17 km under any circumstances after 1996. The 3.40 μm channel is biased by about 10 % throughout the lower stratosphere due to the failure to clear NO2 but otherwise appears to be a high quality product down to 15 km. The 2.45 and 5.26 μm aerosol extinction coefficient measurements are clearly biased and should not be used for scientific applications after the most intense parts of the Pinatubo period. Many of the issues in the aerosol data appear to be related to either the failure to clear some interfering gas species or doing so poorly. For instance, it is clear that the 3.40 μm
Introduction
In the stratosphere, where aerosol composition is predominately mixtures of H2SO4 and H2O, aerosol extinction at Halogen Occultation Experiment (HALOE) wavelengths is dominated by absorption. Variation with wavelength is mostly driven by changes in the imaginary index of refraction that are themselves modulated by temperature and relative humidity. The extinction measurements generally exhibit a second order dependence on aerosol size. As such, infrared aerosol extinction coefficient measurements are nearly linearly related to total aerosol volume but provide limited information on the aerosol size distribution. This stands in contrast to similar measurements in the visible and near infrared, where absorption by sulfate aerosol is effectively zero and extinction is dominated by the positive wing of the aerosol size distribution with limited dependence on the smallest aerosol present. This can be seen in Fig. 1, which shows Mie extinction kernels scaled per unit volume of aerosol and weighted by measurement wavelength (a unitless quantity) for 75 % H2SO4-25 % H2O aerosol at 220 K at HALOE and some SAGE II/III (Stratospheric Aerosol and Gas Experiment) measurement wavelengths. [Fig. 1 caption: Mie aerosol extinction coefficient kernels (per unit volume of aerosol and wavelength-weighted, a unitless quantity) for a 75 % H2SO4/25 % H2O aerosol at stratospheric temperatures at SAGE II/III and HALOE measurement wavelengths; dotted lines are kernels for SAGE II/III wavelengths and solid lines are for HALOE wavelengths.] [Table 1 fragment: 2002-2006; 386, 448, 521, 602, 676, 755, 868, 1019, 1545 nm.] A substantial history of visible/near infrared extinction coefficient measurements has been produced by the SAGE series of instruments (listed in Table 1). The information content of infrared measurements (from HALOE) and visible/near infrared measurements (from SAGE II/III) is sufficiently different that the combination of these measurements may substantially reduce the uncertainty of indirectly derived aerosol products such as aerosol surface area density (SAD) (e.g., Thomason et al., 2008). HALOE is a gas-filter correlation radiometer that uses solar occultation to measure vertical profiles of a number of important trace gas species and multiwavelength aerosol extinction coefficient from the upper troposphere to as high as the thermosphere. It was deployed as a part of the Upper Atmosphere Research Satellite from the space shuttle Discovery on 15 September 1991 and operated through November 2005, when the mission was terminated. HALOE measurement species include O3, H2O, CH4, NO, NO2, HCl, HF and temperature (via CO2 absorption). [Table 2 caption: HALOE aerosol measurement locations with primary gas species and other absorbers; 'a' indicates that the species is cleared using HALOE-derived products (clearing is inferred by the inclusion of their uncertainties in the aerosol uncertainty as noted in Hervig et al., 1995); 'b' indicates that the species is removed using climatological values (M. Hervig, personal communication, 2012); 'c' denotes that, while NO2 is measured by HALOE, it is not removed from this aerosol product (E. Remsberg, personal communication, 2012).] Data from this mission have been used in numerous studies, including analyses of the water vapor tape recorder (Mote et al., 1996) and trends in HCl (Anderson et al., 2000; Jones et al., 2011).
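A volume-normalized kernel of the kind plotted in Fig. 1 can be computed with standard Mie theory; the sketch below assumes the miepython package's mie(m, x) interface (returning the extinction efficiency among other quantities) and an illustrative, roughly representative real refractive index for sulfate aerosol, since the actual optical constants used for Fig. 1 are not reproduced here:

    import numpy as np
    import miepython  # assumed API: miepython.mie(m, x) -> (qext, qsca, qback, g)

    def volume_kernel(wavelength_um, radius_um, m):
        """Wavelength-weighted extinction per unit particle volume (unitless),
        i.e. lambda * C_ext / V for a single sphere of radius r."""
        x = 2.0 * np.pi * radius_um / wavelength_um      # Mie size parameter
        qext, _, _, _ = miepython.mie(m, x)
        c_ext = qext * np.pi * radius_um ** 2            # extinction cross-section
        volume = (4.0 / 3.0) * np.pi * radius_um ** 3
        return wavelength_um * c_ext / volume

    # Illustrative refractive index only; sulfate is nearly non-absorbing in the
    # near infrared (an absorbing sphere would carry an imaginary part).
    m_1020nm = 1.43
    for r in (0.05, 0.1, 0.3, 1.0):
        print(f"r = {r:4.2f} um: kernel = {volume_kernel(1.02, r, m_1020nm):.3f}")

In the infrared channels, where the imaginary part of the refractive index is large, the analogous kernel flattens in radius, which is the numerical counterpart of the near-linear extinction-to-volume relationship described above.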
In addition to the gas species, aerosol extinction coefficient profiles for the upper troposphere through the stratosphere are reported at four wavelengths (2.45, 3.40, 3.46, and 5.26 μm). Broadly, these data show the immediate aftermath of the June 1991 Pinatubo eruption and the long recovery of stratospheric aerosol levels throughout the 1990s. This is followed by a relatively low aerosol loading period between 2000 and the end of the mission, where only a few minor volcanic events disturb an otherwise quiescent stratosphere. HALOE aerosol extinction coefficient profiles are derived as a residual from the gas species retrievals at each wavelength using a methodology that is described in Hervig et al. (1993), with some additional detail reported in Hervig et al. (1996). The aerosol extinction coefficient data is corrected for the gas species measured at those wavelengths as denoted in Table 2. In addition, the contributions by species which absorb at the aerosol channel wavelength (but are not the target measurement) are also removed from the residual extinction, particularly if they are measured by HALOE at another wavelength. This is also denoted in Table 2. One important exception is that NO2 absorption is not removed from the aerosol extinction coefficient measurement at 3.40 μm (E. Remsberg, personal communication, 2012). As will be demonstrated below, this leaves an artifact in the aerosol extinction coefficient values at this wavelength that is noteworthy after 1995. Some species that are not directly measured by HALOE (or not measured at all relevant altitudes) are removed from the residual extinction using climatological values for the interfering species. The climatologies used in this process were constructed as a part of the Upper Atmosphere Pilot Database (UAPD). HALOE makes use of N2O and CH4 (from near the tropopause and below) from this dataset, which are based on data from the Stratospheric and Mesospheric Sounder (SAMS) mission in 1979 and balloon-based profiles (Jackman et al., 1989; Jones and Pyle, 1984).
Many climatologies of SAD have been derived from SAGE II and HALOE independently (Thomason et al., 1997; Steele and Turco, 1997; Wang et al., 1989; Yue, 1999; Hervig et al., 1998). The goal of this paper is to evaluate the HALOE aerosol extinction coefficient data as the first necessary step in a process that will eventually allow the production of a combined HALOE/SAGE II aerosol climatology of derived aerosol products including surface area density (SAD) and a mechanism to produce complete aerosol extinction/absorption spectra suitable for use in climate modeling. There are only very limited directly comparable measurements to HALOE aerosol extinction coefficient measurements (e.g., the Cryogenic Limb Array Etalon Spectrometer (CLAES) through May 1993). As a result, past evaluations of HALOE aerosol extinction coefficient measurements have been focused on the strongly volcanic periods prior to 1995 and measurements from optical particle counters (e.g., Hervig et al., 1996), or dependent on using HALOE-derived aerosol size distributions to produce comparable bulk properties like SAD or extinction at other visible wavelengths (e.g., Hervig and Deshler, 2002). Herein, I attempt to evaluate each HALOE aerosol channel with as little dependence on assumptions regarding the underlying size distribution as possible. I do not consider the HALOE aerosol size distribution parameters or bulk parameters derived from those values since they are not relevant to the final goal of this study. In any case, the HALOE size distribution fits reported in the official data files are based on multiple HALOE aerosol extinction coefficient values and thus may mask or exacerbate issues at individual wavelengths. [Fig. 3 caption fragment: the median reported relative uncertainty (left) and the median relative standard deviation based on a 3-month running analysis (similar to the analysis shown in Fig. 2) in the four HALOE aerosol extinction channels and the SAGE II 1020 nm aerosol extinction channel between 10° S and 10° N, 1996-2005 (right).] Figure 2 shows 3-month running median depictions of aerosol extinction coefficient at 2.45 (a), 3.40 (b), 3.46 (c), and 5.26 μm (d) for 10° S to 10° N in units of km-1. Note that, while the bulk of the following analysis focuses on the tropics, the conclusions regarding HALOE aerosol extinction coefficient data quality are independent of latitude. In this case, I have interpolated the HALOE extinction to a 0.5-km grid that matches the SAGE II measurement altitudes. Medians are used as a crude cloud filter; the median-based results of the analysis are essentially identical to mean-based results above 18 km (where clouds are very rare) and are only modestly affected by cloud presence well into the troposphere. I have also included the SAGE II 1020-nm aerosol extinction coefficient analysis analyzed in an identical manner, though a more formal cloud identification scheme for SAGE II is available (Kent et al., 1993). Figure 3 shows a comparison of the reported relative measurement uncertainty (Fig. 3a) and the relative observed variability (Fig. 3b) for tropical measurements (10° S to 10° N) at each of the four HALOE aerosol extinction coefficient measurement channels for the period 1996 through 2005, where aerosol levels are relatively stable. In this case, I show the median value for each parameter coming from the 3-month depictions shown in Fig. 2.
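The 3-month running-median construction used for Figs. 2 and 3 is straightforward to reproduce; the sketch below assumes event-level data in a pandas DataFrame with hypothetical column names ('time', 'altitude_km', 'extinction_per_km'):

    import pandas as pd

    def running_median(df, window_months=3):
        """3-month running median of extinction at each altitude level.
        Medians damp the influence of rare, very bright cloud observations,
        acting as the crude cloud filter described in the text."""
        monthly = (df.set_index('time')
                     .groupby('altitude_km')['extinction_per_km']
                     .resample('MS').median()          # monthly median per level
                     .unstack('altitude_km'))          # time x altitude table
        return monthly.rolling(window=window_months, center=True,
                               min_periods=1).median()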
General morphology of the HALOE aerosol observations
In the tropics, low zonal variability is expected during quiescent periods and is often observed for a wide collection of stratospheric components, particularly above 20 km. As a result, the observed variability in this region can be used as a rough stand-in for measurement noise, or at least an upper bound on these values (e.g., Cunnold et al., 1984). For the 3.40, 3.46 and 5.26 μm channels, the median reported measurement uncertainty slowly increases from about 10 % at 20 km to 20-30 % at 30 km. The observed relative variability in these channels stays roughly in the 10 to 20 % range between 20 and 30 km, comparable to or slightly larger than the reported uncertainties. As a result, it appears that the reported measurement uncertainties for these channels are reasonable estimates of the precision of these measurements. On the other hand, the reported uncertainty for the 2.45 μm channel varies from about 20 % at 20 km to over 50 % at 30 km, whereas the observed variability runs from 15 % at 20 km to 25 % at 30 km. This difference suggests that the reported uncertainties overestimate the random component of the error budget (precision) for this channel by about a factor of 2.
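This observed-variability noise proxy is easy to compute; a minimal sketch follows, with synthetic data standing in for a latitude band of coincident profiles:

    import numpy as np

    def relative_variability(extinction_profiles):
        """Relative standard deviation (percent) across profiles at each level.
        In a zonally quiet band this bounds the measurement precision."""
        ext = np.asarray(extinction_profiles, dtype=float)
        return 100.0 * np.nanstd(ext, axis=0) / np.nanmedian(ext, axis=0)

    # Synthetic stand-in: 50 profiles x 20 levels with 15 % multiplicative noise.
    rng = np.random.default_rng(0)
    profiles = 1e-4 * (1.0 + 0.15 * rng.standard_normal((50, 20)))
    print(relative_variability(profiles)[:5])  # close to 15 % at each level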
While there are significant differences in the aerosol extinction kernel (Fig. 1) among the 1020 nm, 2.45 μm, and the longer wavelength HALOE measurement wavelengths (where the kernels are essentially parallel to each other), grossly similar behavior in all five analyses should be observed. In fact, this is broadly what is shown in Fig. 2 (where contours are given in log10 of extinction). There is an intense aerosol layer, centered near 20 km, associated with the 1991 Pinatubo eruption that slowly relaxes toward non-volcanic levels by the late 1990s, and the aerosol extinction coefficient at all wavelengths is fairly stable thereafter. Despite the known NO2 contaminant issue, the 3.40 μm HALOE channel is qualitatively very similar to the SAGE II 1020-nm channel, with a similar structure and duration for the Pinatubo aerosol layer, similar seasonal structure in the clean period, and evidence for the impact of two minor eruptions in the 2000s (Ruang in 2002 and Manam in 2005). The 3.40 μm data does suggest a more rapid increase in aerosol extinction with decreasing altitude in the upper troposphere (below ∼17 km) than does SAGE II. This may reflect the NO2 contamination or another issue. Since clouds typically exhibit extinction coefficients that are many times larger than aerosol, mixed views of cloud and aerosol often take on the characteristics of cloud events even when the relative fraction of the field of view occupied by cloud is relatively small. As a result, the more rapid increase in extinction coefficient at 3.40 μm relative to the SAGE II 1020 nm channel may also reflect an increased spreading of "cloud" observations to higher altitudes given the larger field of view for HALOE (1.6 km) relative to SAGE II (0.5 km) (e.g., Kent et al., 1997). As shown in Fig. 3, SAGE II extinction coefficient relative standard deviation increases more rapidly below 18 km than do HALOE values. This is also consistent with a field-of-view difference between the two instruments, as relatively infrequent SAGE II cloud observations between 16 and 18 km have a strong impact on the observed standard deviation but virtually no impact on the median extinction coefficient itself.
The 3.46 μm channel is very similar to the 1020 nm SAGE II channel and the 3.40 μm channel and shows a similar Pinatubo layer, seasonality in the clean period, and both minor volcanic events. In fact, based on the similarity of their extinction coefficient kernels, the 3.40 and 3.46 μm channels should be virtually identical except for the effect of the aforementioned NO2 contamination at 3.40 μm. However, while Fig. 2 shows that, as expected, aerosol extinction at 3.46 μm is somewhat less than at 3.40 μm in the stratosphere, the difference becomes much larger at lower altitudes as the 3.46 μm aerosol extinction coefficient decreases rapidly with distance below 16 km, falling below 10−5 km−1 (and continuing to decrease) by 12 km (compared to values around 2×10−4 km−1 at 3.40 μm). Figure 4 shows the mean difference between the 3.40 and 3.46 μm aerosol extinction coefficient data in northern and southern mid-latitudes and in the tropics. In the mid-latitude stratosphere, the differences show a fairly strong annual cycle peaking in the summer and exhibiting a morphology and magnitude that is roughly consistent with the known NO2 contamination at 3.40 μm. Below 20 km, Fig. 4 shows that the divergence between the 3.40 and 3.46 μm aerosol extinction coefficient data begins as high as 19 km in the tropics and as high as 17 km in mid-latitudes, or at or slightly above the tropopause. There is clearly a defect in the 3.46 μm data at lower altitudes, and the character of the anomaly may suggest the over-correction of an interfering gas species (overestimating extinction due to absorption) due to either a biased concentration (most likely a climatology) or the spectroscopy for an interfering species in this band. This abrupt transition occurs near the altitude at which methane clearing transitions from HALOE observations (in the stratosphere) to use of the UAPD methane values (in the troposphere). Therefore, UAPD tropospheric methane is a potential source of 3.46 μm aerosol extinction coefficient bias. Aerosol extinction coefficients at 2.45 and 5.26 μm show similar behavior in the stratosphere as the channels at 3.40 and 3.46 μm. An exception to this is that the minor eruptions in the 2000s are not clearly detectable in the 2.45 μm data. At lower altitudes, both channels show a strong increase with decreasing altitude below 20 km that does not conform to what is observed at either 1020 nm by SAGE II or at 3.40 μm by HALOE. It is not likely due to clouds, as clouds should influence measurements at all HALOE aerosol wavelengths in a similar manner (at least in terms of how they are manifested, as opposed to actual extinction coefficient values). For at least the 5.26 μm channel, there is a clue as to the source of this anomaly in an annual cycle between 15 and 23 km whose phase and tilt in time are very analogous to the water vapor tape recorder (Mote et al., 1996). Taken with the rapid increase at and below the tropopause, the behavior of the 5.26 μm channel's data suggests a water vapor-based artifact in this channel, though the artifact could be derived from another gas species that follows a temporal/spatial pattern similar to water vapor. HALOE's 2.45 μm aerosol extinction coefficient data show a similar rapid increase below 15 km to that shown by the 5.26 μm channel.
However, there is no evidence of a tape recorder-like structure in the stratosphere. There is some weak absorption by ozone and N2O within the 2.45 μm channel filter coverage, and it is not clear that either is cleared using measured ozone or UAPD N2O.
Anomaly analysis
Quantifying the overall quality of the HALOE aerosol extinction coefficient measurements, particularly in the relatively clean period after 1996, is limited by the lack of directly comparable measurements and has, in the past, been based on the conversion of in situ measurements by the University of Wyoming Optical Particle Counter (OPC) (Hervig et al., 1993) and on the conversion of the HALOE measurements themselves into extinction at other wavelengths or other derived products (e.g., SAD) using the size distributions derived from multiple HALOE measurement wavelengths (Hervig et al., 1995, 1998; Hervig and Deshler, 2002). Since the goal of this work is to assess individual HALOE channels at near background levels, derived quantities based on size distribution fits are not well suited to this evaluation. Also, while a single-mode log-normal is often used in modeling the stratospheric aerosol size distribution and is a reasonable approximation for observed size distributions, multimode and more complex aerosol size distributions are commonly observed (e.g., SPARC, 2006), and observed aerosol extinction coefficients are often sensitive to the details of the size distribution. While it is not totally possible to eliminate some sort of a priori model for the size distribution, I have attempted to minimize the sensitivity of the following evaluation to those assumptions and make no attempt to infer the underlying size distribution. Instead, I use a family of single-mode log-normal size distributions (ranges in the mode radius and width) to roughly carve out the space in which ratios of pairs of different wavelengths of aerosol extinction coefficient should exist. Using ratios eliminates sensitivity to total aerosol number density, and the size distribution can be considered a smoothing function for the aerosol extinction kernels. As a result, "ratio space" defined through log-normal functions should be fairly representative of any reasonable stratospheric aerosol size distribution. I define a good match to be whenever the observed ratio pairs lie within or near the model space (within ∼ 20 % of the bounds defined by the models). I do not require good matches from one channel to lie near the equivalent location within the log-normal space (i.e., the same implied log-normal size distribution), in a further attempt to minimize the impact of aerosol size distribution assumptions. Figure 5 shows the distribution of the 3-month running medians (shown as the symbol "+") for the ratio of HALOE aerosol extinction coefficients relative to the SAGE II 1020 nm value versus the SAGE II 525 to 1020-nm aerosol extinction coefficient ratio (derived using the methodology used to produce Fig. 2). Also shown are log-normal model results, shown as lines for constant size distribution width at values of 1.2, 1.4, 1.6, and 1.8. The SAGE II 525 to 1020 nm aerosol extinction coefficient ratio can be more or less translated as time, since the record begins in 1991, when aerosol showed a small extinction ratio (implying large aerosol), and the ratio increases (∼ monotonically) into the 2000s. This figure shows that the early data at this altitude for all four HALOE channels are in fairly good agreement with the SAGE II ratios when the SAGE II ratio is less than 2. This occurs early in the HALOE record, when extinction is dominated by the Pinatubo event and extinction values are the largest observed during the HALOE mission lifetime.
Following that period, the ratios using the 3.40 and 3.46 μm HALOE channels remain within or close to the single-mode log-normal space throughout their records. On the other hand, both the 2.45 and 5.26 μm ratios consistently increase to where they lie as much as a factor of 2 (at 5.26 μm) or 3 (at 2.45 μm) outside the area bounded by the single-mode log-normal model. While some leeway must be granted for the potential for deficiencies in the SAGE II data and the existence of more complex aerosol size distributions, it is virtually impossible to construct any size distribution that would match the observed behavior at 2.45 and 5.26 μm relative to SAGE II observations, particularly without simultaneously breaking the positive behavior observed at the 3.40 and 3.46 μm channels. This figure implies that the 2.45 and 5.26 μm HALOE aerosol extinction coefficient measurements are not consistent with the HALOE 3.40 and 3.46 μm values as well as the SAGE II measurements.
Selecting the 3.46 μm channel as the nominal standard for HALOE aerosol extinction coefficient channels (in much the same way as SAGE II's 1020 nm channel is treated as the paramount member of the SAGE II ensemble), Fig. 6 shows an analysis similar to that in Fig. 5 using the 3.46 μm to 1020 nm ratio as the independent variable and the ratios of 2.45, 3.40 and 5.26 μm relative to 3.46 μm as the dependent variables. This reduces the dependence on comparisons to SAGE II measurements, particularly for the channels at 3.40 and 5.26 μm, where the model results show virtually no variation with the 3.46 μm to 1020 nm ratio. On the other hand, the 2.45 to 3.46 μm extinction coefficient ratio remains sensitive to the use of the SAGE II data, since sulfate aerosol is only weakly absorbing at 2.45 μm and the resulting extinction kernel (shown in Fig. 1) is a mix of features common to the SAGE II aerosol kernels and the longer wavelength HALOE channels, where sulfate absorption is very strong. As in the previous figure, the 3.46 μm to 1020 nm extinction coefficient ratio is also a rough stand-in for increasing time and decreasing absolute extinction coefficient. In Fig. 6, the offset of the 3.40 μm channel from its expected relationship with the 3.46 μm channel (and virtually independent of the SAGE II 1020-nm extinction coefficient) averages about +10 %, while the 5.26 to 3.46 μm ratio (expected to be similarly independent of the 1020 nm extinction) varies from +20 % at the smallest ratios (i.e., the highest aerosol loading) to more than 100 % at larger ratios. Both of these figures yield interpretations consistent with those drawn from Fig. 5. On the other hand, the 2.45 to 3.46 μm figure suggests that the data are more consistent with the 3.46 μm and 1020-nm extinction coefficient ratio than would be inferred from the more SAGE II-based comparison shown in Fig. 5, for at least the first half of the data set, before substantial departures from the model predictions appear. However, the relationship among the data shows almost no dependence on the 3.46 μm to 1020 nm extinction coefficient ratio, unlike the strong dependence based on the model results. As with the 5.26-μm channel, results from the 2.45-μm channel appear incorrect except at the highest levels of aerosol extinction coefficients observed.
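To make the construction of this "ratio space" concrete, the sketch below traces one constant-width log-normal curve using Mie theory via the miepython package. It is only a sketch: the refractive indices are illustrative placeholders rather than the sulfate values behind the paper's own model curves (real work would use laboratory sulfate indices, e.g., Palmer and Williams), and normalization cancels because only extinction ratios are used.

```python
import numpy as np
import miepython  # pip install miepython

def lognormal_extinction(wavelength_um, m, r_mode_um, sigma, n_quad=200):
    """Extinction cross section per particle (um^2) for a single-mode
    log-normal size distribution.

    m is the complex refractive index in the miepython convention (n - ik);
    sigma is the geometric standard deviation (1.2 ... 1.8 in the figures).
    Absolute normalization cancels in ratios, so total number is set to 1.
    """
    lnr = np.linspace(np.log(r_mode_um) - 4 * np.log(sigma),
                      np.log(r_mode_um) + 4 * np.log(sigma), n_quad)
    r = np.exp(lnr)
    w = np.exp(-0.5 * ((lnr - np.log(r_mode_um)) / np.log(sigma)) ** 2)
    w /= w.sum()                       # quadrature weights over ln r
    x = 2 * np.pi * r / wavelength_um  # size parameter
    qext, _, _, _ = miepython.mie(m, x)
    return float(np.sum(np.pi * r ** 2 * qext * w))

# Trace one constant-width line of the ratio space: a HALOE channel to
# SAGE II 1020-nm ratio versus the SAGE II 525/1020-nm ratio as the mode
# radius varies. The indices below are placeholders, not sulfate values.
m_vis, m_ir, sigma = 1.43 + 0j, complex(1.39, -0.12), 1.6
for r_mode in np.logspace(-2, 0, 25):                    # 0.01 to 1 um
    r525_1020 = (lognormal_extinction(0.525, m_vis, r_mode, sigma)
                 / lognormal_extinction(1.020, m_vis, r_mode, sigma))
    r346_1020 = (lognormal_extinction(3.46, m_ir, r_mode, sigma)
                 / lognormal_extinction(1.020, m_vis, r_mode, sigma))
    print(f"{r525_1020:6.3f}  {r346_1020:8.4f}")
```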
I have computed an aerosol extinction coefficient anomaly as a function of altitude using the single-mode log-normal model with a width of 1.6 (the dashed lines in Figs. 5 and 6) for all four HALOE aerosol channels relative to the SAGE II ratios shown in Fig. 5, and for the 2.45, 3.40 and 5.26 μm channels relative to the HALOE 3.46 μm and SAGE II 1020-nm channel ratios shown in Fig. 6. In Fig. 5, the anomaly is computed as the SAGE II 1020-nm extinction coefficient times the difference between the model prediction for the HALOE extinction coefficient values (relative to SAGE II 1020-nm extinction and based on the observed SAGE II 525 to 1020-nm extinction) and the observed HALOE extinction relative to the SAGE II 1020-nm extinction coefficient. For this analysis I have limited the altitude range to 15 to 30 km and show the median differences for 1996 through the end of the record and latitudes from 10° S to 10° N. Given the limitations of the anomaly calculation, there is some absolute uncertainty that may vary as a function of time and altitude (as the true underlying size distribution changes), so the differences shown in Fig. 7 should be considered only estimates of the potential for bias in any of the channels. Also, since I have based this analysis on the 3.46 μm channel, care must be used in interpreting the analysis below 19 km, where there is obvious degradation in this channel's performance. Despite this, I used the 3.46 μm channel as the base HALOE extinction measurement because it lacks the known NO2-based bias of the 3.40 μm channel throughout the depth of the profile, and the other channels are far more suspect. Figure 7a shows the anomalies in all four channels relative to the SAGE II-based predictions and demonstrates a consistent picture between 20 and 30 km of small anomalies (∼ ±10−6 km−1) for 3.40 and 3.46 μm but significantly larger ones at 2.45 and 5.26 μm (∼ 10−5 km−1). However, all four channels show much larger deviations from the model predictions below 15 km (shown in the inset in Fig. 7a), with values larger than 10−3 km−1 at 5.26 μm. Figure 7b and c show the anomalies based on the 3.46 μm to 1020-nm aerosol extinction ratio in an absolute and relative sense. The differences between the 3.40 and 3.46 μm extinction coefficients are on the order of 10−6 km−1, or about 10 % of the overall measurement, between 20 and 30 km. This difference is fairly consistent with the expected NO2 absorption at 3.40 μm and suggests that these two channels would be in excellent agreement in this altitude range if the effects of NO2 absorption were removed from the 3.40 μm channel. On the other hand, the channels at 2.45 and 5.26 μm show differences on the order of 0.5 to 1×10−5 km−1, which represent a 20 to 50 % anomaly at 5.26 μm and 60 to 95 % at 2.45 μm.
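The anomaly definition just given reduces to a one-line formula. A minimal sketch follows, with variable names that are mine rather than the paper's:

```python
def extinction_anomaly(k1020, k_haloe, r_model):
    """Anomaly of one HALOE channel against the log-normal model (km^-1).

    k1020   : SAGE II 1020-nm aerosol extinction coefficient
    k_haloe : observed HALOE extinction at the channel of interest
    r_model : model-predicted HALOE/1020-nm extinction ratio, read off the
              width-1.6 log-normal curve at the observed SAGE II
              525/1020-nm ratio
    Sign convention follows the text: model prediction minus observation.
    """
    return k1020 * (r_model - k_haloe / k1020)  # == k1020 * r_model - k_haloe
```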
It is possible that more complicated size distributions and more diverse compositions (e.g., ice or organics) would account for some of the apparent deficiencies observed herein; however, I believe that the bulk of the problems are HALOE data quality issues.
It is beyond the scope of this paper to fully diagnose and repair deficiencies in the HALOE aerosol extinction coefficient data. Nonetheless, given the inferences I have already drawn regarding the potential for the residuals to arise from the failure to effectively remove interfering species from the aerosol retrievals, I have made further efforts to identify potential candidates as sources for the observed data quality issues. To do this, I have computed the correlation, and its uncertainty, between the estimated aerosol extinction coefficient anomalies and the seven gas species reported by HALOE. As in Fig. 7, I have limited the analysis to 15 to 30 km, to HALOE events from 1996 to the end of the record, and to latitudes from 10° S to 10° N. It should be kept in mind that a correlation between the estimated aerosol anomalies and any gas species may imply a causal relationship between these parameters or a mutual correlation with another, perhaps unmeasured, parameter. Figure 8 shows the correlation of the 3.46 μm anomaly (computed using the SAGE II ratios as in Fig. 5) with the seven HALOE gas species. In this case, above 20 km, no consistent evidence for correlations between any gas species and the anomaly is observed, though some small but significant correlations with ozone, CH4, and HF can be noted. Below 20 km, some moderate and significant negative correlations with HCl, ozone, and H2O are also seen near 17 km, though they have mostly returned to near zero at 15 km, and so it is not clear if they have any relationship to the bias noted in the 3.46 μm channel at and below this altitude. Figure 9 shows the correlation of the 2.45 μm anomaly relative to the HALOE gas species measurements. In this case, we see a clear positive correlation between the anomaly and HF, which is measured at 2.45 μm, between 18 and 28 km, which suggests an incomplete removal of HF from the aerosol residual. In addition, significant correlations are noted with ozone between 20 and 25 km (where ozone is at a maximum), and with NO and NO2 between 23 and 28 km and below 20 km (with a change in sign). Ozone weakly absorbs at this wavelength and is not listed among the species cleared in the retrieval process (Hervig et al., 1996), and so the aerosol anomalies may plausibly be related to the failure to clear this species. On the other hand, NO and NO2 do not absorb at this wavelength, so the significant correlation may be related to another species or some other process that is correlated with NO and/or NO2. Figure 10 shows the correlation of the 3.40 μm anomaly with the HALOE gas species measurements. In this case, NO2 is positively correlated with the aerosol anomaly throughout the 15 to 20 km range, reinforcing the impact of the failure to clear NO2 at this wavelength. NO is similarly correlated, but this is most likely due to the correlation between NO and NO2 themselves. Ozone has a similar correlation, but opposite in sign, which may again be the result of the correlation of ozone and NO2 rather than a signature of residual ozone in the aerosol data, though that cannot be totally ruled out. Water vapor also shows some significant positive correlations with some fairly complex vertical structure. Since water is removed using HALOE observations, it is not clear why this correlation exists. Finally, there are some surprisingly strong correlations with HF (in particular) and HCl, although neither has appreciable absorption at this wavelength.
At this time, I do not fully understand the source of this correlation. Figure 11 shows the correlation of the 5.26 μm anomaly with the HALOE gas species measurements. Here, I find a strong correlation with water vapor below 23 km, which seems consistent with the observation of a "tape recorder"-like feature in the aerosol extinction coefficient data at this wavelength. I also find strong correlations with HF, NO and NO2 above 23 km, although none of them absorb at 5.26 μm. It is possible that the strong correlation with NO and NO2 is related to either the failure to remove, or an ineffective removal of, N2O, which has significant absorption at 5.26 μm. As with the correlations at 2.45 μm, I do not fully understand the source of the correlations exhibited by this channel.
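A minimal sketch of such a correlation-with-uncertainty calculation at one altitude is given below. The bootstrap estimate of the uncertainty is an assumption for illustration; the paper does not state how its correlation uncertainties were computed.

```python
import numpy as np
from scipy import stats

def anomaly_gas_correlation(anom, gas, n_boot=1000, seed=0):
    """Pearson correlation between a channel anomaly and one gas species
    at a single altitude, with a simple bootstrap uncertainty estimate.

    anom, gas : matched 1-D arrays over coincident HALOE events.
    A significant correlation may reflect a causal interference or a
    mutual correlation with a third, possibly unmeasured, quantity.
    """
    rng = np.random.default_rng(seed)
    ok = np.isfinite(anom) & np.isfinite(gas)
    a, g = anom[ok], gas[ok]
    r = stats.pearsonr(a, g)[0]
    # resample events with replacement to estimate the spread of r
    idx = rng.integers(0, a.size, size=(n_boot, a.size))
    boot = np.array([stats.pearsonr(a[i], g[i])[0] for i in idx])
    return r, boot.std()
```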
Application of HALOE data
The evaluation of the HALOE aerosol extinction coefficient data found that one channel, at 3.46 μm, is of sufficient quality (without further corrections) to use in an evaluation of aerosol properties above 19 km. While multiple channels would be preferred, in combination with multiple SAGE II channels the single channel adds significantly to the information contained in the ensemble and holds the potential to substantially improve estimates of surface area density (SAD) and other bulk properties. Thomason et al. (2008) showed that a notional measurement of aerosol volume substantially reduced the uncertainty inherent in visible wavelength-only measurements as well as modestly increasing the overall estimates for SAD. Following the method used in that paper, I have produced a simple model to combine SAGE II extinction coefficient measurements at 525 and 1020 nm with those at 3.46 μm from HALOE. In this approach, I use the 525 and 1020 nm extinction coefficient values to estimate a monomodal "large" mode aerosol that reproduces the extinction at both wavelengths exactly. The SAD calculated from this mode is the minimum possible SAD that is consistent with those measurements (Thomason et al., 2008). Generally, the radius of this mode is between 0.2 and 0.3 μm, and the number densities associated with this fit are on the order of 1 cm−3, much less than the nominal stratospheric value of ∼ 10 cm−3. The extinction at 3.46 μm implied by this fit is also much less than observed by HALOE and effectively demonstrates the insensitivity of visible wavelength aerosol extinction to small aerosol (< 0.1 μm). I then introduce a second monomode with a number density that brings the total number density to either 5 or 15 cm−3. The size of this mode is selected such that the extinction at 3.46 μm is reproduced, but I do not require either SAGE II extinction to remain fixed. The radii of this mode are usually less than 0.1 μm. I find the extinction computed using both modes reproduces the 1020 nm extinction within a few percent of the measured value. The impact at 525 nm is somewhat larger and is usually between 1 and 2 times the 525-nm measurement uncertainty (10 to 20 %). The impact at 525 nm could be reduced with a slightly more sophisticated model. The average of the two fits is used as the "final" SAD product, with the profiles based on 5 and 15 cm−3 representing the upper (from 5 cm−3) and lower (from 15 cm−3) bounds for SAD. Figure 12 shows the results for the tropics in March 1999 using the method of Thomason et al. (2008) (based on SAGE II alone), with an estimated range of close to a factor of 3 throughout the profile. The figure also shows the results using the method described above. These results lie within the SAGE II-only result range, but generally the mean value is about 30 % higher. Most significantly, the range of SAD values is reduced by more than a factor of 3. Obviously, this model does not (and is not intended to) produce physically realistic aerosol size distributions, but is instead intended to produce a viable range for retrieved bulk properties like SAD consistent with observed extinction values but minimally dependent on assumptions regarding the underlying size distribution.
In that regard, this result demonstrates the potential positive impact of employing a mixed visible/infrared aerosol extinction ensemble in the derivation of climate/chemically important aerosol parameters.
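A sketch of the two-mode procedure is given below, reusing lognormal_extinction() from the earlier ratio-space example. Mode widths, refractive indices and the root-bracketing intervals are illustrative assumptions, not values from the paper.

```python
import numpy as np
from scipy.optimize import brentq

KM_PER_UM2_CM3 = 1.0e-3  # N [cm^-3] * sigma [um^2] -> extinction [km^-1]

def sad_per_particle(r_mode_um, sigma):
    # Surface area per particle of a log-normal mode: 4*pi*r^2*exp(2 ln^2 sigma)
    return 4.0 * np.pi * r_mode_um ** 2 * np.exp(2.0 * np.log(sigma) ** 2)

def two_mode_sad(k525, k1020, k346, n_total, sigma=1.4,
                 m_vis=1.43 + 0j, m_ir=complex(1.39, -0.12)):
    """Combined SAGE II + HALOE SAD estimate (um^2 cm^-3), as a sketch.

    Step 1: find the 'large' mode radius whose 525/1020-nm extinction
    ratio matches the measurements, then scale its number density to
    reproduce the absolute 1020-nm extinction. Step 2: add a small mode
    that brings the total number density to n_total (5 or 15 cm^-3 in
    the text) and reproduces the remaining 3.46-um extinction.
    Bracketing intervals assume typical stratospheric values.
    """
    ratio = lambda r: (lognormal_extinction(0.525, m_vis, r, sigma)
                       / lognormal_extinction(1.020, m_vis, r, sigma))
    r_big = brentq(lambda r: ratio(r) - k525 / k1020, 0.05, 1.0)   # um
    n_big = k1020 / (KM_PER_UM2_CM3
                     * lognormal_extinction(1.020, m_vis, r_big, sigma))
    k346_left = k346 - n_big * KM_PER_UM2_CM3 * lognormal_extinction(
        3.46, m_ir, r_big, sigma)
    n_small = n_total - n_big
    r_small = brentq(lambda r: n_small * KM_PER_UM2_CM3
                     * lognormal_extinction(3.46, m_ir, r, sigma) - k346_left,
                     0.005, 0.2)                       # um, typically < 0.1
    return (n_big * sad_per_particle(r_big, sigma)
            + n_small * sad_per_particle(r_small, sigma))
```

Averaging two_mode_sad(..., n_total=5) and two_mode_sad(..., n_total=15) then corresponds to the "final" SAD product described above, with the two individual results bounding the estimate.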
Conclusions and recommendations
In summary, based on the analyses above for the time period after 1996, I have concluded that HALOE's 3.46 μm channel is of good quality above 19 km and suitable for scientific applications above that altitude. However, I find that it is of increasingly suspect quality at lower altitudes and should not be used below 17 km under any circumstances. The 3.40 μm channel is biased by about 10 % throughout the stratosphere due to the failure to clear NO2, but otherwise appears to be a high quality product down to 15 km. Its behavior below 15 km is more cause for concern, and the data there should be used with caution. Finally, I find that both the 2.45 and 5.26 μm aerosol extinction coefficient measurements are clearly biased and should not be used for scientific applications after the strongest parts of the Pinatubo period. It is clear that the 3.40 μm aerosol extinction coefficient measurements can be improved through the inclusion of an NO2 correction and could, in fact, end up as the highest quality overall HALOE aerosol extinction coefficient measurement. This could be done using HALOE's NO2 product in the production of a future version, or using a relatively simple model for NO2 absorption (e.g., MODTRAN) as a post-version correction. On the other hand, fully understanding the deficiencies of the 2.45 and 5.26 μm channels probably requires access to the HALOE processing code and supporting data sets, and more information regarding these than I have been able to acquire at this time. Nonetheless, careful examination of the UAPD would seem like an important activity. The unique characteristics of the 2.45 μm aerosol kernel (size sensitivity with some absorption) make it potentially very valuable in inferring the underlying characteristics of the aerosol size distribution. Improvements to this channel should be considered a high priority. On the other hand, since the 5.26 μm aerosol extinction coefficient measurements do not offer information that is particularly unique relative to other measurements, future improvements to this channel do not have the same value. Finally, a simple model demonstrating the power of mixed visible/infrared aerosol extinction coefficient ensembles for the retrieval of bulk aerosol properties shows that a combined HALOE/SAGE II aerosol climatology is feasible and may represent a substantial improvement over independently derived data sets. | 2017-08-25T22:09:17.004Z | 2012-08-01T00:00:00.000 | {
"year": 2012,
"sha1": "dad195f4f58caaf92bfae80dbaaedf431882fdc5",
"oa_license": "CCBY",
"oa_url": "https://acp.copernicus.org/articles/12/8177/2012/acp-12-8177-2012.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "3640aa43d7914f979b348c1ee204f49f2dac05e8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
21651089 | pes2o/s2orc | v3-fos-license | Portal ductopathy: Clinical importance and nomenclature
Non-cirrhotic portal hypertension (PHT) accounts for about 20% of all PHT cases, with portal vein thrombosis (PVT) resulting in cavernous transformation being the most common cause. All known complications of PHT may be encountered in patients with chronic PVT. However, the effect of this entity on the biliary tree and pancreatic duct has not yet been fully established. Additionally, a dispute remains regarding the nomenclature of common bile duct abnormalities which occur as a result of chronic PVT. Although many clinical reports have focused on biliary abnormalities, only a few have evaluated both the biliary and pancreatic ductal systems. In this review, the relevant literature evaluating the effect of PVT on both ductal systems is discussed, and findings are considered with reference to the results of a prominent center in Turkey, from which the term "portal ductopathy" has been put forth to replace "portal biliopathy".
INTRODUCTION
Although liver cirrhosis is a major cause of portal hypertension (PHT), in 20% of cases PHT is classified as non-cirrhotic, occurring as a result of portal vein thrombosis (PVT), congenital hepatic fibrosis, idiopathic PHT and other rare disorders. The portal vein, which is 12 mm in diameter, carries blood from the intra-abdominal organs to the liver at a rate of approximately 1200 mL/min. Thrombotic occlusion of the portal vein, whatever the cause, is rapidly followed by compensatory mechanisms such as attempts at re-canalization and the development of new collaterals around the occluded portal vein, bile ducts and gall bladder, aimed at re-establishing portal blood flow to the liver. The portal vein is eventually replaced by a "cavernoma" after what is now known as portal vein cavernous transformation (PVCT). Splenomegaly, esophageal and gastric varices, portal gastropathy and, rarely, ascites are well recognized and extensively studied complications of PHT due to PVCT. However, the effects of PVCT on the biliary tree and pancreatic duct are yet to be unequivocally identified. Furthermore, a dispute remains regarding the nomenclature of common bile duct (CBD) abnormalities which occur as a result of PVCT. To date, many of the published case series have described biliary abnormalities resulting from PVT, but only a few have focused on both duct systems, the biliary and the pancreatic [1,2]. In a prospective study published in 1992, abnormalities of the biliary tree in patients with PVCT which resulted in an appearance mimicking cholangiocellular carcinoma on endoscopic retrograde cholangiopancreatography (ERCP) were described [3]. Meanwhile, the descriptive terms "pseudosclerosing cholangitis" [4] and "portal biliopathy" have also been introduced into the literature [5]. To date, the issue of the proper nomenclature for this phenomenon has not been sufficiently discussed, and there is a dire need for clarification.
DEFINITION AND NOMENCLATURE
Since the introduction of the term "pseudocholangiocellular carcinoma sign" to describe radiologic abnormalities mimicking cholangiocarcinoma caused by the compression of bile ducts by the thrombosed portal vein and its collaterals [3], several different terms have been coined, including "portal biliopathy" [6], "cholangiopathy associated with portal hypertension" [7], and "portal cavernoma-associated cholangiopathy" [8]. Finally, Dhiman et al [9] proposed the term "portal hypertensive biliopathy" to refer to abnormalities of the biliary tree, cystic duct and gall bladder in patients with PHT.
It would seem that "pseudosclerosing cholangitis" and "portal biliopathy" do not appropriately represent or define the abnormalities of the biliary system which occur as a result of PHT. Biliary strictures in patients with PVCT are smooth rather than irregular, making the term "pseudosclerosing cholangitis" an erroneous description [4]. "Portal biliopathy" is also a misnomer, as it implies an abnormal content of bile, which has never been reported in any of the studies describing abnormalities of the biliary tree. Although in cases of PVCT jaundice is a common clinical finding, bile composition is considered to be normal. Additionally, the term "biliopathy" suggests that the pathology is limited to the biliary tree, whereas PVCT has been shown to also affect the pancreatic ducts in most patients. Moreover, ERCP findings of cholangiocarcinomas rarely resemble those associated with PVCT, which also renders the term "pseudo-cholangiocarcinoma sign" inadequate.
The pancreatic ducts of patients with PVCT have been thoroughly evaluated at Hacettepe University for the past two decades (since 1992), where 78 patients with PVCT have undergone ERCP procedures. Seventy of the 78 (90%) patients had involvement of the biliary tree, 54 of whom (70%) had both biliary and pancreatic duct involvement. Considering that PVCT results in "morphological" abnormalities in both ductal systems, it would be expected that the nomenclature reflect the ductal changes observed in these patients instead of misleading physicians into associating this entity with changes in biliary content. The term "portal ductopathy" may provide a more satisfactory means of depicting the abnormalities seen in the biliary and pancreatic duct systems in patients with PVT.
PATHOGENESIS OF PORTAL DUCTOPATHY
PVT was first described by Balfour et al [10] in 1964. Re-canalization of the thrombosed portal vein at the hepatic hilum leads to this clinical and radiological condition. Ohnishi et al [11] demonstrated that, after complete obstruction, the "venous rescue" begins immediately and is completed in about 5 wk. The newly formed small collaterals are mostly observed around the intrahepatic and extrahepatic biliary tract, the cystic duct and the gall bladder. There are two venous plexuses of the bile ducts and gall bladder: the so-called epicholedochal venous plexus of Saint [12] and the paracholedochal veins of Petren [13]. Saint's plexus, which forms a fine reticular web located on the outer surface of the CBD and hepatic ducts, becomes dilated and causes fine irregularities in the biliary tract [12-14]. Petren's plexus, on the other hand, runs parallel to the CBD and is connected to the gastric, pancreaticoduodenal and portal veins and to the liver directly. When the portal vein is occluded by a thrombus or tumor, both plexuses become dilated and cause extrinsic compression of the CBD. External compression and protrusion of these newly formed vessels on the biliary tree have been shown to be responsible for portal ductopathy in the biliary tree. However, the reasons behind the changes to the pancreatic duct are yet to be elucidated. Extension of newly formed vessels towards the pancreas may be implicated, although this remains largely speculative.
In spite of the well-established role of newly developed vessels around the biliary system in the development of biliary abnormalities (more appropriately called portal ductopathy), most studies have overlooked other important factors. Ischemia, fibrosis, direct compression by the thrombosed vessels and excessive connective tissue formation around the biliary system have a major impact on the formation of biliary abnormalities. These factors contribute to the formation of a "frozen portal hilum", so that even if the PHT is relieved by any effective means, the cholestasis usually does not improve [15-17]. The entire process mimics the reaction of wound healing, in which neovascularization, collagen formation and tissue turnover occur and recur over a long period of time. The duration of PVT does not seem to have an effect on the extent of the radiological appearance of the ductal abnormalities. Biliary strictures leading to complete biliary obstruction may be caused by ischemia or by encasement within a solid tumor-like cavernoma [18]. The mechanism of ischemia causing bile duct changes in patients with PVCT remains unexplained. Venous damage due to portal thrombosis results in ischemic necrosis of bile ducts by compressing the vascular supply at the level of the capillaries and arterioles [9], resulting in biliary strictures and cholangiectasis [19]. Segmental strictures and dilatations seen on ERCP may involve both intra- and extra-hepatic bile ducts, morphologically very similar to those seen in ischemic cholangiopathy after liver transplantation [20].
In a prospective study [21], the biliary tree, either intra- or extra-hepatic, was found to be affected in almost all patients with known PVCT. Additionally, pancreatic duct abnormalities were apparent in a large proportion of this patient group. Use of the term "portal double ductopathy" was suggested to describe involvement of both systems. Involvement of either of the ductal systems individually would be referred to as either "portal biliary ductopathy" or "portal pancreatic ductopathy".
BILIARY DUCTOPATHY
It is well known that PHT, whatever its etiology, results in many complications such as ascites, portal gastropathy, esophageal and gastric varices, hypersplenism and severe coagulopathy, which pose a great challenge for clinicians in daily medical practice. Although PVT has been associated with many biliary abnormalities, the majority of cases are asymptomatic and only a small percentage of this patient population develop signs and symptoms of biliary obstruction, presenting with jaundice, pruritus, fever and abdominal pain.
Cholestasis, one of the main clinical features of portal biliary ductopathy, may be explained by the mass effect of enlarged collaterals or a chronic thrombus compressing the intra- and/or extra-hepatic biliary lumen. Ensuing ischemia and fibrosis may also be implicated. In some cases, compression may be so severe as to result in secondary biliary cirrhosis due to longstanding severe cholestasis. Fortunately, this complication is rare; at Hacettepe University it has been encountered in only 2 cases, both of whom eventually underwent liver transplantation. Both patients are still under follow-up and are healthy, productive members of society. Mild jaundice, seen in these cases because of incomplete obstruction of the CBD, is not uncommon, usually leading to unnecessary investigative tests into the cause of the direct hyperbilirubinemia. Cholestatic enzymes are generally elevated in parallel with bilirubin levels.
PVCT has been reported to result in an increase in the frequency of biliary stone disease and related complications. Regardless of age, sex and underlying etiology, an association between PVCT and an increased incidence of biliary tree diseases has consistently been reported in the literature [4,6,7,15,22]. In such cases, direct bilirubin levels are quite elevated. Stone formation is facilitated by the chronic but incomplete obstruction caused by the above-mentioned factors. It is necessary to stress that incomplete obstruction at multiple levels of the intra- and extra-hepatic biliary system may exist simultaneously. The occurrence of fever and abdominal pain during the follow-up of a patient with biliary stones associated with PVCT should raise a suspicion of cholangitis.
According to Dhiman et al [9], choledochal varices were observed in 7.5% of cases with PVCT. It is important to note that these varices may bleed severely, thus complicating the clinical picture. Additionally, endoscopic procedures such as stenting and stone extraction may also result in bleeding from these otherwise silent varices [23-25]. Endoscopic ultrasonography (EUS) with Doppler is a particularly useful technique for diagnosing bile duct varices and differentiating them from bile duct stones. Great care should be taken when undertaking interventional procedures such as stone extraction and sphincterotomy, as even gentle balloon dilatation may lead to bleeding from these small varices. Of note, such patients may already have thrombocytopenia and some degree of coagulopathy because of splenomegaly and tissue congestion caused by PHT. Liver function tests are typically normal in patients with PVT in the absence of an underlying disorder such as polycythemia vera or Behçet's disease. However, according to unpublished data from Hacettepe University, most patients have prolonged prothrombin times compared to healthy controls without having any other signs of compromised liver function, an observation which as yet remains unexplained.
PANCREATIC DUCTOPATHY
Chronic congestion due to PHT affects almost all intra-abdominal organs, including the pancreas. The effects of PVCT on the pancreatic parenchyma and duct have not been fully established. In the only study to investigate pancreatic exocrine function in this patient group, Egesel et al [21] demonstrated that the pancreatic ducts of PVCT patients tended to be smaller than those of normal controls. Additionally, in 15 of the 18 patients with chronic PVT who had pancreatic atrophy, they found that the urinary excretion of para-aminobenzoic acid was significantly less than in control subjects. Moreover, the authors attributed some nonspecific symptoms in these patients, such as abdominal discomfort, abdominal pain and anorexia, to latent pancreatic insufficiency shown by the bentiromide test. As the pancreas has a tremendous reserve capacity to meet the body's needs, symptoms manifest only after a latent period, requiring significant pancreatic parenchymal pathology. More sensitive tests are needed to clarify the extent of exocrine and endocrine dysfunction of the pancreas associated with this disorder.
Three other studies have demonstrated significant changes in the pancreatic ducts of patients with PVCT [1,2,21]. In a report published in 2008 [1], 31 of 36 (86.1%) patients with PVT had luminal narrowing throughout the pancreatic duct, local atrophy at the head of the pancreas with moderate dilatation behind the narrowed segment, and other unclassified pancreatic duct abnormalities. Since 2008, 22 more patients with PVCT due to portal thrombosis have been evaluated at Hacettepe University, 16 of whom (72%) had pancreatic duct abnormalities demonstrated by ERCP. In total, 78 patients with PVCT have been seen since 1992, all of whom underwent ERCP as part of a work-up for unexplained elevations in ALP, GGT and direct bilirubin levels. Approximately 70% of these patients had pancreatic abnormalities. Although the clinical significance of these ductal abnormalities has not been well delineated, as stated above, partial pancreatic insufficiency and some other patient complaints such as abdominal pain and dyspepsia may be explained by chronic congestion due to PHT. It is possible that PVT contributes to more severe pancreatic congestion when compared to cirrhotic causes of PHT, as extension of the thrombus to the splenic vein may hinder pancreatic venous drainage. As a result, the pancreatic duct and parenchyma may be more significantly affected. Further studies are needed to investigate this phenomenon.
Biochemical tests
Characteristically, the majority of patients with PVCT have a predominantly cholestatic pattern of elevated liver enzymes. This biochemical finding reflects the biliary duct changes secondary to PVCT. Despite the presence of PHT manifesting as massive splenomegaly and large esophageal varices, serum albumin levels are usually within normal limits unless massive bleeding occurs. Mild ALP and GGT elevations may occur at any time during the follow-up period of such patients. Clinicians must bear in mind that portal ductopathy may be responsible for such mild elevations, in order to avoid further unnecessary testing for a cause of the cholestatic picture. In the presence of biliary strictures or stones, more marked elevations in cholestatic enzymes and bilirubin levels may be observed.
Liver biopsy
Although not part of the diagnostic work-up for portal ductopathy, a liver biopsy is essential in establishing whether PVT is due to cirrhotic or non-cirrhotic causes. It is necessary to perform this procedure to rule out the presence of liver cirrhosis, particularly in patients with atrophic livers with heterogeneous parenchyma on sonographic examination. Usually liver biopsies show normal or nearly normal histology, sometimes with signs of portal vein dilatation in the portal tract, or segmental luminal narrowing of bile ducts with dilated small intrahepatic bile ducts. In congenital hepatic fibrosis, where histopathological examination is vital for making a diagnosis, portal vein abnormalities mimicking PVCT are relatively more common [26,27] .
Ultrasonography
Ultrasonography has traditionally been the most widely utilized modality for demonstrating biliary duct abnormalities in patients with chronic PVT. However, this technique has many shortcomings. For example, the presence of a high level of echoes in the porta hepatis may obscure the biliary system. Similarly, the CBD may be hidden behind multiple collaterals presenting as anechoic tubular and fibrotic structures. Color Doppler examination may help confirm the presence of multiple tortuous structures in the porta hepatis of patients with PVCT; these tubular structures may not initially be correctly identified as blood vessels on grayscale imaging. Real-time and Doppler sonographic findings compatible with PVCT may prompt further evaluation by splenoportography, either with digital subtraction angiography or with computed tomography, to confirm the diagnosis.
ERCP
This modality has established itself as one of the most important procedures for diagnosing and defining the extent of involvement of the intrahepatic and/or extrahepatic biliary tree. As PVT may occur in either or both of the intra- and extra-hepatic portions of the portal vein, any part of the biliary system, including the gall bladder, may be affected by this thrombotic event. The development of cavernous changes located at the portal hilum may still affect the left and right intrahepatic biliary channels, usually manifesting as dilatations. Changes described so far include undulation of the CBD along with narrowing and irregularity of various lengths and degrees, sometimes leading to nearly complete obstruction (Figure 1), as well as segmental upstream and asymmetrical dilatation.
In contrast to obstruction of the CBD where both intrahepatic and extrahepatic bile ducts are proportionately dilated above the level of the obstruction, the most consistent radiologic finding that has been encountered at Hacettepe University is that the CBD tended to be narrower than the intrahepatic biliary ducts. In other words, the left or right hepatic ducts were usually dilated either alone or in combination with a dilated common hepatic duct.
Irregularities on the gallbladder wall may be seen, most probably because of intraluminal varices in a few cases. These varices may also be present in the CBD, manifesting themselves as filling defects, although rarely leading to bleeding, or so-called hemobilia.
There are mainly three types of pancreatic duct abnormalities: (1) Diffuse pancreatic duct abnormality, in which the whole duct is narrowed with kinking, distortion of the normal anatomic pathway, a thumb-printing type of compression and local luminal irregularities; the whole pancreatic duct is atrophic (Figure 1); (2) Proximal pancreatic duct abnormality, in which the narrowed part of the duct is limited to the level of the head of the pancreas, in contrast to the normal pancreatic duct, in which the widest caliber is at the head of the pancreas; interestingly, the distal segment of these ducts appears relatively dilated compared to the head region; and (3) Unclassified abnormalities, in which multiple small ducts connect to each other (different from pancreas divisum), with indentation, displacement and angulation.
Splenoportography with digital subtraction angiography
Although splenoportography is invasive, with several reported complications, we have been utilizing this procedure in our hospital for a long time for the diagnosis of PVCT without the occurrence of any adverse effects. This technique is best for providing a clear image of the PVCT as well as other collaterals; however, it does not evaluate the biliary system. Nowadays, the use of CT portography as a less invasive imaging modality may be preferred.
Computed tomography with contrast
This procedure, which helps to diagnose portal vein obstruction, is particularly useful for identifying the presence of cavernous transformation, as well as for evaluating the extent of bile duct abnormalities. Recently, CT portography has been introduced as an alternative to conventional angiographic splenoportography.
Magnetic resonance cholangiography with magnetic resonance portography
Not only is this technique non-invasive, but it is also more informative in that it allows for clear visualization of portal vein collaterals when confirming the presence of a cavernoma. However, in some cases ERCP is superior with regards to evaluation of the bile duct system. With the advent of high-resolution magnetic resonance (MR), MR cholangiography (MRCP) may eventually replace ERCP as the modality of choice for examining abnormalities of the intra-hepatic bile ducts, as it offers the advantage of being less invasive with fewer associated complications. If available, MRCP with MR portographic evaluation should follow real time or Doppler ultrasonography when investigating the bile duct system and portal vein.
EUS with Doppler
After conventional ultrasonography and Doppler ultrasound, the advent of EUS has been particularly useful in identifying CBD varices and/or bile duct stones, both of which may be the primary cause of biliary obstruction in patients with PVCT [28]. Recognizing stones and differentiating them from varices is important, as any intervention in the form of balloon dilatation or stone extraction may result in severe bleeding. In patients in whom an obstructive clinical picture is predominant, EUS with Doppler should be performed to properly identify the cause of the obstruction, whether it is due to bile duct varices, stones, strictures or a tumor. In this respect, EUS is a quite useful, even indispensable, procedure.
TREATMENT
Most patients with portal ductopathy who are asymptomatic do not require any treatment. When symptoms due to stone formation, obstructive jaundice and cholangitis occur, treatment should be adjusted individually according to patient characteristics. Naturally, the occurrence of PVT warrants investigation into the cause, whether there is an underlying myeloproliferative disorder, deficiency of anticoagulant proteins, or an autoimmune disease. In the presence of an underlying thrombophilic condition, anticoagulant treatment may be indicated. On a different note, a proportion of patients may first present with variceal bleeding from the upper GI tract. Although the management of variceal bleeding is beyond the scope of this paper, it is important to stress that bleeding from gastric and esophageal varices may prove very challenging.
As mentioned before, PVCT results in a pathological condition involving the ductular organs: the CBD and the pancreatic duct. Since bile composition is not an issue, all efforts should instead focus on the management of strictures, stones or sludge in the biliary tree. In such cases, endoscopic papillotomy and, if indicated, stone extraction and balloon dilatation are the treatment modalities of choice. In some cases, the concomitant presence of a severe biliary stricture and biliary stones may be observed [14]. This poses a challenge for the endoscopist: after performing a sphincterotomy followed by stricture dilatation, great care should be taken while extracting any stone, since underlying thrombocytopenia due to hypersplenism and the presence of small or large varices in the peri-ampullary area and inside the CBD create a risk of severe bleeding, further complicating an already complex and delicate clinical condition. In fact, some cases may even require the use of a mechanical lithotripter to crush large stones into small pieces.
Liver transplantation should be reserved for patients who develop secondary biliary cirrhosis or severe liver failure. At Hacettepe University, only 2 female patients developed secondary biliary cirrhosis due to biliary strictures and stone formation, both of whom underwent successful liver transplantation. To date, both patients are under follow-up with no significant restrictions in their daily activities.
CONCLUSION
Several disorders result in thrombosis of the portal vein, which eventually undergoes cavernous transformation. The intra- and extra-hepatic bile ducts are affected by these changes in almost all cases, with relatively less frequent involvement of the pancreatic duct. Although several terms such as "portal biliopathy" and "pseudo-cholangiocarcinoma sign" have been postulated to describe these changes, to allay any doubts regarding abnormalities in bile composition, use of the term "portal ductopathy" may be more appropriate. Involvement of both duct systems may be further described as "portal double ductopathy". The clinical implications of portal ductopathy consist of cholestasis, stone formation, and consequently cholangitis. | 2018-04-03T06:19:37.874Z | 2011-03-21T00:00:00.000 | {
"year": 2011,
"sha1": "2c86fede39842e427fa2fb877feb680e57fda9bc",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v17.i11.1410",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "5515a6456997884353bcfc58c6439f016b584235",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258014080 | pes2o/s2orc | v3-fos-license | Research Perspective in Flexible Hand Exoskeleton
The purpose of this review is to make it more convenient for the public to use and adapt to hand exoskeletons, especially to promote flexible hand exoskeletons, and to put forward some of my own views. In recent years, flexible exoskeleton human-machine intelligent systems have become a new research hotspot in the fields of robotics, electromechanical engineering, automatic control, bioengineering and artificial intelligence, and have been widely used in scientific research, industrial production, space and deep-sea exploration, and entertainment, and are gradually being adopted in sports rehabilitation and daily life. Flexible exoskeletons are most commonly used in the medical and biological fields. The human-machine intelligent technology of a flexible exoskeleton takes human-machine integration as its core, so that the advantages of people and intelligent machines can be fully utilized: through organic human-machine coupling, and perception and decision-making at the execution level, the performance of the system is enhanced. Teleoperated exoskeletons and augmented exoskeletons are two important research directions for flexible exoskeleton human-machine intelligent systems.
Introduction
In recent years, the development prospects of exoskeletons have become a very hot topic. In view of the inconvenience of post-stroke rehabilitation for Chinese patients, a new exoskeleton manipulator was proposed based on an analysis of the biological characteristics of the human hand [1]. It is used for the postoperative rehabilitation of traumatized fingers. To assist patients who have lost hand motor function in grasping daily necessities, a flexible exoskeleton robot system based on cable tension feedback has been developed, which enables fingertip grasping with stable control of the output force [2]. The flexible hand exoskeleton uses materials that conform more closely to the human body, improving portability and safety. The safety of a wearable flexible exoskeleton human-machine intelligent system depends mainly on the mechanical structure design of the system, the selection and placement of components, the design of protective measures, and the safety control strategies. Many paraplegics are still very dependent on caregivers in daily living. For those with paraplegia, even if an orthosis is used to enhance walking function, they still require a lot of baseline strength and a huge energy expenditure. As a result, movement options for people with complete paraplegia have traditionally been confined to wheelchairs, and although wheelchairs offer improved independence, wheelchair users still face difficulties with access and mobility. At the same time, exoskeleton hand robots have been rapidly and extensively developed for the recovery of sensorimotor function after central nervous system (CNS) injury. Many of these innovations are technology-driven, restricting their use and effect in clinical settings. The concept of a rehabilitation exoskeleton hand robot demands neurophysiological insight into normal and impaired sensorimotor function, which requires multidisciplinary cooperation and background knowledge. The rehabilitation of sensorimotor function after CNS injury depends on the exploitation of neuroplasticity and emphasizes the recovery of the movements required for independence. This demands physiological limb muscle activation, which can be achieved by functional hand/arm and leg motor exercise and the activation of suitable surface sensory receptors. These considerations have led to the development of innovative exoskeleton hand machines with high-level interactive control programs, as well as the application of embedded sensing to continuously monitor and adapt support to the functional state of the patient, but many challenges remain. To have a positive effect on functional outcomes, recovery methods should be based on neurophysiological and clinical insight, keeping in mind that functional rehabilitation is limited. Therefore, the design of exoskeleton hand machines demands a combination of customized engineering and neurophysiological knowledge. When used correctly, machine-assisted therapy can offer more advantages than traditional methods, including a standardized training environment, adaptive support, and the capability to increase the intensity and dose of therapy while decreasing the physical burden on the therapist. Therefore, exoskeleton hand robots intended to complement traditional clinical therapy and enable continuous therapy have great potential, aided by the use of simple equipment at home.
There are also many types of flexible hand exoskeleton. The first is the pneumatic soft hand exoskeleton, exemplified by a new pneumatic soft exoskeleton glove proposed by Harvard University that produces dexterous movements of the finger joints with hardly any restriction on hand movement. Four pneumatic artificial muscles are placed on every finger to form two antagonistic pairs similar to human anatomy, enabling diverse postural control over a single joint [3-5]. Its advantages are the use of soft materials, good portability and good wearing comfort; its disadvantage is a lower driving force. The second is a tendon-driven hand exoskeleton designed at Shenyang Automation. This hand exoskeleton is a soft wearable hand robot, with a glove made entirely of polymer material, operated by a tendon-driven actuation device for spinal cord injury (SCI). EGP II can restore the capability of SCI patients to grasp and hold the items they use in their daily routine. The glove design makes it compact and expands the range of hand sizes that can be fitted. A passive thumb structure was developed to provide thumb opposition for an enhanced grip. Its advantages are that it adopts a tension drive with a compact structure and good portability; its disadvantage is that tendons tend to become entangled with objects and cause discomfort to the hand.
Clinical Trials
The Toronto Institute for Rehabilitation conducted an experiment in which researchers tested the hand function of nine participants with spinal cord injuries to evaluate the performance of a hand robot. Since spinal cord injury is a devastating condition that greatly impairs motor function of the arm and hand, assistive devices, both active and passive, are becoming increasingly common to correct and improve lost hand strength and dexterity. Soft robotics is a new field that combines the working principles of robotics with soft materials to produce a new kind of active assistive equipment. Soft robotic aids enable human-robot interaction through compliance and lightweight construction. The goal of this research was to show that a fabric-based soft exoskeleton robotic glove can help participants affected by spinal cord injury effectively manipulate and use objects they need in their daily lives. The test was performed twice for every participant: the first time without the assistive glove, to provide baseline data, and the second time with it. Object-manipulation subtests were assessed using linear mixed models, including interaction effects and covariates such as time since injury. Lift-force measurements were independently assessed using the Wilcoxon signed-rank test [6]. The soft robotic glove improved object manipulation in ADL tasks: differences in average scores between the baseline and assisted conditions were significant across all participants and all manipulations, suggesting that the glove substantially enhances hand function during ADL tasks. In addition, the lift force when wearing the assistive flexible robotic glove also increased, further evidence of the usefulness of the flexible hand exoskeleton in assisting hand function. The results of this research project confirm that the fabric-based soft exoskeleton hand robot can be used as an effective rehabilitation device to restore hand function in patients with upper-extremity paralysis following spinal cord injury, and the same holds for stroke patients.
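To make the paired comparison concrete, the following is a minimal sketch, not the study's actual analysis or data, of a baseline-versus-assisted lift-force comparison using the Wilcoxon signed-rank test; the nine force values are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired lift-force measurements (N) for nine participants, first without
# and then with the assistive glove. Values are invented for illustration.
baseline = np.array([2.1, 1.8, 3.0, 2.4, 1.5, 2.9, 2.2, 1.9, 2.6])
assisted = np.array([3.4, 2.9, 3.8, 3.5, 2.6, 3.9, 3.1, 3.0, 3.7])

stat, p = wilcoxon(assisted, baseline)   # nonparametric paired test
print(f"W = {stat}, p = {p:.4f}")        # small p -> assistance increases lift
```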
In considering how to assist patients with loss of hand motor function to grasp and use daily objects, researchers have also developed a flexible exoskeleton robot system based on wire tension feedback, whose stable control is embodied in the gripping force of the fingertips. The researchers describe the structural design of the flexible exoskeleton glove and its control strategy. By establishing a static mechanical model of the finger, the cable tension at the sheath (lasso) outlet was calculated to obtain the contact force between the fingertip and the object. To address friction loss during cable transmission, the cumulative bending angle of the cable sheath is physically limited to the range 0° to 90°, and the friction loss is compensated by a median-compensation method. To verify the practical effect of the flexible exoskeleton glove, grasping experiments were carried out with patients who had lost hand motor function [7]. The experimental results show that the flexible hand exoskeleton can assist such patients to complete grasping tasks in daily life as needed.
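The paper does not give the tension model itself; the sketch below illustrates one plausible form of such friction compensation, assuming a classic capstan (Euler-Eytelwein) loss along the cable sheath and a simple static moment balance at the fingertip. All parameter names and values (MU, R_PULLEY, R_CONTACT) are hypothetical.

```python
import numpy as np

MU = 0.15          # assumed cable/sheath friction coefficient
R_PULLEY = 0.008   # assumed effective joint pulley radius (m)
R_CONTACT = 0.015  # assumed fingertip moment arm (m)

def tension_at_finger(t_motor: float, bend_angle_deg: float) -> float:
    """Tension after sheath friction loss over the cumulative bend angle.

    Limiting the bend angle to 0-90 deg (as in the study) keeps the
    exponential capstan loss factor close to 1 and easy to compensate.
    """
    theta = np.deg2rad(np.clip(bend_angle_deg, 0.0, 90.0))
    return t_motor * np.exp(-MU * theta)

def fingertip_force(t_motor: float, bend_angle_deg: float) -> float:
    """Static moment balance: cable torque at the joint -> contact force."""
    t_out = tension_at_finger(t_motor, bend_angle_deg)
    return t_out * R_PULLEY / R_CONTACT

def motor_tension_for_force(f_target: float, bend_angle_deg: float) -> float:
    """Feed-forward friction compensation: invert the capstan loss."""
    theta = np.deg2rad(np.clip(bend_angle_deg, 0.0, 90.0))
    t_out = f_target * R_CONTACT / R_PULLEY
    return t_out * np.exp(MU * theta)    # command extra tension upstream

if __name__ == "__main__":
    print(fingertip_force(20.0, 60.0))          # grip force from 20 N cable
    print(motor_tension_for_force(2.0, 60.0))   # cable tension needed for 2 N
```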
Application Advantages of Flexible Hand Exoskeleton
A problem that frequently arises in this field is that existing and commercially available robotic exoskeletons provide technical rehabilitation mainly of the shoulder and elbow, which greatly limits the clinician's ability to deliver personalized, effective rehabilitation of hand function. Scientists have released new software that lets FES-assisted grasping be integrated with the ArmeoSpring (Hocoma AG). The system uses a man-in-the-loop control method, in which surface EMG signals from proximal muscles are used to trigger and modulate multi-channel FES applied to distal muscles, allowing the patient to voluntarily induce and shape the movement trajectories and movement patterns of the hand. Integrating voluntarily commanded FES with arm-weight compensation permits early application of FES-assisted treatment, enhancing its function and extending ArmeoSpring's training capabilities. In daily life, wearable robots can help people with sensorimotor disorders, or support industrial workers in manual-labor tasks. In these settings, low mass and compact devices are key factors for device acceptance. Remote Actuation Systems (RAS) have become a prevalent way for wearable machines to reduce the inertia perceived by the wearer and improve usability. Different RAS have been proposed in the literature to suit a broad range of applications and devices. The push to use wearable robotics outside the laboratory, in clinics, home settings or industry, shifts the requirements on RAS: high endurance, ergonomics and easy maintenance become increasingly important. However, although these factors drive end-user abandonment of devices, they are rarely considered and assessed in research reports. In this work, the existing RAS approaches for wearable assistive technology are summarized in a literature review and their merits and drawbacks compared, focusing on concrete assessment criteria for out-of-lab use, so as to provide guidance for the choice of RAS. Based on the insights obtained, the authors present the development, optimization and evaluation of a cable-based RAS for out-of-lab applications of wearable soft hand exoskeletons. The demonstrated RAS has complete wear resistance, high endurance, high efficiency and an attractive design while meeting ergonomic requirements such as low mass and high wearing comfort. This work aims to support the transition of RAS for wearable robots from controlled laboratory conditions to out-of-laboratory applications [8].
On the other hand, inspired by the flexible, curling properties of octopus tentacles and the external actuation of traditional exoskeletons, a novel adaptive underactuated finger mechanism has been proposed, which its designers call the OS finger. Similar to an octopus tentacle, an OS finger consists of fluid-driven artificial muscles that pass through all joints, eight serially articulated joints, and variable-stiffness components. The variable-force component consists mainly of a spring and an elastic rubber membrane, coordinated with a layer of rubber material on the finger surface to stabilize the grip. OS fingers can perform different grasping modes according to the shape and size of the grasped object, and grasp the object in a gentle and snug way. They combine the rigid grip of traditional fingers with the form-fitting grip of flexible fingers. Kinematic analysis and experimental results show that an OS robot hand using four OS fingers can perform precise pinching, self-adaptive grasping and strong enveloping grasps, with grasping forces that can vary freely over a wide range [9,10]. The OS hand has the advantages of strong adaptability, various grasping configurations and a wide range of grasping forces.
Research on exoskeleton arm rehabilitation training robots has been widely recognized in the field of rehabilitation. Compared with traditional rehabilitation methods, rehabilitation robots improve the training outcomes of hemiplegic patients; combined with virtual reality technology, the training effect is remarkable, and the rehabilitation process is becoming programmatic, digital and convenient. At the same time, assessment plays an important role in the rehabilitation process. Based on the first generation of exoskeleton upper-limb rehabilitation robots and feedback from clinical experiments, a second generation was designed, together with matching virtual reality game software for rehabilitation training and a virtual reality motion-control evaluation program. For rehabilitation assessment, in addition to the virtual reality motion evaluation software, the effect of training was evaluated using surface EMG signals. First, the state of upper-limb rehabilitation robot systems at home and abroad was thoroughly reviewed, and the principles of hemiplegic movement disorders caused by stroke and their rehabilitation were studied in depth, including brain remodeling theory, motor relearning theory and other basic medical theories. The assessment of motor function and activities of daily living, functional electrical stimulation theory and surface EMG signal processing were also studied. Second, optimization of the exoskeleton structure of the upper-limb rehabilitation robot should ensure that the mechanical structure is safer, more flexible and more scientific. Not only was the weight of the robot reduced, but the overall structure of the support and brake parts was improved through mobile point-cloud computing, and a structure for measuring hand grasping force was added. In addition, a complete set of training devices can be formed, focusing on the recovery of the hand, fingers, wrist and other parts in patients with neuromuscular injuries.
Point of View
Although flexible and rigid exoskeleton robots each have shortcomings, these can be compensated for; the goal is to combine the mechanical power advantages of rigid exoskeletons with the portable, flexible characteristics of soft exoskeletons to achieve breakthrough results. The experimental results for the soft-material exoskeleton gloves discussed in this paper show that soft robots provide very significant help to people who need postoperative rehabilitation after devastating illness. Such robots have obvious advantages in material selection and assistance, but are slightly lacking in power because of those same materials. Rigid-material hand exoskeleton robots, on the other hand, have an equally obvious advantage: their materials allow powerful actuation to help patients restore hand movement. Their disadvantages are also clear: they can easily injure the patient during treatment, they are inconvenient for patients in later stages of rehabilitation, and they cannot be carried around like soft-material robots.
Conclusion
With exoskeletons worn on the body, exercise rehabilitation therapy for disabled people is becoming easier to deliver while reducing cost and labor. Flexible exoskeleton human-machine intelligence technology is based on human-machine integration, which can maximize the respective advantages of intelligent machines and people and enhance system performance through organic human-machine coupling at the levels of perception, decision-making and execution. Teleoperated exoskeletons and augmented exoskeletons are two important research directions for flexible exoskeleton human-machine intelligent systems. At present, although flexible hand exoskeletons have been used in many fields, there are still many shortcomings to be improved. Technical research and product development at home and abroad in recent years show that flexible exoskeleton human-machine intelligent systems have great significance for basic scientific research and promising application prospects. | 2023-04-08T15:07:06.524Z | 2023-04-01T00:00:00.000 | {
"year": 2023,
"sha1": "f77a169c789959c7453f03481eefb3a3672746b5",
"oa_license": "CCBYNC",
"oa_url": "https://drpress.org/ojs/index.php/HSET/article/download/6550/6345",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "0febe374381ddf3867cb425c4d50340b17299c64",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
17651586 | pes2o/s2orc | v3-fos-license | Involvement of TRPA1 activation in acute pain induced by cadmium in mice
Background Cadmium (Cd) is an environmental pollutant and acute exposure to it causes symptoms related to pain and inflammation in the airway and gastrointestinal tract, but the underlying mechanisms are still unclear. TRPA1 is a nonselective cation channel expressed in sensory neurons and acts as a nociceptive receptor. Some metal ions such as Ca, Mg, Ba and Zn are reported to modulate TRPA1 channel activity. In the present study, we investigated the effect of Cd on cultured mouse dorsal root ganglion neurons and a heterologous expression system to analyze the effect of Cd at the molecular level. In addition, we examined whether Cd caused acute pain in vivo. Results In wild-type mouse sensory neurons, Cd evoked an elevation of the intracellular Ca concentration ([Ca2+]i) that was inhibited by external Ca removal and TRPA1 blockers. Most of the Cd-sensitive neurons were also sensitive to cinnamaldehyde (a TRPA1 agonist) and [Ca2+]i responses to Cd were absent in TRPA1(−/−) mouse neurons. Heterologous expression of TRPA1 mutant channels that were less sensitive to Zn showed attenuation of Cd sensitivity. Intracellular Cd imaging revealed that Cd entered sensory neurons through TRPA1. The stimulatory effects of Cd were confirmed in TRPA1-expressing rat pancreatic cancer cells (RIN-14B). Intraplantar injection of Cd induced pain-related behaviors that were largely attenuated in TRPA1(−/−) mice. Conclusions Cd excites sensory neurons via activation of TRPA1 and causes acute pain, the mechanism of which may be similar to that of Zn. The present results indicate that TRPA1 is involved in the nociceptive or inflammatory effects of Cd.
Background
Cadmium (Cd) is a ubiquitous environmental pollutant distributed in rocks, soil, the atmosphere and water [1]. Human exposure occurs mainly from consumption of contaminated food, smoking and inhalation by workers in metal industries (reviewed by [2,3]). Cd is more efficiently absorbed by the lungs (25-50%) than the gastrointestinal tract (5%) [4]. Thus, Cd inhalation is often a problem in metal workers and cigarette smokers.
Chronic Cd toxicity has been well studied; the main targets are the kidney, liver, lung, and the cardiovascular, immune and reproductive systems (reviewed by [5]), with resultant renal tubular dysfunction and subsequent induction of osteomalacia known as "itai-itai disease" (reviewed by [6]). In acute exposure, on the other hand, Cd causes symptoms related to pain and inflammation in the airway or gastrointestinal tract. Exposure to a high level of metal dust or fumes containing Cd causes irritation of the upper respiratory tract and coughing in the early stage; subsequently, various clinical manifestations known as metal fume fever occur, including airway inflammation, pulmonary edema, chest pain and flu-like symptoms. Cd ingestion causes abdominal pain, severe nausea and diarrhea (reviewed by [2]). These reports suggest that Cd stimulates sensory processing involved in pain and inflammation, but the underlying mechanisms are unclear.
It has been reported that some metal ions regulate TRP channel activities. For example, nickel activates TRPV1 in the mM range, which is associated with nickel-induced contact dermatitis [17]. It has also been reported that Na, Mg and Ca are able to activate TRPV1, which contributes to the nociceptive responses to elevated ionic strength [18]. Moreover, Na negatively regulates TRPV1, since removal of external Na activates TRPV1 [19].
In the present study, we examined the effect of Cd on sensory neurons in vitro and on nociceptive behavior in vivo using wild-type and TRPA1(−/−) mice. To examine the neuronal activity, we used fura-2-based Ca-imaging techniques since some TRP channels are highly Ca permeable [24]. We investigated the effect of Cd on cultured mouse dorsal root ganglion (DRG) neurons, which are a useful model of nociception in vitro. We also used a heterologous expression system to analyze the effect of Cd at the molecular level using Ca-imaging and patch-clamp techniques. In addition, we examined whether Cd induced acute pain in vivo.
Cd-induced [Ca2+]i increase in mouse DRG neurons
Using the Ca-sensitive dye fura-2, we examined the effect of Cd on changes in the fura-2 ratio, which reflects the intracellular Ca concentration ([Ca2+]i). Cells were stimulated with Cd (100 μM) for 2 min and subsequently with KCl (80 mM) for 1 min. Actual traces of [Ca2+]i showed that Cd elicited a [Ca2+]i increase in some cells responding to KCl (i.e., neurons) (Figure 1A and B). [Ca2+]i was increased by increasing concentrations of Cd (1-300 μM). [Ca2+]i responses to Cd peaked during the application of Cd and then returned to the basal level (<100 μM), but were sustained even after washout at 300 μM (Figure 1C). Figure 1D shows the concentration-response relations for Cd; the EC50 value was 21.9 ± 3.4 μM. The percentage of Cd-responding neurons increased with increasing concentrations of Cd.
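As an illustration of how such a concentration-response relation yields an EC50, here is a minimal sketch fitting a Hill equation to normalized peak responses; the data points are invented placeholders, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ec50, n):
    """Fractional response at concentration c (same units as ec50)."""
    return top * c**n / (ec50**n + c**n)

conc = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])   # uM Cd (assumed)
resp = np.array([0.02, 0.08, 0.30, 0.62, 0.90, 1.00])   # normalized peaks

popt, pcov = curve_fit(hill, conc, resp, p0=[1.0, 20.0, 1.5])
top, ec50, n = popt
perr = np.sqrt(np.diag(pcov))                           # 1-sigma uncertainties
print(f"EC50 = {ec50:.1f} +/- {perr[1]:.1f} uM, Hill n = {n:.2f}")
```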
Involvement of TRPA1 in the Cd-induced [Ca2+]i increase
Next, we examined the effects of removal of extracellular Ca and of TRP channel blockers. Repetitive application of Cd (30 μM, 1 min) after an interval of 30 min induced similar [Ca2+]i increases (Figure 2A). The second application was then carried out in the absence of extracellular Ca (Figure 2B) or in the presence of TRP channel blockers (Figure 2C, D). Removal of extracellular Ca abolished the Cd-induced [Ca2+]i increase, indicating that the Cd-induced [Ca2+]i elevation resulted from extracellular Ca influx. Ruthenium red (10 μM), a broad TRP channel blocker, and HC-030031 (10 μM), AP18 (10 μM) and A967079 (10 μM), specific TRPA1 blockers, but not BCTC (10 μM), a TRPV1 blocker, inhibited the Cd-induced [Ca2+]i increase (Figure 2E). These pharmacological analyses suggested that Cd evoked the [Ca2+]i increase in mouse sensory neurons through the activation of TRPA1.
Absence of [Ca2+]i responses to Cd in TRPA1(−/−) mouse DRG neurons
To verify the relationship between Cd and TRPA1, we used TRPA1(−/−) mice. Figure 3A and B show actual traces of [Ca2+]i responses to Cd (100 μM) and subsequent cinnamaldehyde (CA, a TRPA1 agonist, 300 μM), capsaicin (a TRPV1 agonist, 1 μM) and KCl (80 mM) in wild-type and TRPA1(−/−) mouse DRG neurons, respectively. In wild-type mouse DRG neurons, most of the Cd-sensitive cells were also CA sensitive (Figure 3C). In the TRPA1(−/−) mouse, on the other hand, Cd failed to increase [Ca2+]i (Figure 3B, D). These data clearly indicated that TRPA1 was involved in the Cd-induced [Ca2+]i increases in mouse sensory neurons.
Cd influx through TRPA1
As shown in Figure 1C, when Cd was applied at a concentration of 300 μM, a sustained fura-2 ratio rise was observed even after the washout of Cd. This may have been due to Cd entry into the cell, since fura-2 is sensitive not only to Ca but also to Cd [25]. Thus, we used Leadmium Green, a Cd indicator, for intracellular Cd imaging to examine the possibility of Cd influx. Cd induced an increase of the fluorescence intensity in Leadmium Green-loaded wild-type mouse DRG neurons, whereas the increase was significantly smaller in TRPA1(−/−) mouse DRG neurons (Figure 4), suggesting that Cd entered DRG neurons through TRPA1 channels. To confirm Cd influx into cells through the TRPA1 channel, we carried out Cd imaging in human TRPA1-expressing HEK293 cells. As shown in Figure 4C, Cd evoked increases of the Leadmium Green fluorescence in HEK293 cells expressing human TRPA1 but not in untransfected HEK293 cells.
[Figure 2 caption: Traces show mean ± SEM of representative data from 8-17 cells. (E) Summarized effects of these pharmacological treatments. S1 and S2 are the first and second peak amplitudes without (No blocker) and with these treatments, respectively; data are shown as S2/S1. (No blocker, n=13; Ca0, n=29; ruthenium red, n=39; HC-030031, n=16; AP18 (10 μM), n=22; A967079 (10 μM), n=18; BCTC (10 μM), n=32; from 3 mice.) **, P<0.01 vs. No blocker.]
[Ca2+]i responses to Cd in TRPA1-expressing RIN-14B rat pancreatic cancer cells
To confirm the effect of Cd on TRPA1, we used RIN-14B cells, a rat enterochromaffin cell line, which expresses TRPA1 endogenously [26]. As shown in Figure 5, Cd (30 μM, 4 min) induced [Ca2+]i increases in RIN-14B cells that were suppressed by removal of extracellular Ca or by application of HC-030031 (10 μM).
Effect of Cd on TRPA1 mutant channels less sensitive to Zn
Zn, in the same metal ionic group as Cd, has been shown to be an agonist of TRPA1 [21,22]. Thus, we hypothesized that Cd recognizes the same amino acid residues to activate TRPA1 as Zn. We used two mutant TRPA1 channels in which three amino acids were replaced by others (C641S/C1021S, H983A). As shown in Figure 6, these mutant channels exhibited low responsiveness to Zn (3 μM).
To obtain direct evidence for Cd-induced TRPA1 channel activation, we performed whole-cell patch-clamp experiments in HEK293 cells expressing hTRPA1. Figure 7A shows a representative whole-cell current evoked by 10 μM Cd in HEK293 cells expressing hTRPA1. At a holding potential of −60 mV, an inward current with an outwardly rectifying current-voltage relationship under ramp pulses from −100 mV to +80 mV every 5 s was observed (Figure 7D). Functional hTRPA1 expression was confirmed by the response to 50 μM allyl isothiocyanate (AITC, a TRPA1 agonist). In two Zn-insensitive mutant channels (H983A, C641S/C1021S), TRPA1 activation by Cd (10 μM) was almost abolished, although the responsiveness to AITC was intact (Figure 7B, C and E). These results suggested that the cysteine and histidine residues are important for the activation of TRPA1 by Cd, in a manner similar to Zn.
Cd causes acute pain via activation of TRPA1
We showed that Cd stimulated mouse sensory neurons through TRPA1 in vitro. Next, we examined whether Cd could actually induce acute pain in vivo. Intraplantar injection of Cd (2 nmol/paw) caused licking, biting (Figure 8Aa) and flicking (Figure 8Ab) of the injected paw as pain-related behaviors. These nociceptive behaviors began just after application and ceased within 5 min. Figure 8B shows the total number of nociceptive behaviors for 5 min after Cd administration in wild-type and TRPA1(−/−) mice. TRPA1(−/−) mice displayed a significant attenuation of Cd-induced nociception. In a control experiment, no responses were observed in mice injected with the same amount of HEPES-buffered solution as a vehicle. These results indicated that activation of TRPA1 was associated with pain or irritation induced by Cd.
Discussion
The present study indicates that Cd induced [Ca2+]i increases in mouse primary sensory neurons, which were also sensitive to the TRPA1 agonist cinnamaldehyde and selectively inhibited by TRPA1 blockers. The Cd-induced [Ca2+]i responses were absent in TRPA1(−/−) mouse DRG neurons. [Ca2+]i responses to Cd were confirmed in RIN-14B cells expressing TRPA1 endogenously. Cd evoked current responses in heterologously expressed TRPA1. In wild-type mice, intraplantar injection of Cd induced pain-related behaviors, which were largely attenuated in TRPA1(−/−) mice. These results suggested that Cd elicited acute pain through the activation of TRPA1.
It is known that Cd and Zn bind cysteine and histidine residues in metallothionein, a cysteine-rich metal-binding protein [27], and in some metal ion transporters [28]. Zn directly activates heterologously expressed TRPA1 [21,22], and specific intracellular cysteine and histidine residues of TRPA1 bind Zn [22]. Our findings indicated that two cysteine residues (C641, C1021) and one histidine residue (H983) affected Cd sensitivity, since mutant TRPA1 channels (C641S/C1021S, H983A) showed attenuation of Cd sensitivity, as for Zn. These properties were confirmed by patch-clamp experiments using heterologously expressed mutant TRPA1 channels. These amino acid residues are located in the intracellular domain, and the N-terminal C641 has been identified as a residue involved in responses to some reactive chemicals [29]. Thus, it seems likely that Cd activates TRPA1 through recognition of the same specific amino acid residues as Zn.
Furthermore, Cd imaging using Leadmium Green showed that Cd entered mouse sensory neurons. The increases in Leadmium Green fluorescence intensity in TRPA1(−/−) mouse DRG neurons were significantly lower than in wild-type neurons. These results suggested that Cd may be able to permeate into neurons through TRPA1; this was confirmed in human TRPA1-expressing HEK293 cells. It has been reported that Zn permeates through TRPA1 and acts on the inner domain of this channel [22]. In TRPA1(−/−) mouse DRG neurons, however, some Cd influx remained at higher concentrations, suggesting that Cd might also enter through other pathways. It has been reported that Cd permeates through voltage-dependent Ca channels (L-type [25], T-type [30]), an Fe transporter (DMT1 [31]), Zn transporters (ZIP8 [32], ZIP14 [33]) and TRPV6 [34]. These channels and transporters are reported to be expressed in neurons [33,35-37]. Thus, in wild-type mouse DRG neurons, Cd entering through these pathways might also activate TRPA1.
In this study, we used 30-300 μM Cd, concentrations much higher than reported blood or urine Cd levels in exposed humans (blood Cd: 10 μg/L; urine Cd: 7 μg/L creatinine [38]). It is reported that workers exposed to Cd fumes at 8.63 mg/m³ for 5 h exhibited coughing and slight irritation of the throat and mucosa [39]. In acute Cd exposure such as airway instillation, local concentrations would be higher than blood or urine concentrations. Similarly, in research on acute Zn toxicity in vitro, submillimolar concentrations have been used (300 μM [40]; 30 μM [22]; 30 μM [23]). In the present study, intraplantar injection of Cd (2 nmol/paw), an amount somewhat higher than reported for Zn in in vivo experiments (0.6 nmol intraplantarly [22]; 0.05 nmol intratracheally [23]), elicited nociceptive behaviors in wild-type mice. On the other hand, Cd induced significantly fewer behavioral changes in TRPA1(−/−) mice, suggesting the involvement of TRPA1 in Cd-induced acute pain.
Recently, functional TRPA1 expression has been reported in non-neuronal cells such as lung fibroblast cells, epithelial cells and smooth muscle cells, which release IL-8 in response to TRPA1 agonists and contribute to lung inflammation [41]. It is also reported that Cd promotes secretion of IL-8 and IL-6 from airway epithelial cells [42]. For lung inflammation, therefore, not only neuronal but also non-neuronal TRPA1 may be involved in Cd toxicity.
It is reported that Cd produces reactive oxygen species (ROS) [43] that mediate Ca signaling involved in Cd-induced cell death [44]. Since ROS are also known to activate TRPA1 [13,45], we examined whether Cd elicited ROS production in mouse DRG neurons. However, Cd failed to produce ROS under our experimental conditions (30 or 300 μM, 2 min), using CM-H2DCFDA, a fluorescent ROS indicator (data not shown).
Conclusions
The present study demonstrates that Cd excites sensory neurons via activation of TRPA1 and causes acute pain, the mechanism of which may be similar to that of Zn. Our present data show that TRPA1 contributes to the nociceptive or inflammatory effects of Cd. However, further studies are necessary to completely understand the pathological conditions of acute Cd toxicity.
Methods
All protocols for experiments on animals were approved by the Committee on Animal Experimentation of Tottori University (♯11-T-2). All efforts were made to minimize the number of animals used.
Isolation and culture of mouse DRG neurons
We used adult mice of either sex (4-16 weeks old). C57BL/6J mice and TRPA1-null mice (kindly provided by Dr. D. Julius, University of California) were euthanized by inhalation of CO2 gas. Mouse DRG cells were isolated and cultured as described previously [46]. In brief, DRG were removed, dissected and freed from connective tissue under a dissecting microscope in phosphate-buffered saline (PBS: in mM, 137 NaCl, 10 Na2HPO4, 1.8 KH2PO4, 2.7 KCl) supplemented with 100 U/ml penicillin G and 100 μg/ml streptomycin. The isolated ganglia were then cut into small pieces and enzymatically digested for 30 min at 37°C in PBS containing collagenase (1 mg/ml, type II, Worthington, USA) and DNase I (1 mg/ml, Roche Molecular Biochemicals, USA). Subsequently, the ganglia were immersed in PBS containing trypsin (10 mg/ml, Sigma, USA) and DNase I (1 mg/ml) for 15 min at 37°C. After enzyme digestion, the enzyme-containing solution was aspirated and the ganglia were washed with culture medium (Dulbecco's modified Eagle's medium [DMEM, Sigma] supplemented with 10% fetal bovine serum [Sigma], penicillin G (100 U/ml) and streptomycin (100 μg/ml)). DRG cells were obtained by gentle trituration with a fine-polished Pasteur pipette. The cell suspension was then centrifuged (800 rpm, 2 min, 4°C) and the pelleted cells were resuspended in culture medium. Aliquots were placed on glass coverslips coated with poly-D-lysine (Sigma) and cultured in a humidified atmosphere of 95% air and 5% CO2 at 37°C. In the present experiments, cells cultured within 24 h were used.
Calcium imaging
Intracellular Ca imaging in individual cells was performed with the fluorescent Ca indicator fura-2 by dual excitation, using a fluorescence-imaging system controlling illumination and acquisition (Aqua Cosmos, Hamamatsu Photonics, Hamamatsu, Japan) as described previously [19]. Briefly, to load fura-2, cells were incubated for 40 min at 37°C with 10 μM fura-2 AM (Molecular Probes, Eugene, Oregon, USA) in HEPES-buffered solution (in mM: 134 NaCl, 6 KCl, 1.2 MgCl2, 2.5 CaCl2, and 10 HEPES, pH 7.4). A coverslip with fura-2-loaded cells was placed in an experimental chamber mounted on the stage of an inverted microscope (Olympus IX71) equipped with an image acquisition and analysis system. Cells were illuminated every 5 s with light at 340 and 380 nm, and the respective fluorescence signals at 500 nm were detected. The emitted fluorescence was projected onto a charge-coupled device camera (ORCA-ER, Hamamatsu Photonics) and the ratios of the fluorescence signals (F340/F380), reporting [Ca2+]i, were stored on the hard disk of a computer (Endeavor Pro 2500, Epson). Cells were continuously superfused with HEPES-buffered solution at a flow rate of ~2 ml/min through a Y-tube pipette. The composition of the high-K solution was (in mM) 80 KCl, 1.2 MgCl2, 2.5 CaCl2, and 10 HEPES, pH 7.4. For the Ca-free external solution, Ca was omitted. All experiments were carried out at room temperature (22-25°C).
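As a minimal illustration of the ratiometric readout described above, the sketch below computes a background-subtracted F340/F380 time course for one cell from synthetic traces sampled every 5 s; the background values and traces are placeholders, not the study's recordings.

```python
import numpy as np

def fura2_ratio(f340, f380, bg340=0.0, bg380=0.0, eps=1e-9):
    """Background-subtracted excitation ratio; a higher ratio reports higher [Ca2+]i."""
    num = np.asarray(f340, float) - bg340
    den = np.asarray(f380, float) - bg380
    return num / np.maximum(den, eps)   # guard against division by ~0

t = np.arange(0, 300, 5)                         # s, one frame per 5 s
f340 = 200 + 80 * np.exp(-((t - 120) / 30)**2)   # 340 nm signal rises on stimulation
f380 = 400 - 120 * np.exp(-((t - 120) / 30)**2)  # 380 nm signal falls
ratio = fura2_ratio(f340, f380, bg340=20, bg380=30)
print(f"baseline ratio {ratio[:5].mean():.2f}, peak ratio {ratio.max():.2f}")
```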
Whole-cell current recording
HEK293 cells expressing hTRPA1 or hTRPA1 mutants on coverslips were mounted in an experimental chamber and superfused with HEPES-buffered solution as for the Ca imaging experiments. The pipette solution contained (in mM): 140 KCl, 1.2 MgCl2, 2 ATP-Na2, 0.2 GTP-Na3, 10 HEPES, 10 EGTA; pH 7.2 with KOH. The resistance of patch electrodes ranged from 4 to 5 MΩ. Whole-cell currents were sampled at 5 kHz and filtered at 1 kHz using a patch-clamp amplifier (Axopatch 200B; Molecular Devices, Sunnyvale, CA) in conjunction with an A/D converter (Digidata 1322A; Molecular Devices). The membrane potential was clamped at −60 mV, and voltage ramp pulses from −100 mV to +80 mV over 100 ms were applied every 5 s.
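For concreteness, the following sketch builds the voltage-command waveform implied by this protocol (hold at −60 mV; a 100 ms ramp from −100 to +80 mV every 5 s; 5 kHz sampling). It is an illustration only, not the acquisition software actually used.

```python
import numpy as np

FS = 5000                             # Hz, sampling rate
HOLD, V0, V1 = -60.0, -100.0, 80.0    # mV: holding and ramp endpoints
RAMP_S, PERIOD_S = 0.1, 5.0           # 100 ms ramp, repeated every 5 s

def one_sweep():
    n = int(PERIOD_S * FS)
    v = np.full(n, HOLD)
    ramp = np.linspace(V0, V1, int(RAMP_S * FS))
    v[:ramp.size] = ramp              # ramp at the start of each 5 s period
    return v

v_cmd = np.concatenate([one_sweep() for _ in range(3)])   # three sweeps
print(v_cmd.size / FS, "s of command,", v_cmd.min(), "to", v_cmd.max(), "mV")
```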
Cadmium imaging
For single-cell Cd imaging, we used Leadmium Green (Molecular Probes), a specific indicator for lead and cadmium. To load Leadmium Green, cells were incubated for 40 min at 37°C with 50 ng/μl Leadmium Green-AM in HEPES-buffered solution. Cd imaging was performed using the same apparatus as for Ca imaging. Cells were illuminated with light at 490 nm and fluorescence signals at 520 nm were collected. For Cd imaging, cells were superfused with the Ca-free external solution. Signals are expressed as the relative change in fluorescence, F/F0, where F and F0 indicate the fluorescence at any given time and at the initial time, respectively.
Behavioral experiment
Mice were placed in cages for 30 min before experiments. Twenty microliters of HEPES-buffered solution (vehicle), similar in composition to that used in the in vitro experiments, was first injected intraplantarly into the right hind paw as a control. The numbers of times each mouse licked, bit and flicked the injected paw were counted for 15 min after the injection. Subsequently, the same volume of Cd solution (2 nmol/paw) was injected into the left hind paw and the number of pain-related behaviors was counted for 15 min.
Chemicals
The following drugs were used (vehicle and concentration for stock solution). Capsaicin (ethanol, 0.01 M), | 2017-06-29T13:55:52.481Z | 2013-01-01T00:00:00.000 | {
"year": 2013,
"sha1": "d877a274b9784f57066198ca9fde55942cd3e8ea",
"oa_license": "CCBY",
"oa_url": "http://journals.sagepub.com/doi/pdf/10.1186/1744-8069-9-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d877a274b9784f57066198ca9fde55942cd3e8ea",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14858789 | pes2o/s2orc | v3-fos-license | CCN Properties of Organic Aerosol Collected below and within Marine Stratocumulus Clouds near Monterey, California
The composition of aerosol from cloud droplets differs from that below cloud. Its implications for the Cloud Condensation Nuclei (CCN) activity are the focus of this study. Water-soluble organic matter from below cloud, and cloud droplet residuals off the coast of Monterey, California were collected; offline chemical composition, CCN activity and surface tension measurements coupled with Köhler Theory Analysis are used to infer the molar volume and surfactant characteristics of organics in both samples. Based on the surface tension depression of the samples, it is unlikely that the aerosol contains strong surfactants. The activation kinetics for all samples examined are consistent with rapid (NH4)2SO4 calibration aerosol. This is consistent with our current understanding of droplet kinetics for ambient CCN. However, the carbonaceous material in cloud drop residuals is far more hygroscopic than in sub-cloud aerosol, suggestive of the impact of cloud chemistry on the hygroscopic properties of organic matter.
Keywords: marine aerosol; organics; stratocumulus clouds; cloud condensation nuclei
Organic compounds, depending on their source, are classified as "primary" and "secondary". Primary organic marine aerosol (POMA) can include high molecular-weight compounds transferred onto sea-salt aerosol from the surfactant-rich surface ocean during the bubble-bursting process [9]. The presence of POMA is mostly attributed to biological activity, and its concentration varies with season [9,11,12]. POMA can exhibit low hygroscopic growth factors but maintain high CCN activity [13], or vice versa [14]. Secondary organic marine aerosol (SOMA) can be produced during cloud processing [15-19]; perhaps the most studied chemical pathway for SOMA is glyoxylic acid oxidation [20] via several aqueous-phase intermediates [21-23]. Continental biogenic emissions can also contribute to organic mass in marine clouds at high altitudes [24]. Anthropogenic emissions can contribute substantially to marine organics; for example, particulate emissions from ships are composed of up to roughly 10% carbon [25,26], in the form of sparingly soluble poly-aromatic hydrocarbons, ketones and quinones (PAHs, PAKs and PAQs, respectively [13,27]). "Ship tracks" are a natural laboratory for studying aerosol indirect effects, as clouds with uniform dynamics are exposed to a strong gradient in emission concentrations and often exhibit droplet number, effective radius and drizzle-rate responses consistent with local emission rates [28-32].
In this study, marine aerosol samples influenced by ship emissions are collected in situ (in and out of cloudy regions) and studied for their cloud-droplet formation properties. Given the complexity of the water-soluble organic fraction, characterization is done by measuring the size-resolved CCN activity of the material and its surface tension depression, and using Köhler Theory Analysis (KTA) [33-35] to infer the thermodynamic properties (average molar volume, surface tension depression) of the organic fraction and its potential impact on droplet growth-rate kinetics. These properties are then related to the influence of primary emissions and in-cloud oxidation processes on the CCN activity of the organic compounds.
Aerosol Sampling and Chemical Composition
The samples analyzed in this study were obtained during the Marine Stratus/Stratocumulus Experiments (MASE) that took place near the coast of Monterey, California, from July to August 2005. The airborne platform was the Center for Interdisciplinary Remotely-Piloted Aircraft Studies (CIRPAS) Twin Otter, which sampled boundary-layer air over 13 flights. A full description of the aerosol and cloud instrument payload aboard the plane during the MASE campaign can be found in Lu et al. [36]. Six of the thirteen flights encountered strong, localized perturbations in aerosol concentration, size, and composition consistent with ship emissions. The data presented here were sampled on 13 July in the vicinity of ship tracks. An overview of airborne studies conducted in this region has been summarized by Coggon et al. [18].
Two sample types were collected aboard the aircraft in this study: cloud droplet residuals (CR) from evaporated cloud droplets, and sub-cloud (SC) aerosol sampled below cloud (and occasionally in clear-sky conditions). CR samples were collected with a counter-flow virtual impactor (CVI) [37,38], in which cloud droplets with diameters greater than 5 μm were inertially separated from interstitial (unactivated) aerosol and evaporated before collection. Analysis of cloud droplet residuals with this approach has been instrumental in understanding the origin of CCN in ambient clouds [18]. A Brechtel Manufacturing Inc. Particle-Into-Liquid Sampler (PILS; [20]) was used to collect the water-soluble fraction of the SC and CR aerosol by exposing the particles to supersaturated steam and growing them into droplets that were subsequently collected by inertial impaction. The liquid stream was collected in vials over 3.5 to 5 min each. The chemical composition of the ionic species in each sample was measured with a dual Ion Chromatography (IC) system (ICS-2000 with 25 μL sample loop, Dionex Inc.); the IC detection limit is less than 0.1 μg m⁻³ for inorganic ions (Na+, NH4+, K+, Mg2+, Ca2+, Cl−, NO3−, NO2−, and SO42−) and less than 0.01 μg m⁻³ of air for the organic acid ions (dicarboxylic acids C2-C9, acetic, formic, pyruvic, glyoxylic, maleic, malic, methacrylic, benzoic, and methanesulfonic acids). The WSOC content was also measured off-line with a Total Organic Carbon (TOC) Analyzer (Sievers Model 800 Turbo, Boulder, CO) (Table 1). The contents of the vials were subsequently analyzed for their CCN activity and surfactant characteristics (Sections 2.2-2.4). The aerosol samples analyzed in the study were obtained on 13 July (when the highest organic acid concentrations were reported) from within and below cloud (cloud base and cloud top were measured at 101 and 450 m altitude, respectively) [36]. During this flight, the aircraft focused on sampling air in the vicinity of ship tracks. Backward trajectories calculated using the NOAA HYSPLIT (HYbrid Single-Particle Lagrangian Integrated Trajectory) model (http://www.arl.noaa.gov/ready/hysplit4.html) indicate that the air mass on 13 July originated from the North Pacific; the vertical profile indicates that the aerosol sampled on 13 July originated in the free troposphere before descending into the marine boundary layer (Figure 1).
CCN Activity of Soluble Material Collected
The aqueous contents of the PILS vials were atomized with a Collison-type atomizer (Figure 2) operated at 5 psig pressure. The atomized aerosol was then dried through two silica-gel diffusional dryers, charged by a Kr-85 bipolar charger, and classified with a Differential Mobility Analyzer (DMA 3081) (Figure 2). The classified monodisperse aerosol was then split: one stream passed through a TSI 3025A Condensation Particle Counter (CPC) to measure the aerosol number concentration (CN), while the other stream was sampled by a Droplet Measurement Technologies Continuous-Flow Streamwise Thermal Gradient CCN Counter (CFSTGC) [39-41]. Given the limited amount of sample, size-resolved CCN activity and growth kinetics measurements were obtained using Scanning Mobility CCN Analysis (SMCA) [42], which couples CFSTGC measurements with a scanning mobility particle sizer (SMPS). An inversion procedure was used to compute the ratio of CCN to CN as a function of aerosol size as the SMPS scanned from 10 to 250 nm dry mobility diameter at a fixed supersaturation, s. The data were fit to a sigmoidal function, corrected for diffusion and multiple charges in the DMA [42]; the particle dry diameter, d, at which 50% of the particles activate into droplets represents the dry diameter of the particle with critical supersaturation, sc, equal to the instrument supersaturation. The activation experiments were repeated (a minimum of four times) for each s level, which varied from 0.2% to 1.2%. The compositional data, aerosol surfactant behavior, and the dependence of d on sc were used to infer the molar volume and surfactant characteristics of the organic fraction with Köhler Theory Analysis [33-35,43] (Section 3). The CFSTGC was calibrated using (NH4)2SO4 (density 1.77 g cm⁻³, molar mass 132 g mol⁻¹) generated with the same experimental setup, operating the DMA with a sheath:aerosol flow-rate ratio of 10:1; d for ammonium sulfate was then related to critical supersaturation by applying classical Köhler theory with an effective van't Hoff factor of 2.5 [41,44]. The constant van't Hoff value is an approximation that can be improved with the Pitzer method. Multiple calibrations of the instrument were performed over the measurement period, and each supersaturation was within 10% of the average.
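As an illustration of the SMCA activation-curve fit, the following sketch locates d50 by fitting a sigmoid to a synthetic CCN/CN-versus-diameter scan; real data additionally require the diffusion and multiple-charge corrections of [42].

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(d, d50, k):
    """Activated fraction vs. dry diameter d (nm); d50 is the 50% point."""
    return 1.0 / (1.0 + np.exp(-k * (d - d50)))

# Synthetic scan: 40 size bins between 20 and 200 nm with measurement noise.
d_nm = np.linspace(20, 200, 40)
true = sigmoid(d_nm, 85.0, 0.25)
ratio = np.clip(true + np.random.default_rng(0).normal(0, 0.03, d_nm.size), 0, 1)

popt, _ = curve_fit(sigmoid, d_nm, ratio, p0=[80.0, 0.2])
print(f"d50 = {popt[0]:.1f} nm at this supersaturation")
```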
Surface Tension Measurements
A CAM 200 pendant-drop goniometer was used to measure the bulk surface tension, σ, of the original samples and prescribed dilutions of them, following the approach of [33,35]. The instrument uses 5-6 mL of sample to form a drop at the end of a needle. The optical goniometer captures ~100 images of the droplet and computes the droplet surface tension through application of the Young-Laplace equation; the standard deviations of σ are <0.05 mN m⁻¹ for a given sample at one concentration. The measurements were then fit to the Szyskowski-Langmuir equation [45]:

σ = σ_w − αT ln(1 + βc)    (1)

where α and β are optimally fitted constants, T is the temperature, σ_w is the surface tension of pure water and c is the dissolved carbon concentration (mg C L⁻¹). Each pendant drop was suspended for 60 s before the surface tension was measured, in order to allow organics in the bulk to equilibrate with the droplet surface layer [46]. Table 1 provides a summary of the α and β parameters for all samples considered. Bulk measurements are known not to represent the droplet activation regime; thus, parameters derived from bulk measurements (Table 1) are applied to the inferred droplet concentrations at activation [33-35,47].
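For concreteness, a minimal sketch of fitting Equation (1) to pendant-drop measurements follows; the concentration-surface tension pairs are illustrative placeholders chosen to mimic the weak depression reported here.

```python
import numpy as np
from scipy.optimize import curve_fit

SIGMA_W = 72.0    # mN/m, pure water near room temperature
T = 298.15        # K

def szyskowski(c, alpha, beta):
    """Szyskowski-Langmuir isotherm, Equation (1); c in mg C / L."""
    return SIGMA_W - alpha * T * np.log(1.0 + beta * c)

c = np.array([50.0, 100.0, 250.0, 500.0, 1000.0, 2000.0])
sigma = np.array([71.8, 71.6, 71.2, 70.7, 69.9, 68.9])   # weak depression

popt, _ = curve_fit(szyskowski, c, sigma, p0=[0.005, 0.002], maxfev=10000)
alpha, beta = popt
print(f"alpha = {alpha:.2e} mN m^-1 K^-1, beta = {beta:.2e} L (mg C)^-1")
print("sigma at 1000 mg C/L:", szyskowski(1000.0, *popt))
```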
Droplet Size Measurements of Activated CCN
The optical particle counter used for detection of CCN concentrations also provides the size of the activated droplets; thus, the SMCA procedure can determine the size of activated CCN of known dry diameter at the exit of the CFSTGC. We adopt the method of Threshold Droplet Growth Analysis (TDGA) (e.g., but not limited to, [39,48,49]) to detect the possible presence of droplets growing more slowly than calibration (NH4)2SO4 aerosol [50]. If such droplets are present, a coupled measurement-modeling approach can be used to infer the effective water vapor uptake coefficient [14,51]. TDGA compares the droplet diameter Dp of CCN with sc equal to the instrument supersaturation (i.e., CCN with a dry diameter equal to the cutoff diameter, d) against the wet diameter Dp,(NH4)2SO4 of CCN with identical sc but composed of (NH4)2SO4. If the presence of organics does not delay droplet growth, Dp ~ Dp,(NH4)2SO4 (e.g., [34,52]). Supersaturation depression is observed at high CCN concentration and can affect droplet size [53]; thus, both the CCN measurement and the analysis should account for CCN concentrations.
Köhler Theory
Köhler Theory Analysis (KTA) [33] can be used to infer the average molar volume (molecular weight, M_j, over density ρ_j) of the organic fraction, j, of CCN. It has been shown to work well for low molecular weight species, such as those presented here. Using measurements of s_c versus d to determine the Fitted CCN Activity parameter (FCA), ω, defined by s_c = ω d^{−3/2}, KTA infers the average molar volume of the WSOC as [33,43]

M_j/ρ_j = ν_j ε_j [ (256/27) M_w² σ³ / (R³ T³ ρ_w² ω²) − ν_i ε_i ρ_i / M_i ]⁻¹    (2)

where M_w, ρ_w are the molecular weight and density of water, respectively, R is the universal gas constant, T is the ambient temperature, σ is the droplet surface tension at the point of activation, ε is the volume fraction, and ν is the effective van't Hoff factor. Subscripts i and j refer to the inorganic and organic components, respectively. ε_k is related to the mass fraction, m_k, of solute k (k being either i or j) as

ε_k = (m_k/ρ_k) / (m_i/ρ_i + m_j/ρ_j)    (3)

Two measures of molar volume uncertainty are used: (i) the standard deviation of all the inferences (Equation (2)), and (ii) estimates determined from Δ(M_j/ρ_j) = [Σ_x (Φ_x Δx)²]^{1/2}, where Δx is the uncertainty of each of the measured parameters x (i.e., any of σ, ω, ε_i, ε_j, ν_i, and ν_j) and Φ_x = ∂(M_j/ρ_j)/∂x is the sensitivity of the molar volume to x, using the formulas of [33,34,43]. The maximum of both estimates is the reported uncertainty of M_j/ρ_j. KTA has been shown to constrain molecular weight estimates for laboratory aerosol (having organic mass fraction between 20 and 50%) with an average error of 20% [33], 40% for complex biomass burning aerosol [43], 30% for secondary organic matter [35], and to within 25% for primary marine organic matter [34].
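A minimal numerical sketch of Equation (2) follows; all inputs (supersaturation, diameter, volume fractions, inorganic properties) are illustrative assumptions, not the study's data.

```python
import numpy as np

# Note: the denominator of Equation (2) must stay positive, i.e. the
# measured sc must exceed what the inorganic fraction alone would produce.
R = 8.314          # J mol^-1 K^-1
MW_W = 0.018015    # kg mol^-1, water
RHO_W = 997.0      # kg m^-3, water

def kta_molar_volume(omega, sigma, T, eps_i, nu_i, rho_i, M_i, eps_j, nu_j=1.0):
    """Organic molar volume M_j/rho_j (m^3 mol^-1) from the FCA omega.

    omega = sc * d**1.5 with sc as a fraction (not %) and d in meters,
    i.e. sc = omega * d**-1.5.
    """
    term = (256.0 / 27.0) * MW_W**2 * sigma**3 / (R**3 * T**3 * RHO_W**2 * omega**2)
    return nu_j * eps_j / (term - nu_i * eps_i * rho_i / M_i)

# Example: sc = 0.23% at d = 90 nm for a 50/50 (by volume) ammonium
# sulfate / organic mixture; sigma ~ pure water (weak surfactancy).
sc, d = 0.0023, 90e-9
omega = sc * d**1.5
v_j = kta_molar_volume(omega, sigma=0.072, T=298.15,
                       eps_i=0.5, nu_i=2.5, rho_i=1770.0, M_i=0.132,
                       eps_j=0.5)
rho_j = 1400.0     # assumed organic density, kg m^-3 (as used in this study)
print(f"inferred M_j ~ {v_j * rho_j * 1000:.0f} g/mol")   # ~240 g/mol here
```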
Inferring Surface Tension
The low concentration of WSOC in the PILS samples limits the determination (using direct measurements) of their surface tension depression at CCN-relevant concentrations. However, if CCN activity data are available for mixtures of WSOC and a salt (e.g., (NH4)2SO4), KTA can be used to concurrently infer M_j/ρ_j and σ using an iterative procedure [34]. If enough salt is present in the sample, the contribution of organic solute to s_c is small and an iterative procedure is not required; the effect of the organic on CCN activity amounts to its impact on surface tension, which can then be inferred as [35]

σ = σ_w (s_c / s_c*)^{2/3}    (4)

where s_c is the measured critical supersaturation and s_c* is the value predicted from Köhler theory assuming σ = σ_w, the surface tension of pure water computed at the average CFSTGC column temperature [35], and considering only the inorganic solutes:

s_c* = [ (256/27) M_w² σ_w³ / (R³ T³ ρ_w²) ]^{1/2} [ Σ_i ν_i ε_i ρ_i / M_i ]^{−1/2} d^{−3/2}    (5)

where "i" denotes all inorganic solutes present in the aerosol. Each surface tension inference is then related to the WSOC concentration at the critical diameter (Equation (6) of [34]) and fit to the Szyskowski-Langmuir adsorption isotherm (Equation (1)). The partitioning of the surfactant from the bulk to the droplet monolayer should be considered. This method of inferring surface tension depression has been shown to work well for dissolved organic matter isolated from seawater [34]. Thus, in the case of marine POMA, where the surfactant may partition mostly to the surface, partitioning effects are less important than bulk properties, which is why Moore et al. [34] were able to achieve good closure.
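The inference of Equations (4) and (5) can be sketched numerically as follows; the measured d and sc values are hypothetical placeholders.

```python
import numpy as np

R, MW_W, RHO_W = 8.314, 0.018015, 997.0   # SI units, as in the KTA sketch

def sc_star(d, sigma_w, T, sum_nu_eps_rho_over_M):
    """Equation (5): predicted critical supersaturation (fraction), d in m."""
    num = (256.0 / 27.0) * MW_W**2 * sigma_w**3
    den = R**3 * T**3 * RHO_W**2 * sum_nu_eps_rho_over_M
    return np.sqrt(num / den) * d**-1.5

T, sigma_w = 298.15, 0.072                  # K, N m^-1
inorg = 2.5 * 0.5 * 1770.0 / 0.132          # nu_i*eps_i*rho_i/M_i, mol m^-3
d50, sc_meas = 95e-9, 0.00215               # assumed measured pair

# Equation (4): measured sc below the pure-water prediction implies
# surface tension depression by the organic fraction.
sigma = sigma_w * (sc_meas / sc_star(d50, sigma_w, T, inorg))**(2.0 / 3.0)
print(f"inferred sigma = {sigma * 1e3:.1f} mN/m "
      f"({100 * (1 - sigma / sigma_w):.1f}% depression)")   # ~4-5% here
```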
Surface Tension
For the low WSOC concentrations measured in the PILS samples, organics have a minimal effect on surface tension (Figure 3). Hence, Equation (4) is used to infer σ at carbon concentrations relevant for CCN activation (roughly 1000 mg C L⁻¹). WSOC from biomass burning aerosol and marine aerosol has been shown to contain strong surfactants that depress surface tension by 25%-42% [43,47,54,55] at similar concentrations. The inferred σ values for both SC and CR aerosol (Figure 3) exhibit weak surface tension depression (−Δσ/σ ≃ 5%) at concentrations >1000 mg C L⁻¹. The surface tension depression results are similar to those for dissolved organic marine matter [34]. Both the CR and SC samples were influenced by ship emissions, SOMA and POMA and contained soluble organics. The solubility of the organics is unknown; however, the potential effects of limited solubility on CCN activity for the SC samples may be outweighed by significant depression in surface tension at the droplet surface (Figure 3). Finding the average organic molar mass from KTA will help constrain whether the SC and CR samples indeed have different organic aerosol compositions that affect aerosol solubility and surfactant properties.
CCN Activity
Figures 4 and 5 show the critical supersaturation, sc, as a function of dry diameter, d, for aerosol atomized from both the CR and SC samples. Data for (NH4)2SO4 aerosol have been added for comparison; they are consistent with the expectation that this aerosol is highly CCN active, lie to the left of both the CR and SC data sets, and behave as classical Köhler CCN without surface tension depression. For particles composed of soluble and insoluble material, sc should scale with d^{−3/2} (e.g., the (NH4)2SO4 data in Figure 4). At low supersaturations (sc ≤ 0.6%), the WSOC concentration at the point of activation was low (<1000 mg C L⁻¹) and surface tension depression was small (Figure 3); hence sc ~ d^{−3/2} for both samples, and KTA is applied to this region of the CCN spectrum. The CR sample contained material that was less hygroscopic than sulfate but more hygroscopic than the SC sample (i.e., for a given s, the d of the CR is greater than that of (NH4)2SO4 but smaller than that of SC; Figure 4). The SC and CR activation curves converge at high s, likely because the WSOC concentration is then high enough to notably depress surface tension, more for the organics in SC than for CR (Figure 3). The difference in CCN activity is consistent with studies to date (e.g., [16,56]) showing that hygroscopic components tend to be incorporated into cloud droplets whereas their less hygroscopic counterparts tend to remain in the interstitial air. It is also noted that the CR samples excluded droplets <5 μm in diameter; the excluded smaller droplets may potentially contain less hygroscopic material.
Inferred Molar Volumes and Uncertainties
Application of KTA requires the organic mass concentration, m_organic, which is obtained by dividing the WSOC carbon concentration (Table 1) by the carbon-to-organic mass ratio, C/OC. In this study, C/OC ≈ 0.27 is applied, the value for oxalic acid (C2H2O4, 90 g mol⁻¹), the most abundant organic acid measured in our samples (Table 1). C/OC ≈ 0.27 is very close to 0.29, the value for dissolved primary organic marine matter and marine coarse-mode aerosol [34,54]; here we expect it to apply to the whole sample. The van't Hoff factor of the organic fraction is assumed to be unity. The presence of organics from ship emissions is assumed to introduce little variability into the estimated organic mass fractions. The ionic composition from the chemical analysis of the PILS vials is then converted into a mixture of inorganic salts and organics with the ISORROPIA II aerosol thermodynamic equilibrium model [57]. Lu et al. [36] found the composition of the sampled aerosol to be bimodal, composed of ammonium bisulfate and sea salt; when samples are mixed in the PILS, chloride depletion is both expected and observed (Table 2). The results of the KTA analysis for each sample are summarized in Tables 3 and 4. Assuming an organic density of 1.4 g cm⁻³ [35,58], the SC sample has a high inferred molecular weight (Mj = 2413 ± 536 g mol⁻¹; Table 4), possibly long-chained aliphatic compounds within primary organic matter transferred to the aerosol by bubble bursting at the seawater surface [16,34,54,59,60]. The marine nature of the SC sample is further supported by its inferred surface tension depression (Figure 3), which is remarkably consistent with dissolved organic matter [34]. The average molecular weight inferred for the CR aerosol is substantially less (143 ± 25 g mol⁻¹; Table 3), consistent with the presence of low molecular weight carboxylic acids (i.e., C2-C9 mono- and dicarboxylic acids) measured in cloud-processed marine aerosol [15]. Water-soluble oxidation products from ship VOC emissions could also contribute to the WSOC, although the information at hand is not sufficient to show this conclusively. As in previous KTA studies (e.g., [33]), the inferred average molar volume is subject to an estimated 30% error; the greatest source of uncertainty stems from the FCA parameter (Tables 3 and 4). The low surface tension depression (Figure 3) suggests that the molecular weight distribution of the WSOC may not contain compounds characteristic of HULIS (300-750 g mol⁻¹) [43,47], but can instead be described by the superposition of two "modes", one from SOMA (low molecular weight compounds) and one from POMA (higher molecular weight compounds). For the SC sample, we assume that the SOMA is primarily oxalic acid (90 g mol⁻¹, the predominant compound identified in the PILS samples, constituting 0.6% of the organic mass, Table 1), and that the remaining mass (99.4%) is POMA characteristic of Redfield-type molecules, (CH2O)106(NH3)16H3PO4, 3444 g mol⁻¹ [61,62]. With this assumed two-component composition, the average Mj in SC is 2852 g mol⁻¹, consistent (to within uncertainty) with the inferred KTA value (Table 4). Similarly, if most of the organic in the CR aerosol is a mixture of ship emissions and oxalate, with the POMA mode represented by phenanthrene (178 g mol⁻¹, 99.6% of the organic mass) and the SOMA mode by oxalic acid (90 g mol⁻¹, 0.4% of the organic mass, Table 1), the average molar mass of the organic distribution is 177 g mol⁻¹, a value within 25% of our KTA estimate.
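The two-component averages quoted above correspond to a mass-weighted harmonic mean (total mass divided by total moles); the sketch below reproduces the CR value and approximately the SC value (small differences reflect rounding of the quoted fractions).

```python
# M_avg = 1 / sum(w_k / M_k): total mass per total moles of the mixture.
def avg_molar_mass(mass_fractions, molar_masses):
    return 1.0 / sum(w / m for w, m in zip(mass_fractions, molar_masses))

# CR: 99.6% phenanthrene-like POMA (178 g/mol) + 0.4% oxalic acid (90 g/mol)
print(round(avg_molar_mass([0.996, 0.004], [178.0, 90.0]), 1))   # ~177.3
# SC: 99.4% Redfield-type POMA (3444 g/mol) + 0.6% oxalic acid (90 g/mol)
print(round(avg_molar_mass([0.994, 0.006], [3444.0, 90.0])))     # ~2815
```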
The Effect of Organics on Droplet Growth Kinetics
Figure 6 illustrates the OPC droplet size measurements for all supersaturations and samples considered. The flow rate within the instrument was maintained at 0.5 L min⁻¹ and the sheath-to-aerosol flow rate ratio at 10:1, so that particles have the same residence time in the CFSTGC. Similar to the WSOC droplet data presented in [34,35,52,63], almost all of the droplet growth data lie within the measurement uncertainty and agree with (NH4)2SO4 calibration aerosol. According to TDGA, the water uptake coefficient is similar to that of (NH4)2SO4 (α_c ~ α_c,(NH4)2SO4). Given that α_c,(NH4)2SO4 ~ 0.2 [50,51], this suggests that the WSOC does not slow down the activation kinetics of CCN compared to (NH4)2SO4, consistent with the analysis of CCN data collected from a wide range of environments [51]. The WSOC does not appreciably impact droplet growth kinetics (or the effective water vapor mass transfer coefficient), as aerosol particles produced from the SC and CR samples grow like (NH4)2SO4.
Summary and Implications
CCN activity, chemical composition and droplet growth measurements coupled with Köhler Theory Analysis are used to characterize the cloud droplet formation potential of water-soluble organic matter collected from cloud droplet residuals (CR) and sub-cloud (SC) aerosol during the MASE 2005 campaign over the eastern Pacific Ocean off the coast of California. The organics within the CR samples were found to be more hygroscopic than in the SC sample, most likely from the enhanced levels of soluble organic acids (e.g., oxalic) formed during cloud processing. Both the direct and the inferred surface tension measurements show that neither sample contained strong surfactants; in fact, the surface tension depression is consistent with the effect of primary marine organic matter.
The inferred average molecular weights of both the CR and SC samples are consistent with a bimodal molecular weight distribution: one mode composed of oxalic acid (produced through glyoxylic acid oxidation or the oxidative decay of larger diacids in cloud droplets [15]) together with primary organic matter (SC sample) or organics from ship emissions (CR sample). These results are consistent with recently published work [18,64] showing that SOMA is a significant contributor to CCN activity in marine regions. Finally, all samples display droplet growth kinetics similar to CCN composed of (NH4)2SO4, suggesting that water-soluble organics do not significantly impact the growth rate of CCN. Hence, the organics affect CCN activity through their contribution of solute from SOMA and possibly a slight surface tension depression from POMA. In both cases, increasing the size of the CCN will also have a major impact, which is not shown here.
Figure 1. NOAA HYSPLIT (HYbrid Single-Particle Lagrangian Integrated Trajectory) model back trajectories for air masses sampled aboard the Twin Otter on 13 July 2005.
Figure 2. Experimental setup for characterizing the size-resolved Cloud Condensation Nuclei activity of samples analyzed in this study.
Figure 3. Surface tension depression as a function of dissolved carbon concentration. Measurements (closed symbols) and inferred values (open symbols) of the CR (blue triangles) and SC aerosol (green squares) are shown as data points. For comparison, curves are added with data from freshly emitted biomass burning aerosol (solid black; [43]), marine organic aerosol (grey dash-dot; [54]) and dissolved marine organic matter (solid red; [34]).
Figure 5. CCN activity of MASE samples with the addition of (NH4)2SO4. Salt contents are expressed in terms of mass fraction. (a) Cloud residual and (b) sub-cloud samples.
Figure 6. Size of activated CCN generated from the cloud residual samples (CR, solid symbols), sub-cloud samples (SC, open symbols) and (NH4)2SO4 (solid black line). Droplet sizes are presented for CCN with sc equal to the instrument supersaturation.
Table 2. Mass fraction (%) of constituents in the aerosol samples considered in this study. Composition of the inorganic fraction was obtained with an aerosol thermodynamic equilibrium model (ISORROPIA-II), using the ionic composition of Table 1. | 2016-03-14T22:51:50.573Z | 2015-10-28T00:00:00.000 | {
"year": 2015,
"sha1": "afef1a83db7f65459bf296d3caa28d593b7a9f48",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4433/6/11/1590/pdf?version=1446035690",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "afef1a83db7f65459bf296d3caa28d593b7a9f48",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |