Szegö Limit Theorems for Singular Berezin-Toeplitz Operators
We consider Berezin-Toeplitz operators whose multipliers are compactly supported densities carried by a submanifold of $\mathbb{C}^N$. We compute asymptotically the moments of their spectral measures, and we prove Szegö limit theorems in cases when the submanifold is isotropic or co-isotropic, from which Weyl estimates follow. We also obtain asymptotics of the Schatten norms of such operators. Rescaled versions of these operators can be thought of as quantum mechanical mixed states, and our results give the semi-classical limit of their entropy.
Let $\Gamma \subset \mathbb{C}^N \cong \mathbb{R}^{2N}$ be a smooth submanifold. Although some of our examples of Γ are noncompact, we will only be considering symbols or amplitudes on Γ that are compactly supported.
As a motivation for the type of operators we consider in this article, let dσ be the measure induced on Γ by the Lebesgue measure and $L^2(\Gamma)$ the corresponding $L^2$ space. Let
$$R_k : B_k \to L^2(\Gamma), \qquad R_k(\psi) = \psi|_\Gamma, \tag{1}$$
be the restriction operator, which is bounded, and let $R_k^* : L^2(\Gamma) \to B_k$ be its adjoint. Then the self-adjoint operator
$$T_\Gamma := R_k^* \circ R_k : B_k \to B_k \tag{2}$$
is an example of what we will call a singular Toeplitz operator, since it can be considered as a Toeplitz operator with a multiplier that is a distribution supported on Γ, as the following lemma shows: Lemma 1.1. Let $\Pi_k(z,w)$ be the reproducing kernel for $B_k$, that is, the Schwartz kernel of the projection $\Pi_k : L^2(\mathbb{C}^N) \to B_k$. Then, for all $\psi \in B_k$,
$$T_\Gamma(\psi)(z) = \int_\Gamma \Pi_k(z,w)\, \psi(w)\, d\sigma(w).$$
Proof. For $\varphi \in B_k$ we have $\langle T_\Gamma \psi, \varphi\rangle = \langle \psi|_\Gamma, \varphi|_\Gamma\rangle_{L^2(\Gamma)} = \int_\Gamma \psi(w)\,\overline{\varphi(w)}\, d\sigma(w)$. Now use the reproducing property, $\overline{\varphi(w)} = \int_{\mathbb{C}^N} \Pi_k(z,w)\,\overline{\varphi(z)}\, dL(z)$ (using $\overline{\Pi_k(w,z)} = \Pi_k(z,w)$), and using Fubini's theorem we get that
$$\langle T_\Gamma \psi, \varphi\rangle = \int_{\mathbb{C}^N}\left(\int_\Gamma \Pi_k(z,w)\,\psi(w)\, d\sigma(w)\right)\overline{\varphi(z)}\, dL(z).$$
Since this is true for all ψ and φ, the claim follows, as desired.
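For reference, in this normalization $\Pi_k$ is the standard Bargmann reproducing kernel (a standard formula, not displayed in the text above; it is consistent with the Fock-space expression in the next paragraph):
$$\Pi_k(z,w) \;=\; \left(\frac{k}{\pi}\right)^{\!N} e^{\,k\, z\cdot \bar w \;-\; \frac{k}{2}\left(|z|^{2} + |w|^{2}\right)}, \qquad z, w \in \mathbb{C}^N .$$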
More generally, in this paper we consider the following operators: Definition 1.2. Let $a : \Gamma \to \mathbb{C}$ be a smooth function of compact support. Then $T_{a\,d\sigma} : B_k \to B_k$ is the operator
$$T_{a\,d\sigma}(\psi)(z) = \int_\Gamma \Pi_k(z,w)\, \psi(w)\, a(w)\, d\sigma(w).$$
In the theory of Bergman spaces, one considers the holomorphic parts of the functions in $B_k$, that is, the Fock space $\mathcal{B}_k$ for k > 0, consisting of all entire functions f on $\mathbb{C}^N$ such that $f \in L^2(\mathbb{C}^N, dL_k)$, where $dL_k = \left(\frac{k}{\pi}\right)^N e^{-k|z|^2}\, dL(z)$. It is clear that the multiplication operator $M_k f(z) = e^{-k|z|^2/2} f(z)$ is an isomorphism of $\mathcal{B}_k$ onto $B_k$. The operator $M_k^{-1}\, T_{a\,d\sigma}\, M_k$ is the Toeplitz operator on $\mathcal{B}_k$
$$\widetilde{T}_{a\,d\sigma} f(z) = \int_\Gamma e^{k\, z\cdot \bar w}\, f(w)\, \left(\frac{k}{\pi}\right)^{N} e^{-k|w|^2}\, a(w)\, d\sigma(w).$$
Therefore our results have direct translations to corresponding Toeplitz operators in the Fock space. We chose to work on B k for convenience.
A second motivation for the study of these operators comes from quantum mechanics. For each $w \in \mathbb{C}^N$ consider the coherent state at w, $e_w \in B_k$: $e_w(z) = \Pi_k(z,w)$.
In other words, up to an overall normalization factor, $T_{a\,d\sigma}$ is a weighted superposition of the rank-one projectors $P_w$, $w \in \Gamma$. From the expression (8) it follows that
$$\operatorname{Tr}(T_{a\,d\sigma}) = \left(\frac{k}{\pi}\right)^{N} \int_\Gamma a(w)\, d\sigma(w). \tag{9}$$
If a > 0, the operator
$$\rho_a := \frac{1}{\operatorname{Tr}(T_{a\,d\sigma})}\, T_{a\,d\sigma} \tag{10}$$
is non-negative and of trace one. In the language of quantum mechanics, $\rho_a$ is a mixed state or a density matrix.
Mixed states of this kind (with Γ Lagrangian) have been studied by Yohann Le Floch in [9], with $\mathbb{C}^N$ replaced by a compact quantized Kähler manifold. (That work of Le Floch focuses on estimating the so-called fidelity of such states.) Quantum mechanically, they are partial traces of a pure state shared by the Bargmann space and by $L^2(\Gamma)$. In a standard interpretation, by taking a partial trace over $L^2(\Gamma)$, the latter acts as the host of the (unknown) background state space. If $\{\psi_j\}$ is an orthonormal basis of eigenfunctions of $\rho_a$ with eigenvalues $p_j$ (so that $p_j \ge 0$ and $\sum_j p_j = 1$), a possible quantum mechanical interpretation of $\rho_a$ is that, for each j, it represents the state $\psi_j$ with probability $p_j$; see [6], §19.3. Since the greatest eigenvalue of $T_\Gamma$ is
$$\lambda_{\max}(k) = \sup_{\psi\in B_k,\ \|\psi\| = 1} \int_\Gamma |\psi(w)|^2\, d\sigma(w) \tag{11}$$
(as follows from (2)), the eigenvector corresponding to the greatest probability $p_j$ is the function ψ ∈ $B_k$ that is most concentrated on Γ in the $L^2$ sense. Incidentally, the case Γ Lagrangian is special in that the restriction operator R is injective, since Γ is then totally real and middle-dimensional.
In this paper we study the asymptotics of the spectrum of $T_{a\,d\sigma}$ and related operators in the semi-classical limit, that is, as k → ∞. We will obtain the asymptotic behavior of the moments of their spectral measures, and Szegö limit theorems for normalizations of these operators for special classes of submanifolds Γ. We now turn to stating our main results.
1.1. The norm estimate. Our first result is an upper bound on the operator norm of $T_{a\,d\sigma}$: Theorem 1.3. Let Γ be a smooth submanifold of $\mathbb{C}^N$ of dimension d ≤ 2N and $a \in L^\infty(\Gamma)$ of compact support. Then there exists C > 0 such that the operator norm of $T_{a\,d\sigma}$ satisfies
$$\|T_{a\,d\sigma}\| \le C\, k^{\,N - d/2}. \tag{12}$$
As we will see, this estimate is sharp in general. A Szegö limit theorem applies to a family of operators whose spectra are contained in a fixed interval, and this result sets the dependence on k of the normalization one needs to apply to $T_{a\,d\sigma}$ in order to obtain such a theorem.
1.2. The trace of a multiple composition. Next we state a general theorem on the asymptotics of the composition of n operators of the form T adσ associated to an arbitrary submanifold Γ ⊂ C N .
To state our result we need to introduce an endomorphism of the tangent bundle of Γ, which we denote by $K : T\Gamma \to T\Gamma$, defined as follows. Let $\Pi_w : \mathbb{R}^{2N} \to T_w\Gamma$ be the orthogonal projection. Then $K_w : T_w\Gamma \to T_w\Gamma$ is defined by $K_w(u) := \Pi_w(J(u))$. In complex notation $u \cdot v = \sum_{j=1}^N u_j v_j$, and, in accordance with (5), J is multiplication by $\sqrt{-1}$. One can say that K is the projection of J on the tangent spaces of Γ. $K_w$ is skew-adjoint with respect to the Euclidean inner product on $T_w\Gamma$, and therefore its eigenvalues are purely imaginary and come in conjugate pairs. Let us write the non-zero eigenvalues, with multiplicities, as $\pm i\lambda_1, \dots, \pm i\lambda_r$, with $\lambda_j > 0$. Here r is half the rank of $K_w$. In general the λ and r depend on the base point w ∈ Γ, but we have suppressed that dependence from the notation, for simplicity.
If we specialize to the case $a_1 = a_2 = \cdots = a_n = a$, this theorem gives us the asymptotics of the moments of the spectral measure of $T_{a\,d\sigma}$. The theorem is proved by the method of stationary phase, and $\Delta_n$ arises as a factor of the square root of the Hessian of the phase. $\Delta_n$ will be constant (with respect to w ∈ Γ) and explicit in the special cases we will consider below. Its dependence on n, however, introduces a new feature with respect to the "standard" Szegö limit theorem, which is the previous theorem in the case $\Gamma = \mathbb{C}^N$: for each n = 1, 2, ⋯,
$$\operatorname{Tr}\left(T_{a_1\,dL}\circ\cdots\circ T_{a_n\,dL}\right) \sim \left(\frac{k}{\pi}\right)^{N} \int_{\mathbb{C}^N} a_1\cdots a_n\, dL, \qquad k \to \infty.$$
Proof. Since d = 2N, the constant in front of the integral in (15) evaluates to $2^{N(n-1)}\left(\frac{k}{\pi}\right)^N$. On the other hand, in this case K = J, so r = N and all the λ are equal to one. Therefore $\Delta_n = 2^{N(n-1)}$, and the powers of 2 cancel each other out.
This Theorem is stated in [2] with C N replaced by a compact Kähler manifold. It is a "folk" theorem in the C N case, and it clearly holds for more general amplitudes a, for example for a in the Schwartz class. We were not able to find a reference for it in the literature.
More generally, in case Γ is a complex submanifold of real dimension d = 2r, all λ are equal to one, which implies (by a similar argument as before) that $\Delta_n = 2^{r(n-1)}$; Theorem 1.4 then yields the analogue of the previous corollary. This indicates that $T_{a\,d\sigma}$ is really a standard Berezin-Toeplitz operator on a Kähler manifold of real dimension d = 2r, except that it has a large kernel, corresponding to all holomorphic functions vanishing on Γ.
1.3. The Szegö limit theorem. We now specialize Theorem 1.4 to certain classes of submanifolds Γ, which will allow us to obtain finer results. We begin by recalling some notions from symplectic geometry. Given a subspace $V \subset \mathbb{R}^{2N} \cong \mathbb{C}^N$, let us denote its symplectic annihilator by
$$V^{\circ} = \{\, u \in \mathbb{R}^{2N} \; : \; \omega(u,v) = 0 \ \text{for all } v \in V \,\}.$$
If we denote by $V^\perp$ the subspace perpendicular to V with respect to the Euclidean inner product, it is easy to check that $V^\perp = J(V^\circ)$, and conversely. It follows that, if we define K : V → V as above, namely $K(u) = \Pi_V(J(u))$, then K vanishes on $V \cap V^\circ$. This will be a useful fact.
A subspace V is called isotropic if $V \subset V^\circ$, and co-isotropic if $V^\circ \subset V$; a submanifold Γ is isotropic (respectively, co-isotropic) if each tangent space $T_w\Gamma$ is. By the skew symmetry of ω it is automatic that any curve is isotropic and any hypersurface is co-isotropic.
We now assume that Γ is isotropic or co-isotropic. The Szegö limit theorem in both cases can be stated at the same time, if we introduce the following Notation: Let d be the dimension of Γ. Then we let $\tilde d = d$ if Γ is isotropic, and $\tilde d = 2N - d$ if Γ is co-isotropic. In order to obtain a Szegö limit theorem we have to normalize $T_{a\,d\sigma}$ as follows: In both the isotropic and co-isotropic cases, define
$$S_{a\,d\sigma} := \left(\frac{\pi}{2k}\right)^{N - d/2} T_{a\,d\sigma}.$$
By the norm estimate, $\|S_{a\,d\sigma}\| = O(1)$.
We will prove that Theorem 1.4 implies the following: Let ϕ be any function such that, for some p > 0, $\varphi(t)/t^p \in C[0, R]$. Then the limit formula (20) for the normalized traces $\operatorname{Tr}(\varphi(S_{a\,d\sigma}))$ holds, where $O_{-\alpha}$ is the operator appearing on the right-hand side of (20). Remark 1.10. The operators $O_{-\alpha}$ satisfy, for p > 0, a composition identity; for α > 0 this follows from the change of variables $x = p\log(t/s)$. Remark 1.11. If d > 0, the right-hand side of (20) can be rewritten; by Fubini's theorem, we can also express it as the integral of ϕ with respect to an absolutely continuous measure $d\nu_a = D_a\, ds$ on [0, R], so that after normalization
$$\operatorname{Tr}(\varphi(S_{a\,d\sigma})) \longrightarrow \int_0^{\max(a)} \varphi(s)\, D_a(s)\, ds,$$
with a density $D_a$ given a.e. This density is supported in [0, max(a)] and is bounded on any closed interval I ⊂ (0, max(a)] if d ≥ 2. In case d = 1 it can be shown that $D_a$ is continuous at regular values of a. The graphs of $D_a$ in case a ≡ 1 for d = 1 and d = 4 appear in the figure below. For all values of d the density of eigenvalues concentrates at zero, which is expected by the compactness of the operators. For d = 1 the density of eigenvalues concentrates at s = max(a) as well (although it remains integrable at max(a)). The case d = 2 is particularly simple, as the log function disappears. Remark 1.12. The previous theorem includes the extreme co-isotropic case $\Gamma = \mathbb{C}^N$. Since $O_0$ is the identity, the limit of the spectral measure of $S_{a\,dL}$ is just the push-forward by a of the volume form on $\mathbb{R}^{2N}$. A similar statement can be derived in the case when Γ is a complex submanifold, but we will not write it explicitly.
1.4. Corollaries and further results. The first corollary is a Weyl estimate for the number $N_I(k)$ of eigenvalues of $S_{a\,d\sigma}$ lying in an interval I ⊂ (0, max(a)]. Proof. $N_I(k) = \operatorname{Tr}[\mathbf{1}_I(S_{a\,d\sigma})]$, where $\mathbf{1}_I$ is the characteristic function of I. It is easy to construct two sequences of trapezoidal functions $\{\varphi_n\}$ and $\{\psi_n\}$ such that $\varphi_n \le \mathbf{1}_I \le \psi_n$ for all n, and such that $\lim_{n\to\infty}\varphi_n = \mathbf{1}_I = \lim_{n\to\infty}\psi_n$ pointwise, except at the endpoints of I. For each n and for each k we have $\operatorname{Tr}(\varphi_n(S)) \le N_I(k) \le \operatorname{Tr}(\psi_n(S))$. Now multiply through by $2^{\tilde d/2}(\pi/k)^{d/2}$ and take limits as k → ∞ to get that, for each n, the corresponding inequalities hold in the limit. By the Lebesgue dominated convergence theorem (recalling that $D_a(s)\,ds$ is absolutely continuous), the upper and lower bounds converge to the same value, and the result follows.
In case a ≡ 1, an integration by parts allows one to compute the integral. The result is as follows: Corollary 1.14. Assume Γ is compact, isotropic or co-isotropic, and d > 0. Let I = [λ, µ] ⊂ (0, 1] and let $N_I(k)$ denote the number of eigenvalues of $S_{d\sigma}$ in I. Then $N_I(k)$ has explicit leading asymptotics as k → ∞. Asymptotics of the Schatten norms. We will also prove the following result on the Schatten norms: Theorem 1.15. Let Γ be isotropic or co-isotropic and let a be a smooth, compactly supported, complex-valued function on Γ. Then, for every 0 < p < ∞, the rescaled traces $k^{-d/2}\operatorname{Tr}\left(|S_{a\,d\sigma}|^p\right)$ have a limit as k → ∞; in particular, $\|S_{a\,d\sigma}\|_p = O\!\left(k^{d/(2p)}\right)$. The limit of the entropy of a mixed state. As a corollary of the Szegö theorem we can estimate the entropy of the mixed states mentioned above. Let us fix Γ ⊂ $\mathbb{R}^{2N}$, either an isotropic or co-isotropic submanifold with d > 0, and $a \in C^\infty_0(\Gamma)$ such that a ≥ 0 and $\int_\Gamma a\, d\sigma = 1$. Then $\rho_a$ is a mixed state. Let us denote by $p_1 \ge p_2 \ge \cdots \ge 0$ the eigenvalues of $\rho_a$, listed with multiplicities.
We are interested in the information entropy of $\rho_a$, that is, $-\sum_j p_j \log(p_j)$.
Our Szegö limit theorems are on the spectra of the normalized operators S. Taking traces in (27) one gets $C_d\, k^{-d/2} \sum_{j=1}^\infty \mu_j = 1$, and so the entropy splits into two terms. By the Szegö theorem, using the test function $\varphi(s) = s\log(s)$ (which is in our class of test functions), the second term in (28) is O(1). However, the first term is universal (it only depends on d). After a short calculation of constants one can conclude the asymptotics of the entropy. This result has the same form as the general relationship between the differential entropy of a continuous distribution and the entropy of its discretization in bins of a given size. Incidentally, we can directly apply Theorem 1.15 to obtain the limit of the trace distance between two mixed states associated with the same Γ; it is just the $L^1$ distance. Remark 1.17. We finish this subsection with the following general remark. In all the previous statements, the only difference between the isotropic and the co-isotropic cases is that, in the latter, d is the codimension of Γ instead of its dimension. This can be interpreted as follows. Assume Γ is co-isotropic. Then Γ is foliated by leaves tangent to the spaces $T_w\Gamma^\circ$, w ∈ Γ. The dimension of the leaves is the codimension of Γ, and the leaves are isotropic submanifolds of $\mathbb{R}^{2N}$. It is clear from the definition that in this case $T_{a\,d\sigma}$ can be thought of (albeit non-rigorously) as an integral of singular Toeplitz operators, one for each isotropic leaf. By Theorem 1.7, the "crossed terms" between these isotropic operators do not contribute to the asymptotics of Tr[ϕ(S)]. The fact that the co-isotropic case formally boils down to a sum of isotropic cases indicates that the construction of mixed states, as we are considering them here, is more naturally adapted to the isotropic setting. This is in agreement with the general philosophy that single quantum states are not naturally associated with co-isotropic submanifolds. Rather, if the null foliation of a co-isotropic submanifold is fibrating, π : Γ → X, then X is symplectic, and to Γ one ought to associate the equivalent of a quantization-of-X-worth of quantum states ("quantization commutes with reduction").
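The relationship alluded to is the standard one relating discrete and differential entropy: if a probability distribution arises by sampling a continuous density ρ in bins of size Δ, then (a standard fact stated here for orientation; ρ and Δ are generic symbols, not the paper's notation)
$$-\sum_j p_j \log p_j \;\approx\; -\int \rho(x)\,\log \rho(x)\, dx \;+\; \log\frac{1}{\Delta}, \qquad p_j \approx \rho(x_j)\,\Delta .$$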
1.5. Examples. We present here some examples. (For examples with $\mathbb{C}^N$ replaced with a compact Kähler manifold see [9], where the harmonic oscillator is also treated.) 1.5.1. A circle in $\mathbb{C}$. Let Γ ⊂ $\mathbb{C}$ be the circle of radius r centered at the origin, with a ≡ 1. Clearly $T_\Gamma$ must commute with the $S^1$ representation on $B_k$ induced by its action on $\mathbb{C}$, so it is diagonal in the monomial basis consisting of $|n\rangle = z^n e^{-k|z|^2/2}$, n = 0, 1, ….
One computes that
$$\lambda_n = 2kr\, e^{-kr^2}\, \frac{(kr^2)^n}{n!},$$
from which it follows that $\operatorname{Tr}(T_\Gamma) = \sum_n \lambda_n = 2kr$, the length of the circle times k/π, in agreement with (9). The associated mixed state is $\rho_\Gamma := \frac{1}{2kr} T_\Gamma$. Its eigenvalues are just $p_n = e^{-kr^2}\,(kr^2)^n/n!$. With respect to n, $p_n$ is exactly a Poisson distribution with λ = kr².
To estimate the operator norm of $T_\Gamma$ we find the maximum eigenvalue $\lambda_{\max}$ (the mode of the Poisson distribution). For this one considers the quotients $\lambda_n/\lambda_{n-1} = kr^2/n$. It follows that $\lambda_{\max}$ corresponds to $n \cong kr^2$, which, in the present context, can be seen as a kind of Bohr-Sommerfeld condition. In fact, if we impose that k be of the form $k = n/r^2$, n = 1, 2, ⋯, then by Stirling's formula $\lambda_{\max}(k) \sim \sqrt{2k/\pi}$, showing that (12) is sharp in this case. The associated eigenvector $|n\rangle$ concentrates semi-classically on Γ. The normalized operator is $S_\Gamma = \sqrt{\pi/(2k)}\; T_\Gamma$ and has greatest eigenvalue ∼ 1, and the Szegö limit theorem in this case reads as in Theorem 1.7 with d = 1, for all functions ϕ such that $\varphi(s)/s^p$ is continuous on [0, 1] for some p > 0.
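A quick numerical illustration of this example, a sketch assuming the eigenvalue formula and the normalization $\sqrt{\pi/(2k)}$ read off above: the trace equals 2kr, the mode sits at n ≈ kr², and the top eigenvalue of $S_\Gamma$ tends to 1.

```python
import numpy as np
from math import lgamma, pi, sqrt

# Eigenvalues of T_Gamma for the circle of radius r: lambda_n = 2kr * Poisson(kr^2) weights.
r = 1.3
for k in [10, 100, 1000, 10000]:
    lam = k * r**2
    n = np.arange(0, int(lam + 40 * sqrt(lam) + 50))
    # log Poisson weights, computed stably via lgamma
    logp = -lam + n * np.log(lam) - np.array([lgamma(m + 1) for m in n])
    eig = 2 * k * r * np.exp(logp)              # eigenvalues of T_Gamma
    print(f"k={k:6d}  Tr={eig.sum():9.3f} (exact {2*k*r:.3f})  "
          f"mode n={n[eig.argmax()]} (~kr^2={lam:.0f})  "
          f"max eig of S={sqrt(pi/(2*k)) * eig.max():.4f}")   # -> 1 as k grows
```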
We can generalize the previous example to a product of d circles in $\mathbb{C}^N$. Once again the monomials $z^n e^{-k|z|^2/2}$, $n = (n_1, \dots, n_N) \in \mathbb{N}^N$, are eigenfunctions of $T_\Gamma$. The eigenvalues are simply the products of the one-dimensional eigenvalues, with the conventions $|r|^2 = \sum_j r_j^2$, $n! = \prod_j n_j!$, $|n| = \sum_j n_j$. Once again, estimating the greatest eigenvalue shows that the operator norm of $T_\Gamma$ is $O(k^{N - d/2})$, showing that in general (12) is sharp.
1.5.2. A symplectic but non-complex example. This example shows that $\Delta_n(w)$ does not have to be constant with respect to w ∈ Γ. Let $z = (z_1, z_2)$ be the variable in $\mathbb{C}^2$, and let $z_j = x_j + \sqrt{-1}\, y_j$. Let Γ be defined by the equations
$$\Gamma : \quad x_2 = \tfrac{1}{2} x_1^2 \quad\text{and}\quad y_2 = 0.$$
Then $(x_1, y_1) \mapsto \left(x_1,\, y_1,\, \tfrac{1}{2}x_1^2,\, 0\right)$ is a parametrization of Γ, and $\{(1, 0, x_1, 0),\ (0, 1, 0, 0)\}$ is the moving frame associated to it. Clearly Γ is a symplectic submanifold. An elementary calculation shows that the matrix of K in this parametrization is
$$W = \begin{pmatrix} 0 & -\dfrac{1}{1+x_1^2} \\[4pt] 1 & 0 \end{pmatrix},$$
and therefore r = 1 and $\lambda = (1 + x_1^2)^{-1/2}$, which depends on the base point.

The paper is organized as follows. In the next section we compute the covariant symbol of $T_{a\,d\sigma}$ and its kernel, and derive some easy consequences. Theorem 1.3 is proved in §3. In §4 we establish Theorem 1.4 and the Szegö theorem (Theorem 1.7), first for polynomials and then for general test functions ϕ. A key step in this extension is to show that, for every p ∈ (0, 1), $\operatorname{Tr}(S^p_{a\,d\sigma})$ is $O(k^{d/2})$, which we do in §4.3. In §5 we prove the theorem on the Schatten norms. Finally, in §6 we consider the case when Γ is a Lagrangian submanifold satisfying the Bohr-Sommerfeld condition and obtain lower bounds for the maximum eigenvalue of $T_{a\,d\sigma}$, by using as test functions Lagrangian pure states associated with Γ (see Proposition 6.4).
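A numerical cross-check of this computation (the matrix W above is our reconstruction of the lost display; the code below computes the matrix of K in the given frame via $GW = H$ as in the appendix and verifies r = 1 and $\lambda = (1+x_1^2)^{-1/2}$):

```python
import numpy as np

# K = projection of J onto T_w Gamma for Gamma = {x2 = x1^2/2, y2 = 0} in C^2 ~ R^4.
# In the (non-orthonormal) frame {g1, g2}: W = G^{-1} H, G = Gram matrix,
# H_ij = omega(g_i, g_j) = g_i . J(g_j).
J = np.array([[0, -1, 0, 0],
              [1,  0, 0, 0],
              [0,  0, 0, -1],
              [0,  0, 1,  0]])   # multiplication by i in coordinates (x1, y1, x2, y2)

def K_matrix(x1):
    g1 = np.array([1.0, 0.0, x1, 0.0])    # d/dx1 of the parametrization
    g2 = np.array([0.0, 1.0, 0.0, 0.0])   # d/dy1
    F = np.stack([g1, g2])                # frame as rows
    G = F @ F.T                           # Gram matrix
    H = F @ J @ F.T                       # H_ij = g_i . J(g_j)
    return np.linalg.solve(G, H)

for x1 in [0.0, 1.0, 2.0]:
    eig = np.linalg.eigvals(K_matrix(x1))
    print(f"x1={x1}: eigenvalues {np.round(eig, 4)}, "
          f"lambda={abs(eig[0].imag):.4f}, 1/sqrt(1+x1^2)={1/np.sqrt(1+x1**2):.4f}")
```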
Our proofs are direct, based on the explicit formula for the reproducing kernel and the method of stationary phase. It is expected however that there is a symbol calculus for a class of operators that includes the operators treated here, and generalizations. (For example, one can envision forming mixed states by integrating projectors over more general coherent states.) The symbols will be symplectic spinors, as in [4] and [7]. Such a symbol calculus will allow us to deal with many other issues related to these operators, for example propagation under a quantum Hamiltonian, as well as the extension of the theory to quantized compact Kähler manifolds. Since the Szegö projector on such manifolds has the same form asymptotically as in the Euclidean case, it is clear that the results presented here will take the same general form in that setting.
Symbols and kernels
We keep the notation of the previous section.
2.1. The covariant symbol. By definition, the covariant (Wick or Berezin) symbol of $T_{a\,d\sigma}$ is the function $\widehat{a\,d\sigma}_k(z) := \langle T_{a\,d\sigma}\, e_z, e_z\rangle / \langle e_z, e_z\rangle$. A straightforward calculation shows:
$$\widehat{a\,d\sigma}_k(z) = \left(\frac{k}{\pi}\right)^{N} \int_\Gamma e^{-k|z-w|^2}\, a(w)\, d\sigma(w).$$
Note that in particular $\widehat{a\,d\sigma}_k$ is exponentially small as k → ∞ unless z ∈ Γ. Moreover, the knowledge of $\widehat{a\,d\sigma}_k$ for every k determines $a\,d\sigma$. In fact, the previous expression shows that $\widehat{a\,d\sigma}_k$ is the heat evolution at t = 1/4k of the measure $a\,d\sigma$ in $\mathbb{C}^N$, and therefore $\lim_{k\to\infty}\widehat{a\,d\sigma}_k\, dv = a\,d\sigma$ in the $w^*$ topology of $C_0(\mathbb{C}^N)^*$. Also, by the general theory of Berezin, the trace of the operator is $(k/\pi)^N$ times the integral of its covariant symbol, from which one can easily recover (9).
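Indeed, the heat-evolution claim is a one-line check against the Euclidean heat kernel on $\mathbb{R}^{2N}$, which at time t is $(4\pi t)^{-N} e^{-|x|^2/4t}$:
$$\left(\frac{k}{\pi}\right)^{N} e^{-k|z-w|^{2}} \;=\; (4\pi t)^{-N}\, e^{-|z-w|^{2}/(4t)}\,\Big|_{\, t = \frac{1}{4k}},$$
since $(4\pi t)^{-N} = (\pi/k)^{-N} = (k/\pi)^N$ at t = 1/4k.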
Another general formula due to Berezin is that the trace of the composition of two operators is the integral of the product of the covariant symbol of one times the Toeplitz (or anti-Wick, or contravariant) symbol of the other. This leads to:
$$\operatorname{Tr}\left(T_{a\,d\sigma}\, T_{b\,d\sigma}\right) = \left(\frac{k}{\pi}\right)^{2N} \iint_{\Gamma\times\Gamma} e^{-k|w-w'|^{2}}\, a(w)\, b(w')\, d\sigma(w)\, d\sigma(w'). \tag{35}$$
Below we generalize (35) to a composition of n operators.
It is straightforward to estimate the trace of an ordinary Berezin-Toeplitz operator composed with $T_{a\,d\sigma}$. Let $H : \mathbb{C}^N \to \mathbb{C}$ be smooth and, say, of polynomial growth together with all its derivatives. Define, for all ψ ∈ $B_k$, $T_H(\psi) = \Pi_k(H\psi)$. Then we have
$$\operatorname{Tr}\left(T_H\, T_{a\,d\sigma}\right) = \left(\frac{k}{\pi}\right)^{2N} \int_{\mathbb{C}^N}\!\int_\Gamma e^{-k|z-w|^{2}}\, H(z)\, a(w)\, d\sigma(w)\, dL(z),$$
which, by the method of stationary phase, implies the following:
$$\operatorname{Tr}\left(T_H\, T_{a\,d\sigma}\right) = \left(\frac{k}{\pi}\right)^{N}\left(\int_\Gamma H\, a\, d\sigma + O(k^{-1})\right)$$
(in fact there is a full asymptotic expansion of this trace).
2.2. The Wick kernel. Let $A : B_k \to B_k$ be any bounded operator. By the reproducing property of the coherent states,
$$A(\psi)(z) = \int_{\mathbb{C}^N} A(z,w)\, \psi(w)\, dL(w), \qquad A(z,w) := \langle A(e_w), e_z\rangle.$$
This shows that A is determined by its Wick kernel A(z, w). Furthermore, (34) is equivalent to the corresponding statement for the kernel. The following is immediate: Let $\Gamma \subset \mathbb{C}^N$ be a submanifold and dσ a fixed positive measure on it. For each $a \in C^\infty_0(\Gamma)$, the Berezin kernel (37) of $T_{a\,d\sigma}$ is
$$T_{a\,d\sigma}(z,w) = \int_\Gamma \Pi_k(z,u)\, \Pi_k(u,w)\, a(u)\, d\sigma(u).$$
For future use we record here a formula, (41), for the trace of a composition of n singular Toeplitz operators associated with the same $\Gamma \subset \mathbb{C}^N$, in which ω is the symplectic form $\omega(z,w) = \frac{1}{2i}\left(z\cdot\bar w - w\cdot\bar z\right)$. Proof. Let us denote by K the Wick kernel of the composition $T_{a_1 d\sigma}\circ\cdots\circ T_{a_n d\sigma}$. Then, by induction on (38), one can prove that
$$K(z,w) = \int_{\Gamma^n} \Pi_k(z,\zeta_1)\,\prod_{i=1}^{n-1}\Pi_k(\zeta_i,\zeta_{i+1})\; \Pi_k(\zeta_n, w)\; a_1(\zeta_1)\, d\sigma(\zeta_1)\cdots a_n(\zeta_n)\, d\sigma(\zeta_n).$$
The desired trace, (41), is the integral $\int_{\mathbb{C}^N} K(z,z)\, dL(z)$. Using the reproducing property to integrate out z, the product of reproducing kernels closes into a cycle over $\Gamma^n$, and the result follows.
Proof of the norm estimate
We now prove Theorem 1.3. We need the following sub-mean-value property of elements of the Fock space, in which
$$\gamma(s) = \frac{\Theta_N}{2}\int_0^s t^{N-1} e^{-t}\, dt$$
and $\Theta_N$ is the surface area of the unit sphere in $\mathbb{C}^N$.
Proof. Let S be the unit sphere in $\mathbb{C}^N$ and τ the Lebesgue measure on S, so that $\Theta_N = \tau(S)$. Then the subharmonicity of the function $w \mapsto |f(a + \rho w)\, e^{\rho w\cdot a}|^2$, for a and ρ fixed, implies the claim. Proof (of Theorem 1.3). Assume without loss of generality that a ≥ 0. Let r > 0, and let $\{z_n\}$ be an enumeration of the lattice $r\mathbb{Z}^{2N}$ in $\mathbb{C}^N$. For every λ > 0, there exists a number L ∈ $\mathbb{N}$ such that no more than L of the balls $B(z_n, \lambda r)$ intersect at any point. The constant L is independent of r. Let dµ = a dσ.
Notice that, since Γ is smooth and $a \in L^\infty$, we have $\mu(B(z, r)) \le M\, \|a\|_\infty\, r^{d}$.
Since $\mathbb{C}^N \subset \bigcup_n B(z_n, \sqrt{2N}\, r)$ (the diameter of a cube in $\mathbb{R}^{2N}$ of side r is $\sqrt{2N}\, r$), if we let L correspond to λ = 2N + 1 in the argument above, we have the corresponding estimate for $\psi = f(z)\, e^{-k|z|^2/2} \in B_k$. If we choose $kr^2 = 1$ we obtain that, for a constant C > 0, $\langle T_{a\,d\sigma}\psi, \psi\rangle \le C\, k^{N-d/2}\, \|\psi\|^2$ for every ψ ∈ $B_k$, and the theorem follows.
Proof of the Szegö limit theorem
The proof of Theorem 1.7 begins with the proof of Theorem 1.4. After that we show that the traces of the S operators are bounded, and a final argument concludes the proof.
We will estimate this trace using the method of stationary phase.
Since the product of the reproducing kernels decays exponentially off the diagonal, one can easily show that the integrand of (41), integrated over the complement of any neighborhood of the diagonal $\Gamma_\Delta \subset \Gamma^n$, is exponentially decreasing. Therefore, asymptotically to all polynomial orders, we can restrict our attention to a small neighborhood of $\Gamma_\Delta$. In addition we will show that (∗) $\Gamma_\Delta$ is a non-degenerate manifold of critical points of Φ.
In particular, in a sufficiently small neighborhood of Γ ∆ any critical point of Φ is in Γ ∆ .
We now compute the Hessian of $\psi_2$, using (53) and (54). The terms containing second derivatives of γ appear in pairs that cancel each other out, by the skew symmetry of ω, and therefore do not contribute. It follows that the Hessian of $\psi_2$ involves only first derivatives of γ.
In conclusion, the Hessian of the full phase $\psi = i\psi_1 + \psi_2$ with respect to the s variables is the block tri-diagonal Toeplitz matrix $\operatorname{Hess}_{n-1}(t) = i\, S_{n-1}(t)$, where $S_{n-1}(t)$ is given by (59). It is shown in the appendix (see (110)) that its determinant can be computed in closed form; in particular, the determinant of this matrix is positive. Since the s variables are variables normal to $\Gamma_\Delta$, this proves the claim (∗).
We now let $\{\chi_\alpha\}$ denote a partition of unity of a neighborhood of $\Gamma_\Delta$ in $\Gamma^n$, subordinate to a cover for which the previous calculations apply, and such that $\sum_\alpha \chi_\alpha \equiv 1$ in a neighborhood of the support of $\prod_{j=1}^n a_j(\zeta_j)$. The local contributions then add up, since $\sum_\alpha \chi_\alpha = 1$ on the support of $\prod_j a_j$.
4.2. The Szegö limit theorem for polynomials. We now specialize to the case when Γ is isotropic or co-isotropic.
Let us now assume that Γ is co-isotropic, so that $\tilde d = 2N - d$. Once again, by (15) we can conclude the corresponding trace asymptotics; we now compute $\Delta_n$. By (17), at each point of Γ the endomorphism K is determined as follows. Recall that r is half the rank of K, and since $T_w\Gamma^\circ$ and $T_w\Gamma$ have complementary dimension, the dimension of Γ must equal d = N + r. In particular r is constant. Note that $\tilde d = N - r$.
To continue we need a lemma from linear algebra: if $V \subset \mathbb{R}^{2N}$ is co-isotropic, then $\Pi(J(V)) = V \cap J(V)$. Proof. The inclusion ⊃ is obvious. To prove the reverse inclusion, let $v = \Pi(J(u))$ with u ∈ V.
Then v = J(u) + w with w ∈ V ⊥ = J(V • ). Therefore ∃a ∈ V • such that w = J(a) and finally v = J(u + a) with u + a ∈ V .
Therefore $V = E \oplus V^\circ$, where $E := V \cap J(V)$, and the mapping K : V → V is diagonal with respect to this decomposition. It is zero on $V^\circ$ and agrees with J on E. Therefore there exists a basis of V in which the matrix for the transformation K is the canonical one, namely the direct sum of the zero matrix on $V^\circ$ and the standard complex structure on E. In particular all the eigenvalues λ are equal to one, and one computes $\Delta_n$ explicitly.
It follows that, since the constant in front of the integral is $2^{d/2 - N}\left(\frac{k}{\pi}\right)^{d/2} = 2^{-\tilde d/2}\left(\frac{k}{\pi}\right)^{d/2}$, the proposition is proved.
Taking $a_1 = a_2 = \cdots = a_n = a$, using the linearity of the trace, and since $O_{-n/2}(s^n)(t) = t^n / n^{d/2}$, we immediately obtain: Corollary 4.3. The Szegö limit Theorem 1.7 holds for ϕ a polynomial without constant term.
4.3. Bounding the traces. Let us now assume that a ≥ 0 and that Γ is isotropic or co-isotropic. The next step in order to obtain Theorem 1.7 is to show that the rescaled traces $k^{-d/2}\operatorname{Tr}(S^p_{a\,d\sigma})$ are bounded as k → ∞. If p ≥ 1, this clearly follows from Corollary 4.3. Our immediate goal is to extend this bound to 0 < p < 1. We will prove: Lemma 4.4. Assume a ≥ 0 is compactly supported and that Γ is isotropic or co-isotropic. Then, for every p ∈ (0, 1), there is C > 0 such that $k^{-d/2}\operatorname{Tr}(S^p_{a\,d\sigma}) \le C$ for all k. As we will see, the proof reduces to estimating a certain integral as k → ∞.
4.3.1. Localization to a tubular neighborhood. Let us introduce a tubular neighborhood of Γ, $\mathcal{N}_\varepsilon = \{\, z \in \mathbb{C}^N : d(z, \Gamma) < \varepsilon \,\}$, where $d(z, \Gamma) = \min_{w\in\Gamma} |z - w|$ is the distance from z to Γ, and ε > 0 is small enough so that $\mathcal{N}_\varepsilon$ is a bundle $\mathcal{N}_\varepsilon \to \Gamma$ whose fibers are (2N − d)-dimensional disks. We will prove: Lemma 4.5.
Proof. Let $W_{\mathcal{N}} = \mathbb{C}^N \setminus \mathcal{N}_\varepsilon$ be the complement of $\mathcal{N}_\varepsilon$, and partition it as follows, where R > 0 is chosen large enough so that |z| ≥ R and w ∈ Γ together imply |z − w| ≥ |z|/2.
It is clear that there is C > 0 such that and therefore and it is not hard to see that the last integral is O(k −∞ ) as well.
4.3.2. Integration over $\mathcal{N}_\varepsilon$. Using a (finite) partition of unity we can assume, without loss of generality, that the support of a is contained in the image of a parametrization γ of Γ. Let us introduce coordinates $s = (s_1, \dots, s_d) \in \mathbb{R}^d$ and $z = (x_1, \dots, x_{2N}) \in \mathbb{R}^{2N} \cong \mathbb{C}^N$. Also introduce smooth orthonormal vector fields $E_j$, j = 1, …, ν := 2N − d, defined on the image of γ, which at each point p ∈ Γ span the normal space $T_p\Gamma^\perp$. We will regard the $E_j$ as functions of s via the parametrization. Letting $B(0, \varepsilon) = \{t \in \mathbb{R}^\nu ; |t| < \varepsilon\}$, we can define a parametrization of $\mathcal{N}_\varepsilon$ by $z(s,t) = \gamma(s) + \sum_j t_j E_j(s)$. We now apply the method of stationary phase to the integral in question, where $h(u)\, du = d\sigma$. Choosing ε small enough we can guarantee that the only critical point of the phase is at u = s, which is the minimum of the function $u \mapsto |z(s,t) - \gamma(u)|^2$. Since the normal moving frame $\{E_j\}$ is orthonormal, the method of stationary phase gives the stated asymptotics uniformly in (s, t). Here H(s, t) is the determinant of the Hessian of the phase at u = s. On the other hand, by the assumption on the support of a, the trace can be written as an integral over the coordinates (s, t), where $W(s,t)\, ds\, dt = dL(z)$. Therefore, for k sufficiently large, the desired bound holds. Recalling that ν = 2N − d, we obtain: Lemma 4.6. There is C > 0 such that the bound holds for all k sufficiently large. Proof of Lemma 4.4. Let $\widehat{T^p}$ be the Berezin transform (or Berezin symbol) of $T^p$, namely $\widehat{T^p}(z) = \langle T^p k_z, k_z \rangle$, where $k_z = e_z / \|e_z\|$ are the normalized coherent states. Then $\operatorname{Tr}(T^p) = \left(\frac{k}{\pi}\right)^N \int_{\mathbb{C}^N} \widehat{T^p}(z)\, dL(z)$. On the other hand, for 0 < p ≤ 1, we have that if e is a unit vector then $\langle T^p e, e\rangle \le \langle T e, e\rangle^p$ (see for example [11], Proposition 1.31). Therefore the trace of $T^p$ is bounded by the integral of the p-th power of the Berezin transform of T. If we recall the computation of the Berezin transform of $T_{a\,d\sigma}$ in §2.1, we see that the required bound follows.
4.4. End of the proof of Theorem 1.7. Let us fix $a \in C^\infty_0(\Gamma)$, a ≥ 0. In the remainder of this section we denote $S_{a\,d\sigma}$ simply by S.
We begin by introducing the functional $\mathcal{F}$ that appears on the right-hand side of (20). We will need: Lemma 4.8. For every polynomial f with f(0) = 0 and every p ∈ (0, 1), with the normalization of Theorem 1.7, $\operatorname{Tr}[S^p f(S)] \to \mathcal{F}(t^p f(t))$.
Proof. Let [0, R] be an interval containing the spectra of $S_{a\,d\sigma}$ for all k, and let $(g_j)$ be a sequence of polynomials such that $\lim_{j\to\infty} g_j(t) = t^p$ uniformly on [0, R].
We can estimate the difference of traces by two terms, I and II. Let $\tilde f$ be the polynomial whose coefficients are the absolute values of the coefficients of f. Then $\operatorname{Tr}(|f(S)|) \le \operatorname{Tr}\tilde f(S)$, and therefore, using the Szegö theorem for polynomials (or just the estimates for the traces of powers of S), we have a uniform bound, for some C > 0 and all k. Let ε > 0. There exists j large enough so that the two approximation conditions hold. The first of these conditions, together with (75), implies that I < ε/3. Now, for each j, the quantity II tends to zero as k → ∞, by Corollary 4.3. Therefore the left-hand side of (74) is less than ε if k is large enough.
End of the proof of Theorem 1.7. Let $\varphi \in \mathcal{C}$, that is, for some p ∈ (0, 1), the function $\psi(t) := \varphi(t)/t^p$ is continuous on [0, R]. In the argument below p can be arbitrary in (0, 1); therefore, without loss of generality, we can assume that ψ(0) = 0.
Let $(f_\ell)$ be a sequence of polynomials such that $f_\ell \to \psi$ uniformly on [0, R]. Without loss of generality we can assume that $f_\ell(0) = 0$ for all ℓ. We have that $S^p f_\ell(S) \to \varphi(S)$ in the operator norm, uniformly in S provided S remains bounded. Also, by Lemma 4.7 the corresponding traces converge. We can now estimate: by Lemma 4.4 the numbers $k^{-d/2}\operatorname{Tr}(S^p)$ are bounded; therefore there exists C > 0 such that the error terms are bounded, uniformly in k. Applying Lemma 4.8 (which is possible since $f_\ell(0) = 0$ for all ℓ) and exchanging the limits ℓ → ∞, k → ∞, we get $\operatorname{Tr}[\varphi(S)] = \mathcal{F}(\varphi(t))$ after normalization, as desired.
Asymptotics of the Schatten norms
When Γ is isotropic or co-isotropic one has (64), the limit formula for $\operatorname{Tr}(S_{a_1 d\sigma}\cdots S_{a_n d\sigma})$. Hence, for any polynomial p such that p(0) = 0, the analogous limit holds. Recall that a bounded operator S in a Hilbert space belongs to the Schatten class $\mathcal{S}_r$, with r > 0, if $\|S\|_r := \left(\operatorname{Tr}|S|^r\right)^{1/r} < \infty$, where $|S| = (S^*S)^{1/2}$. $\|\cdot\|_r$ is a norm for r ≥ 1, and for 0 < r < 1 it is a quasi-norm, satisfying $\|S+T\|_r^r \le \|S\|_r^r + \|T\|_r^r$. After these preliminary remarks we now prove Theorem 1.15.
Proof. For $a = a_1 + i a_2$ a smooth complex-valued function on Γ, we can write a as a linear combination of the four nonnegative functions $(C \pm a_1)/2$ and $(C \pm a_2)/2$, where $C = \|a\|_\infty$, so all the operators in this linear combination have positive symbols. From (80) and Lemma 4.4, it follows that $k^{-d/2}\operatorname{Tr}\left((S^*_{a\,d\sigma} S_{a\,d\sigma})^{r/2}\right)$ is bounded in k for any r > 0. Denote $A = S^*_{a\,d\sigma} S_{a\,d\sigma}$. Let ε > 0 and consider a polynomial $p_1(t)$ approximating $t^{r/4}$, where $C_1(r) = \sup_k 2^{\tilde d/2}\left(\frac{k}{\pi}\right)^{-d/2}\operatorname{Tr}(A^{r/4})$. Then we have the corresponding estimate. Similarly, let $C_2 = \sup_k 2^{\tilde d/2}\left(\frac{k}{\pi}\right)^{-d/2}\operatorname{Tr}(|p_1(A)|) < \infty$, and let $p_2$ be a polynomial approximating in the same sense. Thus we have found a polynomial $p = p_1 p_2$ for which the traces are close. Notice that we can choose the $p_i$ so that $\|t^{r/4} - p_i\|_{L^\infty[0,R]}$ is small enough to have the required bounds. Finally, by (79) there exists M such that the remaining difference is less than ε if k > M, and the proof is complete.
Estimating λ max when Γ is a Bohr-Sommerfeld Lagrangian
In this section we obtain an asymptotic lower bound for the greatest eigenvalue of $T_{a\,d\sigma}$, under the assumption that a is real-valued and Γ satisfies a Bohr-Sommerfeld condition. We begin by recalling that the greatest eigenvalue of $T_{a\,d\sigma}$ is
$$\lambda_{\max}(k) = \sup_{\psi\in B_k\setminus\{0\}} \frac{\int_\Gamma |\psi|^2\, a\, d\sigma}{\|\psi\|^2}.$$
The Bohr-Sommerfeld condition allows us to construct a sequence $\{\psi_k \in B_k ;\ k = 1, 2, \cdots\}$ whose micro-support is Γ, and the lower bound is obtained by considering the asymptotics as k → ∞ of the corresponding Rayleigh quotients. In this section we work with the symplectic form on $\mathbb{C}^N$, $\Omega = -2\omega$, which in canonical coordinates becomes $\Omega = \sum_j dp_j \wedge dq_j$. This rescaling of ω of course does not change the notions of isotropic/co-isotropic.
The reproducing kernel of $B_k$ can now be written in terms of Ω. We will also need a potential one-form η, a constant multiple of $\sum_j \left(\bar z_j\, dz_j - z_j\, d\bar z_j\right)$, so that Ω = dη. Denote by ι : Γ → $\mathbb{C}^N$ the inclusion. By the hypothesis that Γ is isotropic we have $d\iota^*\eta = \iota^* d\eta = \iota^*\Omega = 0$; that is, $\iota^*\eta$ is a closed one-form on Γ. It is rare for it to be an exact form; however, the following is relatively more common: Definition 6.1. We will say that Γ satisfies the Bohr-Sommerfeld condition iff there is a smooth map $\Phi : \Gamma \to S^1$ such that (86) holds. We henceforth assume that this condition holds. It will be convenient to introduce the notation $\Phi = e^{i\vartheta}$, with ϑ defined on the universal cover $\widetilde\Gamma \to \Gamma$. We will abuse the notation and write $\Phi(w) = e^{i\vartheta(w)}$, identifying Γ with a fundamental domain in $\widetilde\Gamma$. Condition (86) now reads (87). Definition 6.2. Let α : Γ → $\mathbb{R}$ be a smooth function. With the previous notation we let, for all k = 1, 2, ⋯,
$$\psi_k(z) = \int_\Gamma e^{\,ik\left[\frac{1}{2}\Omega(z,w)+\vartheta(w)+\frac{i}{2}|z-w|^{2}\right]}\, \alpha(w)\, d\sigma(w).$$
We can estimate the norm of $\psi_k$ as follows in case Γ is Lagrangian (for an analogous result on compact Kähler manifolds see [3]): Proof. By the reproducing property, $\|\psi_k\|^2$ can be written as a double integral over Γ × Γ. Now apply the method of stationary phase in the inner integral (with respect to $w_1$), with $w_2$ fixed, that is, to (92). In a local parametrization of Γ, $w_1 = w_1(t)$, t ∈ U ⊂ $\mathbb{R}^N$ an open set, the derivative of the phase (where we write $\dot w_1 = \frac{\partial}{\partial t_j} w_1(t)$ for simplicity) shows that the critical points of the phase in (92) are the values of t at which the gradient vanishes. It is not hard to see that, in real terms, this condition involves $T^\perp\Gamma$, the metric orthogonal of TΓ, and the symplectic annihilator of TΓ. In the Lagrangian case these coincide, and therefore the only critical point of (92) is the value $t_0$ such that $w_1(t_0) = w_2$. To proceed, let us assume, without loss of generality, that the parametrization of Γ is such that the matrix $(g_{ij})$ of the metric is the identity matrix at $t = t_0$. Together with the assumption that Γ is isotropic, this implies that $\left(\frac{\partial w_1}{\partial t_i}\cdot\frac{\partial w_1}{\partial t_j}\right) = I_{N\times N}$. Therefore the method of stationary phase gives that (92) equals (96), and the norm of $\psi_k$ follows. That (92) equals (96) also gives a pointwise estimate, where the constants implicit in the O estimate can be taken uniformly in z ∈ Γ by compactness. Therefore
$$\frac{\int_\Gamma |\psi_k(z)|^2\, a(z)\, d\sigma(z)}{\|\psi_k\|^2} = \left(\frac{2k}{\pi}\right)^{N/2} \frac{\int_\Gamma |\alpha(z)|^2\, a(z)\, d\sigma(z)}{\int_\Gamma |\alpha(w)|^2\, d\sigma(w)} + O\!\left(k^{N/2-1}\right). \tag{100}$$
Finally we obtain: Proposition 6.4. If $\lambda_{\max}(k)$ is the largest eigenvalue of $T_{a\,d\sigma}$, then, for any $\alpha \in C^\infty_0(\Gamma)$ such that $\|\alpha\|_{L^2} = 1$,
$$\lambda_{\max}(k) \ge \left(\frac{2k}{\pi}\right)^{N/2} \int_\Gamma |\alpha(z)|^2\, a(z)\, d\sigma(z) + O\!\left(k^{N/2-1}\right). \tag{101}$$
Remarks:
(1) In particular, if a ≡ 1 the asymptotic lower bound obtained is simply $(2k/\pi)^{N/2}$. It is universal (independent of Γ). (2) If we take α to be constant, by virtue of Theorem 1.3 we can conclude that there exist C, C′ > 0 such that $C\, k^{N/2} \le \lambda_{\max}(k) \le C'\, k^{N/2}$.

Appendix A. Computing det(Hessian)

We present here the computation of the determinant of the matrix S equal to $-\sqrt{-1}$ times the Hessian, that is, the matrix (59). For convenience we write q = n − 1. The matrix (59), partitioned into a q × q array of d × d blocks, is equal to the product of the block-diagonal matrix diag(G, …, G) with $M_q$, where $M_q$ is the block tri-diagonal Toeplitz matrix with diagonal blocks 2I and off-diagonal blocks B and $\bar B$. All blocks consist of d × d matrices. Since H is skew-symmetric, B is Hermitian and $M_q$ is symmetric. We will prove that $\det(M_q) = \Delta_n^2$.
In the calculation of det(S q ), we follow the approach of [10]. Let R be the ring of d × d complex matrices generated by the identity and the matrix W := G −1 H.
Clearly R is a commutative ring (in particular [B, B ] = 0). The matrix M q is a q × q matrix with entries in R. Any such matrix has a determinant, which we denote with a capital D, that is an element in R; in particular Det(M q ) ∈ R.
Det is defined by the usual formula, which is unambiguous since R is commutative. All the usual rules for computing determinants carry over to computing Det, and one has the theorem that, for any q × q matrix L with entries in R, its numerical determinant equals det(L) = det(Det(L)).
We will use this result to compute det(M q ), first recursively and later in closed form. Since det(S q ) = det(G) q det(M q ), we will obtain a formula for det(S q ).
Then one has $D_0 = I$, $D_1 = 2I$, and the recursion (105): $D_{q+1} = 2D_q - Z D_{q-1}$, where we write, for simplicity of notation, $Z := I + W^2$. It is possible to solve (105) in closed form, as follows. The recursion $D_{q+1} = 2D_q - Z D_{q-1}$ can itself be written in matrix form, and therefore, introducing the matrix with entries in R
$$T := \begin{pmatrix} 0 & I \\ -Z & 2I \end{pmatrix},$$
we see that $\begin{pmatrix} D_q \\ D_{q+1} \end{pmatrix} = T^q \begin{pmatrix} D_0 \\ D_1 \end{pmatrix}$. Let us diagonalize T to compute its powers. Det(T) = Z, and the "eigenvalues" in R of this matrix are $\Lambda_{1,2} := I \mp \sqrt{I - Z} = I \mp iW$. One can check that the column vectors of $\begin{pmatrix} I & I \\ \Lambda_1 & \Lambda_2 \end{pmatrix}$ are corresponding "eigenvectors". Now $\Lambda_{1,2} = I \mp iW$, therefore
$$\Lambda_2^{q+1} - \Lambda_1^{q+1} = (I + iW)^{q+1} - (I - iW)^{q+1} = 2i \sum_{j\ \mathrm{odd}} \binom{q+1}{j} (-1)^{(j-1)/2}\, W^{j}.$$
The right-hand side is a polynomial in W without constant term. Substituting back into (108) concludes the proof of the closed formula for $D_q$. The following shows that W is indeed the matrix of K in the frame $\{\gamma_i\}$: Proof. The definition of the matrix W is equivalent to GW = H, where $G = (\gamma_i \cdot \gamma_j)$ and $H = (\omega(\gamma_i, \gamma_j))$.
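A numerical check of this computation, a sketch under the reconstruction above (the recursion with Z = I + W² and initial data $D_0 = I$, $D_1 = 2I$), comparing against the closed form $D_q = \sum_{j\ \mathrm{odd}} \binom{q+1}{j}(-1)^{(j-1)/2} W^{j-1}$:

```python
import numpy as np
from math import comb

def closed_form(W, q):
    # D_q = (Lambda_2^{q+1} - Lambda_1^{q+1}) / (Lambda_2 - Lambda_1), expanded as
    # an (even) polynomial in W so that no division by W is needed.
    d = W.shape[0]
    acc, Wpow = np.zeros((d, d)), np.eye(d)   # Wpow holds W^{j-1}
    for j in range(1, q + 2, 2):
        acc += comb(q + 1, j) * (-1) ** ((j - 1) // 2) * Wpow
        Wpow = Wpow @ W @ W
    return acc

lams = [1.0, 0.5]                              # eigenvalue pairs +-(i*lam) of K
W = np.zeros((4, 4))
for j, l in enumerate(lams):
    W[2*j:2*j+2, 2*j:2*j+2] = [[0, -l], [l, 0]]
Z = np.eye(4) + W @ W
D_prev, D = np.eye(4), 2 * np.eye(4)           # D_0, D_1
for q in range(1, 8):
    D_prev, D = D, 2 * D - Z @ D_prev          # the recursion (105)
    assert np.allclose(D, closed_form(W, q + 1))
# special cases: W = 0 gives det D_q = (q+1)^d; a pair with lam = 1 contributes 4^q,
# matching Delta_n = 2^{r(n-1)} with q = n-1.
print(np.linalg.det(closed_form(np.zeros((2, 2)), 5)))   # 36 = (5+1)^2
```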
Since $\omega(v, w) = v \cdot J(w)$, we can rewrite this in the form: for all i, k, $\sum_j G_{ij} W_{jk} = H_{ik}$, or, for all i, k, $\gamma_i \cdot \sum_j W_{jk}\, \gamma_j = \gamma_i \cdot J(\gamma_k)$.
Since $\{\gamma_i\}$ is a basis of V, this is equivalent to $\Pi J(\gamma_k) = \sum_j W_{jk}\, \gamma_j$.
"year": 2020,
"sha1": "fff74eaa181664f66505ecad0f3bc66aa859a74c",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1801.00366",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "fff74eaa181664f66505ecad0f3bc66aa859a74c",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Second order iterative functional equations related to a competition equation
The functional equation related to competition ([2]),
$$f\left(\frac{x+y}{1-xy}\right) = \frac{f(x)+f(y)}{1+f(x)\,f(y)}, \qquad x, y \in \mathbb{R},\ xy \neq 1,$$
for y = cx with a fixed c > 0, leads to the equation
$$f\left(\frac{(1+c)\,x}{1-cx^{2}}\right) = \frac{f(x)+f(cx)}{1+f(x)\,f(cx)}, \qquad x \in \mathbb{R},\ |x| < \frac{1}{\sqrt{c}}.$$
The case c = 1 (a first order iterative functional equation) was treated in [3]. In this paper we consider the case c ≠ 1 (when the equation is of the second order). We show that a function $f : \mathbb{R} \to \mathbb{R}$, f(0) = 0, differentiable at the point 0, satisfies this functional equation iff there is a real p such that $f = \tanh \circ \left(p \tan^{-1}\right)$, which extends the main result of [3].
Introduction
The functional equation
$$f\left(\frac{x+y}{1-xy}\right) = \frac{f(x)+f(y)}{1+f(x)\,f(y)}, \qquad x, y \in \mathbb{R},\ xy \neq 1, \tag{CE}$$
a functional equation on a restricted domain (following [1], a conditional functional equation), the so-called competition equation, considered first in [2], was also treated in [3]. If f is a solution, then either f(0) = 0, or f(0) = −1, or f(0) = 1. It is shown in [2] (cf. also [3]) that, if f takes the value −1 or 1, then it is a constant function (cf. also Remark 1). Therefore only the case f(0) = 0 is interesting. The main result of [3] says that, in this case, a function f : $\mathbb{R}$ → $\mathbb{R}$ satisfies the equation for all real x, y such that xy < 1 iff $f = \tanh \circ \alpha \circ \tan^{-1}$, where α : $\mathbb{R}$ → $\mathbb{R}$ is an additive function. Thus, if f is measurable or continuous at least at one point, then there is a p ∈ $\mathbb{R}$ such that $f = g_p$, where $g_p = \tanh \circ \left(p \tan^{-1}\right)$.
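A quick numerical sanity check (not part of the paper's proofs) that $g_p = \tanh(p \arctan)$ satisfies (CE) on the region xy < 1: there $\arctan x + \arctan y = \arctan\frac{x+y}{1-xy}$ exactly, and the tanh addition formula turns that into the right-hand side.

```python
import numpy as np

def g(p, x):
    return np.tanh(p * np.arctan(x))

rng = np.random.default_rng(0)
p = 1.7
x, y = rng.uniform(-3, 3, 10_000), rng.uniform(-3, 3, 10_000)
m = x * y < 1                                      # restricted domain of (CE)
lhs = g(p, (x[m] + y[m]) / (1 - x[m] * y[m]))
rhs = (g(p, x[m]) + g(p, y[m])) / (1 + g(p, x[m]) * g(p, y[m]))
print("max |lhs - rhs| =", np.abs(lhs - rhs).max())  # ~ machine precision
```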
Taking y = x in (CE) we obtain the diagonalization of the competition equation, which is a first order iterative functional equation (cf. [5,6]). In [2] it is proved that if a function f : R → R , such that f (0) = 0, satisfies this equation for all x ∈ (−1, 1) , and is twice differentiable at the point 0, then f = g p for some real p.
In [4] the following stronger result is presented: a function f : $\mathbb{R}$ → $\mathbb{R}$, f(0) = 0, differentiable at the point 0, satisfies this functional equation iff there is a real p such that $f = g_p$. Moreover, $g_p'(0) = p$ (Theorem 1). Applying the theory of iterative functional equations ([5,6], cf. also [7,8]), it is also shown that in this result the assumption of the differentiability of the solution f at the point 0 cannot be replaced by the continuity of f at 0.
In the present paper we consider a generalization of the above diagonalization problem. Namely, in Sect. 2, for a fixed c > 0, restricting the competition equation to the straight line y = cx, we obtain the iterative functional equation
$$f\left(\frac{(1+c)\,x}{1-cx^{2}}\right) = \frac{f(x)+f(cx)}{1+f(x)\,f(cx)}, \qquad |x| < \frac{1}{\sqrt{c}}, \tag{1}$$
which is of the second order if c ≠ 1. (For the definition of the nth order of an iterative functional equation see [5], chapter XII, and [6], pp. 237-239.) Our main result says that a function f : $\mathbb{R}$ → $\mathbb{R}$, differentiable at the point 0 and such that f(0) = 0, satisfies this equation iff there is a p ∈ $\mathbb{R}$ such that $f = g_p$, where $g_p = \tanh \circ \left(p \tan^{-1}\right)$. In Sect. 3 we discuss the case when c ≤ 0. In Sect. 4, recalling the motivation coming from a meteorological phenomenon (hail suppression by competition of small particles via cloud seeding), we discuss the mutual relation between (CE) and a Riccati differential equation.
Proof. It is easy to verify that if f : $\mathbb{R}$ → $\mathbb{R}$ is differentiable at the point 0, f(0) = 0, and f satisfies equation (1), then the function ϕ : $\mathbb{R}$ → $\mathbb{R}$ defined from f as above is continuous at 0 and satisfies the functional equation (2). Suppose that the functions $\varphi_1$ : $\mathbb{R}$ → $\mathbb{R}$ and $\varphi_2$ : $\mathbb{R}$ → $\mathbb{R}$ satisfy this equation, are continuous at the point 0 and agree at 0. Put $I_c := \left(-\frac{1}{\sqrt c}, \frac{1}{\sqrt c}\right)$. The function h : $I_c$ → $\mathbb{R}$,
$$h(x) = \frac{(1+c)\,x}{1-cx^{2}},$$
is odd, continuous, strictly increasing, convex in $(0, 1/\sqrt c)$, concave in $(-1/\sqrt c, 0)$, and maps $I_c$ onto $\mathbb{R}$. It follows that its inverse β := $h^{-1}$ : $\mathbb{R}$ → $I_c$ has similar properties. Of course, the function γ := cβ has similar properties. Moreover, $\beta'(0) = \frac{1}{1+c}$ and $\gamma'(0) = \frac{c}{1+c}$ are both less than one, and, consequently, for every t ∈ $\mathbb{R}$, the sequences $(\beta^n)_{n\in\mathbb{N}}$ and $(\gamma^n)_{n\in\mathbb{N}}$ of iterates of the functions β and γ converge uniformly to 0 on compact subsets of $\mathbb{R}$.

With the above solutions $\varphi_1$, $\varphi_2$ of equation (2), we have the corresponding identities. Hence, subtracting the respective sides of these equalities for j = 1 and j = 2, we get a relation valid for all x ∈ $I_c$. Since $\varphi_1$ and $\varphi_2$ are continuous at 0 and agree there, the auxiliary function ψ below is continuous at 0 with ψ(0) = 0. Moreover, for x ∈ (−δ, δ), we have (3), and, considering the definition of γ, we get an estimate in which κ(t) = k(β(t)) and µ(t) = m(β(t)); hence a further estimate follows, involving a function σ. Of course, σ is continuous, strictly increasing, and σ(t) < t for t > 0. From inequalities (4) and (5) we obtain, for all t ∈ I, an iterated estimate; hence, by induction, ψ is controlled by its values on $\sigma^n(I)$, where $\sigma^n$ denotes the n-th iterate of σ. Since the decreasing sequence of intervals $(\sigma^n(I))_{n\in\mathbb{N}}$ tends to {0}, this inequality, the continuity of ψ at 0 and ψ(0) = 0 imply that sup|ψ(I)| = 0. Hence, from the definition of ψ, we get $\varphi_1(x) = \varphi_2(x)$ for all x ∈ İ. Now, according to the theory of iterative functional equations (cf. [5], p. 68, Lemma 3.1), we conclude that $\varphi_1 = \varphi_2$. Taking into account the definition of the function ϕ, we have proved that, for any p ∈ $\mathbb{R}$, there exists at most one solution f of equation (1) that is differentiable at 0 and such that f′(0) = p. Since, in view of Lemma 1, the function $f = g_p$ satisfies equation (1), is differentiable at 0, and $f'(0) = p$, the proof is complete.
To justify the assumption f (0) = 0 consider the following
Discussion of the case when c is nonpositive
In this section we consider the functions f : $\mathbb{R}$ → $\mathbb{R}$, f(0) = 0, satisfying the competition equation (CE) restricted to the straight line {(x, cx) : x ∈ $\mathbb{R}$}, i.e. the corresponding equation with c ≤ 0. Assume, for instance, that −1 < c < 0; setting y = −cx in the competition equation (CE), we get an identity from which, calculating f(x), we obtain a functional equation. If f : $\mathbb{R}$ → $\mathbb{R}$ is differentiable at the point 0, f(0) = 0, and f satisfies this equation, then the function ϕ : $\mathbb{R}$ → $\mathbb{R}$ defined as before is continuous at 0 and satisfies the functional equation (6). Suppose that the functions $\varphi_1$ : $\mathbb{R}$ → $\mathbb{R}$ and $\varphi_2$ : $\mathbb{R}$ → $\mathbb{R}$ satisfy this equation, are continuous at the point 0, and agree at 0. Hence we have the corresponding identities. Subtracting the respective sides of these equalities for j = 1 and j = 2, and performing simple calculations, we get a relation in terms of functions $y_1$, $y_2$, $z_1$, $z_2$. Since the functions $y_1$, $y_2$, $z_1$, $z_2$ are continuous at x = 0, it is easy to check that there is a real number δ > 0 for which the needed estimates hold. Hence, making use of (6), we obtain the analogous inequality, and now we can argue similarly as in the proof of Theorem 1.
Motivation and remarks on a Riccati-type differential equation and the competition functional equation (CE)
In [2] it was shown that a model of a meteorological phenomenon in cloud physics, interpreted as competition and described with the aid of a Riccati differential equation, can be fully characterized by the functional equation (CE) that does not involve any derivative. The functional equation (CE) reflects symmetry properties of the corresponding model differential equation (and of its solution).
Remark 4. (Differential equation and initial condition)
Some physical models present themselves in this way, with two arbitrary parameters a, b > 0, fixed. (In [2] one has a = b = 1.) This may be reduced to a one-parameter model: after suitable substitutions, (7) can be written with the single parameter $p = \sqrt{ab}$ in the form
$$f'(x) = p\, \frac{1 - f(x)^{2}}{1 + x^{2}}. \tag{8}$$
With initial condition $f(x_0) = f_0$ (where $x_0, f_0 \in \mathbb{R}$, fixed), the solution of equation (8) (obtainable by separation of variables) reads
$$f(x) = \tanh\!\left(p\left(\tan^{-1}x - \tan^{-1}x_0\right) + \tanh^{-1} f_0\right) = \frac{\tanh\!\left(p\left(\tan^{-1}x - \tan^{-1}x_0\right)\right) + f_0}{1 + f_0 \tanh\!\left(p\left(\tan^{-1}x - \tan^{-1}x_0\right)\right)}. \tag{9}$$
Check: $f(x_0) = f_0$. It is to be noted that the last version of (9) does not require $|f_0| < 1$ (fixed) but admits $f_0 \in \mathbb{R}$ (fixed). In general, (9) is not an odd function of x. Only for the special initial condition f(0) = 0 (i.e. $x_0 = 0$, $f_0 = 0$, called the "standard" initial condition) does the result (9) reduce to an odd function (called the "standard" solution),
$$g_p(x) = \tanh\!\left(p \tan^{-1} x\right). \tag{10}$$
With this standard solution $g_p$ we can represent the general solution f of the differential equation (8) via (9) as a rational (fractional linear) expression in $g_p$.
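A numerical cross-check of the standard solution against the Riccati equation, a sketch assuming (8) has the reconstructed form $f'(x) = p(1-f^2)/(1+x^2)$, which the closed form $g_p(x) = \tanh(p\arctan x)$ satisfies with $g_p(0) = 0$:

```python
import numpy as np
from scipy.integrate import solve_ivp

p = 2.5
sol = solve_ivp(lambda x, f: p * (1 - f[0]**2) / (1 + x**2),
                t_span=(0.0, 50.0), y0=[0.0], rtol=1e-10, atol=1e-12,
                dense_output=True)
xs = np.linspace(0, 50, 7)
# agreement with g_p(x) = tanh(p * arctan(x)) up to the integrator tolerance
print(np.max(np.abs(sol.sol(xs)[0] - np.tanh(p * np.arctan(xs)))))
```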
From (10) we get g −p (x) = g p (−x) = −g p (x), implying that we may relax the condition p > 0 (fixed) to p ∈ R (fixed) for f : R → R. Already from (8) we see that changing the sign of p is equivalent to changing the sign of f [and f 0 in (9)].
Remark 5. If (in a certain physical situation) a slightly more general four-parameter model appears adequate or desirable (e.g. for easy interpretation), with parameters A, B, C, D ∈ $\mathbb{R}$ (fixed) and CD ≠ 0, it can be reduced immediately to the two-parameter model (7): abbreviating a := A/D, b := BC/D², F(X) := (D/C) G(X), we get equation (7). (Further reduction to the one-parameter model (8) may be done as above.) Remark 6. If a differentiable function f : $\mathbb{R}$ → $\mathbb{R}$ satisfies the functional equation (CE), then it satisfies the Riccati differential equation (8).
Proof. Assume that a differentiable function f : $\mathbb{R}$ → $\mathbb{R}$ satisfies the functional equation (CE) for all x, y ∈ $\mathbb{R}$ with xy < 1.
"year": 2014,
"sha1": "aeb612afbd95e84eb0b0f294fc335022a3f4b372",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00010-014-0307-1.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "aeb612afbd95e84eb0b0f294fc335022a3f4b372",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
The Effects of a Skill-Based Intervention for Victims of Bullying in Brazil
This study’s objective was to verify whether improved social and emotional skills would reduce victimization among Brazilian 6th grade student victims of bullying. The targets of this intervention were victimized students; a total of 78 victims participated. A cognitive-behavioral intervention based on social and emotional skills was held in eight weekly sessions. The sessions focused on civility, the ability to make friends, self-control, emotional expressiveness, empathy, assertiveness, and interpersonal problem-solving capacity. Data were analyzed through Poisson regression models with random effects. Pre- and post-analyses reveal that intervention and comparison groups presented significant reduced victimization by bullying. No significant improvement was found in regard to difficulties in practicing social skills. Victimization reduction cannot be attributed to the program. This study contributes to the incipient literature addressing anti-bullying interventions conducted in developing countries and highlights the need for approaches that do not exclusively focus on the students’ individual aspects.
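The abstract mentions Poisson regression models with random effects. As orientation only, here is a minimal sketch of how such a clustered pre/post count analysis could be set up in Python, using a GEE with an exchangeable working correlation as a pragmatic stand-in for a random-effects Poisson model (named plainly as a substitution; all column names are hypothetical and not taken from the paper):

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data: one row per student per assessment, with columns
# student_id, school, time (0 = pre, 1 = post), group (0 = comparison,
# 1 = intervention), and victim_count (victimization events reported).
df = pd.read_csv("victimization.csv")

model = smf.gee("victim_count ~ time * group", groups="school", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
# the time:group interaction term estimates the intervention effect on counts
print(result.summary())
```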
Introduction
School bullying refers to acts among peers characterized by intention, repetitiveness and imbalance of power among students [1]. These acts can be physical (e.g., hitting, kicking, pushing), verbal (e.g., calling names, swearing at the victim, laughing), or relational (e.g., socially isolating the victim, spreading rumors, or manipulating relationships) [1,2]. Children and adolescents can become involved with bullying as bullies, victims, reactive-victims, or bystanders [3]. The rates at which this phenomenon occurs vary across countries: from 7% to 43% for victimization and from 5% to 44% for bullying [4]. Rates in Brazil range from 7% to 22% for victimization and from 17% to 21% for aggression [2,5]. The presence of bullying in the school context hinders learning and the healthy development of students [5], while also contributing to a perception that school is not a very safe place [6]. Bullies and victims can present higher rates of depressive symptomatology [7], anxiety [8], insecurity [9], loneliness [5], learning problems [10], juvenile delinquency [11], and suicidal ideation [12]. The negative effects of bullying indicate the need to develop interventions to prevent or reduce the occurrence of this phenomenon in schools.
The international literature shows that various anti-bullying programs have been implemented, such as whole-school anti-bullying programs [13,14], curriculum interventions [15,16], and social skills training [17,18]. A meta-analysis including 44 studies reports that the success of interventions varies. The average decrease in aggression is 23%, while victimization has been reduced by 20%, though only some programs presented significant results [19]. The most efficacious elements of interventions intended to decrease victimization include strict disciplinary methods, training for parents, meetings, videos and cooperative group work for students, as well as programs of greater duration and intensity directed to children and teachers [19]. Another two meta-analyses identified very few effects to have practical relevance [20,21].
Due to the inexpressive results presented by most anti-bullying interventions, some researchers have drawn attention to the need to implement programs that indirectly approach the phenomenon [22,23]. Indirect approaches include programs in which the prevention or reduction of bullying occur by promoting social and emotional skills and encouraging pro-social behavior that favors non-violent social interactions with peers and adults through conflict resolution and establishing friendships, for instance [24]. This type of intervention was successful in decreasing the frequency of physical aggression [25] and victimization [26] in the United States, a country where most anti-bullying programs are less successful than those implemented in European countries.
From another perspective, since only a small number of students are directly involved in bullying, the inexpressive results of most interventions may be related to the fact that they include all students [27]. Recently, a recommendation was made to implement programs exclusively directed to either bullies or victims, focusing on promoting skills among children and adolescents [28]. Lack of appropriate social skills is one of the predictors of victimization [4]. Social skills represent classes of behaviors individuals use to successfully complete a social task [29]. Bullying victims present a lack of appropriate social skills, such as social isolation and inefficient coping strategies such as crying and ignoring the bully [30]. These strategies, in general, signal to bullies that the victims lack self-defense skills, which combine with violence to become even more intensified [31].
Therefore, improved social skills, especially assertiveness, represent an important aspect upon which to ground interventions intended to reduce bullying among victims [17,18]. There are few studies addressing selective interventions directed to victims of bullying [32]. A study developed in English schools identified that the training of social skills improved the self-esteem of children, though victimization was not significantly reduced [18]. A significant reduction of victimization was reported by a study implemented in Australian schools addressing male adolescent bullying victims who presented symptoms of anxiety [33]. The components of the program developed in Australia that may have ensured its success were social skills, the objective of which was to help children establish supporting friendships and develop assertive coping strategies. The focus on emotional skills using strategies to control anger and frustration is another aspect to highlight, as it may have contributed to the intervention's success, as well as increased the focus on anxiety [33]. In Brazil, there are no studies with this type of approach.
Internationally, the interventions with the best results are those addressing the entire school [19]. One of the lines of this approach considers bullying a group phenomenon and, for this reason, focuses on students not involved with bullying, or bystanders, because they can either defend the victim or reinforce the bullies' behavior [14]. Having peers willing to stand up for them is very important for victims; however, victims also need to have self-defense skills and must be able to establish friendships to improve the social support they receive, which may be facilitated by the improvement of social skills. Competent social skills are more needed in some periods of school life, such as when transitioning between school cycles or levels within the organization of the educational system, so social competence is important to properly cope with changes that occur in this period [34].
Studies have shown that behavioral problems, lack of discipline and bullying become more frequent during school transitions because students have to relate to a larger number of unknown peers and make new friends and form new social groups [1,35]. A greater concern over social status during this period may encourage aggressive behavior as a way to achieve self-affirmation and become popular among peers [14,36]. A lack of appropriate social skills among victims can hinder self-defense, the establishment of friendships and social adaptation during school transitions. Hence, even though victimization tends to decrease between the ages of eight and 16 years old, a peak usually occurs in the 6th grade [37]. Such violence can negatively affect the quality of victims' school experiences and the relationships they establish with their peers [34].
This study was developed in Brazil, a developing country in Latin America. With a pre-and post-test format, this investigation focuses on students who were victims of bullying. Mixed groups, however, were included in the intervention; that is, bystanders were also included. The reason for including bystanders is because there are indications that victims lack appropriate social skills, so gathering participants with similar difficulties into the same group may be counter-productive [38]. Hence, the intention was to allow victims to interact and make friends with non-aggressive students so that they would establish connections during the intervention and form a larger network of social support. We also expected that the bystanders would offer social support to the victims during the school routine.
This study's objective was to verify whether improved social and emotional skills would reduce victimization among Brazilian students who were victims of bullying attending the 6th grade (first year of the equivalent to middle school in Brazil). This is the first investigation addressing the impact of an intervention based on the development of social and emotional skills on a population of Brazilian student victims of bullying. Its results can improve knowledge concerning this phenomenon in this sociocultural context and can also indicate possibilities in the design of preventive measures and the combat of bullying.
Participants
A total of 522 6th grade students (first year of the equivalent to middle school in Brazil) attending six schools from a Brazilian city were invited and 411 consented to participate. Among a total of 285 students assessed in the pre-test and considered to be either victims or bystanders, 203 consented to participate in the study's second stage that involved the intervention. Thirteen boys and two girls withdrew from the study and were excluded from the final sample, which was finally composed of 188 students assigned to intervention (41.5%) and comparison (58.5%) groups.
Participants were assigned to intervention and comparison groups within their own schools. The 18 6th grade classrooms were distributed into these two conditions in order to obtain comparable samples, so that the nine classrooms composing the intervention group and the nine classrooms composing the comparison group presented similar amounts of victims, bullies and bystanders. All the victims and bystanders from the intervention group were invited to take part in the intervention and all those who agreed were included. The students were assigned to the groups according to an average proportion of 40%-50% of victims and 50%-60% of bystanders. The same occurred for sex, as there were more girls than boys. Some participants, however, withdrew from the study so that the proportion of female participants in the final sample was 72.1% in the intervention group as opposed to 58.8% in the comparison group, a difference that was not, though, statistically significant (p = 0.07). Altogether, 78 victims (41.5%) and 110 bystanders (58.5%) participated. From the total number of victims, 40 (51.3%) were typical victims and 38 (48.7%) were reactive-victims.
The average age was 11.28 years in the intervention group and 11.21 years in the comparison group (p = 0.441). The ethnic composition of the groups was similar (p = 0.566). The intervention group included mixed race individuals (48.8%), Caucasians (38.4%), Afro-descendants (8.1%) and others (4.7%), while the comparison group included mixed race participants (43.1%), Caucasians (42.2%), Afro-descendants (8.8%) and others (5.8%). Neither those who participated in the survey but not in the intervention (n = 82) nor bullies (n = 126) presented significant differences in their distribution across classrooms (intervention and comparison), suggesting that those taking part in the intervention belonged to classrooms with similar characteristics.
This study's focus was students who were victims of bullying. The participation of bystanders was an extra component of the intervention, the objective of which was to promote interaction with the victims as pro-social peers and to encourage friendships (between victims and bystanders) that would increase the amount and quality of social support and help provided to victims. Even though the characteristics of bystanders were considered when forming the groups, only results concerning the victims are presented. The study was approved prior to implementation by the Institutional Review Board at the University of São Paulo at Ribeirão Preto, College of Nursing (Protocol CAAE: 39462414.0.0000.5393). Parents and legal guardians authorized the participation of students by signing consent forms.
Intervention
The students participated in a cognitive-behavioral intervention based on social skills [29]. The eight weekly sessions, which lasted 50 min each, were led by a clinical psychologist (this paper's primary author) on the schools' premises during school hours. The groups were composed of eight to ten participants, mixed by gender (female and male) and condition (victim and bystander).
The sessions addressed content and activities related to civility, the ability to make friends, empathy, self-control and emotional expressiveness, assertiveness, and interpersonal problem-solving capacity. Content and activities were developed according to guidelines established by the program [29] in order to ensure a reliable application of the intervention. The structure of the sessions was based on cognitive-behavioral techniques such as role-play, dramatization, positive reinforcement, modeling, feedback, videos, and homework assignments. Each meeting was organized around three points in time: (1) beginning: the participants commented on the homework assignments and received feedback, orientation and support from the group and coordinator, and a brief summary of the previous meeting was presented; (2) middle: the activities programmed for the meeting were performed; (3) end: homework was assigned, and the participants and coordinator provided feedback on the meeting.
Homework involved practicing the learned skills in different situations: real daily contexts, different from that of the intervention group. Additionally, homework reports and feedback reinforced the skills learned and enabled assessing and designing new strategies if the initial attempt had not been successful. These strategies were intended to support students in implementing social skills. The groups were assessed once more after the intervention (post-test).
The intervention took place between March and May 2015, at the beginning of the school year, which in Brazil starts in February and ends in December. The pre-test occurred in the first week of March and the post-test assessment took place in the first week of June (seven days after the intervention ceased) for both the intervention and comparison groups.
Self-Report (S-R)
Escala de Agressão e Vitimização entre Pares-EVAP (Aggression and Peer Victimization Scale) [39]. EVAP is an 18-item instrument that takes approximately 5 min to complete. The participants checked the frequency with which they practiced direct or indirect aggressive behavior or were targets of such behaviors. For instance: "I pushed, punched and/or kicked other students"; "I was pushed, punched and/or kicked by other students"; "I cursed at other students"; "I was cursed at by other students". Answers are provided on a Likert scale (1 = never; 2 = almost never; 3 = sometimes; 4 = almost always; 5 = always). Therefore, the scores for the eight questions addressing victimization ranged between 8 (minimum) and 40 (maximum), and the scores for the ten questions addressing aggression ranged from 10 (minimum) to 50 (maximum). Psychometric analyses indicated good internal consistency for victimization (α = 0.81) and aggression (α = 0.79).
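For concreteness, the following is a minimal Python sketch of the scoring arithmetic just described; the assignment of item indices to the two subscales is hypothetical, chosen only to reproduce the stated score ranges.

    # Minimal sketch of EVAP scoring: each item is a Likert response coded 1-5,
    # summed separately over 8 victimization items and 10 aggression items.
    def evap_scores(responses):
        """responses: dict mapping item index (1-18) to a Likert value 1-5.
        Which indices belong to which subscale is assumed for illustration."""
        victimization_items = range(1, 9)   # assumed: items 1-8
        aggression_items = range(9, 19)     # assumed: items 9-18
        vic = sum(responses[i] for i in victimization_items)  # range 8..40
        agg = sum(responses[i] for i in aggression_items)     # range 10..50
        return vic, agg

    # A student answering "never" (1) to every item scores the two minima.
    vic, agg = evap_scores({i: 1 for i in range(1, 19)})
    assert (vic, agg) == (8, 10)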
Sistema Multimídia de Habilidades Sociais para Crianças-SMHSC [40] (Multimedia System of Social Skills for Children, Casa do Psicólogo, São Paulo, Brazil). SMHSC is a computerized self-assessment instrument addressing social skills. It takes approximately 35 min to complete. It is composed of 21 main videos depicting children interacting with other children and adults. Each situation presents another three short videos depicting alternative behavioral responses to the situation presented in the main video, namely: skillful, passive non-skillful, and active non-skillful. The general classes of social skills assessed refer to empathy and civility, coping assertiveness, self-control, and participation. For instance, main video number 17 is titled "Resisting Peer Pressure" and belongs to the general social skill class of coping assertiveness. In the video, Carlos finds out that the ball is kept in the teacher's room and wants Bruno to go and get it, saying: "If you're not a sissy". The entire group confirms: "That's right! If you're not a sissy". Another three videos are presented; each depicts a different response from Bruno. The first is a non-skilled active response: Bruno disagrees with the boys and threatens to fight, saying: "I'm not a sissy! You are! Do you want a piece of me?!" The second is a non-skilled passive response: Bruno agrees to do what the group demands, saying: "Alright, alright, I'll go...". The third response is skilled: Bruno disagrees and explains: "I'm not going there just because you want me to! And it has nothing to do with being a sissy!" After each response, two questions are asked: 1. "Do you usually respond this way?", with the options "a. always, b. sometimes, and c. never"; and 2. "What do you think about responding this way?", with the options "a. correct, b. more or less, and c. wrong". Only for the skilled response is a third question asked: "Do you have difficulty responding this way?", with the options "a. correct, b. more or less, and c. wrong". Answers were analyzed using the SMHSC software, which provided raw and standardized scores for each participant. The general score for difficulty in the practice of social skills was used in this study (α = 0.78).
Peer-Report (P-R)
Sociometric scale [41]. This peer-selection instrument is composed of 10 items, and its completion takes approximately 7 min. The participants indicated positive preferences for up to three classmates with whom they enjoyed hanging around, playing, talking or doing schoolwork. They also indicated negative preferences for up to three classmates with whom they least liked to hang around, play, talk or do schoolwork. The participants also listed classmates who had the following characteristics: having few friends, being nice, and being able to resolve conflicts. All the participants who consented to participate in the study, and not only those who took part in the intervention, completed the sociometric scale. The number of nominations for each sociometric item was considered within the intervention and comparison groups.
Statistical Analysis
First, to identify involvement in bullying based on the responses provided to the EVAP questionnaire, grouping analyses were performed using the Ward hierarchical method. The Ward method is a hierarchical clustering procedure in which the similarity measure used to merge groupings is the between-grouping sum of squares over all the variables. This method is distinct from other clustering methods because it uses an analysis-of-variance approach to evaluate the distances between clusters: at each step, the merge chosen is the one that minimizes the total squared deviation of observations from their cluster centroid. Because it minimizes internal variation, this method tends to produce groupings of approximately equal size. Three categories emerged: 1. Bystander (low frequency of aggression and low frequency of victimization); 2. Victim (high frequency of victimization and low or moderate frequency of aggression); and 3. Bully (high frequency of aggression and low or moderate frequency of victimization). In this study, reactive-victims are those students who presented a high frequency of victimization with a moderate frequency of aggression.
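As an illustration of this clustering step, the following is a minimal Python sketch using SciPy's Ward linkage on hypothetical EVAP scores; the simulated data and the choice of a three-cluster cut are assumptions for demonstration, not the study's actual pipeline.

    import numpy as np
    from scipy.cluster.hierarchy import linkage, fcluster

    # Hypothetical EVAP scores: one row per student,
    # columns = (victimization score 8-40, aggression score 10-50).
    rng = np.random.default_rng(0)
    scores = rng.integers(low=[8, 10], high=[41, 51], size=(285, 2)).astype(float)

    # Ward linkage minimizes the within-cluster sum of squares at each merge,
    # which is why it tends to yield clusters of roughly equal size.
    Z = linkage(scores, method="ward")

    # Cut the tree into three groups (bystanders, victims, bullies
    # in the study's terminology).
    labels = fcluster(Z, t=3, criterion="maxclust")
    print(np.bincount(labels)[1:])  # cluster sizes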
Afterwards, data concerning the pre- and post-tests were described in terms of means and standard deviations. The scores of the variables of interest were compared with respect to time (pre-test and post-test) and group (intervention and comparison) through a random-effect Poisson regression [42] with the log-linear canonical link function. This type of model is indicated when the response variable is a count: the data do not meet the normality assumption of the usual regression model, whose response is defined on the whole real line, which is not the case for the data at hand. A random effect was included to account for the correlation arising from the fact that the same subject is observed at different periods (pre-test and post-test). Estimates of the parameters were obtained by the maximum likelihood method. Because the variables "few friends", "conflict resolution" and "being nice" have standard deviations greater than their means, the model included an overdispersion parameter [43]. Analyses were conducted in SAS PROC GENMOD (SAS Institute Inc., Cary, NC, USA), and a 5% level of significance was considered throughout.
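The analysis itself was run in SAS PROC GENMOD. As a rough, non-authoritative Python analogue, one could fit a Poisson model via GEE with an exchangeable working correlation over subjects, which likewise accounts for the pre/post dependence and estimates a dispersion (scale) parameter; all data below are simulated placeholders.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # Hypothetical long-format data: one row per subject per time point,
    # with a count outcome (e.g., a victimization score treated as a count).
    rng = np.random.default_rng(1)
    n = 188
    df = pd.DataFrame({
        "subject": np.repeat(np.arange(n), 2),
        "time": np.tile(["pre", "post"], n),
        "group": np.repeat(rng.choice(["intervention", "comparison"], n), 2),
        "count": rng.poisson(12, 2 * n),
    })

    # The exchangeable working correlation handles within-subject dependence
    # between pre- and post-test; GEE estimates the scale (dispersion) from
    # Pearson residuals by default, accommodating overdispersion.
    model = smf.gee("count ~ time * group", groups="subject", data=df,
                    family=sm.families.Poisson(),
                    cov_struct=sm.cov_struct.Exchangeable())
    result = model.fit()
    print(result.summary())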
In the Poisson regression results, the betas refer to orthogonal contrasts obtained for comparisons of the variables of interest with respect to time (pre- and post-test). Orthogonal contrasts are a means of testing a specified hypothesis concerning the model parameters, accomplished by specifying a matrix L for testing the hypothesis L'β = 0. The statistics calculated are based on the asymptotic chi-square distribution of the likelihood ratio statistic (or of the generalized score statistic for generalized models), with degrees of freedom determined by the number of linearly independent rows in the matrix. For instance, assume a group effect with two levels (intervention and comparison). To test the difference between groups, we create a one-row, two-column matrix L, assigning 1 to the first column and -1 to the second. In this way, we compare the groups (intervention and comparison) and obtain an estimated difference between them; this estimated difference is what the results section of this paper calls beta. The betas presented in this paper correspond to contrasts of times in inverted order, that is, post-test relative to pre-test, with matrix L = (-1, 1). For this reason, when a beta is negative, the value of the variable increased in the post-test. Table 1 presents the differences found by the Poisson regression model for the variables with regard to the intervention and comparison groups in the pre- and post-tests.
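To make the contrast machinery concrete, here is a small numpy sketch of a Wald chi-square test of L'β = 0; the coefficient and covariance values are hypothetical, and the sign of the estimated contrast depends on how the model codes the time levels.

    import numpy as np

    # Wald chi-square test of a linear contrast L @ beta = 0, using the fitted
    # coefficient vector and its covariance matrix (both available from any
    # GLM/GEE fit, e.g. result.params and result.cov_params()).
    def wald_contrast(beta, cov, L):
        L = np.atleast_2d(L)
        est = L @ beta                      # estimated contrast ("beta" in the text)
        var = L @ cov @ L.T
        chi2 = float(est.T @ np.linalg.solve(var, est))
        df = np.linalg.matrix_rank(L)       # linearly independent rows of L
        return est, chi2, df

    # Toy example: two cell means (pre, post); L = (-1, 1) contrasts the times.
    beta = np.array([2.48, 2.30])           # hypothetical log-means
    cov = np.diag([0.002, 0.002])
    est, chi2, df = wald_contrast(beta, cov, [-1.0, 1.0])
    print(est, chi2, df)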
Results
The results indicate a significant decrease in total victimization in both the intervention (β = 0.1851, SE = 0.0455, p < 0.0001) and comparison groups (β = 0.2617, SE = 0.0483, p < 0.0001). Physical victimization decreased significantly in both groups. Verbal victimization decreased significantly among the victims in the intervention (β = 0.1583, SE = 0.0674, p = 0.018) and comparison groups (β = 0.2742, SE = 0.0724, p = 0.0002). Relational victimization also decreased among victims in both the intervention (β = 0.2218, SE = 0.0777, p = 0.004) and comparison groups (β = 0.2409, SE = 0.0816, p = 0.003). No significant differences were found with regard to aggression, though total aggression decreased somewhat in the intervention group. The difficulty experienced by victims in the intervention group in practicing social skills was reduced, but not significantly. Peer acceptance increased in both the intervention and comparison groups, but not significantly. The intervention group was less frequently nominated as having few friends while the comparison group was more frequently nominated as having few friends, but in both cases the differences were not significant. Conflict resolution did not increase significantly for the victims of either group. The intervention group was more frequently considered nice, though without statistical significance. (Table 1 notes: P-R = peer-report; S-R = self-report; data presented as mean (standard deviation); * p < 0.05, ** p < 0.01.)
Discussion
This study's objective was to verify whether improved social and emotional skills would reduce victimization among Brazilian student victims of bullying attending the 6th grade (first year of the equivalent of middle school in Brazil). The results indicate a significant decrease in victimization in both the intervention and comparison groups when comparing pre- and post-tests. Similar results in the intervention group are reported by another study that implemented interventions involving social and emotional skills [33]. In this study, however, a significant decrease in victimization was also presented by the comparison group, so we cannot claim that the positive results observed in the intervention group were due to the program addressing social and emotional skills developed in this study. A potential explanation involves difficulties faced by students at the beginning of the 6th grade, such as changing schools, the need to interact with new students, and adaptation to significant changes in school structure, disciplinary control and expectations concerning academic performance [34]. For this reason, victimization rates were possibly higher at the beginning of the year, when students were assessed in the pre-test; these initial difficulties concerning peer interactions, conflicts, violent situations and bullying may have decreased as students came to feel better adapted to the new school situation and made new friends, a change that was reflected in the post-test.
Even though the difference was not statistically significant, aggression decreased in the intervention group. This result may be related to improved social skills, indicating a tendency of the victims in the intervention group to act with more civility, empathy and emotional self-control, and to solve problems in a non-violent way. This is important because, even though victims generally respond to aggression in a passive manner (exhibiting submission or crying easily, for instance), which may reinforce aggressiveness because a passive response signals to bullies that their actions are successful [31], an aggressive response may also increase the frequency with which intimidating situations occur over time [44]. The short-term interaction between victims and bystanders during the sessions was not sufficient to significantly increase the network of peers among those participating in the intervention; the post-test still indicated that the participants had few friends. The short period between the pre-test and post-test may also have influenced the results due to "social status rigidity", as social status takes time to change in social relationships [18]. Having the support of peers when facing bullying is important because even the most assertive responses of victims in the face of aggression may be ineffective in a context in which bullies have high social status or aggression is considered normal [36].
This study presents some limitations. First, the post-test was implemented one week after the intervention. A longer period of time would be more suitable to assess changes in bullying and social skills. Another limitation was that the activities were performed within an "artificial environment"; the context of a classroom provides more opportunities to intervene in real daily situations and test learned skills, although there was an effort in this study to make the intervention environment as close as possible to that which the participants routinely experience in their interactions. Further studies should overcome these limitations and incorporate the intervention developed in this study into the school curriculum, so that it can be delivered by teachers. Waiting a longer period between the pre-test and post-test is also recommended. The instrument used to collect data regarding bullying does not address cyberbullying, which represents another limitation of this study.
Conclusions
The intervention group experienced a reduction, though not statistically significant, in difficulty with regard to social skills. Victimization decreased significantly in both the intervention and comparison groups. Aggressiveness did not decrease significantly. This is the first study testing an intervention based on social and emotional skills directed at victims of bullying in Brazil, and it has the potential to encourage reflection upon intervention models designed for the Brazilian context. | 2016-10-31T15:45:48.767Z | 2016-10-26T00:00:00.000 | {
"year": 2016,
"sha1": "d86705ffa7984f782627bbeb11d0147b2cb67b9b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/13/11/1042/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d86705ffa7984f782627bbeb11d0147b2cb67b9b",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
253764293 | pes2o/s2orc | v3-fos-license | Studying the characteristics of nanobody CDR regions based on sequence analysis in combination with 3D structures
Background Single-domain antibodies or nanobodies have recently attracted much attention in research and applications because of their great potential and advantages over conventional antibodies. However, isolation of candidate nanobodies in the lab has been costly and time-consuming. Screening of leading nanobody candidates through synthetic libraries is a promising alternative, but it requires prior knowledge to control the diversity of the complementarity-determining regions (CDRs) while still maintaining functionality. In this work, we identified sequence characteristics that could contribute to nanobody functionality by analyzing three datasets: CDR1, CDR2, and CDR3. Results By classifying amino acids based on physicochemical properties, we found that two different amino acid groups were sufficient for CDRs. The nonpolar group accounted for half of the total amino acid composition in these sequences. Observation of the highest occurrence of each amino acid revealed that the usage of some important amino acids, such as tyrosine and serine, was highly correlated with the length of the CDR3. Amino acid repeat motifs were under-represented and highly restricted, mostly to 3-mers. Inspecting the crystallographic data also demonstrated conservation in the structural coordinates of dominant amino acids such as methionine, isoleucine, valine, threonine, and tyrosine at certain positions in the CDR1, CDR2, and CDR3 sequences. Conclusions We identified sequence characteristics that contribute to functional nanobodies, including amino acid groups, the occurrence of each kind of amino acid, and repeat patterns. These results provide a simple set of rules to make it easier to generate desired candidates by computational means; they can also be used as a reference to evaluate synthetic nanobodies. Supplementary Information The online version contains supplementary material available at 10.1186/s43141-022-00439-9.
Background
Antibodies (Abs), as well-known human therapeutic options, have been widely adopted and used for treating numerous diseases. High-throughput screening (HTS) has proved to be an efficient approach to screen for lead Abs that bind specific antigens [3,7,43]. However, these methods are labor- and resource-intensive, time-consuming, and require advanced laboratory skills. With the exponential growth in the amount of information about compounds and molecules that have high potential in clinical and industrial uses, the development and application of in silico or virtual screening have been encouraged to overcome the costs of HTS. This new approach involves methods "rooted in physical principles and/or experimental knowledge to prioritize compounds for experimental testing, thus aiming to save time and cost through a more rational approach compared to HTS" [33]. Different approaches have been adapted for screening small and large therapeutic molecules. For selecting small-molecule candidates, in silico screening methods incorporate simple filtering based on physicochemical properties, such as Lipinski's rule of 5 [25], or more complex methods like docking-based algorithms, which mimic the interaction between a drug molecule and its target. Lipinski's rule of 5 (Ro5), a robust guideline for screening oral drug-like compounds, has been widely used by medicinal and computational chemists. The rule defines a set of cutoff values that can be used to screen out molecules whose properties violate the boundary values and would therefore be less suitable for oral use. Because of its simplicity and convenient application, many related studies have used this method, which led to the development of the in silico screening strategy commonly used in pharmaceutical research. In the case of large immunoglobulin molecules like Abs, most of the methods used to develop Ab candidates involve analyzing and engineering the variable region and the Fc region. Many studies have focused on the variable region because it is not only responsible for the antigen-binding ability but also affects the pharmacokinetics, pharmaceutical properties, and immunogenicity of Abs [15]. Analysis of the variable region comprises predicting potential physicochemical degradation sites, immunogenicity, and aggregation [23] with the help of computer algorithms. Engineering of the variable region includes changes in antigen-binding-site properties, pharmacokinetics, pharmaceutical properties, and immunogenicity. However, these methods are complicated and require substantial computational resources to screen for potential candidates.
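To illustrate the Ro5 filtering mentioned above, here is a minimal sketch using RDKit (assuming RDKit is installed); the common one-violation allowance is a convention rather than part of the rule itself.

    from rdkit import Chem
    from rdkit.Chem import Descriptors, Lipinski

    def passes_ro5(smiles, max_violations=1):
        """Flag molecules by Lipinski's rule of 5: MW <= 500, logP <= 5,
        H-bond donors <= 5, H-bond acceptors <= 10. The cutoff is a
        screening heuristic, not a hard law."""
        mol = Chem.MolFromSmiles(smiles)
        if mol is None:
            return False
        violations = sum([
            Descriptors.MolWt(mol) > 500,
            Descriptors.MolLogP(mol) > 5,
            Lipinski.NumHDonors(mol) > 5,
            Lipinski.NumHAcceptors(mol) > 10,
        ])
        return violations <= max_violations

    print(passes_ro5("CC(=O)Oc1ccccc1C(=O)O"))  # aspirin -> True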
With the increasing use of nanobodies (Nbs), recombinant single-domain variable fragments of camelid heavy-chain-only Abs, the field of therapeutic molecules has become more competitive. Nbs have superior properties for medical diagnosis and therapeutic applications due to their high affinity, high production yield in a variety of expression systems, small size, high stability, and solubility, together with the ability to recognize unique epitopes that traditional Abs cannot [13]. Nbs have been used in protein purification [21] and immunoprecipitation [29] and as crystallization-assisting chaperones [22]. In addition, Nbs can be used in the clinical field as bioimaging tools [35], for disease diagnosis [40], for targeting therapeutics [14], for identifying protein-protein interactions, and much more. With the COVID-19 pandemic still ongoing in many parts of the world, the potential of Nbs has been demonstrated for detecting and treating COVID infection [2,45,47]. Although antiviral drugs are also solid options for treating virus infections, the processes of screening and randomized controlled trials generally hamper the adoption of anti-COVID drugs [46]. Isolated Nbs could also be effective in addressing the global problem of COVID variants [11,41,50], which have been challenging vaccine efficacy. This demand has driven researchers to produce functional Nbs against COVID-19 [38,44,49]. Given the great potential of nanobody applications, detailed information on their formation and use has become essential for researchers. Multiple methods have been developed and utilized for screening the best Ab [5,24,28,39] and Nb candidates [10,12,30] for therapeutic properties. Currently, synthetic immunoreactive molecules can be obtained through the screening of appropriate libraries based on structure or sequence [37]. However, no such software for Nbs has been developed. The bottleneck in finding candidate Nbs is overcoming the enormous sequence diversity, because only a small fraction of sequences may be functional. Research has been conducted to design a library with only a four-amino-acid code to reduce the diversity of the Nb sequences [9] or, with the help of computational methods such as SwiftLib, to limit the threshold of diversity [16]. Prior knowledge of functional CDRs could also be used to graft CDR sequences onto Nb frameworks to generate Nbs that are difficult to make by traditional means [42]. The combination of sequence characteristics and structures generated by modeling software such as Rosetta [17] or AlphaFold [18] can significantly speed up the discovery of therapeutic Nbs. In order to do that, it is necessary to analyze Nb sequences to define characteristics that can be used for library design or to create Nbs with desired functions and stability.
Here, we have studied a large number of CDR sequences of Nbs to find their overall sequence characteristics and the specific constraints that could be present in the CDR loops of known Nbs. We also analyzed 3D structural features in combination with primary sequence data to explain some definitive characteristics of functional Nbs.
Creation of the nanobody CDR database
To construct the Nb CDR datasets which are used for analysis, we first downloaded all Nb sequences from the NCBI database. We used ANARCI [8] with the Martin [1] numbering scheme to number the downloaded sequences and filtered out sequences that were not immunoglobulin molecules. We then removed Abs and Ab variations such as Fab and scFv from the Nb dataset based on the sequence description. The amino acid positions spanned the regions from 30 to 35, 47 to 58, and 93 to 101 of CDR1, CDR2, and CDR3, respectively.
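A minimal sketch of the CDR-extraction step follows, assuming ANARCI-style output in which each numbered domain is a list of ((position, insertion code), residue) pairs; the example fragment is hypothetical.

    # Minimal sketch of CDR extraction from a Martin-numbered heavy chain.
    CDR_SPANS = {"CDR1": (30, 35), "CDR2": (47, 58), "CDR3": (93, 101)}

    def extract_cdrs(numbered):
        cdrs = {name: "" for name in CDR_SPANS}
        for (pos, _ins), aa in numbered:
            if aa == "-":          # gap in the numbering scheme
                continue
            for name, (start, end) in CDR_SPANS.items():
                if start <= pos <= end:
                    cdrs[name] += aa
        return cdrs

    # Hypothetical fragment covering part of CDR1 (31A is an insertion):
    fragment = [((29, " "), "T"), ((30, " "), "S"), ((31, " "), "G"),
                ((31, "A"), "Y"), ((32, " "), "T")]
    print(extract_cdrs(fragment))  # {'CDR1': 'SGYT', 'CDR2': '', 'CDR3': ''}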
Amino acid group classification
We classified all 20 kinds of amino acids into four groups based on their side-chain properties [20]: (1) nonpolar amino acids, G, A, V, L, I, P, and M; (2) polar-neutral amino acids, S, T, C, N, and Q; (3) electrically charged amino acids, E, D, K, R, and H; and (4) aromatic amino acids, F, Y, and W. For each CDR sequence, we assigned each amino acid to its corresponding group, counted the number of AAs in each group, and expressed this as a percentage of the total. For instance, the sequence YVGG can be designated "4111", which shows that it contains two groups: group 1, which accounts for 75%, and group 4, which makes up 25% of the total residues.
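The grouping and percentage computation can be sketched in a few lines of Python; the group numbering follows the classification above, and the YVGG example reproduces the "4111" designation.

    from collections import Counter

    # The four side-chain groups used above; group numbers follow the text.
    AA_GROUPS = {
        1: set("GAVLIPM"),   # nonpolar
        2: set("STCNQ"),     # polar-neutral
        3: set("EDKRH"),     # electrically charged
        4: set("FYW"),       # aromatic
    }
    AA_TO_GROUP = {aa: g for g, members in AA_GROUPS.items() for aa in members}

    def group_profile(cdr):
        """Return the per-group percentage composition of a CDR sequence."""
        counts = Counter(AA_TO_GROUP[aa] for aa in cdr)
        return {g: 100.0 * n / len(cdr) for g, n in sorted(counts.items())}

    print(group_profile("YVGG"))  # {1: 75.0, 4: 25.0}, i.e. the "4111" example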
Highest possible counts of amino acids in CDR sequences
For each CDR sequence, we extracted the highest number of each kind of amino acids (except cysteine) and the respective length of that sequence. Scatter plots were used to analyze the possible correlation between the highest possible count of a certain kind of amino acid and all the observed CDR sequence lengths.
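A minimal sketch of this computation, with a quadratic fit of the observed ceiling against CDR length (as reported later for tyrosine), follows; the three CDR3 sequences are hypothetical placeholders.

    import numpy as np
    from collections import Counter

    def highest_counts(cdrs, amino_acid):
        """For each observed CDR length, the maximum count of one amino acid."""
        best = {}
        for seq in cdrs:
            n = Counter(seq)[amino_acid]
            best[len(seq)] = max(best.get(len(seq), 0), n)
        return best

    # Hypothetical CDR3 set; in the study this is done per amino acid (except C).
    cdrs = ["AYYSDYDYW", "ARDYYYGSSYYFDY", "AKGSYYY"]
    best = highest_counts(cdrs, "Y")
    lengths, counts = map(np.array, zip(*sorted(best.items())))

    # Quadratic fit of the ceiling; R^2 quantifies how well the polynomial
    # tracks the observed maxima.
    coeffs = np.polyfit(lengths, counts, deg=2)
    pred = np.polyval(coeffs, lengths)
    r2 = 1 - np.sum((counts - pred) ** 2) / np.sum((counts - counts.mean()) ** 2)
    print(coeffs, r2)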
Tandem single amino acid and oligopeptide repeats
For tandem single amino acid repeats, a minimum length of at least three consecutive identical amino acids was considered. For tandem oligopeptide repeats, a minimum unit length of two, with the unit containing at least two different amino acids, was applied. We analyzed each sequence and recorded the count of single amino acid/oligopeptide repeats and their lengths in correlation with the sequence length. Due to the high diversity in the amino acid composition of CDRs, only oligopeptide tandems with unit lengths of two or three were used for the analysis.
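These repeat definitions map naturally onto regular expressions; the following sketch (with a hypothetical test sequence) finds poly(AA) runs of length three or more and tandem 2-mer/3-mer units containing at least two different amino acids.

    import re

    def poly_aa_repeats(seq):
        """Runs of >= 3 identical residues, e.g. 'GGG'."""
        return [m.group(0) for m in re.finditer(r"(.)\1{2,}", seq)]

    def oligo_repeats(seq, unit_len):
        """Tandem repeats of a 2- or 3-mer unit that mixes >= 2 amino acids."""
        hits = []
        for m in re.finditer(r"(.{%d})\1+" % unit_len, seq):
            if len(set(m.group(1))) >= 2:   # exclude single-AA runs
                hits.append(m.group(0))
        return hits

    seq = "GGGSYTYTARDY"
    print(poly_aa_repeats(seq))    # ['GGG']
    print(oligo_repeats(seq, 2))   # ['YTYT']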
Visualization of conserved dominant amino acids in crystal structures
Antigen-free Nb 3D structures were extracted from the RCSB protein database (PDB ID: 5M2W, 6OBC, 7KJH, 4WEU, 6Z20, 6OBM). WinCoot with the SSM Superpose function was used to superimpose the extracted Nb 3D structures. For PDB entries with duplicate Nb chains, redundancy was checked, and only one chain was retained. We identified the positions in each CDR that had structurally conserved AAs by calculating the frequency of the most dominant AAs. Each highly conserved position in each CDR was visualized in 3D using PyMOL. For analyzing the interaction between a Nb and its antigen (PDB ID: 7KGJ), we numbered the Nb sequence using Martin's scheme to highlight the specific CDR positions that could interact with the antigen or were structurally conserved. Investigation of P at position H96 was done with three superimposed Nbs (PDB ID: 3K7U, 6EY0, 6QGW).
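A sketch of an analogous superposition and highlighting workflow in PyMOL's Python API follows (the study used WinCoot's SSM Superpose for the superposition itself); the choice of reference structure and the assumption that PDB residue numbering matches the Martin scheme are simplifications.

    # Run inside PyMOL or with the pymol module importable.
    from pymol import cmd

    pdb_ids = ["5M2W", "6OBC", "7KJH", "4WEU", "6Z20", "6OBM"]
    for pdb in pdb_ids:
        cmd.fetch(pdb, type="pdb")

    # Superimpose everything onto the first structure (structure-based
    # alignment, analogous to SSM Superpose).
    reference = pdb_ids[0]
    for pdb in pdb_ids[1:]:
        cmd.super(pdb, reference)

    # Highlight one conserved position, e.g. methionine at Martin position
    # H34; using residue number 34 assumes PDB numbering matches the scheme.
    cmd.show("sticks", "resi 34 and resn MET")
    cmd.color("red", "resi 34 and resn MET")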
Length variation and overall amino acid composition in nanobody CDRs
Nbs can interact with antigen mainly via CDR3, but also via CDR1 and CDR2. These three regions are separated by conserved frameworks (Fig. 1). We downloaded and processed 2161 non-redundant and numbered Nbs from NCBI, resulting in 2377 CDR1, 2377 CDR2, and 2380 CDR3 sequences. The distribution of sequence length varied with each type of CDR, and the typical CDR length increased from CDR1 to CDR3. CDR1 and CDR2 had consistent average lengths, with six residues in CDR1 and 12-13 in CDR2. However, CDR3 showed large variation in sequence length, with values ranging from 12 to 18 and a median of 15 (Fig. 2A). After analyzing the frequency of AAs, we also found differences in the composition of widely used residues in each CDR. Methionine was mostly found in CDR1 (> 13% of sequences) and rarely found in CDR2 and CDR3 (< 1%), while P was more frequent in CDR3 (~ 5%) than in CDR1 or CDR2 (~ 1%) (Fig. 2B).
Frequency of amino acid groups in CDR sequences
We first investigated whether the contribution of different kinds of AA groups reflected their roles in CDRs. The frequency of different AA groups showed some interesting features. All Nb CDRs required AAs from at least two different groups in their sequences (Fig. 3A). The AAs were classified based on their side-chain properties, into nonpolar, polar-neutral, electrically charged, and aromatic. However, CDR1 had significantly lower diversity in AA groups compared to CDR2 and CDR3. The majority of CDR1 sequences contained three AA groups (63%), while most of the CDR2 and CDR3 sequences contained all four AA groups (55% and 86%, respectively) (Fig. 3).
This might indicate that all CDR loops contain various functional AA groups to modulate their backbone conformation and their affinity upon interacting with antigens. Most CDR1 sequences need only two different AA groups, while CDR2 sequences need more than two, and CDR3 sequences need all four different groups to achieve functionality. The frequency of AA group usage also differed among the types of CDR (Fig. 3B). Interestingly, CDR1 and CDR2 loops shared similar frequencies in the usage of all four AA groups. Group 3 AAs were most common in CDR3 (22%), compared to CDR1 (11%) and CDR2 (10%).
Limitations in the distribution of amino acids in CDRs
The composition of AAs in CDR sequences is naturally highly diverse and randomized. However, constraints on the use of certain AAs could potentially exist to prevent a CDR from being non-functional. To determine which AAs might be limited in a CDR and whether the selection followed any trend, we determined the greatest occurrence of each kind of AA in relation to the lengths of the CDR loops. The results showed a strong correlation between CDR length and the abundance of specific types of AAs. For simplicity and practical reasons, we used a threshold length of 13 for CDR2 and 18 for CDR3 based on the observed length distribution (Fig. 2A). The results indicated that the frequency trends in CDR2 and CDR3 were consistent and could be represented by polynomial equations. For example, a quadratic equation could describe the highest occurrence of tyrosine (Fig. 4). The highest occurrence of each AA type was more consistent in the case of CDR2 and less consistent in CDR3. For the CDR2 loops, most of the residues showed a strong correlation, with R² > 0.8 (except for G and P) (Supplement 1). For the CDR3 loops, the AAs D, S, and Y showed high correlations, with R² values of 0.837, 0.93, and 0.967, respectively. However, other AAs in CDR3 showed lower correlation values (Supplement 2). CDR1 was excluded, as this region was mainly restricted to a length of six AAs.
Amino acid repeat units rarely present in CDRs
After investigating the composition and presence of each AA, we analyzed more complex sequence features, such as repeating units, in the CDRs of Nbs to determine the frequency of repeated AA sequences in the CDRs of functional Nbs. We first identified polyamino acid, poly(AA), repeats with length ≥ 3, and the results indicated that the frequency of such repeats was limited (Fig. 5). Poly(AA) stretches were rare in most CDR sequences and accounted for only 1.26%, 11.36%, and 14.12% of CDR1, CDR2, and CDR3 sequences, respectively. The presence of poly(AA) repeats was constrained to one unit per CDR, with only one exception, in which one out of 2380 Nb CDR3s harbored three different poly(AA) stretches (gi 1036392491). The lengths of poly(AA) repeats did not generally correlate with CDR length; based on the scatterplots, only CDR2 showed a small correlation between CDR length and the length of poly(AA) repeats (Fig. 5).
Structurally conserved amino acids in CDR sequences
After investigating the differences in sequence characteristics between CDRs, we explored the possible conservation of specific AAs at numbered CDR positions based on their frequencies. We chose ten positions that tended to contain a certain kind of AA: H30, H34, and H35 in CDR1; H48, H49, H51, and H57 in CDR2; and H93, H94, and H101 in CDR3 (Fig. 6). These positions have been shown to be dominated by specific AAs. At position H30, S was found to have the highest frequency (64.3% of the total CDR1 dataset), but its side-chain coordination varied greatly. M was mostly enriched in the CDR1 sequences, especially at position H34 (77.4% of the total CDR1 dataset), with overlapping side-chain coordination. G was conserved at position H35 (67.4% of the total CDR1 dataset), followed by A (15.3%). For the CDR2 dataset, V and A were predominant at positions H48 and H49 (96.6% and 75.4%, respectively) and showed high concordance in side-chain coordination. At positions H51 and H57, I and T showed high conservation, while in CDR3 the D at position H101 was highly conserved (50.5%), although its side-chain coordination varied significantly between 3D structures. We also inspected a complex between the synthetic Nb Sb45 and the SARS-CoV-2 receptor-binding domain (PDB ID: 7KGJ) (Fig. 7). In this structure, the Nb interacted with the spike glycoprotein through all three CDRs. The marked residues from the Sb45 Nb were translated to the specific positions using Martin's numbering scheme. The length of CDR1 was six AAs, that of CDR2 was 13 AAs, and that of CDR3 was 13 AAs. Of the ten observed positions with high AA conservation, nine positions (light pink) mainly contributed to beta-sheet formation rather than interacting directly with antigen residues. However, the residue T (purple) at position H30 (a position dominated by S) interacted with the antigen. The presence of AAs at positions H34, H35, H48, H49, H51, H57, H93, and H101 was also concordant with the dominant residues. Examination of the sequence characteristics revealed no poly(AA) stretches of three or more AAs or any oligopeptide repeats. Notably, P was mainly found in CDR3 (Fig. 2B) and was located at position H96 (10.3%). We inspected three crystal structures (PDB ID: 3K7U, 6EY0, 6QGW) with P at the H96 position. The data revealed that all three CDR3 loops had a sharp bend through P, which supports the role of proline residues in maintaining CDR3 loop conformation (Fig. 8).
Discussion
In this study, we analyzed 2377 CDR1, 2377 CDR2, and 2380 CDR3 Nb sequences to reveal their characteristics. We first analyzed the CDR loop length because many studies have suggested that length variation may be related to diversity and AA composition [6,34,48]. The variation in length differed between the CDRs: CDR1 was the shortest, with an optimal length of six AAs, while CDR2 and CDR3 were much longer, with a broader range of optimal lengths. CDR3 was the longest, perhaps to compensate for the lack of a VL and to maintain a sufficient binding surface area [27]. Thus, an appropriate length should be chosen for Ab or Nb CDR engineering.
After establishing the optimal CDR length, we then proceeded to investigate the diversity and AA composition. The usage of total AAs in the CDR (Fig. 2) was analyzed to gain insight into how each CDR differed from the other. Some AAs were more common in one CDR but less so in others. Methionine, for example, had a high frequency in CDR1 but was rarely found in CDR2 or CDR3. Glutamine and proline were present at a higher level in CDR3 than CDR1 or CDR2. These results suggest that certain kinds of AA might be more favorable in specific CDRs while less essential AAs could be discarded from the CDR sequence.
Due to the high diversity of AA composition in the CDR sequences, it is essential to classify AAs according to their side-chain structure to identify general characteristics. Although the AA composition of the CDR sequences was diverse and hard to predict, the sequences still maintained a combination of at least two different AA groups (Fig. 3). Because AAs are the critical element determining loop conformations and specificity [36], Nbs should adopt a set of residues with specific physicochemical properties so that their CDR loops can function correctly. For example, an aromatic residue like Y contributes significantly to specific interactions and affinity [9]; however, its side chain is large and contains a hydrophobic ring. A loop full of aromatic residues would result in a rigid conformation that was highly aggregation-prone because of the tendency for π-stacking [4]. These CDR loops must incorporate some smaller, more flexible AAs, like A and S, in their sequences to provide better backbone flexibility and appropriate positioning of the aromatic side chains [9]. In contrast, polar and charged residues contribute to better solubility [31,32]. It is worth noting that CDR3 contained more different groups of AAs than the other two CDRs. This suggests that CDR3 may require different AAs for optimal function to compensate for the monomeric Nb form, as CDR3 has longer loops and a larger interface for optimal antigen interaction. Once the general characteristics of AA groups in the CDRs were identified, we delved further into the individual AAs to explore their properties in the CDR loops. We suspected that, although the AA composition in this region is diverse, there could be a constraint on the occurrence of each kind of residue. For example, a CDR3 loop that contained a stretch of only one kind of AA should not exist, because the loop would not be able to function properly and the chance of forming a long poly(AA) stretch is statistically very small. In this regard, the balance in AA composition is fundamental, and this feature should correlate with the length of the CDR loops. To check whether this hypothesis was true, we calculated the highest occurrence of each kind of residue in each CDR. The results were promising, as many AAs showed an increasing trend in their correlation with the length of the CDR loops, but the trends for the AAs as a whole were not always consistent. Only some residues showed high correlation values, and these residues were also abundant in the CDR AA composition; low-frequency residues showed low correlations. We suspect that these favored AAs were optimized because of their major roles in the CDRs, while others were less optimized because of their under-representation and minor contribution to CDR function, thus resulting in the variability of the trends. Moreover, only some AAs indicated a clear rising trend; this inconsistent finding might be related to the limited Nb data that we retrieved, as only about 2400 Nb sequences were obtained. More data about functional Nbs should become available in the near future, allowing us to determine whether our hypothesis about the occurrence of AAs is also consistent in the Nb case. Once the distribution of each kind of AA in the CDR loops is completely established, it will contribute significantly to the application of Ab-Nb CDR engineering by identifying the array of AAs that balances the AA composition.
After analyzing the AA composition, we next studied AA repeat patterns in the CDR sequences. AA repeats are abundant and have particular roles in protein function [19,26], so we checked whether these features also applied to the CDR sequences of Nbs. We first studied the characteristics and distribution of poly(AA) stretches with a minimum length of three. We found that there was a constraint on the occurrence of poly(AA) stretches in the CDR loops, where three was the most common length. The frequency of occurrence was also low, as most of the Nbs' CDR loops did not contain any poly(AA) tracts. Not all kinds of AAs can form poly(AA) tracts, and only a portion of them contributed significantly to poly(AA) formation. This result may be related to the usage of AAs in each CDR, where only abundant AA kinds had a higher chance of forming these tracts. The average length of CDR1 was significantly shorter than that of CDR2 and CDR3, which explains the rarity of poly(AA) in CDR1 compared to CDR2 or CDR3. In the case of oligopeptide repeats, the results showed a high restriction on the occurrence of repeat units. In CDR2, most 2-mer repeats were formed by common AAs such as G, S, and T. In CDR3, the composition of 2-mers was much more varied than in CDR2, which could be related to the innately high diversity of CDR3. Almost all of the dipeptide repeats were constrained to a length of four (two repeat units). The same applied to tripeptide repeats, for which the greatest length was six (two repeat units). Nbs might not be functional as expected if the CDRs contained too many repeats, even if the AA composition and AA groups satisfied the observed parameters. We found that protein repeats and poly(AA) tracts were highly under-represented in the CDRs, implying that these repeating patterns play only a minor role in the CDR loops.
With the increasing interests in both Ab and Nb engineering, it is important to identify the AAs that can reasonably be modified. Many studies have been conducted to select the potential AA candidates at certain positions, such as altering the binding properties [37] or improving solubility without affecting binding affinity [31]. The desired positions across different studies should also be carefully checked since they may vary because of differences in the numbering schemes used.
By inspecting the 3D crystal structures of six representative Nbs, we found that the conservation between dominant AAs and their spatial coordination was not strictly correlated. We found that AAs at the two ends of the CDR loops, such as S at H30 or D at H101, showed greater side-chain coordination diversity than residues inside the CDR loops. We suggest that AAs at the terminal positions of the CDR loops have to be more dynamic and flexible for repositioning the CDR loop conformation, while others inside the loops have to be more rigid to maintain CDR loop stability. In the case of M, this kind of AA was mostly found in CDR1, with high conservation at position H34 and overlapping side-chain coordination. This indicates that M is very selective and well-conserved, implying that M may play a role in the structural conformation of CDR1 rather than interacting directly with antigens. Other AAs in CDR2, such as V at H48, A at H49, I at H51, and T at H57, also showed high frequency and maintained side-chain coordination, which implies major roles in CDR2 loop conformation. Interestingly, A dominated at position H94 in Nbs, which is totally different from the same position in Abs (data not shown). Many studies have demonstrated the hallmark AAs at the interface of light chains and heavy chains that can distinguish Nbs from Abs; however, we still have not found an explanation for this distinctive feature of Nbs at the H94 position. Examining a candidate Nb-antigen structure also showed the relationship between well-conserved CDR positions and 3D structure, as AAs at these positions mostly contributed to Nb stability. Some Nbs with a rigid AA, such as P at position H96, could also rely on it to maintain the loop conformation. Thus, modifying the AAs at these positions could affect a Nb's structure. These results could be useful for limiting the diversity of certain positions in the CDR loops, since the usage and coordination of some AAs are well established.
Conclusions
The increasing number of Nb sequences has provided useful information about the sequence-based characteristics of the CDR loops, which can be used to define functional Nbs. By extracting and analyzing these data, we found that the presence of two different AA groups was sufficient for Nb CDRs, given the limitation on the occurrence of certain AAs in correlation with CDR lengths and the restriction on AA repeats in the Nb CDRs. This knowledge should be helpful for establishing parameters or cutoffs that set simple rules for generating desired candidate Nbs, particularly in CDR engineering, library design, or the evaluation of synthetic Nbs via in silico high-throughput screening.
Additional file 1: Figure S1. For the CDR2 loops, most of the residues showed strong correlation with R² > 0.8 (except for G and P). Figure S2. Correlation between the highest occurrence of amino acids and CDR3 lengths. Figure S3. Occurrence of repeat motifs formed by corresponding amino acids in each CDR region. (A) poly amino acid repeats, (B) oligo repeats. | 2022-11-23T14:59:38.805Z | 2022-11-21T00:00:00.000 | {
"year": 2022,
"sha1": "ecb2dc808397f740fcfd03097ebe61bbc536318b",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "ecb2dc808397f740fcfd03097ebe61bbc536318b",
"s2fieldsofstudy": [
"Biology",
"Materials Science"
],
"extfieldsofstudy": [
"Medicine"
]
} |
16109166 | pes2o/s2orc | v3-fos-license | The epidemiology of HIV and prevention needs among men who have sex with men in Africa
Men who have sex with men (MSM) in Kenya are at high risk for HIV and may experience prejudiced treatment in health settings due to stigma. An on‐line computer‐facilitated MSM sensitivity programme was conducted to educate healthcare workers (HCWs) about the health issues and needs of MSM patients.
SERVICES BARRIERS
The HIV epidemic in sub-Saharan Africa (SSA) is dynamic, with regional and temporal variation. The 2012 Report on the Global AIDS Epidemic reports a 25% decline in HIV incidence among the general population (GP), a 40% decrease in HIV-related mortality, and that more than half of people living with HIV (PLHIV) who are eligible for treatment were on treatment in SSA [1]. The UN classifies countries as low level when HIV prevalence in the GP, as measured by HIV surveillance data, is under 1% and key population (KP) prevalence does not exceed 5%, where KPs are defined as female sex workers (FSWs), men who have sex with men (MSM) and people who inject drugs (PWID); as concentrated when HIV prevalence is under 1% in the GP but prevalence in any KP (e.g., FSWs) consistently exceeds 5%; and as generalized when HIV prevalence exceeds 1% in the GP regardless of HIV prevalence among KPs [1,2].
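These thresholds amount to a simple decision rule; a minimal sketch follows (the function name is hypothetical, and "consistently exceeds" is reduced to a single exceedance for illustration).

    def classify_epidemic(gp_prevalence, kp_prevalences):
        """UN-style epidemic typology from the thresholds above.
        gp_prevalence: HIV prevalence in the general population (fraction).
        kp_prevalences: dict of key-population prevalences (FSW, MSM, PWID)."""
        if gp_prevalence > 0.01:
            return "generalized"        # GP above 1%, regardless of KPs
        if any(p > 0.05 for p in kp_prevalences.values()):
            return "concentrated"       # GP below 1%, some KP above 5%
        return "low level"              # GP below 1%, all KPs at or below 5%

    print(classify_epidemic(0.005, {"FSW": 0.20, "MSM": 0.07, "PWID": 0.02}))
    # -> "concentrated"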
An epidemic appraisal proposed by Wilson, Halperin and others, which characterizes the typology of HIV transmission within a country rather than relying simply on crude national estimates of incidence and prevalence, called for the reclassification of countries based on transmission dynamics [3,4]. They posited that concentrated epidemics are driven by KPs, and they added mixed epidemic settings, where both the GP and KPs play a role in HIV transmission, and generalized epidemics, in which, they argued, the contribution of KPs to new infections is insignificant. But this is debatable.
UNAIDS estimates HIV prevalence to be 17.9% among MSM in SSA. Yet there are limited epidemiological data for MSM or KPs in general in this region. For programmes to be well aligned, we need to better understand the epidemiologic and behavioural burden and social drivers of HIV within KP groups in all epidemic settings. The articles in this series focus on building the literature on epidemiology, social drivers of transmission and programmatic innovations for MSM in four regions of SSA: West, Central, East and Southern Africa. Respondent-driven sampling (RDS) methods were used to engage networks of MSM in these pioneering studies, which characterize the HIV epidemiology among MSM in Cameroon, Senegal, Malawi and Swaziland as elevated and sustained when compared to men in the GP. This supplement also includes a meta-analysis of prevalence studies from KPs in Central and West Africa, which helps to ground our understanding of MSM within the broader context of KPs.
The extraordinary burden of internalized and external stigma and discrimination creates paralyzing barriers to MSM access to prevention, care and treatment services, which are notable across these studies. Key dynamics of the intersecting stigmas of HIV and sexual orientation among HIV-positive MSM are also explored in Swaziland [5]. Structural interventions to address the pervasive stigma in healthcare settings have been called for by the public health community, and this supplement also addresses this issue with findings from sensitivity trainings in coastal Kenya, which can influence and support clinical work with MSM who come to healthcare settings for sexually transmitted infection (STI) and HIV-related care and treatment. Additionally, innovative strategies for community mobilization engaging peer leaders in small-group safe spaces within stigmatized peri-urban townships are addressed in the article from South Africa.
While the role of sex among men is increasingly described in concentrated epidemic settings, studies from Southern Africa within generalized epidemics, where KPs are conventionally not thought to play a significant role (e.g., South Africa, Swaziland, Lesotho, Malawi, Namibia, Botswana and Zimbabwe), have also shown MSM to have high prevalence of HIV, syphilis and hepatitis B virus, with disease burdens equal to or greater than those of men in the GP. Still, because of the conception of KPs as insignificant in these generalized epidemic settings, any data collection or targeted response is limited [6].
Swaziland has been documented to have the highest HIV prevalence globally. The incidence of HIV appears to have peaked in 1998–1999 at 4.6% according to UNAIDS estimates, while in 2009 it was estimated to be 2.7%. Recent data from the Swaziland HIV Incidence Measurement Survey (SHIMS) estimated HIV incidence at 2.4% in the total population: 3.1% among women and 1.7% among men [1,7,8]. The 2009 Swaziland Modes of Transmission (MOT) study characterized the major drivers of incident HIV infections as multiple concurrent partnerships before and during marriage as well as low levels of male circumcision [9]. While these drivers were validated through the SHIMS, it is critical to note that, like many MOT studies in generalized settings in Africa, there were no known prevalence data for FSWs or MSM, so the MOT analysis assumed a low frequency of both practices and therefore assigned them as minor drivers of the epidemic.
In this issue, Baral and colleagues conducted the first cross-sectional study to estimate HIV prevalence and its risk factors among MSM in Swaziland [10]. The HIV prevalence in the RDS sample of 324 MSM was 17.6%, and the odds of HIV infection increased by 20% for each year of age. The vast majority (70%) of the sample reported being unaware of their HIV status, and consistent condom use with lubricants with male partners was reported by 12.6% of respondents. The authors note that within their MSM sample, HIV prevalence is consistent with that of an age-matched sample from the GP until age 24–26 years, when the prevalence of HIV among MSM rises higher than that of other men in the GP, reaching 43.1% among MSM older than 27 years [10].
These data, as well as other recent data showing alarming rates (70.3%) among FSWs, have led researchers to rethink the prevention, care and treatment response in Swaziland [11]. Similarly, Wirtz et al. conducted the most comprehensive MSM study to date in Malawi, a high-HIV-burden country in East Africa where the HIV response has largely focussed exclusively on heterosexual and vertical transmission of HIV and where an estimated 8% of GP men have HIV [12,13]. A sample of 338 Malawian MSM had HIV and active syphilis prevalences of 12.5 and 4.4%, respectively, after adjusting for the RDS approach. Ninety percent of HIV infections were previously undiagnosed, and about half of respondents reported consistent condom use with casual male partners. Among MSM 26 years and older, the prevalence of HIV was 28.1%.
West and Central Africa, the most populous regions of Africa, have a mixture of HIV epidemics, and KPs are better understood to play an important role in the overall transmission dynamics in Nigeria, Senegal and Burkina Faso, where KPs consistently show elevated HIV prevalence in comparison to the GP, as reported by Papworth et al. in a meta-analysis of KPs from West and Central Africa [14]. In Cameroon, where GP HIV prevalence among men is 2.9%, Park and colleagues sampled 511 MSM in Douala and Yaoundé through RDS and estimated the HIV prevalence to be 37% [15]. In Douala and Yaoundé, respectively, HIV prevalence was 25.5 and 44.4% [16]. As in the other studies, the sample was young, with a median age of 24 years, and HIV rates increased with age at staggering levels: HIV prevalence among MSM aged 24–29 years was 47%, and it was 49.4% for those over 30. About half of respondents did not use condoms consistently with casual partners (48.5%), and even more did not use them consistently with regular partners (64.1%). Similarly striking, in Senegal, where GP prevalence among men of reproductive age is 0.5%, the article by Drame et al. reports baseline HIV prevalence in a sample of MSM at 36.0% (43/114), with cumulative HIV prevalence after 15 months at 47.2% (51/108) [17].
Due to the criminalized nature of male-to-male sex in all countries where the studies in this issue took place, with the notable exception of South Africa (where stigma is nevertheless high), MSM are often afraid to visit healthcare services; and when they do go, they are reluctant to disclose their sexual histories to healthcare providers for fear of rejection, derision, or other negative reactions. Continued work is needed to develop violence screening, reporting and mitigation approaches within these settings [18]. The authors in this issue demonstrate that access to and coverage of quality HIV services for MSM are still marginal and not sufficient to reverse the epidemic's trend among MSM, and this is aggravated by several factors, primarily stigma, discrimination and limited domestic investment in programmes focussed on MSM [19–21]. Lessons learned from mature programmes targeting FSWs in the region show that with limited coverage, poor dosage and an inadequate combination of approaches, even proven interventions will be ineffective [22].
Research focussing on MSM sexual risk behaviour in SSA is scant, and even less is known about the stigmatization and discrimination of HIV-positive MSM [23]. Kennedy et al. conducted 40 in-depth interviews with 20 HIV-positive MSM, 16 interviews with key informants and three focus group discussions with MSM community members. Internalized and experienced stigma was high among the men living with HIV, who reported concealing their HIV status from others. MSM living with HIV reported experiencing greater social isolation and a lack of support for care-seeking and medication adherence. Perceived and experienced stigma in healthcare settings led to delayed care-seeking and travel to more distant clinics to retain anonymity at home.
The authors argue that mental health interventions, training for healthcare providers and better protection against discrimination are needed for Swazi MSM living with HIV, which corroborates previous findings from South Africa [24].
The association between HIV prevalence and the existence of community-based HIV interventions targeting MSM and other KPs has been well described [25]. Tailored community-based programmes that provide MSM with high-quality, sensitive services that are socially and economically acceptable and led by the beneficiaries themselves are a promising approach. The article by Batist et al. expands on this association through the use of safe spaces to remove barriers to service access, including approaches aimed at training providers and mobilizing communities even within stigmatized peri-urban settings. These spaces led to greater feelings of connection, social support and self-esteem among MSM community members and also became distribution points for condoms, lubricants and HIV education.
Also in this supplement, two articles look at a sensitivity training for healthcare workers providing services to MSM [21,26], highlighting the pre-existing attitudes that can manifest during clinical encounters with MSM. Healthcare workers in SSA generally do not receive specific training in working with MSM or other KPs, and they may not be aware of risk factors for HIV transmission or of appropriate care and treatment needs. Healthcare worker training has been identified as a priority intervention to support a minimum package of essential services for MSM [27]. In van der Elst et al., the researchers implemented a novel approach to sensitivity training for healthcare workers providing services to MSM [21]. The training consisted of self-directed, publicly available online modules followed by group discussions focussed on MSM sexual risks and healthcare needs. Knowledge and homophobia were assessed prior to training, immediately after training and three months post-training. There was a statistically significant decline in homophobia sustained at three months post-training, with greater reductions for males and those in clinical roles (doctors and nurses), who were also more likely to have higher homophobia scores pre-training. However, it remains to be seen whether these attitude changes can be maintained over time without ongoing support [21].
In a subsequent article in this supplement, van der Elst et al. explored topics including the sexual identification of subcategories of MSM, sexual practices and risks for HIV and STI transmission, practices for sexual history taking and sexual health examinations for MSM [26]. Stigma was also a concern for healthcare workers, such as negative judgements from peers or community members for being associated with MSM, and was an ongoing challenge after the training. After completing the programme, healthcare workers expressed greater acknowledgement of MSM patients in their clinics, empowerment to address their needs, and a better understanding of the biological, behavioural and social influences that lead to HIV and STI risk for MSM.
The term ''MSM'' is meant to address all MSM, regardless of their gender or sexual identities. Some MSM self-identify as heterosexual rather than gay, homosexual or bisexual, especially if they also have sex with women, are married, only take the penetrative role in anal sex, or have sex with men for money or convenience [28]. They may not consider their sexual encounters with other men in terms of gender identity or sexual orientation, or they may more aptly self-identify using local social terms which reference sexual identities, masculinity and femininity, and behaviours. One noteworthy finding within the articles presented in this issue from Malawi and Swaziland was the disconnection between gender identity and sexual orientation [10,12,20]. While nearly all respondents in both surveys reported being either gay or bisexual, and anal sex with men was a criterion for study eligibility, a sizeable number reported that they were not male. In Malawi, 17.0% reported they were female, and another 2.8% said they were transgender. In Swaziland, 15.7% reported being female, and 1.8% said they were both male and female. It is not clear whether participants actually considered themselves to be women or whether their sexual behaviour caused them to consider themselves not to be men. Further study is needed to better understand how these terminologies translate into risk and sexual identity profiles, while not singling out these individuals for further stigma.
As more data become available for MSM in SSA, including in Southern African generalized settings, MSM needs should be identified and addressed throughout the continuum of HIV prevention, care and treatment. We continue to need evidence-based interventions to identify, create and train healthcare providers as well as community champions sensitive to MSM programming, including lawyers, media owners, journalists and religious leaders, and to establish community-driven programmes while expanding integration within the health system as appropriate.
The articles in this series have shown that throughout SSA, there is a significant and sustained epidemic among MSM, fear of discrimination from healthcare settings, and provider-based and self-stigma which impede prevention, care and treatment [29–31]. The findings highlight the need to focus on MSM as a critical KP in ''mainstream'' approaches as well as in MSM-targeted models. Countries in Africa characterized as having generalized epidemics, where KPs are not considered relevant, must be re-conceptualized based on these findings. All ''generalized'' epidemics are in reality mixed epidemics with ongoing transmission among KPs, and this becomes increasingly clear as GP prevalence rates decline while MSM experience expanding epidemics.
While it is not clear what proportion of new HIV infections are linked to MSM directly and via second-order transmission among their partners, it is clear that without addressing this underserved, stigmatized population, HIV transmission will be impossible to abate. Therefore, the benefits of targeted structural, behavioural and biomedical services for MSM go beyond the individuals to benefit the larger public welfare and security of all countries within SSA as well as globally. After three decades of the fight against HIV, plans to end the HIV epidemic through goals such as the AIDS-free generation and the US President's Emergency Plan for AIDS Relief's (PEPFAR) Blueprint seem possible. Substantial progress has been made, and more will come through vigilance, courage, tolerance and commitment.
Introduction
Sub-Saharan Africa has a very high burden of HIV-1 infection, of which a substantial proportion occurs among populations reporting high-risk sexual behaviour such as transactional sex and anal intercourse [1]. Such populations suffer from stigma and rejection, and they have been neglected by many HIV prevention and care programmes [2]. As a result, most African healthcare workers (HCWs) have not been informed about the risk of HIV transmission with regard to heterosexual or homosexual anal sex. In addition, African HCWs may lack understanding of the many challenges that men who have sex with men (MSM) and other key populations face in healthcare facilities [3].
Societal discrimination on the grounds of sexual orientation has been reported frequently among African MSM, taking the form of sexual, physical and verbal assault [4–7], and a number of studies have demonstrated an association between reported experience of discrimination and HIV risk or risk behaviour [8]. Similarly, high levels of internalized homophobia among MSM have been reported in Nigeria [8], South Africa [9,10] and Uganda [11]; internalized homophobia is known to be associated with individual HIV risk-taking behaviour [12]. Overt stigmatization specifically from HCWs in the context of HIV testing and care, such as denial of service [3,4,13] and harassment in clinic spaces [14], has been reported as a key element of perceived discrimination, presenting a deterrent to service access [15] or to accurate disclosure of behavioural risk [14]. In the absence of resources targeted to groups at high risk of HIV infection, the marginalization of MSM from public HIV prevention and treatment resources can only hamper the effectiveness of national HIV control efforts [6].
Health worker training, social mobilization and community engagement were prioritized as structural interventions in a recent consultation on priority areas for MSM HIV prevention research involving 69 participants from 17 African countries [16]. HCWs have also been called to action to reduce stigma and discrimination, provide integrated services for mental health concerns and substance use, screen MSM routinely for HIV and sexually transmitted infections (STIs) and ensure training for all personnel in clinical settings [3]. As yet, African HCWs lack any evidence-based, culturally adapted training model that is sensitive to MSM needs. This problem likely stems from cultural taboos about anal sex practices, even in opposite-sex couples [17], and strong political, religious and public prejudice against same-sex practices [18].
Since 2005, biomedical research has been ongoing with both HIV-1-negative MSM and MSM living with HIV in coastal Kenya [1,19]. To date, the only incidence data for African MSM derive from our cohort and a related cohort in Nairobi [19,20]. Overall HIV-1 incidence among young MSM in coastal Kenya was as high as 8.6 (95% confidence interval [CI]: 6.7–11.0) per 100 person-years of observation [19]. The majority of these MSM reported sex work, and large numbers of such men have been identified in coastal Kenya [21]. Similarly, our cohort study of MSM living with HIV showed that 40% had less than 95% antiretroviral therapy (ART) adherence, compared to 29% of heterosexual men and 12% of women who were followed in the same research setting [1]. These findings prompted us to brief health authorities and develop materials to help improve care for MSM in Kenya and elsewhere in Africa.
Internet-based learning (e-learning) as a cognitive tool has increasingly been used in the health professions in resource-constrained low- and middle-income countries [22]. E-learning technologies offer learners control over content, learning sequence, pace of learning, time and often media, allowing learners to tailor their experiences to meet their personal learning objectives [23]. The internet-based HCW MSM sensitivity training described here represents our attempt to deploy meaningful, clinically relevant material to meet this specific learning need within Kenyan HIV services through adaptation of an existing training curriculum to a web environment.
''MSM: An introductory guide for health workers in Africa'' is a paper-based HCW sensitization training first developed in 2010. The content of the training was validated and revised through a programme of extensive classroom use in South Africa and following expert review [24]. The paper-based training guide was converted to a self-directed electronic format and published online in July 2011, a version of which was adapted for use in this study.
The objectives of this study were (1) to assess the feasibility of facilitated self-directed learning of MSM health issues in coastal Kenya and (2) to evaluate the effect of the training intervention upon HCW knowledge and attitudes.
Study site and participants
Seventy-four HCWs involved in HIV prevention, treatment and care services in coastal Kenya were recruited to participate in the study. We mapped 54 ART-providing governmental and nongovernmental health providers in four districts in coastal Kenya (Kilindini, Mombasa, Kilifi and Malindi). An average of two staff representatives from each health-providing facility were invited to the training intervention, including clinicians and counsellors as well as clinic administrators.
Four ''district AIDS/STD coordinators'' (DASCOs) working within the study districts were trained to lead the MSM sensitization training during a 2-day ''training-of-trainers'' course similar to the training proper. An additional day was used to prepare focus-group topic guides and organizational matters. The study procedures were approved by the ethical review board at the Kenya Medical Research Institute, and all participants provided written informed consent for impact evaluation. HCWs received Ksh2000 (approximately US$24.00) to cover travel expenses and lodging.
The training intervention
The training consisted of two consecutive days and included eight modules which were taken in four sessions (i.e., two computer modules per session). Each session was followed by a group discussion, and each discussion group comprised 18–19 participants. DASCOs were supported by four members of the research team (i.e., a community liaison officer, a research counsellor, an MSM staff-fieldworker and a social scientist) and two members of a local LGBTI (lesbian, gay, bisexual, transgender and intersex) organization. HCWs were introduced to the sensitivity training on MSM health issues and learned that the training consisted of computer-assisted learning (http://www.marps-africa.org) and group discussions. The curriculum consisted of the following modules of study: (1) MSM and HIV in sub-Saharan Africa; (2) Stigma; (3) Identity, coming out and disclosure; (4) Anal sex and common sexual practices; (5) HIV and sexually transmitted infections; (6) Mental health, anxiety, depression and substance abuse; (7) Condom and lubricant use; and (8) Risk reduction counselling. Modules were designed to be self-completed in 1–2 hours each, including multiple-choice questions (median 12, range 9–16) at the end of each module. A score of 71% correct was required to advance to the next module, and upon successful completion of all eight modules, participants were sent a link to download their course certificate. A post-course evaluation asked for opinions and suggestions for course improvements, using both closed and open-ended questions.
Discussion topics included the identification of subcategories of MSM and their characteristics, sexual practices of MSM and risks for HIV and STI transmission, factors that make MSM vulnerable to STIs and HIV, risk assessment in counselling MSM, best practice for sexual history taking and sexual health examination with MSM, relevant information on safer sex for MSM, personal values and attitudes towards MSM, and stigma and strategies to improve communication with clients who are MSM. At the end of the training, HCWs discussed a work plan on how to strengthen clinical care and uptake of HIV and STD testing for MSM in their day-to-day practice. Study participants with a clinical role were also requested to keep a journal for three months to document and reflect upon their work practices and personal attitudes towards MSM.
Data collection
Course participants completed an online registration, including socio-demographic characteristics (age, gender and level of education and training), details of working practice (role within, type and location of healthcare organization) and specific experience working in HIV prevention, treatment and care with the most at-risk populations (MARPs) in Africa. To assess baseline levels of knowledge, participants completed a pre-course 24-item multiple-choice assessment covering key learning outcomes across the course material, and they completed a 25-item Homophobia Scale (HS; adapted from Wright et al. [25]). The same two measures were repeated three months after course completion to assess sustained changes in knowledge of and attitudes towards MSM. Immediate post-course knowledge was assessed using the same knowledge questionnaire upon completion of the eight modules. The results of pre-training and post-training assessments were not communicated to participating HCWs.
Measurement scales
Knowledge scores of course material were divided into the following categories: Poor (<17 questions correctly answered), Good (17–22 questions correctly answered) and Excellent (>22 questions correctly answered). When 17 or more questions were correctly answered, knowledge was considered adequate.
The HS, which was developed and standardized among college students in the United States by Wright et al. [25], was adapted for use in Kenya. The HS aims to measure thoughts, feelings and behaviours towards homosexuality and MSM, and it consists of 25 statements to which respondents indicate their level of agreement on a 5-point Likert scale (Table 1). Questions were reviewed and adapted by three Kenyan research staff and HCWs with professional and personal experience working with local MSM. The adapted HS is shown in Table 1 and reflects changes in terminology (e.g., ''gay'' was replaced with ''MSM'' and ''faggot'' with ''shoga'' in question 9) to reflect local terminology in current use. ''I have damaged property of gay persons, such as 'keying' their cars'' was replaced with ''Homosexuality should be treated as an illness/Homosexuality can be cured'' (question 17); ''I would feel comfortable with having a gay roommate'' was replaced with ''Homosexuality is un-African/is something brought by foreigners'' (question 18); and ''I have rocky relationships with people that I suspect are gay'' was replaced with ''Gay men have the same rights to public/tax-funded services as straight men'' (question 25). Responses to items 1, 2, 4, 5, 6, 9, 12, 13, 14, 15, 17, 18, 19, 21, 23 and 24 were reverse coded (item scores 1 = 5, 2 = 4, 3 = 3, etc.). The total HS score (HSS) was the sum of all item scores, with 25 subtracted from the total. The range is between 0 and 100, with an HSS of 0 being the least homophobic and 100 being the most homophobic.
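For concreteness, the scoring rule above can be written out in a few lines of code. The sketch below is illustrative only: the function name and the example responses are ours, not part of the study instrument.

```python
# Minimal sketch of the HS scoring rule described above.
# REVERSED lists the items that are reverse coded (1->5, 2->4, ...).
REVERSED = {1, 2, 4, 5, 6, 9, 12, 13, 14, 15, 17, 18, 19, 21, 23, 24}

def hs_score(responses):
    """responses maps item number (1-25) to a 1-5 Likert answer.

    Item scores are summed after reverse coding, and 25 is subtracted
    so the total (HSS) falls in 0-100, where 0 is least homophobic."""
    total = 0
    for item, answer in responses.items():
        total += (6 - answer) if item in REVERSED else answer
    return total - 25

# A respondent answering 3 ("neutral") on all 25 items scores the midpoint:
print(hs_score({i: 3 for i in range(1, 26)}))  # -> 50
```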
Data analysis
Analysis was conducted using Stata 11.0 (StataCorp LP, College Station, TX, USA). Binary and categorical characteristics of study participants, established at baseline, were compared using chi-square tests. Although both knowledge and HS scores before and after training approximated Gaussian distributions, differences between paired measures were non-normal, and thus unadjusted nonparametric methods were used for analysis. Median differences between pre- and post-training knowledge and homophobia scores are reported with an interquartile range (IQR). A Wilcoxon signed rank test for matched pairs was applied to test the statistical significance of differences between pre- and post-training scores. Mann–Whitney and exact McNemar's tests were used to test differences in scores and binary measures, respectively, by HCW characteristics. Spearman's rank correlation was used to assess correlation between pre- and post-training scores, and between knowledge and HS scores at both points. Multivariate linear regression models of pre- and post-training score outcomes were explored, but they yielded no additional insight beyond bivariate analysis.
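As an illustration, the nonparametric comparisons described above could be reproduced along the following lines in Python with SciPy (the study itself used Stata 11.0); the arrays are synthetic placeholders, not study data.

```python
import numpy as np
from scipy.stats import wilcoxon, mannwhitneyu, spearmanr

rng = np.random.default_rng(0)
pre = rng.integers(5, 20, size=74)                         # pre-training knowledge
post = np.clip(pre + rng.integers(0, 8, size=74), 0, 24)   # post-training knowledge

# Paired pre/post comparison: Wilcoxon signed rank test for matched pairs
w_stat, w_p = wilcoxon(pre, post)

# Between-group comparison by an HCW characteristic: Mann-Whitney U
clinical, other = post[:55], post[55:]
u_stat, u_p = mannwhitneyu(clinical, other, alternative="two-sided")

# Spearman's rank correlation between pre- and post-training scores
rho, rho_p = spearmanr(pre, post)
print(w_p, u_p, rho)
```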
Results
Seventy-four HCWs were recruited to participate in the training programme, and their characteristics are shown in Table 2. The majority were female, and the mean age of participants was 32 years (range: 23 to 53). Sixty-two participants (84%) worked at a government health facility (hospital or clinic), seven (9%) worked at a local nongovernmental organization (NGO) and three (4%) represented faith-based organizations. Most (74%) were in a clinical role (nurse or clinical officer). Irrespective of job role, only 8% had received previous training on how to counsel MSM clients, and a similarly low proportion (7%) had ever received training on how to counsel on anal sex practices. HCWs who had received training on anal sex practices were more likely to have ever asked their male patients if they had sex with men than HCWs who did not report previous training (86% (6/7) versus 31% (21/67), χ² p < 0.01).
All participants said they would recommend the course to others. Open-ended suggestions for course improvements are presented in Table 3. Study participants recommended that the training should be taken by all health stakeholders dealing with MSM issues and be included in medical training. There was an interest in similar training related to other key populations (e.g., women who have sex with women and sex workers).
Effect of training on MSM sexual health knowledge among healthcare workers
Table 4 shows knowledge of MSM sexual health issues among participants before the training course and upon reassessment three months after the course. Prior to the training course, only 10/74 (14%) had an ''adequate'' level of knowledge of MSM issues (threshold score: 17/24), reflecting a median score of 54% (IQR 49–63%). Levels of knowledge were similar across socio-demographic and workplace characteristics of HCWs, although knowledge was somewhat lower for HCWs in administrative roles compared to other roles (median 42 vs. 54, Mann–Whitney p = 0.293).
At the end of training, 70/74 (95%) HCWs had adequate course knowledge (exact McNemar's χ² p < 0.001 vs. pre-training). At three months after the course, 35 (49%) of the 71 HCWs reassessed had retained ''adequate'' knowledge compared to 9/71 (13%) at pre-training (exact McNemar's χ² p < 0.001). This represented a significant increase in the median assessment score of 12% (IQR 4–21%) between baseline and three-month knowledge assessments (Wilcoxon signed rank test for matched pairs p < 0.001). Significant sustained improvements in knowledge were apparent for all HCW age groups and genders, those with clinical or administrative roles and those from governmental health providers.
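For readers unfamiliar with the test, the sketch below runs an exact McNemar's test on a paired 2×2 table whose margins match the reported 9/71 pre-training and 35/71 three-month counts; the split of the off-diagonal cells is an assumption, since only the margins are reported.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: pre-training (adequate, not adequate);
# columns: three months post-training (adequate, not adequate).
# 8 + 1 = 9 adequate pre-training; 8 + 27 = 35 adequate at three months.
table = [[8, 1],
         [27, 35]]
result = mcnemar(table, exact=True)  # exact binomial test on discordant pairs
print(result.statistic, result.pvalue)
```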
Pre-training and three-month post-training scores were negatively correlated (Spearman's rho = −0.51, p < 0.001), indicating that improvements in knowledge tended to be highest among HCWs with lower pre-training knowledge. There were no significant differences in the degree of knowledge gain by the gender or age group of HCWs; however, participants in counselling roles achieved significantly lower gains in sustained knowledge than other HCWs (median difference: 0% vs. +13%, Mann–Whitney p = 0.0163).
Effect of training on personal attitudes toward MSM among healthcare workers
Table 5 shows HS scores among HCWs prior to training and at reassessment three months later. Overall, the median HS score prior to training was 68/100, representing extensive agreement with homophobic statements and disagreement with statements indicating tolerance of MSM (see Table 5). Male HCWs had slightly higher HS scores at baseline than female HCWs, while HS scores declined with increasing age group, but these differences were not statistically significant. HCWs in clinical roles (medical and nursing) had higher HS scores than other staff (median 71 vs. 66, respectively, Mann–Whitney p = 0.116), and HCWs working in government facilities had significantly higher HS scores than HCWs in NGOs (Table 5, Mann–Whitney p = 0.037).
The majority of HCWs reported lower HS scores three months post-training (80.3%, 57/71) compared to their baseline HS score; in four (5.6%), HS scores were unchanged; and in 14.1% (10/71), HS scores were higher after training than before. Overall, the median decrease in individual HS score after training was 8 points (IQR 2–15), which was statistically significant. These findings did not change in a sensitivity analysis omitting the three HS questions that were culturally adapted (data not shown). Individual pre-training and post-training HS scores were negatively correlated (Spearman's rho = −0.71, p < 0.001), reflecting the tendency for HCWs with high pre-training HS scores to exhibit greater decreases in this measure as a result of training (Figure 1).
Male HCWs and those working in clinical roles and in governmental institutions recorded the most pronounced reductions in HS score subsequent to training, although differences in median reduction by HCWs' gender, age group, staff role and institution were not statistically significant. More modest declines in HS score were apparent for counsellors (median reduction after training: 4 points) and staff of NGOs (median reduction after training: 0 points); however, it is notable that these groups reported relatively low HS scores prior to training. Collectively, there was some evidence of correlation between the scale of increase in individual knowledge and the scale of decline in HS score, and this was of borderline statistical significance (Spearman's rho = −0.21, p = 0.087).
Discussion
This study formally evaluated a training course aimed specifically at improving knowledge and awareness of MSM sexual health needs among healthcare staff involved in frontline HIV prevention, treatment and care for adult populations in sub-Saharan Africa. Prior to training, specific and accurate knowledge relevant to the management of behavioural and clinical risks for MSM clients was poor.
Whilst this may not be surprising in the face of longstanding neglect of Kenyan MSM within HIV policy and resource allocation, and a lack of attention to MSM within medical, nursing and HIV counselling training in Kenya, it draws focus to the challenge of maintaining and extending the professional competence of the existing HIV workforce to match the epidemiological realities of the Kenyan HIV epidemic – especially since the National AIDS & Sexually Transmitted Diseases Control Programme (NASCOP) requires Kenyan HCWs to document the number and category of MSM using HIV services.
Whilst targeted services may well be necessary for subpopulations of MSM, such as male sex workers, they are unlikely to replace the need for MSM-specific clinical care within general health services. MSM-specific programmes have aroused considerable social antipathy in Kenya to date [26] and may in any case not be perceived as accessible to men who covertly engage in homosexual behaviour [27]. Furthermore, surveillance of key populations, including MSM, and strategic information on service coverage to these groups are now an international requirement [28].
The combination of self-directed, modular computer-based learning supplemented by group discussions facilitated by trainers identified from within the existing workforce may offer a relatively sustainable and mobile model for episodic health professional training in this context. The learning content of this course is freely available as a web resource, yet reliable access to internet services remains elusive and expensive in most parts of the country. Even where it is available, the narrative reflections by participants who undertook this training emphasize the importance of the sanction provided by facilitated group discussions to share and explore personal and professional issues arising from the training content, which may well be lost in purely self-directed learning [29].

The brief training programme described here resulted in significant improvements in knowledge of MSM sexual health issues pertinent to day-to-day prevention and clinical practice, and these improvements were sustained by most trainees until at least three months after training. The increase in knowledge was accompanied by a reduction in negative attitudes toward MSM over the same period. Encouragingly, the positive effect of training upon knowledge and personal attitudes toward MSM was strongest among HCWs who had poor levels of knowledge and/or more extreme negative attitudes toward MSM prior to training. That positive changes were most marked among HCWs in clinical roles within governmental settings, which represent the backbone of Kenyan HIV services, is cause for particular optimism. Studies to date of perceived barriers to healthcare access identified by MSM in Kenya [7,30] and elsewhere in sub-Saharan Africa [31,32] have reported denial of service, lack of confidentiality, ignorance and verbal abuse from governmental HIV services as central challenges in accessing sexual and general health services. The findings of this study, albeit preliminary, suggest both that members of this workforce are willing to learn about MSM sexual health and that their knowledge and attitudes toward MSM are responsive to this learning.

This study has a number of limitations. The HS, which was originally developed and validated among college students in the United States [25], required amendment to preserve its face validity in a markedly different research context. Whereas the modified scale was responsive to change with training, and these changes were robust to a sensitivity analysis excluding modified scale items, the objective meaning of absolute scores and the convergent and divergent validity of this scale in this population remain to be established. Furthermore, although training effects upon knowledge and homophobic sentiment were assessed three months after the training itself, the longer term effect of training cannot be assumed from this study. Finally, this study lacked a control group, which, ideally, would have consisted of HCWs not receiving the intervention and HCWs only participating in the self-directed learning.
We report qualitative narratives among HCWs who returned to their workplace after training but found little support for new perspectives amongst (untrained) colleagues [29]. In a recent qualitative assessment of counselling challenges regarding MSM experienced by counsellors and clinicians in coastal Kenya, all felt that a lack of training and supervisory support impacted their ability to serve MSM [33]. These findings may suggest that longer term support of trained HCWs and/or more extensive facility-based training of all staff may be prerequisites for longer term changes in institutional practice. While knowledge of same-sex practices is a first step towards improving services for MSM, experience of serving MSM in day-to-day practice will further improve services. Follow-up of the health workers trained in this study is planned and may provide insights into the care services provided to MSM at two years post-training. Additionally, although the training itself was conducted by facilitators from governmental services who were specially trained for the role, the study was run by a team that was unusually experienced in working with Kenyan MSM, which may threaten generalizability to other settings.
Finally, the ultimate goals of improving knowledge of MSM sexual health needs and reducing prejudicial attitudes toward MSM in healthcare settings are to enhance the accessibility of population-based, public health services to MSM themselves. Although surely a prerequisite to accessible HIV prevention, treatment and care for MSM, the extent to which changes in the attitudes and practices of healthcare providers are reflected in the perceived and practical accessibility and acceptability of services to MSM themselves is unknown. Further study will be required to establish the effect of this brief intervention on long-term attitudes and professional practices towards MSM, and what practical contribution such strategies might make to addressing unmet HIV-related needs among MSM.
Conclusions
In summary, we developed, implemented and evaluated a brief training intervention addressing knowledge of and attitudes toward MSM and their sexual health needs in Kenya. The training, which combined self-directed and facilitated group learning, increased health worker knowledge and reduced homophobic attitudes up to three months after training. Scaling up such interventions offers a straightforward response to the immediate need to support HCWs in offering accessible and informed services that address the sexual health needs of MSM in Kenya.
Introduction
Swaziland is a small, land-locked, lower-middle-income country that is surrounded by South Africa and Mozambique; it has a population of approximately 1.1 million people and a life expectancy at birth of approximately 48 years [1]. Similar to other Southern African countries, Swaziland has been severely affected by HIV, with over a quarter of its reproductive-age adults (15–49) estimated to be living with the virus, equating to an estimated 170,000 people living with HIV [2]. Moreover, the incidence of HIV appears to have peaked in 1998–1999 at 4.6% [95% confidence interval (CI) 4.27–4.95], according to estimates by the Joint United Nations Programme on HIV/AIDS (UNAIDS), while in 2009 it was estimated to be 2.7% (95% CI 2.2–3.1%) [3–6]. There appear to have been further declines in incidence according to 6054 person-years of follow-up data from 18,154 people followed from December 2010 to June 2011 as part of the Swaziland HIV Incidence Measurement Survey (SHIMS) longitudinal cohort. Overall incidence was approximately 2.4% (95% CI 2.1–2.7%), with incidence estimated to be 3.1% (95% CI 2.6–3.7) among women as compared to 1.7% (95% CI 1.3–2.1) among men [7]. Indeed, women and girls have been more burdened with HIV than men throughout the history of the HIV epidemic in Swaziland, with the HIV prevalence among women aged 15–24 in 2006 estimated at 22.6% compared to 5.9% among age-matched men and boys [5].
The 2009 Swaziland Modes of Transmission study characterized the major drivers of incident HIV infections as multiple concurrent partnerships before and during marriage as well as low levels of male circumcision [8]. These risk factors were confirmed in the SHIMS study, with risk factors for incident HIV infections among both men and women including not being married or living alone, having higher numbers of sex partners and having serodiscordant or unknown-HIV-status partners [7]. There are no known HIV prevalence estimates for key populations in Swaziland, including female sex workers (FSW) or men who have sex with men (MSM) [9,10]. The 2009 Swazi Modes of Transmission Study indicates that both sex work and male–male sexual practices are reportedly infrequent and assumed to be minor drivers of HIV risks in the setting of a broadly generalized HIV epidemic. However, the prevalence of these risk factors has not been measured in the HIV surveillance systems that are used to inform the Modes of Transmission Surveys [11]. The last several years have witnessed an increase in the understanding of the potential vulnerabilities among these same key populations through targeted studies, including of MSM in neighbouring countries with similarly widespread HIV epidemics [12,13].
The largest body of data is available from South Africa, where the first study, completed in 1983 among 250 MSM, demonstrated a high prevalence of HIV, syphilis and hepatitis B virus [14]. More recently, a study of rural South African men found that approximately 3.6% of men studied (n = 46) reported a history of having sex with another man [15]. Among these men, HIV prevalence was 3.6 times higher than among men not reporting male partners (95% CI 1.0–13.0, p = 0.05) [16]. There have also been several targeted studies of MSM in urban centres across South Africa that consistently highlight a population of men who have specific risk factors for HIV acquisition and transmission and limited engagement in the continuum of HIV care [17–19]. Relatively recent studies from other countries, including Lesotho, Malawi, Namibia and Botswana, have shown similarly diverse populations of MSM [16,20,21]. Diversity among populations of MSM across Southern Africa manifests through diverse sexual orientations and practices, ranging from men who are gay identified, with primarily male sexual partners, to men who are straight identified, with both male and female sexual partners [22]. Diversity has also been measured in the range of HIV-related risk practices among MSM, including understanding of the HIV acquisition and transmission risks associated with unprotected anal intercourse and of the levels of use of condoms and condom-compatible lubricants (CCLs) [23].
To better characterize vulnerabilities and HIV prevention, treatment and care needs among MSM in Swaziland, a cross-sectional assessment was completed to provide an unbiased estimate of the prevalence of HIV and syphilis among adult MSM in Swaziland. This study was completed in equal collaboration with the Swaziland National AIDS Program (SNAP) in the Ministry of Health. This study further sought to describe the significant correlates of prevalent infections, including individual behavioural characteristics, and describe social and structural HIV-related factors and risks for HIV infection among MSM.
Methods
Sampling
MSM in Swaziland were recruited via respondent-driven sampling (RDS), a peer referral sampling method designed for data collection among hard-to-reach populations [24]. Potential participants were required to be at least 18 years of age, report anal sex with another man in the previous 12 months, be able to provide informed consent in either English or siSwati, be willing to undergo HIV and syphilis testing and possess a valid recruitment coupon.
Survey administration and HIV testing
All participants completed face-to-face surveys and received HIV and syphilis tests on site. Surveys were administered by trained members of the research staff and lasted approximately one hour. The study was completely anonymous and did not collect any identifiable information; we used verbal rather than signed consent to further ensure anonymity. Questions on socio-demographics (e.g., age, marital status and education), behavioural HIV-related risk factors (e.g., HIV-related knowledge, attitudes and risk behaviours) and structural factors (e.g., stigma, discrimination and social cohesion) were included [25]. HIV and syphilis tests were conducted by trained phlebotomists or nurses, according to official Swazi guidelines. Test results, counselling and any necessary treatment (for syphilis) and/or referrals (for HIV) were provided on site. Participant surveys and test results were linked using reproducible, yet anonymous, 10-digit codes.
Analytical methods
Population and individual weights were computed separately for each variable by the data-smoothing algorithm using RDS for Stata [26]. The weights were used to estimate RDS-adjusted univariate estimates with 95% bootstrapped confidence intervals (BCIs). Crude bivariate regression analyses were also conducted to assess the association of HIV status with demographic variables as well as a selection of variables either expected or shown to be associated with HIV status in the literature. All demographic variables were then included in the initial multivariate logistic regression model regardless of the estimated strength of their crude bivariate association with HIV status. Non-demographic variables were included in the initial multivariate model if the chi-square p value of their association with HIV status was ≤0.25 in the bivariate analyses. Most of the demographic variables, however, dropped out of the final model after controlling for other independent variables.
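A hedged sketch of this screening-then-modelling procedure is given below, in Python rather than the RDS-for-Stata tooling the authors used; the data-frame columns, the weight variable and the helper functions are hypothetical, and freq_weights is only a rough stand-in for proper RDS-adjusted inference.

```python
import pandas as pd
import statsmodels.api as sm
from scipy.stats import chi2_contingency

def screen(df, candidates, outcome="hiv_positive", alpha=0.25):
    """Keep non-demographic variables whose chi-square p value of
    association with the outcome is <= alpha (0.25, as described above)."""
    keep = []
    for var in candidates:
        _, p, _, _ = chi2_contingency(pd.crosstab(df[var], df[outcome]))
        if p <= alpha:
            keep.append(var)
    return keep

def weighted_logit(df, predictors, outcome="hiv_positive", weight="rds_weight"):
    """Fit a logistic model with outcome-specific individual weights;
    exponentiated coefficients approximate the adjusted odds ratios."""
    X = sm.add_constant(pd.get_dummies(df[predictors], drop_first=True).astype(float))
    model = sm.GLM(df[outcome], X, family=sm.families.Binomial(),
                   freq_weights=df[weight])
    return model.fit()
```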
Because regression analyses of RDS data using sample weights are complicated by the fact that weights are variable-specific [27], RDS-adjusted bivariate and multivariate analyses were conducted using individualized weights that were specific to the outcome variable (i.e., HIV status) [27]. The adjusted odds ratio (aOR) estimates were not statistically different from the unadjusted estimates in the bivariate analyses, although some slight differences were observed in the multivariate analyses. Thus, only the unadjusted odds ratios (ORs) are reported for bivariate analyses, while both are presented in Table 1 for multivariate analyses. All data processing and analyses were conducted using Stata 12.1 [28].

Missing data
Eleven out of the 324 participants were excluded from this analysis due to missing data on key RDS-related variables. There were 29 out of 313 participants with missing data on at least one variable used in the multivariate analyses. Only two variables had data missing for more than three participants: age at first sex with another man (n missing = 4) and knowledge about the type of anal sex position that puts you most at risk of HIV infection (n missing = 6). Two of the 29 participants with missing data were living with HIV; thus, the effective crude HIV prevalence used in the multivariate model was 17%. Although the total number of cases with missing data is not very small (9.3%: 29/313), the number missing by variable is very small. Due to the small change in HIV prevalence in the analysis sample compared to the complete sample as shown in this article, no effort was made to impute missing data. The 29 cases were excluded from the multivariate regression models.
Sample size calculation
The sample size was calculated based on the ability to detect significant differences in condom use between MSM living with HIV and those not living with HIV. There were no known estimates of condom use among MSM in Swaziland, but previous studies of MSM from nearby countries estimated that consistent condom use during anal sex with other men is approximately 50% [19]. In addition, [29,30]. Thus, this study was powered on the assumption that those who have received information about preventing HIV infection from other men would have a 16.5% increase in reported consistent condom use. A power analysis demonstrated that with 80% power, we would require 160 participants. Estimates of appropriate design effects for RDS have varied in the literature, and we used a design effect of 2, planning for the accrual of 324 MSM [31]. This sample size facilitates the detection of significant differences in HIV-related protective practices, such as consistent condom use, and targeted HIV-prevention measures, and is sufficient to characterize key social factors such as experiences with stigma and discrimination.
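For illustration, a two-proportion power calculation in the spirit of the one described can be run with statsmodels as below; the two-sided test, equal group sizes and the way the design effect is applied are our assumptions, so the output need not reproduce the study's figures of 160 and 324.

```python
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

# Baseline 50% consistent condom use vs. a 16.5% absolute increase
h = abs(proportion_effectsize(0.50, 0.665))   # Cohen's h

n_per_group = NormalIndPower().solve_power(effect_size=h, alpha=0.05,
                                           power=0.80, ratio=1.0)
design_effect = 2                              # allowance for RDS clustering
print(round(n_per_group) * 2 * design_effect)  # total accrual target
```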
Ethics
The study received approval for research on human participants from both the National Ethics Committee of Swaziland as well as the Institutional Review Board of the Johns Hopkins Bloomberg School of Public Health.
Results
Three hundred and twenty-four men were accrued from six seeds over a range of between 1 and 14 waves of accrual, with the largest recruitment chain including 123 participants. Socio-demographic characteristics are shown in Table 2 and sexual behaviours in Table 3. About one-third of participants reported having had both male and female sexual partners in the previous 12 months (35.7%, 95% CI 27.7–43.6). Approximately one-half of the participants reported always using condoms during sex, although significant numbers of men reported both unprotected insertive and receptive anal intercourse in the past 12 months. Condom use was not significantly different between main and casual male or female partners. Overall, safe sex with other men, defined as always using condoms and water-based lubricants over the last 12 months, was not common, with 12.6% (95% CI 7.6–12.6) measured to report this behaviour. Safe sex, defined as condom use with all sexual partners over the last 12 months, was significantly higher with female partners (at 40.0% in the crude assessment) than with male partners (p < 0.05). Overall, safe sex with all sexual partners was uncommon and was reported by 4.3% (RDS-adjusted 1.3%, 95% CI 0.0–9.7). Knowledge of basic questions related to safe sex for MSM, including sexual positioning, type of sexual act and lubricant use, was low, with 11.2% (RDS-adjusted 9.1%, 95% CI 5.2–13.0) of participants providing correct answers.

Table 4 demonstrates levels of service uptake, with evidence of statistically significantly lower levels of access to targeted services focused on preventing HIV transmission via sex between men as compared to sex between men and women (p < 0.05 for both). Notably, only about half of the sample was somewhat or very worried about HIV. Just under half of the men who had symptoms of a sexually transmitted infection (STI) were tested in the previous 12 months, with 7.8% (95% CI 3.9–11.7) diagnosed in this same time frame. About half of the sample had been tested for HIV in the previous 12 months (50.7%, 95% CI 43.2–59.2), including some who were tested more than once. Any experienced rights violations related to sexual practices, including denial of care, police-mediated violence and physical or verbal harassment, were reported by about half of the sample, although perceived rights violations related to sexual orientation (fear of seeking healthcare and fear of walking in the community) were more common, with 79.6% (95% CI 73.7–85.5) calculated to report this. Disclosure of sexual practices to healthcare workers was reported by one-quarter of the sample (25.0%, 95% CI 19.0–31.0), whereas about half of the participants (44.0%, 95% CI 36.4–51.7) reported disclosure of sexual practices to a family member.
HIV prevalence was strongly correlated with age in both bivariate analyses (OR 1.23 per year of age, 95% BCI 1.15–1.21) and multivariate-adjusted analyses (aOR 1.24, 95% BCI 1.14–1.35) (Table 1). Other statistically significant associations with HIV in adjusted analyses included identifying as female, having ever been to jail or prison, having lower numbers of casual partners, being diagnosed with an STI in the last 12 months and having easier access to condoms.
Discussion
In the country with the highest HIV prevalence in the world, this study describes the burden of HIV and associated characteristics among MSM who were accrued using RDS. Interpreting the prevalence of HIV among MSM and its relationship with the widespread, generalized and female-predominant epidemic in Swaziland is challenging. While the participants in our study were relatively young, the HIV prevalence was consistent with that of general reproductive-age men until age 24–26, when the prevalence of HIV among age-matched MSM appears to become higher than that of other men sampled as part of the Swazi DHS study (Figure 1) [2]. Given that relatively few men in our sample reported female sexual partners, their HIV acquisition and transmission risks are likely different from those of other men in Swaziland and potentially more related to anal intercourse. Conversely, Swaziland may be among a small number of countries where even the low acquisition risks associated with insertive penile-vaginal intercourse are counterbalanced by the significantly higher HIV prevalence among women, resulting in significant acquisition risks associated with sex with women. However, the idea that acquisition risk for MSM is primarily related to sex with other men is reinforced by the finding that condom use was lower with male sexual partners than with female sexual partners. More frequent condom use during sex with women than during sex with other men has been observed in other studies of MSM across Sub-Saharan Africa and provides an argument against MSM being a population that bridges the HIV epidemic from within their sexual networks to lower risk heterosexual networks [19,20,32,33].
However, to answer this question, phylogenetic studies and the characterization of sexual networks are needed to better describe patterns of HIV transmission. Participants were far more likely to have received information about preventing HIV infection during sex with women as compared to sex with other men. This lack of access to or uptake of information, education and communication services has resulted in participants in this study having a limited knowledge base regarding the sexual risks associated with same-sex practices. Primarily, participants incorrectly believed that unprotected penile-vaginal intercourse was associated with the highest risk of HIV transmission, consistent with earlier studies of MSM across Sub-Saharan Africa. Numerous studies have shown the opposite: HIV is far more efficiently transmitted during anal intercourse as compared to vaginal intercourse [13,34]. There was also limited knowledge of the importance of water-based lubricants being CCLs, which is especially important during anal intercourse given the absence of physiological lubrication in the anal canal. The importance of CCLs was underscored by lubricant use ultimately being the determining factor in just six study participants reporting safe sex with all partners in this study. Thus, while there is significant provision of general HIV-prevention messaging across Swaziland, there has been limited information focused on educating MSM on how to prevent HIV acquisition and transmission during sex with other men. Data suggest that starting with simple and proven approaches, including peer education programmes, is necessary to educate these men about their risks and protective behavioural strategies [35]. However, these approaches will likely not be sufficient to change the trajectory of HIV epidemics given the high risk of infection associated with unprotected anal intercourse with non-virally suppressed HIV serodiscordant partners. Thus, moving forward necessitates assessing the feasibility of combination approaches that integrate advances such as antiretroviral-mediated pre-exposure prophylaxis and universal access to antiretroviral therapy for people living with HIV [13]. However, the success or failure in achieving coverage with these HIV prevention, treatment and care approaches among MSM will, in part, be determined by the level of stigma affecting MSM.
It is now broadly accepted that addressing the needs of people living with HIV is vital to protect their own health as well as to prevent onward transmission of HIV [36]. In addition, mean and total viral loads in a population have been linked to population-level transmission rates of HIV [37]. Only a quarter of the men living with HIV in this study were aware of their diagnosis, demonstrating the need to increase HIV testing, linkage to CD4 testing, and antiretroviral treatment and adherence support for those who are eligible. A recent systematic review and meta-analysis of self-testing for HIV in both low- and high-risk populations demonstrated that self-testing was both appropriate and associated with increased uptake of HIV tests [38]. This may be especially relevant in the Swazi context, where fear of seeking healthcare was prevalent, suggesting the need to study new strategies to overcome barriers to HIV testing among MSM in Swaziland, including leveraging community networks and potentially self-testing [39].

In this study, being a person living with HIV was associated with lower numbers of casual male partners in the last 12 months. This relationship appeared to be stronger among those who were aware of their status, although it was not statistically significant because of limited numbers. In addition, these data are consistent with earlier research findings that simply being made aware of one's status of living with HIV can change one's sexual practices to decrease onward transmission [40]. This further argues for implementation science research focused on optimal strategies to scale up HIV testing for MSM in Swaziland [41].

Over one-quarter of participants in this study self-identified as women, and this was independently associated with living with HIV. There is a nearly complete dearth of information related to HIV among transgender people across Sub-Saharan Africa [42,43]. However, where transgender people have been studied, they have been found to be the most vulnerable to HIV acquisition because of increased structural barriers to HIV prevention, treatment and care services and because of increased sexual risks, including unprotected receptive anal intercourse [43]. Given the limited information available about transgender people, transgender was assessed in this study as both a sexual orientation and a gender identity. There was a significant disconnect between these two, as no participants self-identified as being transgender. Ultimately, further ethnographic research is needed to better understand the HIV-prevention needs of transgender people in Swaziland.
Having been to jail was also independently associated with living with HIV among MSM in this study. Globally, incarceration has been shown to be an important risk factor for HIV, given the limited access to HIV-prevention services such as condoms and CCLs, the interruption of HIV treatment, as well as exposure to higher risk sexual partners [44–47]. While further research is needed on same-sex practices within jails, there is likely a need to provide HIV-prevention services for men in Swazi prison settings [47].
The methods employed in this study have several limitations. While RDS is an effective approach for generating asymptotically unbiased estimates intended to approximate population-based estimates of characteristics in the absence of a meaningful sampling frame, there are still several uncertainties about the most appropriate tools for interpreting these data [48]. Moreover, the sample of men accrued here was relatively young, consistent with recruitment challenges observed in other studies of MSM across sub-Saharan Africa. While we engaged extensively with older MSM, fear associated with inadvertent disclosure limited their participation in the study. Only with improved social environments will more information about the needs of older MSM become available in difficult contexts [49]. In addition, while RDS was used to accrue a diverse sample, all of the seeds were connected with Rock of Hope, a newly registered organization serving the needs of lesbian, gay, bisexual and transgender populations in Swaziland. We thus may have overestimated actual service uptake among MSM in Swaziland.
Conclusions
The implementation of the research project was guided by recent guidelines to inform HIV-related research with MSM in rights-constrained environments [50]. While these men had not been previously engaged in research on HIV prevention, treatment and care, the success of this study highlights the fact that accrual of this population is both feasible and informative for the HIV response in Swaziland. Moreover, the interconnected social and sexual networks leveraged for accrual can likely serve to disseminate HIV-prevention approaches via MSM throughout the country. While the epidemic in Swaziland is one driven by heterosexual transmission, the burden of HIV and the HIV prevention, treatment and care needs of MSM have been understudied, and these men have been underserved in the context of large-scale programmes [51]. The data presented here suggest that these men have specific HIV acquisition and transmission risks that differ from those of other reproductive-age adults. Encouragingly, Swaziland has seen declines in the rate of new HIV infections over the last seven years, and these declines are related to HIV testing and treatment scale-up [5]. However, the increase in HIV services likely has had limited benefit for MSM, which may result in a scenario where epidemics among MSM expand in the context of slowing epidemics in the general population – a reality observed in most of the world [13].

In South Africa, MSM-specific service providers and nongovernmental organizations (NGOs), including the Desmond Tutu HIV Foundation (DTHF), engage MSM through both peer education and the use of safe spaces within township communities [13,16,30,33,34]. In 2008, the DTHF used several of these strategies to recruit MSM in Cape Town for the Global iPrEX study, a biomedical HIV-prevention clinical trial [29,35]. MSM social networks in Cape Town's townships have been described as including key individuals who establish spaces where other MSM are able to socialize safely [36]. As a result of the iPrEX study, links with these individuals and multiple MSM social networks were formed. This led the DTHF to design and conduct a pilot community-based HIV-prevention programme with MSM in these networks.
This pilot programme aimed to reach MSM in various townships through the use of community-based social activities and meeting groups. The programme was designed to disseminate HIV-prevention information and supplies, and promote the use of condoms and HIV service uptake. This article presents an overview of the project methods, a description of the participants and results from follow-up interviews and focus group discussions (FGDs) conducted with a subset of participants.
Methods
The pilot HIV prevention programme was implemented with MSM in five predominantly black African townships in greater Cape Town. Three structured components were included: (i) group meetings were held regularly with small gatherings of MSM to facilitate knowledge exchange and disseminate prevention supplies, (ii) community-based activities were facilitated to provide opportunities for MSM group bonding and (iii) inter-community activities were conducted to promote integration and diversity. All pilot activities took place over a six-month period between May and October 2012.
Community leader selection and participant recruitment
Townships were selected based on high HIV prevalence, which was identified through previous HIV surveillance data, and on the presence of MSM social networks identified through recruitment for the Global iPrEX study.
From each township, one MSM community leader was identified from previous research [30]. Community leaders participated in the planning and facilitation of all activities, disseminated HIV-prevention information and provided healthcare referrals to MSM in their community. They were at least 18 years old; had demonstrated leadership qualities; were respected, trusted and socially prominent among their MSM peers; and lived in a township where pilot activities were planned. The initial community leader team was selected and trained between January and April 2012.
Self-identified MSM were then recruited to take part in the pilot programme using convenience sampling through peer outreach workers and venue-based contact. All participants were 18 years old or older, were born male, reported having sex with men and lived in a township where the pilot was taking place. Each participant completed a self-administered paper questionnaire that collected baseline data on their demographic characteristics, sexual practices, health-seeking behaviour and access to services. Participants were offered voluntary HIV counselling and testing by trained staff and were provided with information about MSM-competent healthcare facilities. Participants who tested HIV positive were provided counselling and referrals. Participant recruitment was completed in 57 days between May and July 2012.
Implementation of the pilot programme
Community leaders received an initial two-day training and completed follow-up trainings throughout the pilot. Trainings included education on sexually transmitted infections and HIV but primarily focused on developing leadership skills such as effective communication, managing complicated social situations, strategic planning and goal setting, and encouraging healthy social norms.
Group meetings took place every 1–2 weeks and were held in private and safe venues in each township. Meetings were semi-structured and included both social and educational components such as debates about current events, training on condoms and water-based lubricant, and discussions on HIV-prevention strategies. Meetings were facilitated by a community leader and staff member but guided mostly by the participants, who were encouraged to take ownership and direction of each meeting. Condoms, water-based lubricant and HIV-prevention information were disseminated during these meetings.
Community-based activities were designed based on participant feedback and used to supplement group meetings in each township. Community-based activities included sports (hiking, netball and soccer), dance competitions, drag pageants and debates. Similar to group meetings, HIV-prevention discussions were integrated into each of the activities. Light refreshments were provided to participants at all meetings and activities.
Finally, inter-community activities, which brought together at least two different MSM groups, were conducted at least once a month. These activities were similar in scope to the community-based activities but were organized to promote knowledge sharing and socializing between MSM from different townships. MSM participants were provided with transport to attend inter-community activities.
Data collection and analysis
Quantitative methods
Quantitative data from the baseline questionnaires were analyzed using STATA version 11.0 (StataCorp LP, College Station, TX). Numerical variables were explored using measures of central tendency and distribution [medians and interquartile ranges (IQRs)], and categorical variables were explored using proportions and frequency tables.
Participants were requested to sign an attendance register at each activity. Registers were entered into a secure Excel spreadsheet and linked to the participant's ID. Attendance was measured for each participant and defined as the total number of events attended divided by the total number offered to that participant.
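To make the attendance metric concrete, the following minimal Python sketch computes it from a register of per-participant counts; the register contents and participant IDs are invented for illustration and are not the study's data.

# Minimal sketch of the attendance metric described above:
# attendance = events attended / events offered to that participant.
# The register entries below are invented for illustration.
attendance_register = {
    # participant_id: (events_attended, events_offered)
    "P001": (12, 20),
    "P002": (0, 18),
    "P003": (9, 15),
}

def attendance_rate(attended, offered):
    """Proportion of offered events that a participant attended."""
    return attended / offered if offered else 0.0

rates = {pid: attendance_rate(a, o) for pid, (a, o) in attendance_register.items()}

# Summary counts analogous to those reported in the Results:
attended_any = sum(1 for r in rates.values() if r > 0)
attended_half = sum(1 for r in rates.values() if r >= 0.5)
print(rates, attended_any, attended_half)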
Qualitative methods
After completion of the pilot activities, in-depth interviews (IDIs) with each of the community leaders and FGDs with a subset of participants were conducted in December 2012. A purposive sampling strategy was initially used to equally represent MSM who attended regularly and those who did not. However, many participants who did not attend regularly could not be contacted, resulting in the remaining FGD slots being filled by participants who attended more frequently. All FGDs and one IDI were conducted in private facilities within each community, and four IDIs were conducted at the research offices of the DTHF. All FGDs and IDIs were conducted by one of two trained facilitators and supported by a research assistant who took notes. The FGDs and IDIs were conducted predominantly in English, but participants were also encouraged to use the language they felt most comfortable speaking. A semi-structured interview guide was used to explore participants' perceptions of and experiences with community life, project activities, stigma, healthcare services and HIV.
Audio recordings from each FGD and IDI were transcribed, and all participant-identifying information was removed. Qualitative data were analyzed using the framework approach. Predetermined themes based on the interview guide questions were used to structure the initial framework, and a coding scheme was developed to identify emerging themes. Two analysts reviewed transcripts from one FGD and IDI together to establish consistency in coding. After this, the analysts each reviewed the remainder of the transcripts individually. Comparisons and discussion between analysts were used to reach consensus on final themes.
Ethical consideration
Written informed consent was obtained from all participants, who were reminded that they would be able to take part in any community-based activities regardless of their decision to participate in this pilot study. Participants taking part in the follow-up FGDs and IDIs were informed that their responses would remain anonymous and would not affect their involvement in future initiatives from the DTHF or other organizations. They received R50 (approximately US$5.00) as reimbursement for their time and transport. Community leaders were provided with a monthly stipend of R800 (approximately US$90.00) as compensation for transport costs and their time spent in project activity planning and implementation. Ethical approval for this project was obtained from the University of Cape Town's Faculty of Health Sciences Human Research Ethics Committee.
Participant baseline characteristics
In total, 98 MSM consented to participate and completed a baseline questionnaire. The majority of participants were black African (95%, 93/98) and gay identified (82.3%, 79/96). The median age of participants was 24.5 years (IQR 21–29). Over half of the participants had received secondary education (64.3%, 63/98), and less than one-third (28.6%, 28/98) reported current employment. High-risk sexual behaviours, including unprotected anal intercourse (UAI) and transactional sex, were reported by MSM in each community. In total, 26% (25/98) of participants reported having had at least one female sexual partner in the last six months. A summary of participant baseline characteristics is presented in Table 1.
Community activities
MSM community groups were established in five townships, and 57 community-based activities (including group meetings) and 9 inter-community activities were conducted between May and October 2012. Participant enrolment varied between communities, with 33 participants enrolled from Community A, 24 from Community C, 17 from Community D, 14 from Community B and 10 from Community E. Less than half of the participants (44%, 43/98) had engaged in other MSM-focused activities or research prior to the pilot.
Attendance registers were not collected from 7 of the 57 community meetings and from one of the inter-community events due to an administrative error. A median of eight (IQR 6–9) MSM attended the 50 community meetings with attendance registers, and the eight inter-community activities were attended by a median of 20 (IQR 19.25–21.5) MSM. Condoms and lubricants were distributed during 23 activities and were available on request throughout the duration of the project. Following their enrolment, 60% (59/98) of participants attended at least one pilot activity. Of those participants, 47% (28/59) attended at least one-half of the scheduled activities. A summary of attendance is shown in Table 2.
Follow-up interviews and focus group discussions
Of the 100 MSM who took part in the pilot activities, 36 also participated in follow-up FGDs, and each of the five community leaders completed an IDI. Efforts were made to include participants with varying degrees of attendance; however, there was substantial loss to follow-up of the participants who had lower attendance. Overall, more than half of the participants from the FGDs attended 50% or more of the scheduled activities.
HIV knowledge, testing and services
Many participants described the benefits of receiving MSM-specific HIV-prevention knowledge through the meeting groups, while others reported having already received this information elsewhere:

I didn't know everything about preventing HIV or AIDS but once I joined the group I've got more information and then that information I used it . . .. I was worried at first you know until I joined the group and then it influenced me in a kinda way to be strong don't have to be worried since you know that mhm theres so many things which can protect you from getting HIV. (FGD1)

Participant attitudes towards HIV testing at local health clinics remained consistently negative because of the insensitive or discriminatory care many had previously received. Despite these reactions to local healthcare clinics, participants were aware of and made use of MSM-competent healthcare services throughout the duration of the pilot project.
Use of water-based lubricants
Prior to the pilot, participants reported limited access to free water-based lubricant and described using petroleum-based lubricant during anal sex. Many participants described that their use of condoms remained inconsistent, particularly with regular sexual partners. Other participants continually referred to improved knowledge and use of water-based lubricant as a result of taking part in the pilot activities, specifically the group meetings:

But we came to the group and they taught us that you have to use specific lube . . . before we can have sex. (FGD 2)

Social support and personal development
Participants explained how their feelings of loneliness and social isolation were improved after taking part in the pilot because it created opportunities to socialize with other MSM. This seemed particularly true in communities with few existing MSM activities.
. . . You are able to, you know, be yourself and the sense of getting to be yourself and also giving the feeling that you are not alone . . .. (FGD 1)

. . . It's nice when we had events, especially in our communities, because there's nothing happening so . . .

Suggestions to improve HIV programme implementation
Participants offered suggestions for improving the implementation of the pilot activities. Specifically, they felt that staff changes should be kept minimal since it was challenging to develop relationships with new outreach staff. Some participants also expressed the need for improved efficiency with inter-community activities, specifically highlighting the transport and timeliness of other MSM as key barriers. Overall, participants also shared a willingness to engage their broader community to address stigma and expressed a need for activities to do so by targeting other community members beyond MSM.
Discussion
This article presents the outcomes of a pilot community-based HIV-prevention programme for township MSM. It is important to note that while participants described changes in their behaviour as a result of the pilot, its aim was not to measure behaviour change. Many factors, including concurrent programmes, may have influenced participants' behaviour [14,15]. Taking this into consideration, participant responses do suggest that this pilot was successful in achieving some of its primary objectives. First, the pilot programme successfully engaged MSM from high-risk networks in five Cape Town townships. Attendance data suggest that social activities and group meetings were a feasible method for reaching certain MSM in this pilot; however, overall attendance varied greatly and included a large percentage of participants who attended no activities. This variability may suggest that the pilot activities did not cater to the interests or needs of all participants, particularly the unique needs of MSM [37]. Other factors that may influence attendance have been described and should be explored further in this context, including feelings of mistrust and community stigma [33,38].
Second, HIV-prevention information and supplies were successfully disseminated to MSM during this pilot. Other studies have described how facilitated social spaces can result in knowledge gain by encouraging the exchange and processing of information between peers [39]. Similarly, MSM in this pilot felt that group meetings created safe environments to learn about HIV prevention with their peers. In addition to improving knowledge, increasing access to water-based lubricants and condoms is also essential for MSM, particularly in communities where limited or incorrect lubricant use has been reported [2]. These findings support previous suggestions to explore the use of small community-based spaces for lubricant dissemination [40]. Small meeting groups and social activities should be further explored as strategies to supplement current lubricant dissemination strategies for MSM in this setting.
Third, participants reported other meaningful benefits of this pilot, including improvements in their self-efficacy and self-esteem and reduced social isolation. Social isolation, poor self-efficacy and limited social support may play important roles in the individual risk of MSM, specifically around condom negotiation and lubricant use [10,33,41,42]. Since this study did not aim to address social isolation or self-efficacy directly, it remains to be seen whether any risk reduction occurred through this pilot as a result of diminished social isolation or improvements in self-efficacy. However, these results do support previous recommendations to further explore self-efficacy with township MSM in HIV-prevention programmes, and they suggest that community-based group meetings and social activities warrant further investigation as feasible methods to do so [41].
Additional research is needed to explore community-based approaches for condom use and HIV testing in this setting. HIV testing and condom use are complex behaviours affected by a multitude of factors, including stigma [10,43]. MSM in this pilot were supportive of broader community interventions to reduce stigma, lending further support to current recommendations that future community-based HIV-prevention interventions explore methods that empower MSM to safely and appropriately address stigma within their communities [40].
There are limitations to this pilot study. This pilot targeted black African townships; therefore, these findings cannot be extrapolated to other groups. MSM who did not attend pilot activities were not equally represented in the final qualitative interviews. Their reasons for non-participation may not be adequately included in these findings. Even though participants openly shared suggestions for improving the programme, their responses may have been biased towards discussing positive benefits of the programme in general. The timeframe of this pilot was brief and cannot address the sustainability of these activities in the long term.
Taking these limitations into consideration, this community-based HIV-prevention pilot programme provides useful insights for MSM-specific HIV-prevention programming that warrant further research. Specifically, small meeting groups and social activities promoted an enabling environment, within the context of larger stigmatizing communities, where MSM were able to receive social support, improve their self-esteem and gain access to relevant HIV-prevention information and supplies.
Conclusions
Results from this pilot programme describe how township-based MSM can benefit from facilitated social activities and meeting groups, and suggest that these strategies are a viable method for disseminating HIV-prevention information, condoms and water-based lubricant. Furthermore, these groups create a supportive environment in which MSM can learn from each other, explore their sexual identities and overcome potential barriers to HIV prevention such as social isolation and low self-esteem. The use of community-based social activities and facilitated small-group meetings should be further explored as components of ongoing HIV-prevention interventions for MSM in this setting.
Competing interests
The authors declare that they have no competing interests.
Authors' contributions
EB, BB and AS were the implementers of this work under the supervision of SB and L-GB. EB and BB led the analysis. The manuscript was written collaboratively between EB and BB, with input from SB, AS and L-GB. SB and L-GB provided ongoing support throughout the entire project.
Introduction
The HIV epidemic in Senegal has followed a pattern distinct from the epidemics observed in Southern and Eastern African countries such as Kenya and Malawi, with a far more concentrated epidemic among key populations such as men who have sex with men (MSM) and female sex workers [1]. The Senegalese government launched an early and comprehensive effort to prevent HIV infection in the general population [2]. This campaign was deemed a success by many and is, in part, likely responsible for the limited HIV epidemic in the country, which reports an HIV prevalence of 0.8% among reproductive-age women and 0.5% among men ages 15–49 [3,4]. More recently, there has been increased study of social factors such as unregulated sex work, stigma and discrimination targeting those at high risk of HIV acquisition and transmission, as well as HIV transmission related to same-sex practices among men [5–11].
MSM have multiple, intersecting drivers of risk and have had a consistently higher risk of HIV acquisition and transmission since the first cases of HIV were discovered [1,12]. This disproportionate burden of HIV in MSM has also been observed in Senegal. Studies dating back nearly ten years have highlighted this disproportionate burden, with HIV prevalence among MSM reported to be 22.4% in 2004 and 21.8% in 2007 [11]. Based on these and other data, Dramé reported that HIV prevalence among MSM is approximately 50 times higher than the prevalence observed among reproductive-age adults in Senegal according to the most recent demographic and health survey [13]. Finally, the attributable fraction of HIV infections among MSM in Senegal is high; Van Griensven et al. estimated that nearly a fifth of prevalent HIV infections among men in Senegal are among MSM. Collectively, these data reinforce the need to address the HIV prevention, treatment and care needs of MSM in Senegal [14].

Social capital has been defined as the ''institutions, relationships, attitudes and values that govern interactions among people and contribute to economic and social development'' [15]. The importance of social capital has been increasingly recognized as a major social determinant of health because of its association with health outcomes including chronic disease-related morbidity and mortality and, more recently, sexually transmitted infections [16–26]. Specifically, limited social capital has been associated with higher rates of exchange, survival and commercial sex, and with a higher burden of HIV among MSM in Africa [27–30]. HIV infection has also been associated with low social capital; this may be particularly relevant for stigmatized groups such as MSM [31–33]. Development of social capital among MSM has been shown to be limited by enacted stigma [34–36]. Discrimination and stigma affecting MSM are well-documented, not only in Senegal, but throughout sub-Saharan Africa and more broadly around the world [6,7]. Niang et al. describe the effect that stigma and discrimination can have on healthcare-seeking behaviours among MSM in Senegal [6]. When men perceive or experience stigma and discrimination in a healthcare setting, they are less likely to access health services for STIs, resulting in higher rates of untreated STIs within sexual networks, thereby mediating HIV transmission [37,38].
MSM face additional challenges in countries where sex between men is criminalized [3]. In Senegal, in 2008, several health promoters working in HIV prevention were arrested under suspicion of being homosexual. These arrests, and the fear of further arrests, had wide-ranging effects on HIV in the community of MSM in Senegal [39]. In response, many nongovernmental organizations that had been working in the area of HIV prevention among MSM went into hiding for their own safety. Those who continued distributing prevention materials such as condoms and water-based lubricants saw a marked decrease in the number of men accessing their services. The decreased number of men accessing services aimed at the community of MSM also resulted in reduced availability of social support among MSM. Furthermore, and importantly, healthcare providers began to fear providing services to MSM following the arrests. This had grave implications for many HIV-positive MSM, who were no longer able to access treatment, either because their provider would no longer see them, or because they feared arrest if they left their home. Some have said that these arrests set HIV prevention efforts back ten years [39].
Stigma and discrimination affect HIV risk and social capital not only by affecting how MSM access prevention and treatment, but also by curbing the presence of research and prevention projects targeting this group in Senegal. The small number of research projects in West Africa has resulted in a limited understanding of what interventions work in communities of MSM in this region [37]. Interventions are difficult to implement, particularly given the constrained legal environment [39]. Community-based organizations of MSM are those with the closest ties to the community and the greatest ability to facilitate interventions [40]. However, these organizations are often not legally registered [13]. Despite these significant challenges, HIV prevention interventions have been effectively implemented for MSM in Senegal [11].
For a population about which so much remains unknown, a cohort study can provide relevant data, including prospectively measured HIV incidence [41]. This research project had two primary foci. The first was to assess the feasibility of implementing and retaining participants in a community-driven HIV prevention study in Senegal. The second was to describe the study participants in terms of HIV and STI prevalence and incidence, risk behaviours and indicators of social capital at baseline.

Methods

Ultimately, 119 men were enrolled in the feasibility cohort study. At baseline, all participants completed an informed consent process, a structured survey instrument and a medical examination conducted by an infectious disease physician. The medical examination included a physical exam and syndromic diagnosis and treatment of STIs, or a referral for treatment and follow-up if necessary or preferred by the participant. Participants also provided 10 ml of plasma for testing for HIV, hepatitis B and syphilis, according to the Senegalese national testing algorithm [42]. A subset of the participants also received an exploratory intervention.
Because of the small sample size and high loss to follow-up, the outcomes of this intervention are not statistically interpretable and will not be discussed in this paper.
Follow up
Thirty-seven participants were lost to follow-up between T1 (baseline) and T2 (15 months); 14 of these were HIV positive. At the end of the planned implementation period, the remaining participants (n = 82) again underwent a process of informed consent and completed the same structured survey instrument. At this time point, T2, 60 participants presented for a second session of biological testing for HIV, hepatitis B and syphilis. Whereas at T1 a partner organization was able to perform biological testing on-site immediately following participant surveys, this coordination was not possible at T2. Participants were required to make an additional visit to a clinic for collection of biological samples; 22 participants were unable or unwilling to make this additional visit because of inability to pay transportation costs or other competing issues. Retention support was provided by Enda Santé staff through regular visits or phone calls throughout the follow-up period, depending on the wishes of the individual.
Ethics
All human subjects research conducted as part of this study was reviewed and approved by the Senegalese National Ethics Committee for Health Research.
Analytic approaches
The collected data were linked using anonymous codes. Survey data were entered into SPSS, and monitoring data were collected utilizing Microsoft Excel. All data collected were cleaned and merged into a single database. Inconsistencies found during the data cleaning were reconciled to the original questionnaires or laboratory forms.
These data were analyzed using STATA Version 12 (StataCorp LP, College Station, TX). Preliminary analysis was conducted using chi-square tests to identify potential associations with social capital at baseline. HIV incidence was calculated by dividing the number of participants who seroconverted between T1 and T2 by the person-time at risk contributed by participants at risk of HIV acquisition (those who tested negative at T1 and returned for testing at T2). Because of the small sample size and high rate of loss to follow-up, multivariate regression models were not used.
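To make the incidence arithmetic concrete, the following minimal Python sketch reproduces the calculation using the counts reported in the Results below (8 seroconversions among 40 participants at risk); assuming that each at-risk participant contributed the full 15 months of follow-up is our simplification, since individual follow-up times were not reported.

# Annualized HIV incidence, as described above: seroconversions
# divided by person-time at risk, expressed per 100 person-years.
seroconversions = 8        # new infections observed at T2
at_risk = 40               # tested HIV negative at T1 and retested at T2
followup_years = 15 / 12   # assumed uniform 15 months of follow-up

person_years = at_risk * followup_years       # 50 person-years
incidence = seroconversions / person_years    # cases per person-year
print(f"{incidence * 100:.0f} cases per 100 person-years")  # -> 16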
Results
The cohort consisted of 119 male participants who reported having anal sex with another man in the past 12 months, with ages ranging from 18 to 42 years. The mean age for all participants was 28 years, with half of the participants between the ages of 23 and 32 years. Those who were found to be HIV-infected were older than those who tested HIV negative (p = 0.05), with an average age of 28.8 (interquartile range: 25, 32), compared to HIV-uninfected MSM, who had an average age of 26.5 (interquartile range: 22, 29). All had had some contact with community groups of MSM in Dakar, Senegal. One-third of the participants had a primary school education or less (n = 43, 36.4%), one-third had attended secondary school (n = 39, 33.1%), 15.3% (n = 18) had attended university and an equal percentage (n = 18, 15.3%) had attended Islamic or Arab schools. A large majority of participants were single (n = 104, 88.1%), and 77.3% reported living with their family (n = 92). Table 1 summarizes the demographic, behavioural, social and financial characteristics of the cohort.
Retention results
Thirty-seven of 119 participants were lost to follow-up (31.1%), meaning they were unable or unwilling to participate in the study at T2. Fourteen of those lost to follow-up were known to be HIV positive. HIV-positive participants were not lost at a significantly different rate than HIV-negative participants (p = 0.43). No statistically significant differences were found between those lost to follow-up and those retained at T2 on any of the variables listed in Table 1.
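The retention comparison above can be illustrated with a chi-square test on a 2x2 table of baseline HIV status by retention. Only marginal counts are reported (119 enrolled, 41 HIV positive at baseline, 37 lost overall, 14 of them HIV positive), so the cell counts for the remaining participants are reconstructed under our assumption that all others were HIV negative; the sketch shows the method rather than the study's exact data.

# Hypothetical reconstruction of the retention-by-status comparison.
# Treating the 78 participants not known to be HIV positive as HIV
# negative is our assumption; some were untested at baseline.
from scipy.stats import chi2_contingency

#               lost           retained
observed = [[14,      41 - 14],           # HIV positive at baseline
            [37 - 14, 78 - (37 - 14)]]    # all other participants

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p_value:.2f}")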
Reasons for loss to follow-up included participant death, participants being unreachable via contact information and social networks, or participants having moved outside of Dakar. Of the participants lost to follow-up, four are known to have died (4/119, 3.4%). Cause of death was not recorded in this study. Reasons for loss to follow-up are summarized in Table 2.
Biological results
At the baseline medical examination, 49.2% (n = 59) of participants were diagnosed with an STI. In the biological testing, three cases of syphilis were diagnosed at baseline (prevalence = 2.6%), and two cases were diagnosed at follow-up (prevalence = 3.3%). Forty-one participants tested HIV positive at baseline (36.0%). All participants returning for biological testing at T2 were tested for HIV, regardless of prior test results. Sixty-one participants were tested for HIV at T2, 40 of whom had tested negative at baseline. Eight new infections were observed at T2 (15 months of follow-up), equating to an annualized incidence of 16 cases per 100 person-years (95% CI 4.6–27.4) (Table 3).
Discussion
This study attempted to use a community-based approach to accrue and retain MSM in Senegal for 15 months while implementing a pilot intervention. Although this study was focused on assessing the feasibility of HIV prevention studies, these data also highlight HIV among MSM as an ongoing public health emergency in Senegal. The high incidence of HIV suggests that this is an ideal population in which to assess novel approaches to prevent HIV acquisition. Moreover, the high prevalence of HIV indicates that this is also an ideal population in which to assess the effectiveness of approaches that address the needs of people living with HIV.
These approaches would aim to reduce viral load as a means of improving the health of people living with HIV (PLHIV), as well as decreasing the risk of onward HIV transmission.
Loss to follow-up in this study was significant, which poses a challenge to the success of future HIV prevention research among MSM in Senegal. Reasons for the loss to follow-up were likely multifactorial, including the fact that limited resources were allocated for enhanced retention approaches in this study. In addition, there was surprisingly high mortality among this group of men, which, with a mean age of 28, was relatively young. Although cause of death was not recorded, anecdotal discussions with community members suggested that these deaths were HIV-related. This pilot cohort study leveraged community groups to implement the study rather than academic teams with significant experience in managing cohorts. Thus, the study demonstrates that cohorts are possible using this approach, but that participant retention strategies should be more thoroughly incorporated into the research protocol. Further research, including qualitative research, is needed to better understand the characteristics associated with being retained in the study, and there is a need to explore appropriate retention strategies, for example, using linked peer navigators or SMS-based appointment reminder systems.
Traditional HIV prevention interventions, including condom promotion and HIV testing, are necessary, but data on the high force of HIV acquisition and transmission among MSM, as well as the high incidence presented here, suggest that these interventions alone are not enough [43]. Addressing the needs of people at high risk for HIV acquisition could be achieved by assessing the feasibility of antiviral-driven measures such as topical or oral chemoprophylaxis. There are currently Phase II rectal microbicide studies for MSM, including a site in South Africa, and these may eventually represent an important strategy [44]. Separately, oral pre-exposure prophylaxis has been shown to be effective among MSM and may represent a relevant strategy for particularly high-risk men with limited condom usage despite exposure to condom promotion programmes [45]. The proportion of participants in this cohort who had previously been tested for HIV was high (88%), though many had not received their results. This suggests the need to optimize the continuum of HIV care in this population; this should include ensuring that people are first aware of their HIV status, then assessed for treatment eligibility, actively linked to treatment services and provided with adherence support to achieve viral suppression [46]. Given the high prevalence and incidence of HIV, these data suggest the need to evaluate active linkage-to-care interventions for MSM in Senegal [47]. A recent systematic review of linkage to and utilization of HIV medical care among PLHIV in the United States reported that several approaches to linkage to care may be efficacious, including counselling, education and health system navigators [48]. This study comprised a highly selected and relatively small sample of MSM already linked into community-based organizations in Dakar. However, these men are subject to multiple levels of stigma and discrimination, including exclusion from social activities, isolation from broader social networks and a society that has criminalized their behaviour. Thus, effective HIV intervention packages should address the individual biological and behavioural facilitators of HIV acquisition and transmission, but also the broader structural determinants of HIV affecting these men.
The baseline data suggest a relationship between social capital and HIV risk, including sexual practices and, potentially, prevalent HIV infections. Men who had less financial need were significantly more likely to report use of condoms and water-based lubricant. These results are consistent with earlier data from Senegal noting the importance of financial stability and of integrating social services into health services in the country [49]. These data link social capital to HIV-related risks and suggest that addressing sexual risk practices without addressing the social contexts in which they take place may have limited benefit [43]. Documentation and anecdotal reports from the past two decades have suggested that implementing interventions that address social capital among MSM can effectively decrease marginalization, stigma and the risk for HIV infection [16,17,50–53]. Although the relationship between social capital and HIV risk is complex, increasing trust and community involvement among this vulnerable population may lead to positive changes in social norms and self-efficacy, and can ultimately lead to lower HIV acquisition and transmission risks [16,17,22,23,26,54,55].
The generalizability of this study to the general population of MSM in Senegal is limited for several reasons. Because of the difficulty of contacting MSM, recruitment was conducted using existing community networks, allowing for a representative sample of MSM who are members of community organizations in Dakar. However, this approach potentially excluded those who are the most isolated or who feel the least desire to become involved in the community of MSM. Thus, using a sampling frame derived from members of community-based organizations serving MSM potentially selects for a population with higher social capital than average MSM in Senegal. As mentioned earlier, retention in the study was limited, which did not allow for a statistically powered assessment of the benefit of the intervention. Future studies will need to place a heavy focus on participant retention to facilitate evaluation of the tested packages of interventions.
Conclusions
Moving forward, cohorts of MSM will be needed to characterize the effectiveness of combination HIV prevention approaches in the West African context. The experience of conducting this feasibility cohort study with a pilot financial intervention illustrates the potential feasibility of such studies among MSM in a region where they are known to be among those at highest risk for the acquisition and transmission of HIV.
Competing interests
The authors have no competing interests to declare.
Authors' contributions
SB, FM, DD and MN conceptualized the study. EC conducted the data analysis and led the writing of the manuscript. FD and DD provided management of the research implementation and field teams and supported writing sections of the manuscript. SB and CB provided technical oversight for the implementation, data analysis and manuscript development.
All authors have read and approved the final manuscript.
Introduction
Globally, men who have sex with men (MSM) have substantially higher levels of HIV infection than men in the general population [1]. This is true even in the generalized HIV epidemics of sub-Saharan Africa, where MSM have more than three times the HIV prevalence of general population adult males on average [1]. Despite the knowledge that MSM are more likely to be infected with HIV across settings, there has been little investigation of the experiences of MSM who are living with HIV in sub-Saharan Africa. Positive health, dignity and prevention is a framework used to highlight health and social justice issues for people living with HIV (PLHIV) [2,3]. The primary goals of positive health, dignity and prevention are ''to improve the dignity, quality, and length of life of people living with HIV; which, if achieved will, in turn, have a beneficial impact on their partners, families, and communities, including reducing the likelihood of new infections'' [2]. This framework builds upon earlier concepts of ''positive prevention'' and ''prevention with positives,'' which highlighted the importance of ensuring the health of PLHIV and engaging PLHIV in HIV-prevention efforts [4–6]. However, positive health, dignity and prevention situates living with HIV within a human rights framework and focuses on the importance of understanding and addressing structural constraints. It also considers the role of stigma and discrimination, which Parker and Aggleton [7] describe as social processes related to social inequality, power and oppression through which some groups are structurally excluded in society. Stigma has often been defined based on the classic work of Goffman as the social devaluation of a person based on a ''significantly discrediting'' attribute [8], while discrimination has been defined as behaviour resulting from prejudice [9]. Both stigma and discrimination are common in relation to both HIV and same-sex relationships.
In Swaziland, HIV prevalence in reproductive-age adults is among the highest in the world at 26.1% [10]. UNAIDS classifies Swaziland as having a generalized HIV epidemic and, to date, the response to HIV in Swaziland has largely focused on the general population. Recently, the first surveillance of HIV prevalence and associated risk factors among MSM in Swaziland was conducted and showed a high burden of HIV among Swazi MSM, comparable to that of men in the general population [11]. However, same-sex behaviour is criminalized in Swaziland, and little attention has focused on the experiences of MSM who are living with HIV in this setting. Indeed, we identified just one peer-reviewed article focusing on HIV-positive MSM in sub-Saharan Africa. Cloete et al. [12] conducted a survey on HIV-related stigma and discrimination among a convenience sample of both HIV-positive MSM and men who have sex with women in Cape Town, South Africa. The survey found that internalized HIV-related stigma was high among all participants. Overall, MSM reported slightly greater social isolation and discrimination due to their HIV status, but these differences generally did not reach statistical significance.
In this study, we sought to explore the positive health, dignity and prevention needs of MSM who are living with HIV in Swaziland to inform HIV prevention, care and treatment services for this population. To our knowledge, this is one of the first qualitative studies to examine these issues among HIV-positive MSM in sub-Saharan Africa. As such, findings could inform the design and implementation of programmes for MSM living with HIV in Swaziland and similar settings.
Methods
A qualitative approach was used to address the study aims. Methods included key informant interviews, in-depth interviews with HIV-positive MSM, and focus groups with MSM community members.
Key informants were selected if they had experience with MSM and lesbian, gay, bisexual and transgender (LGBT) populations or with HIV-related services in Swaziland. Sixteen key informants were interviewed, including HIV programme planners, policy makers, clinicians and LGBT community leaders. Interviews were semi-structured and employed a field guide to direct the conversation and stimulate probing. Participants were asked to describe the situation of MSM in their communities, their knowledge of existing services for MSM and PLHIV and their suggestions for how services could better meet the needs of MSM.
In-depth interviews were conducted with 20 MSM living with HIV, each interviewed twice, for a total of 40 interviews. Recruitment was conducted through a variety of settings and organizations, including HIV clinics; PLHIV networks; LGBT and MSM community organizations; and HIV prevention, care and treatment services. Participants were asked about the experiences of MSM generally in their communities; MSM social networks; personal and community experiences with HIV prevention, care and treatment services; experiences with stigma and discrimination; and suggestions for how services, interventions and messages could be better tailored for MSM.
Focus groups were conducted with MSM to gather a broader community perspective on the study topics; HIV status was not asked about, for reasons of confidentiality. Three focus groups were conducted with 26 MSM (4, 9 and 13 participants per group). Topics covered were similar to those of the interviews.
All interviews and focus groups were conducted in a private setting in either English or SiSwati and lasted approximately one to two hours. MSM were interviewed by a Swazi interviewer familiar with the local LGBT community who had received training in qualitative research, while key informants were interviewed by an American master's-level research assistant with qualitative training who was living in Swaziland.
Qualitative data analysis
Analysis of qualitative data was conducted through identification of recurrent patterns and themes following Crabtree and Miller's five steps in qualitative data analysis, or the ''interpretive process'' [13]. These steps are: (i) describing, (ii) organizing, (iii) connecting, (iv) corroborating and (v) representing. These steps form part of an iterative process which starts by re-examining the goals of the research and considering questions of reflexivity, then moves towards ways of highlighting, arranging and reducing texts to make connections through the identification of recurrent patterns and themes.
All interviews and focus groups were recorded, transcribed and translated into English. Debriefing notes taken immediately following each interview captured the interview context, theoretical issues, methodological issues and follow-up topics. Weekly meetings were held with all interviewers to discuss emerging themes and identify topics for further exploration to ensure an iterative process. After all data were collected, a full-day data analysis workshop was attended by representatives from LGBT groups, the Ministry of Health (MOH) and the National Emergency Response Council on HIV and AIDS (NERCHA), as well as interviewers, clinicians and other stakeholders. This workshop devoted individual time to reading de-identified transcripts to identify themes, then group time to categorize and discuss emerging themes and implications. Following the workshop, a codebook was developed by four study team members working together until agreement was reached. Codes were selected based on a priori topics of interest (research questions), themes identified during the data analysis workshop and emergent themes from transcripts. Codes were then applied using the computer software package Atlas.ti (version 5.2, Scientific Software Development GmbH). The coded text was read to identify further themes or patterns, and memos were created for key themes, which were developed into the findings presented here.
Ethical considerations
All participants provided oral informed consent prior to participation, and referrals to clinical and counselling services were provided as needed. Study staff members were trained on sensitivity issues around HIV and MSM. A study advisory board, including representation from the LGBT community, implementing partners and government, reviewed the study protocol and interview guides and provided ongoing advice to the management and execution of the study. Ethical review and approval for this study was received from the Scientific and Ethics Committee of the Swaziland MOH and the Johns Hopkins Bloomberg School of Public Health in the United States.
Results
Dual stigma and disclosure of sexual identity and HIV status
The predominant theme across interviews was the significant and multiple forms of stigma and discrimination faced by MSM living with HIV in Swaziland. MSM reported experiencing stigma and discrimination related to both their HIV status and their sexual identity.
Same-sex behaviour is both criminalized and heavily stigmatized in Swaziland. MSM reported experiencing significant stigma, discrimination and rejection as a result of their sexual identity. One man, when asked if he had ever experienced stigma or discrimination as a result of being gay, responded, ''A lot, several times, too many times.'' As a result of these experiences, and fear of similar stigma and rejection, many participants said they had not disclosed their sexual identity to anyone except other MSM. ''That is my secret and I'm not planning to tell anyone in my family,'' explained one. Participants worried about negative reactions, rejection and abuse if they disclosed. One man, when asked what would happen if he disclosed his sexuality to his friends or family, responded, ''I would not even dare. It would be like being in a devil's den.'' Others worried more about disappointing their loved ones by not conforming to social norms. One MSM asked,

Do you know this SiSwati saying that goes, 'you have to have a heart for the other person'? . . . We always put the next person before [ourselves] . . . So we hardly want to disappoint the next person with being me, being myself and being comfortable with myself and insisting that I should be accepted, you know. We want to always conform [to] what society expects.
However, some participants had disclosed their sexual identity to family members or friends and had found acceptance, often after some initial difficulty.
Men also described stigma related to their HIV status. One participant described ''the abuse we are subjected to'' as ''stigma, you see, that once you are HIV-positive, people think that you have AIDS. And also, that people have not accepted and they still do not know what HIV is.'' Experiences or fear of HIV-related stigma prevented many MSM from disclosing their HIV status to family, friends and sexual partners. Lack of disclosure led to challenges with antiretroviral drug (ARV) adherence, hiding medications and a lack of social support for care-seeking and adherence to care and ARVs.
Participants selectively disclosed either their HIV status or their sexual identity to different individuals based on their anticipated reaction. For example, participants said they might disclose their HIV status to family members as they anticipated receiving some material or emotional support as a result, but they might not disclose their sexual identity to those same family members due to fear of rejection or a negative reaction.
Violence and lack of police protection
Violence was also a common experience for MSM. MSM reported violence from a range of individuals. One man noted that some MSM ''are killed for being gay, others are assaulted and others are chased away from home and disowned.'' Due to the criminalized nature of same-sex behaviour in Swaziland, many MSM felt they had no recourse to bring incidents of discrimination or violence to the authorities. Furthermore, many had experienced a lack of police protection as a result of their sexuality. One participant described such an incident:

Participant (P): I was actually with a friend of mine in Manzini and we went to the butchery for a braai [barbecue], and when we got there, umm, there were these people who were, like, sitting outside at the car park. They were just rude and they started insulting us and we didn't try to defend ourselves, try to explain anything, and they went on, like, we are gay, we have to be beaten up, the gayness should be beaten out of us. We just ignored them and they attacked one of my friends we were with, they started beating him and he was bleeding.
Interviewer (I): Really.
P: Like for real, he bled to the point where we had to go to the hospital and we obviously went to lay a charge. And the police were kind of 'occupied', they didn't have the time to go and find these people that have beaten my friend.
I: Why did the police act in that way? Did you narrate to them what happened?
P: We sure did, but I think it's because we told them how the whole thing started – they called us names because they say we are gay. And I think also the police could tell that we are [gay], so they thought there was no case there.
Stigma from healthcare settings
The stigma associated with being an MSM was the predominant barrier to accessing healthcare services for MSM living with HIV. Both perceived and experienced stigma in healthcare settings led to a lack of care-seeking behaviour. As one participant described it,

When they say 'bring your partner', and then you bring the same sex partner, they are like, 'yah, this is why you are having this [HIV], this is why', and they will be throwing words at you . . . so then you get embarrassed, sometimes you'll decide to leave without being treated, and where are you taking that sickness to?
Another participant, when asked how the needs of MSM differed from those of PLHIV in general, explained that the main difference was how forthright MSM could be about issues related to their sexuality:

I think they are different in the sense that for those who are straight they are open and they communicate easily about sex issues. As for us gays, it's difficult unless you have someone you can talk to and give you advice as to what you can do when you have some health issues. As for people in general, with them it's easy for them to go to hospital, but with us it's difficult. You can't say it's painful in your anus – what will you say the cause for that is?
This participant continued by noting that this influenced care-seeking behaviour, as he would delay care-seeking or self-medicate to avoid disclosure:

I: What happens, so you end up not going there [to the hospital]?
P: I just stay at home and you find that this thing becomes complicated. When this thing becomes complicated, you find that maybe you go to the pharmacy and they tell you that this thing is at an advanced stage.
Other men said they travelled long distances to seek HIV care at clinics where they either were not known personally or where they did not experience stigma and discrimination.

P: Even at the hospital, they interviewed me, then there were changes and I could tell that they wanted me to reveal what type of person I am. Since then I stopped fetching my drugs there. I now go to another clinic which is far away from home. I drive all the way to fetch my tablets instead of taking them locally.
I: Really, why is it so?
P: Because I thought there is problem at the local clinic since I am gay. So I decided to change . . . They treat us like small devils, as if we are the one who are spreading the HIV virus.
However, in a few cases, MSM did disclose their sexual identity to healthcare providers and reported positive and supportive reactions, particularly from non-governmental HIV testing and counselling sites.
Fear of stigma also shaped the type and nature of counselling that MSM received in healthcare settings, particularly regarding offering services to sexual partners. MSM, as well as key informants, noted that in clinical services such as HIV testing and treatment, providers' questions about HIV prevention generally assume heterosexuality. Providers would ask MSM to bring their wives into the clinic to be tested for HIV. Due to fear of stigma, MSM would often simply state that they did not have a wife, but would not mention their male sexual partners.
Finally, participants reported mistreatment by staff and lack of confidentiality at clinics due to being HIV-positive. These negative experiences were particularly common when picking up ARVs, leading one MSM to say that ''the ARVs end up being an inconvenience [rather] than helping you.'' Some men felt that PLHIV in general were treated poorly by healthcare workers. ''You really feel that you are different from other people,'' explained one. However, others felt that at least some healthcare workers provided high-quality care to PLHIV, and that MSM were not necessarily treated any differently from other PLHIV.
Mental health challenges
Many MSM said that living with a stigmatized sexual identity and a challenging, stigmatized disease led to feelings of depression as well as self-stigma or shame. ''To be like this to me seems like I was created for nothing on earth,'' said one, ''because there is nobody who is happy about me at home and at school.'' The initial receipt of an HIV-positive diagnosis was emotionally devastating for many participants. Participants described feelings of depression and anger. They also said that others had even more difficulty coping. ''Some of them they commit suicide because they can't accept their status,'' said one MSM, ''because no one can accept them as they are gay and positive.'' Some participants said feelings of self-stigma led them to drink alcohol as a coping mechanism.
[After testing HIV-positive], I was very much hurt so much that I decided to devote myself to drinking alcohol. I was drinking every day, and there was not a day that went by without me drinking.
However, over time, many participants said they came to accept their HIV status and learn to cope with the disease. MSM also reported that they had difficulty accepting their sexuality. Some described shame related to having sexual feelings for other men.
Participants reported receiving emotional support from a variety of sources. One MSM said he went to his pastor for support, while another derived comfort from religion but had not disclosed or discussed his life with his church. Only one participant mentioned going to formal counselling services, saying he and his partner saw a private counsellor who knew they were gay. However, most received support from partners, friends or family to whom they had disclosed either their HIV status or their sexual identity.
Preventing HIV transmission to sexual partners and the context of MSM relationships
MSM in this study were very aware of the need to prevent onward HIV transmission to sexual partners. Many discussed how they had changed their behaviour after being diagnosed with HIV in order to reduce transmission risk to others by using condoms and reducing the number of partners. However, others reported continued risk behaviour, often linked to alcohol use. As one participant put it, ''most of the time we have sex without a condom it is when we are drunk.'' Poverty and lack of economic opportunities also shaped risk behaviours. Participants reported that some members of the MSM community were not necessarily gay, but engaged in transactional sex with men to support themselves financially. However, the majority of our participants identified as gay, and many said they were in long-term, monogamous partnerships with other men.
Some MSM felt that the clandestine nature of MSM relationships in Swaziland may lead to greater numbers of and more casual types of partnerships. MSM described many of their partners as bisexual or having female girlfriends and wives, possibly to fulfil cultural expectations. Furthermore, MSM said that their relationships are often kept secret and therefore families do not play a role in relationship counselling and peacekeeping as they might for heterosexual couples.
Usually in our community we have short-term relationships. These relationships are caused by the fact that there is nothing bonding those people. And maybe the community, the parents or relatives are not involved in our relationships. And then if I have got a problem with my boyfriend, if I say it's over, it's over . . . you are not able to go tell your parents or relatives . . . if people are informed either way about such people [MSM] in the community, if there is a relationship going on with his parent, the parent will be able to intervene either way, and those relationships will sustain.
Improving positive health, dignity and prevention services for MSM

MSM said that societal acceptance and stigma reduction would be the most important way to improve services for MSM living with HIV. As one man stated, ''If we can be recognized and they can know that there are people who are living this kind of life and they can know how they can reach us in terms of programmes and services.'' Participants knew that same-sex relationships were more accepted in neighbouring South Africa and hoped that social norms in Swaziland might shift in a similar direction. They also discussed the organizations working openly for LGBT health and rights in South Africa and noted that the lack of such formal organization in Swaziland limited the ability to develop an effective and appropriate response to HIV for MSM.
Participants held a variety of opinions on how best to tailor existing interventions and services for MSM. Some participants suggested developing special clinics or services for HIV-positive MSM. Others worried that targeted services would reinforce stigma. One potential consideration was including MSM living with HIV as ''expert clients'' to help navigate HIV treatment services. Participants said less about mental health services; just a handful of interviewees said that increasing access to counsellors would be helpful, as existing HIV care and treatment providers were overworked and did not have time to provide in-depth counselling for PLHIV.
Currently, as there are essentially no HIV-prevention services for MSM in Swaziland, participants suggested a ''training of trainers'' model, whereby trusted MSM community members could be trained in HIV-prevention messages particularly relevant for MSM and could then share those messages with others in their community. MSM also suggested continued or expanded distribution of condoms and particularly lubricant to prevent condom breakage.
Several participants, both MSM and key informants, said that healthcare workers should be trained on issues related to MSM. As one key informant explained, ''Even their procedures manuals should have information on how to handle MARPS [most at-risk populations, including MSM].'' Importantly, key informants in this study consistently said that regardless of personal belief, they had an ethical responsibility to provide services to everyone, equally. ''As a [member of the] health sector, my belief is non-discriminatory services to all the members of the population, and issues of legality and everything rest with the Ministry of Justice,'' said one. Another stated, Even though I don't approve of what they are doing . . . as a public health officer, I have to make sure that they have access to health services. I don't have to judge them. I don't have to give my views on what they are doing. But my duty is to make sure that they have access to services . . . whatever their sexual orientation is, they are human beings, they are Swazi.
Discussion
This study is among the first to examine the positive health, dignity and prevention needs of HIV-positive MSM in sub-Saharan Africa. We found that a social and structural context characterized by significant and multiple stigmas was key to understanding these needs. Dual stigma related to both sexual identity and HIV status led to selective disclosure or lack of disclosure of both identities, and consequently a lack of social support for care-seeking and medication adherence. Perceived and experienced stigma from healthcare settings, particularly around sexual identity, also led to delayed care-seeking, travel to more distant clinics and missed opportunities for appropriate services. These findings support and extend findings from other sub-Saharan African settings that discrimination reduces the willingness of MSM to access services [14–16]. The lack of support from friends, relatives and society for same-sex relationships was described as weakening these relationships, leading to greater numbers of sexual partners as well as relationships with women to ''hide'' same-sex behaviours, potentially further increasing HIV risk. This finding similarly echoes research from the United States suggesting that psychosocial health problems may increase HIV risk among MSM, leading to a ''syndemic'' [17]; such findings highlight the need to approach HIV prevention within the context of overlapping health problems [18].
Intersectionality is a theoretical framework that examines the relationship or ''intersection'' between multiple forms of oppression and discrimination due to social categorizations such as race, class or gender [19]. MSM living with HIV experience the dual stigma of being a sexual minority and having a stigmatizing illness. Intersectionality posits that these multiple stigmas are not experienced independently, but that they interact in complex ways to create disparity and social inequality in health outcomes [19]. We found that MSM living with HIV described dual stigma as an overwhelming burden in their lives which influenced multiple aspects of their health and relationships. Considering the needs of MSM living with HIV in this intersectionality framework provides the deepest understanding of their experience.
Intersectionality also highlights the ways in which individual experiences with stigma reflect larger social structures that create and sustain inequality. Participants in our study experienced outright discrimination, stigma and violence against MSM and PLHIV. However, because sexual identity can be concealed, they often encountered situations in which they were assumed to be heterosexual, assumptions which they did not correct due to fear of discrimination. For example, in healthcare settings, many providers assumed their clients were heterosexual and provided services accordingly. For our participants, these assumptions led to missed opportunities for appropriate counselling services tailored to their individual needs and risks, as well as missed opportunities for offering important services, such as HIV testing and counselling, to their sexual partners. Although the World Health Organization couples HIV testing and counselling guidelines support offering these services to same-sex couples [20], in practice, most couples HIV testing and counselling services in sub-Saharan Africa focus exclusively on steady, heterosexual partnerships and fail to consider same-sex relationships. Although individual providers may offer supportive services for same-sex partners, a more comprehensive approach is needed to incorporate training on same-sex relationships into couples HIV testing and counselling programmes.
Currently, services for MSM living with HIV in Swaziland are essentially non-existent. This is unsurprising, given the criminalization of same-sex behaviour and the lack, until very recently, of data on MSM and HIV risk in the country. Research has documented a strong correlation between criminalization of same-sex behaviour and lack of investment in services for MSM globally [21]. However, MSM have unique healthcare needs [22], and even in rights-constrained settings, comprehensive HIV services for MSM can and should be provided [23]. Our findings suggest the beginnings of political will among healthcare workers, key stakeholders at the government and local levels and the MSM community to provide these services. Key informants in particular reflected on their duty to provide services to all Swazis in a non-discriminatory manner. These beliefs can provide a foundation for establishing comprehensive HIV services, including both prevention and care and treatment services, for MSM. In fact, this research helped, in part, to catalyze the official registration of an NGO, Rock of Hope, dedicated to key population rights in Swaziland, including LGBT rights, which has been invited to engage with the country's key population policy technical working group addressing HIV among MSM and other key populations. The technical working group is under the auspices of the Swaziland National AIDS Programme (SNAP), a programmatic body under the MOH. Other implementing partner organizations providing HIV-related services have indicated they would be open to developing services for MSM. In this changing political and institutional context, there appears to be a genuine possibility of government, NGO and civil society collaboration to develop an effective and comprehensive response to the HIV epidemic among MSM in Swaziland.
This study provides unique information about the needs of MSM living with HIV in a sub-Saharan African context with high HIV disease burden. Conducting multiple interviews with MSM living with HIV and working closely with local LGBT groups increased the comfort level of our participants and their willingness to participate in this study. However, MSM participants were still discussing very sensitive, stigmatized and illegal behaviours, and they may not have fully opened up to interviewers. Data were collected largely from MSM in urban centres due to reliance on existing networks; this may limit transferability of the findings to rural MSM or those without strong MSM social networks.
Conclusions
The intersecting stigmas of sexual identity and HIV status shaped multiple facets of the lives of MSM living with HIV in Swaziland. Intersectionality provides a framework for understanding these experiences and highlights how programmes and policies should consider the specific needs of this population when designing HIV prevention, care and treatment services. In Swaziland, programmes should consider tailored multi-level interventions that address these unique needs at the policy, societal and healthcare delivery levels. At the policy level, the health sector in Swaziland is already initiating important research to examine the epidemiology and service delivery needs of MSM; findings from this research should be incorporated into the national HIV response. For Swazi society in general as well as healthcare providers, interventions to reduce stigma, discrimination and violence against MSM and PLHIV are needed. The health sector should also consider distributing condoms and lubricant for MSM, training healthcare providers in the specific health needs of MSM and engaging MSM as peer outreach workers or expert clients in both prevention activities and clinical services. Finally, further research examining the experiences and needs of MSM living with HIV globally is required to improve comprehensive HIV services for this population.
Introduction
The sub-region of West and Central Africa (WCA) is the most populous of sub-Saharan Africa (SSA), with a combined population of roughly 356 million [1]. The region possesses a distinct cultural, economic and historical diversity. The majority of countries have French as their national language, while English is the state language for four countries, and Spanish and Portuguese are both spoken within the region. Fifteen of the countries in WCA are classified by the World Bank Atlas method as low income (≤ US$1025), including Benin, Burkina Faso, Cape Verde, Central African Republic, Chad, the Democratic Republic of Congo (DRC), the Gambia, Guinea, Guinea-Bissau, Liberia, Mali, Mauritania, Niger, Sierra Leone and Togo [2]. Côte d'Ivoire, Cameroon, Ghana, Nigeria, the Republic of Congo, Senegal and São Tomé and Príncipe are categorized as low-middle income (US$1026 to US$4035) [2]. One country in the region is upper-middle income (Gabon), and one is ranked as a high-income country (Equatorial Guinea), mainly due to newly found oil reserves and a population under 1 million [2].
Historically and economically multifarious, the region has not been immune to the HIV epidemic. The first reported cases of HIV emerged in the mid-1980s, and national surveillance bodies such as National AIDS Committees (NACs) were established over the subsequent decade [3]. Early phylogenetic subtyping revealed unique regional dynamics, with both HIV-1 and HIV-2 circulating, and the majority of global cases of HIV-2 found in West Africa. Concurrently, the origins and greatest subtype diversity of HIV-1 were reported in Central Africa [4] (Figure 1). Nevertheless, regional epidemiological reporting has traditionally been immersed in the overall context of SSA. Trends in the HIV epidemic show that SSA possesses the highest burden of HIV, and 69% of the global population of people living with HIV reside within its borders [23.5 million (22.1–24.8 million)] [5,6]. While these statistics show an important burden of disease on the continent, they mask disparities in HIV epidemics regionally [7]. Countries in East and South Africa report consistently generalized epidemics among reproductive-age adults (ages 15–49), which is defined through the Joint United Nations Programme on HIV/AIDS (UNAIDS) criteria as HIV prevalence consistently higher than 1% in antenatal clinics [8,9]. Nine out of the 15 Southern African Development Community (SADC) members report national prevalence over 10% [5,6,10]. Reproductive-age adult estimates are as high as 25.9% in Swaziland and 24.8% in Botswana [11]. Comparatively, national prevalence in WCA has remained low or moderate since HIV surveillance reporting began, with current general-population estimates ranging from 0.02 to 4.5% [5,6,12]. Twelve countries in the sub-region report national prevalence under 2% [5]. Consequently, the majority of these countries' HIV epidemics are classified as mixed, concentrated or borderline generalized [6,12].
The international community has recently noted that classifications of the HIV epidemic based on prevalence data often limit understanding of the complexity of transmission and appropriate prevention strategies. However, concentrated epidemics have historically been defined as occurring in countries where HIV prevalence is consistently higher than 5% in at least one subgroup within the population, but less than 1% in antenatal clinics [7,9]. These subgroups are generally considered to be female sex workers (FSWs), men who have sex with men (MSM) and people who inject drugs (PWID) [7,13]. There is less clarity around mixed epidemics, although these are generally agreed to be low-level generalized epidemics ranging from 2 to 5% HIV prevalence in the general population, and high transmission rates in subgroups of the population [7]. Based on this, the HIV epidemics in countries in WCA are predominantly mixed or concentrated.
Researchers have suggested that the complexity of the regional dynamics in WCA has not been dissected adequately [12,14–16]. Underlying drivers such as migration patterns, subtype diversity, significant regional variations of the disease and at-risk populations are understudied [11,12,16–19]. In an era where the global spread of HIV is on the decline, data are progressively emerging to show sustained or expanding transmission in populations at high risk for HIV [15,20–22]. However, national surveillance systems, particularly in low and middle-income countries, remain constructed on population-level studies such as the Demographic and Health Survey and antenatal care surveillance data [6,13]. These methods provide a global overview of basic risk factors associated with transmission, but they do not capture data characterizing sex work and other transactional or compensated sex, same-sex practices and drug use outside of alcohol consumption, all of which are demonstrated high-risk factors and contributors to the acquisition and transmission of HIV [11,21,23].
Globally, surveillance shows that groups such as FSWs, their clients, MSM and PWID sustain a higher burden of disease in concentrated epidemics and substantially contribute to new infections annually [4,7,18,22,24]. In settings such as Southeast Asia and Latin America, general-population HIV prevalence remains similar to that of WCA, and a higher burden of disease is observed among key populations. For example, Pakistan and Indonesia report 25% and 35% prevalence among PWID, respectively [25]. Vietnam and Chile report an HIV prevalence of 15% and 20% among MSM, respectively [25,26]. Myanmar (Burma) reports a prevalence of 10% among FSWs, and Brazil reports 4.9% [25,26]. All of these reported levels are roughly five to thirty times higher than general-population prevalence in the specific countries listed [25,26]. National-level responses on these continents have included programmes for key populations, and noteworthy advances in the reduction of new infections have been reported over time [27,28]. In contrast, WCA reports partial or sporadic data for key populations and limited government-level policies defining key population treatment and prevention needs [5]. National surveillance and programming in WCA subsequently remain rooted in broad HIV prevention messaging and approaches similar to those seen across East and South Africa such as prevention of mother-to-child transmission (PMTCT) and non-targeted community-based behaviour change programmes [5,7].
Lessons learned from other contexts such as Southeast Asia and Latin America, where limited prevalence of HIV among average-risk reproductive-age adults also exists, require us to examine the epidemiology of the HIV epidemic in WCA [11,29]. This systematic review aims to complete a historic, situational and epidemiological analysis of the burden of disease among key populations in 24 countries located in WCA.
Methods
The US National Library of Medicine's MEDLINE database, one of the most comprehensive sources of healthcare information in the world, was searched using the PubMed interface to identify studies reporting biomedical markers for any of the three key populations: FSWs, MSM or PWID. The study objectives specified the need for epidemiologic studies that report biological endpoints (HIV prevalence) with defined methods; thus, it was decided a priori that MEDLINE would be sufficient. However, a sensitivity assessment was employed using the same search strategy to explore EBSCOhost CINAHL Plus, PsycINFO, Ovid, SocioFile and Popline, and no additional data points were obtained which met the defined inclusion and exclusion criteria. Google and Google Scholar were searched for contextual information and non-peer-reviewed literature. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were referenced for the development of the search protocol and study reporting structure [30,31].
The medical subject headings (MeSH terms) for HIV and AIDS and key terms relating to ''sex work,'' ''men who have sex with men'' and ''intravenous drug use'' were cross-referenced with terms associated with the countries included in this review. The inclusion criteria for this study included reported HIV prevalence data for any of the three key populations, as well as clients of FSWs, in any of the 24 countries defined for this review. Publications were included if prevalence was listed in the article with sample size and sampling and HIV-testing methods described, regardless of the overall aim or topic of the study. Date of publication was not used as an inclusion criterion. Exclusion criteria included manuscripts not published in French, English or Spanish. Articles were downloaded and organized using Endnote (version X5), and data collection was finalized in April 2013.
Screening and data abstraction A title and abstract search protocol was utilized based on previously validated methods for systematic reviews [32]. At each step in the search protocol, the titles, abstracts and available data were appraised by two independent reviewers (LA and EP), and compiled and synthesized using standardized forms. During the title and abstract reviews, if either of the two reviewers considered the article relevant, it was included. Articles classified as relevant at the title review stage were downloaded for abstract and full-text evaluation. Data were independently extracted by two reviewers (LA and EP), then compared and consolidated for analysis.
Data, including sampling methods, HIV-1, HIV-2 and dual HIV-1 and -2 (HIV-1/2) infections with sample size and number of participants living with HIV, were detailed and coded by the two independent reviewers (LA and EP). Information was categorized by key population studied, sampling techniques, country or countries, sample size, number of study participants living with HIV and notes. Discrepancies in abstracted data from the two reviewers were assessed by a third reviewer independently evaluating the article (SB), as was the final consolidated database (CH).
Results
Our search generated 995 citations, including 885 unique titles with dates of publication from 1987 to 2013 (Figure 2). Based on the inclusion criteria, 122 full articles were reviewed for data extraction, and 76 of these contained relevant data for at least one of the key populations defined. HIV prevalence data for at least one key population existed in 13 of the 24 countries included in the search (54.2%). Eleven of these countries were located in West Africa, and two countries were in Central Africa (DRC and Cameroon).
The majority of publications were assessments regarding FSWs (78.9%, 60/76), and another 10.5% (8/76) provided HIV prevalence data for their clients. One publication provided prevalence data for FSWs as well as clients of FSWs in Togo [33]. Thus, 90.8% (69/76) of the publications included in this study were related to FSWs, representing 41,270 FSWs across 13 countries and 5,986 clients of FSWs across 6 countries.
Two countries (Senegal and Nigeria) had published HIV prevalence data among MSM, and one seroprevalence study was conducted among male sex workers (MSWs) in Côte d'Ivoire, which was included in the MSM pooled data for analysis [34]. A total of six publications combined for the three countries were found for MSM (7.9%, 6/76), and one publication was available with HIV prevalence data for PWID. Table 1 shows the pooled HIV prevalence for the relevant key population(s) in each country, the 95% confidence interval (CI) and the date(s) of the publications retrieved per country. We include both HIV-1 and HIV-2 infections in the pooled prevalence data for the country, and, when possible, we display the division of HIV-1, HIV-2 and HIV-1/2 infections. The far-left data column in Table 1 displays the overall HIV prevalence among reproductive-age adults (15–49) per country as reported by UNAIDS' most recent country-level surveillance data [6]. The largest single sample of FSWs (N = 4,612) was reported in Senegal. The pooled prevalence found among clients of FSWs was 7.3% (95% CI 6.6–8.0) (Table 2). Six countries had at least one study reporting prevalence data for this demographic, with publications as early as 1992 and as late as 2009 (Table 1).
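The pooled figures reported in Tables 1 and 2 are aggregate proportions (total positives over total sampled) with normal-approximation confidence intervals. As a minimal sketch of that arithmetic, assuming illustrative per-study counts rather than the review's actual extracted data:

```python
import math

# Illustrative (positives, sample_size) pairs per study -- placeholder values,
# not the counts abstracted in this review.
studies = [(54, 310), (120, 845), (33, 196)]

pos = sum(p for p, _ in studies)
n = sum(s for _, s in studies)
prev = pos / n  # pooled prevalence: total positives over total sampled

# Wald (normal-approximation) 95% confidence interval
se = math.sqrt(prev * (1 - prev) / n)
low, high = prev - 1.96 * se, prev + 1.96 * se

print(f"pooled prevalence {prev:.1%} (95% CI {low:.1%} to {high:.1%})")
```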
Men who have sex with men
While this review revealed a paucity of data for MSM, the pooled HIV prevalence in this review was 17.7% (95% CI 16.5–18.9) for MSM in WCA (Table 2). No studies included were published earlier than 2005, and all but one were published after 2010. Three relevant Nigerian studies showed a pooled prevalence of 15.1% compared to 3.2% in adults of reproductive age [6,93–95,106]. Senegal's pooled prevalence was 21.7% compared to 0.5% in adults of reproductive age [6,104,105,107]. The study conducted in Côte d'Ivoire among MSWs reported 50.0% prevalence among a sample of 96 men in Abidjan [34]. Snowball, convenience, purposive and respondent-driven sampling were the primary recruitment methods used to obtain these data.
People who inject drugs
One study included directly sampled PWID. The study found a slightly higher prevalence of HIV at 3.8% (95% CI 2.8–4.8), compared to 3.2% in the general population in Nigeria [6,92]. The sample was recruited through respondent-driven sampling and was composed mainly of men (90%) [6,92].
Limitations
This study was conducted as a systematic review to understand the prevalence of HIV among key populations in WCA and compare historical HIV prevalence to general-population statistics. Data were obtained from peer-reviewed literature, and while this ensures some quality control, we acknowledge that some relevant data that exist in grey literature and other programmatic data may have been overlooked. Programmatic data were not included in this review as it was not possible to implement a standardized assessment of the quality of the methods used and to ascertain an overview of research sampling and testing methods. However, the grey literature obtained through this review played a key role in the contextual analysis and discussion section of this study. Other limitations include the restriction to English, French and Spanish, as publications in other languages may have relevant data not captured by these inclusion criteria. The study among MSWs from Côte d'Ivoire was included in the overall analysis; however, the sampling method directly recruited these individuals from an established sex worker clinic, and thus HIV prevalence may be overestimated in this subpopulation. Also, while the authors noted that the majority of MSWs in the Abidjan area were MSM, they did not collect data on types of partner [34]. The contextual description from the authors is supported by evidence from other contexts where partners of MSWs are male [108,109]. Concurrently, systematic review methods were applied; however, sensitivity analysis and meta-analyses were not utilized. While odds ratios or aggregated comparison data were not generated, the overall analysis provides an overview of HIV prevalence among key populations and details of the epidemiology of key populations since the debut of HIV research in this region.
Discussion
Epidemiologic literature over the past 30 years has demonstrated a consistent and disproportionate burden of HIV among key populations in WCA. From the first published study in 1987 to the most recent in 2013, elevated levels of HIV among FSWs and their clients were consistently reported. In recent years, studies have emerged demonstrating an elevated burden of HIV among MSM within the region, although the number of studies in this subpopulation remains limited. Concurrently, there is nascent but growing evidence of the existence of PWID and, consequently, HIV infections in this subpopulation [92].
HIV prevalence
The elevated HIV prevalence among MSM, FSWs and clients of FSWs is important based on the determinants of the HIV epidemic in WCA and even more broadly across SSA. Surveillance has shown that women carry the highest burden of HIV on the continent, with national-level statistics consistently reporting that women have a higher HIV prevalence and incidence than men [13,110]. While programmes are designed to address the various risks associated with female HIV acquisition, the results of this study demonstrate that HIV risks are significantly higher among FSWs than women who do not sell sex in WCA. These results are substantiated by a systematic review of FSWs in low and middle-income countries, which showed that FSWs in SSA have a pooled prevalence of 36.9% (95% CI 36.2–37.5) with a background prevalence on the continent of 7.42% in females [15]. Globally, FSWs were 13.5 (95% CI 10.0–18.1) times more likely to be living with HIV than women of reproductive age [15]. Thus, the results of this review and the epidemiology of HIV among FSWs worldwide suggest that inclusion of and significant focus on these women and their clients are of importance to address these populations' high HIV acquisition and transmission risks in WCA [11,72,81,84,96]. On a continent where women are disproportionately burdened with HIV, a prevalence of 17.7% (95% CI 16.5–18.9) among MSM demonstrates a potentially concentrated epidemic in this key population. A prevalence of 7.3% (95% CI 6.6–8.0) in clients of FSWs is also elevated compared to the general male population of the region and calls into question prevention programmes targeting this population. For clients of FSWs, male acquisition is linked to behavioural risks associated with multiple sexual partners, limited condom use and concomitant infection with an STI, amongst other determinants that are specific to men who engage in transactional sex [16,74,111]. For MSM, recent research has emerged demonstrating the increased transmission of HIV during anal sex, as well as sexual role versatility during same-sex practices that increases individual HIV risks and drives transmission within sexual networks of MSM [21]. Thus, the acknowledgement of a heightened burden of disease in these populations is important for the design and implementation of specialized HIV prevention, treatment and care programmes regionally [16,26].
The heightened HIV prevalence in the MSM community found in these results is not unexpected, although the lack of data in WCA is noteworthy. The high prevalence reported in this review is comparable to other continents, with research indicating that MSM around the world are 19 times more likely to be infected with HIV than their adult male counterparts [18]. Interestingly, same-sex practices in WCA were reported as early as 1996 in a published population-based review [112]. The authors noted that the cumulative number of positive cases had exponentially increased from 1985 to 1995, and the primary modes of transmission were heterosexual practices (73.0%), homosexual practices (0.8%) and mother-to-child transmission (6.0%) [112]. More recent behavioural studies have likewise documented same-sex behaviour in a range of demographic groups. In Nigeria, 11.4% of sexually active secondary school students reported same-sex practices, and 12.4% reported anal sex [113]. In two Ghanaian studies in 2006 and 2008, prison inmates reported same-sex practices or identified as homosexual at 30.8% and 29.5%, respectively [114,115]. While sporadic reports of same-sex practices and elevated HIV prevalence have been reported in the region, there is limited targeted programme activity for these men [5,116,117]. What does exist is limited in scale, based on community-driven initiatives, and functioning in highly stigmatized settings [33,117,118].
While HIV prevalence in PWID was found to be relatively low, the Nigerian study provides two important details for programming in WCA. Firstly, while it has generally been assumed that PWID constitute a minimal presence in WCA, the study's ability to generate a sample size of 1459 through respondent-driven sampling indicates that this population does exist. Secondly, while HIV prevalence appears low, we know from other contexts that once HIV is introduced into this specific subpopulation, the possibilities of rapid spread and sustained transmission are great [119,120]. Contextually, policy makers are becoming aware of an increase of drug trafficking in the region, with large quantities of drugs confiscated in the past few years, and the recent conflict in Mali ascribed mainly to this trade [121]. Further supporting evidence of this regional trade was found in behavioural data in prisoners. In the same Ghanaian study in 2006, 41% of inmates reported imprisonment for narcotics; 7.3% had used cocaine, 5.2% heroin and 4.2% phencyclidine [114]. In the 2008 Ghanaian prison study, 35% of 1336 prisoners reported ever injecting drugs [115]. As was seen in Afghanistan as well as Thailand, Cambodia and other Southeast Asian countries, migration, trafficking, drug use and the HIV epidemic are intrinsically linked [119,120,122]. Thus, this is an important population to identify and appropriately engage in WCA in the coming decade of HIV prevention and control.
Historical perspective
This review also indicates that knowledge of HIV prevalence among key populations and the proportion of HIV infections attributable to key populations in WCA are not representative of new or changing dynamics of HIV transmission. In 1995, Djomand et al. noted that the male:female ratio of HIV infection in Côte d'Ivoire had declined over time and the gender ratio had shown females to be 4.8 times more likely to be infected than men in 1988, compared to 1.9 times more likely in 1991 [20]. The authors asserted that this decline displayed that the HIV epidemic was initially concentrated in a core group of FSWs and their male partners, and was potentially expanding in broader populations with less identifiable risk factors, similar to dynamics observed in other regions outside of SSA [20,122–124]. In 2004, Côté et al. conducted a study of adult males (15–59) in Accra, Ghana, and attributed 84% of existing cases of HIV to sex work and other transactional sex [125]. A study in 2008 based on Demographic and Health Surveys across four countries in SSA, including Ghana, showed that men who ever paid for sex were more likely to have HIV than men who had not (odds ratio 1.89, 95% CI 1.57–2.28) [126].
In the capital city of Lomé, Togo, researchers estimated that the fraction of current HIV cases attributable to sex work and other transactional sex was 32%, in contrast to only 2% of cases outside of Lomé [18]. Finally, recently in Nigeria, a modes of transmission study asserted that 23% of HIV infection was attributable to key populations, including 10% of new infections amongst MSM [93]. Despite high HIV prevalence among key populations and a high number of HIV cases in 2009 attributable to behaviours such as sex between men and sex work, systematic prevention and treatment programmes for key populations have not been implemented regionally [5]. While prevention programmes for FSWs and their clients have been noted in countries including Ghana, Côte d'Ivoire, Nigeria and Cameroon, the appropriate scale of these programmes and collected surveillance data are limited, and HIV prevention, treatment and care programming for key populations has failed to become a standard of best practices in the region [5].

Economic and regional migration

Underlying dynamics of the epidemic indicate that external, economic and urban-centred disparities have contributed to the complexity of the HIV epidemic in WCA over time. Domestic and international migration patterns were repeatedly reported and significantly mirrored economic crises and fluctuations in specific countries. For example, a study in Côte d'Ivoire documenting the FSW population that accessed health clinics between 1991 and 1998 noted a major shift in country of origin over time, with Nigerian women surveyed increasing from 2 to 56% between 1992 and 1998, and Ghanaian women decreasing from 82 to 9% in the same time period [29]. Other studies reported the migration of Ghanaian FSWs to other countries in the 1990s and asserted that the significant economic and political crises in the country at that time contributed to this migration [3,35]. The proportion of Liberian FSWs included in the same Ivorian study was shown to have increased from 0% in 1992 to 15% in 1995, and then to have declined to 2% in 1998 [94]. This evolution reflects the first internal conflict experienced in Liberia in the 1990s (1989–1996) [127,128]. In a study reviewing the spread of HIV among FSWs in four cities across SSA, researchers noted that Cameroonian FSWs were more likely to have migrated internally to urban centres, while in Benin 86% of the FSWs sampled were from another country [41]. The only MSM study to discuss countries of origin was the MSW study in Côte d'Ivoire. Of the 96 MSWs sampled in Abidjan, 7.3% (7/96) reported a different country of origin [34].
The importance of these findings is revealed in the HIV prevalence among immigrants in the various studies. Nigerian and Ghanaian FSWs in the 2002 Côte d'Ivoire study were 1.03 (0.47–2.23) and 3.69 (2.28–5.97) times more likely to be infected than their counterparts from Côte d'Ivoire, Liberia and other West African countries [106]. In Lomé, two-thirds of FSWs were immigrants, and Ghanaian FSWs were 1.68 (1.06–2.66) times more likely to be living with HIV [126]. Addressing the needs of migrating populations at risk for or living with HIV is crucial, as these populations have less access to health services, are less likely to understand their human rights, and are more likely to contract a disease [129]. These populations are also more likely to be mobile; thus, successful prevention services for immigrant or mobile FSWs could potentially have an important impact in the overall reduction of HIV transmission and acquisition in the region [129].
Concurrently, disparity of HIV prevalence per locality was repeatedly reported in the various studies reviewed. In the same study that cited higher HIV levels among Ghanaian FSWs in Lomé, the prevalence among Lomé FSWs in 2005 was reported at 45.4% compared to 17.7% in the rest of Togo [18]. In two studies in Benin, there was significant spatial variation in the burden of HIV. For example, a study conducted in six cities in 2005 showed prevalence for HIV as high as 48.2% in Parakou, compared to 16.4% in Abomey/Bohicon [36]. A similar study found HIV prevalence in Cotonou, Benin, among FSWs to be 38.5%, compared to a pooled prevalence in three other large cities of the country of 58.9% [35]. Therefore, from an HIV prevention perspective, cross-border initiatives, effective community-based networking and standardized programmes across urban and regional landscapes for key populations are relevant for the WCA region.
Ways forward
Our review makes clear that there is a significant gap in the literature and subsequent HIV programmes for key populations in WCA. This may be ascribed to the application of the HIV response model of SSA to WCA epidemiological and prevention approaches. However, as reports of high HIV prevalence among key populations have existed in the literature since 1987, it also calls into question the structural barriers to healthcare for populations that engage in these defined sexual behaviours in this region. As in other contexts, sex work and other transactional sex, same-sex practice and drug use are either criminalized or highly stigmatized in this region, and public policies have ignored or generally declined to address the specific health needs of key populations [5,130,131]. Research has shown that macro-level policies that impede or deter health service delivery for key populations ultimately increase vulnerability to disease acquisition [23,130,132].
Data presented here provide a useful framework for HIV programming in the region. The inclusion of relevant sexual history and behavioural questions in large-scale surveillance surveys, such as DHS, may also be of benefit in obtaining a better overview of the epidemiology of key populations, both in WCA and worldwide. While the delivery of sensitive questions such as engagement in sex work, transactional sex, same-sex practices and drug use must be carefully administered (ideally not within the household setting), standardized national data collection would go far to inform country and regional policy development in WCA.
Subsequently, emerging data have shown that addressing the epidemic in key populations requires combined behavioural, biomedical and structural approaches [23,133]. Limited condom use with regular sexual partners, unawareness of HIV status and co-infections with genital ulcerative diseases are contributing factors to heightened prevalence [10,21,116]. High prevalence among key populations concurrently has implications for prioritized biomedical interventions [21,134].
While the knowledge that these populations have a higher risk for transmission and acquisition of HIV and other STIs is acknowledged, the manner in which prevention and treatment programmes address these risks has yet to be firmly cemented in HIV prevention programming [13]. Researchers in the United States and elsewhere have demonstrated the importance of engaging populations in the continuum of HIV care, from undiagnosed cases to testing and diagnosis, followed by linkage to ongoing care and treatment [135]. The continuum of HIV care significantly reduces the viral load among people living with HIV and ultimately reduces transmission [135,136]. In two recent studies in the United States, researchers found that due to advances in antiretroviral regimens, with 70–80% adherence to antiretroviral therapy (ART) by participants, durable viral suppression occurred in most individuals, lowering the possibility for onward HIV transmission [136,137]. The findings indicate that the key to community viral suppression is early diagnosis of the disease, well-developed referral systems to clinical services, and care and support programmes that encourage adherence and access to treatment [136]. This approach has been shown to be effective in contexts with both high and low prevalence, and recent research from South Africa affirms that adequate ART coverage at the community level reduces incidence over time [138]. Thus, prevention programmes are beginning to show that distribution of prevention commodities and messages should be in concert with interventions that address the virology and biomedical aspects of care and treatment [135]. This is even more relevant for key populations who carry a significant burden of disease and ultimately are people living with HIV.
Structural factors acting at the macro- and meso-levels should not be ignored in WCA and are essential when building combination biomedical programmes [23,139]. Criminalization and public policy neglect substantially inhibit key populations' ability to access appropriate, life-sustaining and prevention-oriented health services. Policy-level gaps and community-level stigma must be addressed if programmes are to adequately confront the needs of these populations [140,141]. Studies from other countries on the continent indicate that the stigma experienced within communities and at health services significantly deters the uptake of clinical services by key populations [130,142]. Public policies that adequately address the intricate health needs, reduce stigma and discrimination, and facilitate community- and provider-level HIV care and treatment delivery will highly benefit the overall control and prevention of HIV among key populations in WCA [23].
Conclusions
This systematic review suggests that the concentrated HIV epidemic in WCA more closely resembles the epidemics in Southeast Asia and Latin America than those in the rest of SSA. This not only calls into question the response to the HIV epidemic in WCA but indicates that the region has an opportunity to adapt and develop region-specific prevention and treatment strategies. Targeted, cost-effective programmes that address not only behavioural but also biological and structural risk factors associated with HIV acquisition and transmission among key populations should be implemented to reduce the onward spread of HIV. Prevention programmes should model strategies on appropriate programmes that reduce community viral loads, increase uptake of treatment among key populations and address the barriers to healthcare that exist in highly stigmatized settings. Ensuring that programmes rooted in community-based approaches address the continuum of HIV care, from diagnosis to viral suppression, will be a challenge but also a possible victory for HIV prevention and control in WCA.
Introduction
Globally, it has been observed that HIV prevalence among men who have sex with men (MSM) significantly exceeds HIV prevalence in the general population, even in the context of generalized epidemics [1–3]. Across sub-Saharan Africa, HIV prevalence is estimated to be approximately 5% in the general population and 17.9% among MSM [1]. The few published studies from West Africa consistently report higher HIV prevalence among MSM than in the general population, with HIV prevalence estimates of 13.5% among MSM in Nigeria, 16.3% in Burkina Faso and 21.8% in Senegal [1,2,4–6]. Individual-, network-, community- and policy-level factors noted to contribute to the higher risk of acquisition and transmission of HIV and other sexually transmitted infections (STIs) among MSM have been found to be prevalent in Central and West Africa [5,7,8].
With over 550,000 people living with HIV in Cameroon, the prevalence of HIV among reproductive-age adults is estimated to be 4.3%, which represents a mature and widespread generalized epidemic [9,10]. In Douala and Yaoundé, the two largest cities of the country, HIV prevalence among reproductive-age adults is estimated to be 4.6% and 6.3%, respectively [10]. (A preliminary analysis of the data reported here was presented at the 7th International AIDS Society Conference.)
MSM were recently listed as a priority group in the Cameroon government's ''National Strategic Plan for HIV, AIDS, and STIs: 2011–2015,'' along with goals including strengthening HIV-prevention programmes and building capacity for HIV health services that serve MSM [11]. The higher biological risks of HIV acquisition and transmission associated with unprotected anal intercourse (UAI) compared to other forms of sexual intercourse make MSM an important target population for HIV-prevention efforts [12]. However, only one HIV prevalence estimate from programmatic data in Douala is available to date for MSM; in this 2007 study, which used convenience sampling, HIV prevalence was estimated to be 18.4% [13].
Established individual-level risks for HIV acquisition and transmission among MSM in the region that are modifiable include UAI, inconsistent use of condom-compatible lubricants (CCLs), a high number of male partners, drug use and syphilis co-infection [1]. In a recent study, UAI in the past 6 months was frequent among MSM in Douala, as was having one or more female sexual partners [7]. Bisexual concurrency and bisexual partnerships among MSM have been observed in studies in Nigeria, Senegal and southern Africa [5,14,15]. Inconsistent condom use with male and female partners was common among MSM in one Togo study, and in a study conducted in Nigeria, it was associated with prevalent HIV infection, as was having been the receptive partner in anal intercourse in the past 6 months [5,16]. Other factors associated with prevalent HIV infection among MSM in Nigeria and Senegal were older age and having a symptomatic STI [5,15].
Network-level factors that may impact HIV-transmission risk include sexual network size, STI prevalence, levels of peer education, knowledge of HIV status within the population and network tendencies for drug use or transactional sex [1]. Community-level factors that may contribute to HIV risk include high community viral load and suboptimal coverage or uptake of healthcare services [1]. Additionally, the social stigma surrounding HIV, sexual identities and homosexuality in Cameroon may deter MSM from seeking voluntary HIV counselling and testing (VCT) or other health services [17–20]. Perceived stigma, including fear of seeking healthcare and refraining from disclosing same-sex practices to a health professional, and enacted discrimination, including denial of healthcare access based on sexuality, were frequently reported by MSM in Senegal and southern Africa, and were associated with increased sexual risk practices and prevalent HIV infection [21–26]. Similar to most countries in sub-Saharan Africa, sexual relationships between men are both criminalized and highly stigmatized in Cameroon, and prosecution can result in up to 5 years of imprisonment [8]; physical violence from law enforcement is also a reality for some MSM, posing challenges to HIV programming [8,27–29].
In light of the unique needs of MSM within generalized epidemics, and the limited data available on this vulnerable population in Cameroon, we aimed to describe the socio-demographic and behavioural characteristics of MSM in Douala and Yaoundé, determine the age-stratified HIV and syphilis prevalence in both cities, and investigate the individual-, network- and community-level factors associated with HIV infection among this population.
Study population
This cross-sectional study was conducted in August–September 2011 at two community-based organizations (CBOs) that provide targeted services to MSM: Alternatives-Cameroun in Douala and the Cameroon National Association for Family Welfare (CAMNAFAW) in Yaoundé. The interviewers were MSM community volunteers from Alternatives-Cameroun, Humanity First and CAMNAFAW. The MSM sensitivity trainings for interviewers were conducted at the Association Camerounaise pour le Marketing Social (ACMS) conference rooms in Douala and Yaoundé. Men aged 18 years or older who reported engaging in penile–anal or oral intercourse with another man in the past 12 months were eligible for the study. Participants were recruited using respondent-driven sampling (RDS) [30], a sampling technique that enables estimation of proportions and regression modelling while controlling for non-random social network structures that bias peer-based recruitment. Seven seeds heterogeneous in sexual identity and sexual role preference were selected through existing community contacts to begin the recruitment process in each city. Upon enrolment in the study, all individuals were given three uniquely coded coupons to refer other MSM to the study. The CBOs worked with the research team to identify the initial seeds, screen study participants for eligibility and interview participants after receiving informed consent.
Sample-size calculations were based on the ability to detect a 15% change in the prevalence of condom use at last anal intercourse over time from 60% at baseline, with a design effect of 2, a significance level of 0.05 and a power of 80%, yielding 241 men for each city.
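For illustration, a standard two-proportion sample-size formula with a design-effect inflation can be computed as below. This is a sketch only: the exact formula and any continuity or non-response adjustments the study team applied are not stated, so the output need not reproduce the published target of 241 per city.

```python
import math
from scipy.stats import norm

def n_two_proportions(p1, p2, alpha=0.05, power=0.80, deff=1.0):
    """Per-group sample size to detect a change from p1 to p2 using the
    classical normal-approximation formula, inflated by a design effect."""
    z_a = norm.ppf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_b = norm.ppf(power)           # 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(deff * num / (p2 - p1) ** 2)

# 60% baseline condom use, a 15-point change, design effect 2 (assumed inputs)
print(n_two_proportions(0.60, 0.75, deff=2))
```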
All participants provided written informed consent. The study was approved by the Cameroon National Ethics Committee, and the secondary analysis of the study data was approved by the Johns Hopkins Bloomberg School of Public Health.
Data collection
Participants completed an interviewer-administered structured questionnaire containing questions on: socio-demographics; network size; sexual behaviours, including condom and lubricant use (always vs. often, sometimes or never); experiences of STI symptoms; access to CBO-run MSM centres (which included outreach services); access to free condoms; VCT experiences; knowledge of HIV transmission, prevention and treatment (a composite score from 13 questions); and perceived social support for condom use (a composite score from eight questions, including support from partners, family and peers). Interviews were conducted in French or English, and they were recorded in French.
After participants received pre-test counselling, an approximately 4 ml blood specimen was collected from each participant by a Global Viral Cameroon phlebotomist and tested to confirm HIV and syphilis serostatus, followed by post-test counselling on the same day. Men who screened positive for HIV or syphilis were referred to appropriate health services. All participants were reimbursed 1000 CFA franc (US$2) for completing the questionnaire and an additional 1000 CFA franc (US$2) for each peer referred into the study. All participants received free VCT, condoms and CCLs. Participants were also given access to peer education, support groups and linkage to HIV care.
Laboratory testing
Specimen processing and testing were conducted by staff from Global Viral Cameroon at the field sites. The national HIV surveillance algorithm for second-generation surveillance of HIV, adopted by the Ministry of Public Health of Cameroon, was used to measure current HIV status, including Determine HIV-1/2 (Inverness Medical, Chiba, Japan) and Human HEXAGON HIV 1+2 (Human GmbH, Wiesbaden, Germany). All indeterminate and positive samples and 15% of the negative samples were transferred to the Global Viral Cameroon Yaoundé laboratory for fourth-generation HIV enzyme-linked immunosorbent assay (ELISA), which detects antibodies to HIV-1/2 and the p24 antigen (whose presence indicates a possible seroconversion). Screening for syphilis was performed according to the national algorithm in Cameroon using Rapid Plasma Reagin (RPR; SGM Italia, Roma, Italy) and Treponema pallidum haemagglutination assay (TPHA; Fortress Diagnostics Limited, Antrim, UK). Global Viral Cameroon was responsible for blood specimen collection, laboratory testing and serology data management.
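Read schematically, the testing-and-confirmation flow described above amounts to decision logic along the following lines. This is a simplified, hypothetical sketch: the function name, labels and branching are illustrative and are not taken from the national algorithm document.

```python
import random
from typing import Optional

def triage_sample(determine_pos: bool, hexagon_pos: Optional[bool]) -> str:
    """Hypothetical sketch of the flow described in the text: concordant rapid
    results classify the sample, and all positive or indeterminate samples
    (plus a random 15% of negatives) go to confirmatory 4th-generation ELISA."""
    if determine_pos and hexagon_pos:
        return "positive -> confirmatory ELISA"
    if not determine_pos:
        # 15% of negative samples are also re-tested as a quality check
        return "negative -> ELISA (QC)" if random.random() < 0.15 else "negative"
    return "indeterminate -> confirmatory ELISA"
```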
Statistical analysis
ACMS and CARE International-Cameroon managed study data. Questionnaire data were double entered into the CSPro (version 4.0) software, exported into SPSS for data cleaning by ACMS and then exported to Stata/SE (version 11.2) for data analysis.
To minimize biases associated with chain referral sampling, weights were created in Stata/SE version 11.2 using the RDS-II estimator to account for the effect of differences in the social network sizes of participants. Weights were based on the transition matrix for the dependent variable, current HIV status. Network size was assessed using the response to the latter of two questions: ''How many men who have had oral or anal sex with men in the last 12 months do you know, who also know you and live in this city?'' and ''among these men that you know personally, how many of them are 18 years and older?'' Homophily (range: −1 to +1) was assessed to evaluate the preferences of individuals to recruit MSM with the same HIV status [31].
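The RDS-II (Volz–Heckathorn) estimator underlying these weights is inverse-degree weighting: each respondent is weighted in proportion to one over his reported network size. A minimal sketch with made-up degrees and outcomes (not study data):

```python
import numpy as np

# Illustrative data: reported network sizes (degrees) and HIV status (0/1)
degree = np.array([4, 10, 3, 25, 7, 12])
hiv = np.array([0, 1, 0, 1, 0, 1])

# RDS-II weights are proportional to 1/degree; normalize so they sum to n
w = 1.0 / degree
w *= len(degree) / w.sum()

# Weighted prevalence estimate, equivalent to sum(y_i/d_i) / sum(1/d_i)
print("RDS-II prevalence estimate:", np.average(hiv, weights=w))
```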
Bivariate logistic regression models were used to estimate the unadjusted association between HIV infection and covariates selected based on our knowledge and the published literature. RDS-weighted prevalence and bootstrapped confidence intervals were calculated for all variables explored in regression modelling. Multivariate logistic regression models were built to estimate the adjusted association between current HIV status and covariates, with age forced into all models regardless of statistical significance. The Akaike information criterion (AIC) was used to favour the most parsimonious models. Bivariate and multivariate logistic regression models were also built with RDS weighting. p-values < 0.05 were used to indicate statistical significance. We further compared the associations between binary covariates using the Pearson chi-square test.
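As a rough sketch of this modelling step in Python (the original analysis used Stata; here statsmodels' var_weights stands in crudely for RDS probability weights, and all variable names and data are hypothetical):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated stand-in data; real covariates and weights come from the survey
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "age": rng.integers(18, 52, n),
    "receptive": rng.integers(0, 2, n),
    "hiv": rng.integers(0, 2, n),
    "w": rng.uniform(0.5, 2.0, n),   # stand-in for RDS-II weights
})

# Age is forced into every model; candidate models are compared by AIC
for covars in (["age"], ["age", "receptive"]):
    X = sm.add_constant(df[covars].astype(float))
    model = sm.GLM(df["hiv"], X, family=sm.families.Binomial(),
                   var_weights=df["w"])
    print(covars, "AIC:", round(model.fit().aic, 1))
```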
Results
A total of 295 men were screened in Douala, of whom 272 participated. In Yaoundé, a total of 246 individuals were screened, resulting in 239 participants. The median number of descendants per seed was 32 (range 6–99) in Douala and 31 (range 2–88) in Yaoundé. In Douala, the median number of waves per seed was 6 (range 1–8); homophily for HIV status was −0.04 among the HIV-negative group and 0.06 in the group living with HIV. In Yaoundé, the median number of waves per seed was 5 (range 1–9); homophily for HIV status was 0.004 for the HIV-negative group and 0.06 for the group living with HIV. In both samples, RDS network homophily was close to 0, which may indicate a close approximation to random recruitment. The majority (77.9%; 398/511) reported that they would have given a coupon to their recruiter (an indicator of the reciprocal ties assumption [32]).
Overall, the median age was 24 years (range 18–51, interquartile range (IQR) 21–28). In both cities, the majority had completed secondary education and were single. Sixty-two percent of MSM in the overall sample identified as bisexual, compared with 28.6% who identified as gay or homosexual and 9.8% as MSM or other. Ninety-eight percent of all participants reported having penile–anal intercourse in the past 12 months. Median age of sexual debut with another man was 19 (IQR 17–22) (Table 1).
Responses to questions on health service uptake, HIV knowledge, social support and sexual practices are presented in Table 2. Men in Yaoundé were much less likely to access CBO services targeting MSM than men in Douala (33.7% vs. 66.1%, χ²(1), p < 0.001). No difference was observed in ever receiving free condoms (74.2% vs. 68.8%, χ²(1), p = 0.2). In both cities, a large proportion of men reported sex with males and females (46.2%) and experienced STI symptoms in the previous year (34.6%). Inconsistent use of condoms with regular male partners (64.1%; 273/426) and casual male and female partners (48.5%; 195/402) was common, as were condom slippage and breakage (43.7%; 216/494). Ninety percent of MSM who used condoms also reported using lubricant. Of these men, 26.3% (124/472) reported using lotion, saliva, Vaseline or other condom-incompatible lubricants.
As presented in Table 3, crude and RDS-weighted HIV prevalence were 28.6% (73/255) and 25.5% (95% CI 19.1–31.9) in Douala and 47.3% (98/207) and 44.4% (95% CI 35.7–53.2) in Yaoundé. Age-stratified prevalence is presented in Figure 1. In Douala, only 17 (6.3%) MSM refused to be tested; in Yaoundé, this number was higher (13.4%, n = 32). An association between having a history of VCT and refusing testing in the study was observed in Yaoundé, although it did not reach statistical significance (15.2% vs. 4.9%, p = 0.08). Refusal was not correlated with age, education level, age of sexual debut, condom use, receptive sexual role preference, or number of male sexual partners in the past 12 months.
Discussion
The high HIV prevalence and inconsistent use of condoms and CCLs observed in this study highlight that MSM are a priority population for HIV prevention, treatment and care services in Douala and Yaoundé. Furthermore, these data suggest that HIV risks are not evenly distributed given the significant differences in HIV prevalence between cities and between MSM sub-populations [1]. The individual-level factors found to be associated with HIV infection indicate that future HIV programming and interventions in Cameroon should address both behavioural and structural hurdles relevant to MSM. Consistent with data from other countries of sub-Saharan Africa [15,16,24], condom breakage and slippage and inconsistent condom use were common in this sample. CCLs, which decrease the risk of condom breakage, were also used inconsistently [33], suggesting that increased access to quality condoms and CCLs is essential [34,35]. While maximizing the use of condoms and CCLs is necessary for decreasing HIV risks among MSM, it is unlikely to be sufficient on its own to change the trajectory of the epidemic, given the high transmission probability of HIV infection associated with UAI, as observed in other settings [1,34]. The prevalence of active syphilis was low, as observed in other countries in the region [4,5]; however, a high proportion of participants reported experiencing STI symptoms, highlighting another risk factor potentiating HIV transmission within sexual networks. Increasing the capacity for routine STI diagnosis, particularly for genitourinary infections, and linkage to treatment tailored towards MSM should be incorporated to support HIV-prevention programmes [15,36].
MSM in Douala who reported a preference for being the receptive partner during anal intercourse were more likely to identify as gay and to be living with HIV. This not only affirms existing data demonstrating the increased HIV acquisition risk associated with unprotected receptive anal intercourse [37] but also echoes previous studies conducted in African settings in which self-reporting as gay was associated with higher odds of living with HIV compared to other MSM [24,38]. Given that antiretroviral pre-exposure prophylaxis (PrEP) and rectal microbicides have been identified as research priorities for African MSM [39], and that rectal microbicides are currently in Phase II trials that are enrolling MSM from the African continent [34], evaluating the feasibility of novel biomedical interventions for sub-populations of MSM in Cameroon with significant HIV acquisition risks may be appropriate [34,40,41]. However, the cost-effectiveness of implementing such biomedical interventions requires further research [42]. In addition, expanding antiretroviral therapy (ART) for MSM living with HIV likely represents an important strategy for preventing the transmission of HIV to sexual partners. However, the limited availability of ART for people living with HIV who are currently eligible for treatment, which has been documented in Cameroon, also needs to be addressed in order for ART-based strategies for people at risk for the acquisition or transmission of HIV to be effective [43].
A significant proportion of the MSM in our sample were living with HIV by the age of 18–23, indicating a high risk for HIV acquisition for men under 18 in these settings [24,44]; however, men under 18 have traditionally been excluded from HIV surveillance and prevention programmes [1]. Confidential youth sexuality counselling hotlines, web-based education and social marketing campaigns may be useful in reaching younger MSM with HIV programmes [36,37].
While our study did not include a detailed assessment of social stigma, other studies have demonstrated that stigma limits the provision and uptake of HIV prevention, treatment and care for MSM in the region [18,19,27]. Uptake of services delivered by targeted CBO providers such as Alternatives-Cameroun in Douala was high in our study, suggesting that community-based approaches can spread information by leveraging networks of MSM despite the contextual barriers. Uptake of services was limited in Yaoundé, where refusal of HIV testing in the study was also higher; to the best of our knowledge, MSM-tailored HIV programmes there were new and in development at the time of this surveillance project. The historically limited services may partially explain the higher HIV prevalence observed among MSM in Yaoundé as compared to Douala, although these participants also tended to be older and to report more male partners, drug and alcohol use, and STI symptoms.
Data on the proportion of MSM living with HIV who were eligible for treatment, or who were actually on treatment, were not collected in this study. However, consistent data highlight the importance of addressing the needs of people living with HIV, including linkage to care, to optimize their own health and prevent onward transmission to other men and to women [45]. In Cameroon, only half of all patients eligible for treatment are estimated to be receiving ART, and ART stock outages at health facilities are frequent [43]. Given the significant stigma and discrimination that have been documented as affecting MSM in Cameroon, MSM living with HIV may be at higher risk of being unaware of their diagnosis or not achieving viral suppression [8,20,46]. MSM community groups have long been known to play essential roles in the HIV response, and the data collected here suggest that community-driven approaches should be scaled up to increase uptake of VCT and support linkage to HIV care, treatment and adherence support for those eligible [47,48].
The cross-sectional design of this study does not allow us to infer causality from the associations present in the data. There are several limitations to the generalizability of the HIV prevalence estimates reported in this study, which included individuals who reported receptive or insertive anal intercourse in the past 12 months. The generalizability of the results to MSM living in smaller urban centres and rural settings is unknown given that recruitment occurred in two large cities. Similarly, as our sample was predominantly young and educated, the results may not pertain to older MSM or individuals with lower educational status. Future studies could address these gaps. The modest sample size may have reduced our statistical power to detect other associations [49]. Due to the high refusal of HIV testing during the study in Yaoundé (13.4%), we were unable to assess the potential for bias in the HIV prevalence estimate from this city. However, RDS network homophily was close to 0, which may indicate minimal recruitment bias based on HIV status. Data on self-reported HIV status and the percentage of undiagnosed men were not available, which limits our interpretation of the association between knowledge of one's own HIV status and behavioural factors such as inconsistent use of condoms and CCLs. This requires further investigation in future studies. Although non-significant, the positive association between having been tested and refusing testing may suggest that individuals who are already aware of their HIV status may be underrepresented in our study.
Conclusions
These data provide results that can be integrated into HIV programmes for MSM in Cameroon and highlight the importance of targeted HIV prevention, treatment and care services that address all levels of HIV risk. Coordinating behavioural, biomedical and structural interventions, and supporting the work of local CBOs, will be key to ensuring that HIV-negative MSM receive regular VCT and appropriate prevention services, and that MSM living with HIV are effectively engaged in the continuum of HIV care. Success in the continuum of HIV care necessitates addressing the barriers to the uptake of care, such as concerns about confidentiality and healthcare-related enacted and perceived stigmas [8,20,36]. Protecting the dignity and rights of MSM in healthcare settings and beyond allows for a safe environment for individuals to receive optimal care to protect themselves and their partners [27,29]. Monitoring the success of the next generation of HIV-prevention approaches will require innovative implementation science exploring changes not only in individual-level risks, community viral load and HIV incidence, but also in social and policy-level factors including stigma, discrimination, violence and criminalization.
Experiences of Kenyan healthcare workers providing services to men who have sex with men: qualitative findings from a sensitivity training programme

A sensitivity training programme for healthcare workers was developed [8] with the aim of supporting more inclusive health services for MSM [9]. Implementation of Kenya's AIDS policies requires the ability of healthcare workers (HCWs) to deliver appropriate and sensitive services to MSM patients. Effective HCWs must have accurate knowledge of the sexual health issues of MSM, non-prejudicial attitudes and the behavioural skills to treat MSM patients [10]. However, HCWs in Kenya, as elsewhere in sub-Saharan Africa, rarely receive specialized training on how to provide care for MSM [11].
To address this gap in training service providers, Kenya's National AIDS and STI Control Programme (NASCOP) developed an educational training programme to strengthen HCWs' skills and capacity to provide non-judgemental counselling and HIV healthcare services for MSM. The training programme incorporated two learning modalities: a computer-facilitated training programme covering eight modules (MSM and HIV in sub-Saharan Africa; stigma; identity, coming out and disclosure; anal sex and common sexual practices; HIV and sexually transmitted infections; mental health, anxiety, depression and substance use; condom and lubricant use; risk-reduction counselling), in addition to facilitated group discussions among programme trainees about the programme content and relevant clinical experiences working with MSM. Both learning modalities offer complementary approaches to educational training. Computer-facilitated training modules can offer a standardized and disseminable approach to improving HCWs' knowledge and health service delivery skills for MSM patients [12], especially in settings such as Kenya where access to formal medical education is constrained. Supplementing the computer-facilitated training with opportunities for peer discussion and support among HCWs can potentially enhance the transfer of standardized learning to the workplace [13]. We conducted a preliminary pre-post evaluation of HCWs who participated in the programme [14]. Quantitative findings showed improvements in MSM-related knowledge and reductions in discriminatory attitudes towards MSM. Effects were most pronounced among HCWs who had low levels of knowledge and/or more extreme negative attitudes towards MSM at baseline, and among HCWs in clinical roles within governmental settings.
This article reports data from qualitative focus groups with participating HCWs, conducted prior to and three months after completion of the programme. The objectives of this analysis are to explore: (i) how HCWs characterized their professional challenges in serving MSM patients prior to the programme, (ii) how HCWs described the impacts of programme participation on their personal attitudes and professional capacities and (iii) how the computer-facilitated educational training programme can be improved.
Participants and intervention procedures
The study was conducted between October 2011 and March 2012 in four districts in coastal Kenya: Kilifi, Kilindini, Malindi and Mombasa. To recruit trainee participants, NASCOP issued announcements to 49 health facilities providing antiretroviral treatment in the four targeted districts. Announcements described the study as a two-day residential programme involving computer-facilitated training and group discussions on HIV and MSM. Volunteer participants completed informed consent procedures, and those who enrolled received 2,000 Kenyan shillings (approximately US$24) for travel and lodging adjacent to the training facility in Kilifi.
Participants were 74 HCWs from the four target districts. Fifty were female and 24 male; they included 22 clinicians, 43 nurses and counsellors, and nine administrators/managers. The average age was 34. All participants identified as Kenyan, 84% as Christian and 15% as Muslim. Eighty-six percent had no previous training about MSM or anal sexual practices. Three participants (two females and one male) were transferred to health facilities outside the study area after the initial training and could not participate in the follow-up focus groups.
A total of four groups were convened to participate in the two-day residential training (one group per district), with 18–19 participants per group. During Day 1, participants received a general overview of the programme, and each participant then independently self-administered the first four modules of the standardized, computer-facilitated training. Modules were designed to take up to two hours to complete. At the end of each module, participants answered a series of multiple-choice questions (up to 16 questions); to advance to the next module, participants were required to achieve a minimum score of 71% correct. After every two modules, participants engaged in a group discussion to reflect on the information and identify barriers and facilitators to improving HIV prevention and other services for MSM in Coastal Kenya. A member of the research team facilitated group discussions. During Day 2, participants completed the final four modules and group discussions. At the end of Day 2, participants were asked to discuss work strategies for improving the quality of clinical care and HIV/STD testing for MSM patients in their districts. Research team members included an MSM counsellor, a community liaison officer, a senior research counsellor and a social scientist; teams were supported by two MSM members from a local non-governmental organization. Research team members received a comprehensive three-day training on the intervention objectives and procedures, including didactics and role-play opportunities for discussion and problem solving.
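For readers tracing the training logistics, the gating rule described above (a quiz score of at least 71% to advance, with a group discussion after every two modules) can be sketched as a simple loop. Everything here, including the assumption that participants retake a failed quiz, is an illustrative reading of the description rather than the programme's actual software.

```python
MODULES = 8
PASS_MARK = 0.71          # minimum quiz score required to advance
DISCUSSION_EVERY = 2      # group discussion after every two modules

def run_training(take_quiz, hold_discussion):
    """take_quiz(module) -> fraction correct; hold_discussion(module) -> None."""
    for module in range(1, MODULES + 1):
        # assumed: a participant reviews and retakes until reaching the mark
        while take_quiz(module) < PASS_MARK:
            pass
        if module % DISCUSSION_EVERY == 0:
            hold_discussion(module)

# e.g. run_training(lambda m: 0.8,
#                   lambda m: print("discussion after module", m))
```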
Focus group discussions
Eight focus group discussions (FGDs) (each comprising 9–10 participants; two focus groups per training) were conducted with participating HCWs prior to the training and were repeated three months following completion of the training. Focus groups were semi-structured and facilitated by a member of the research team, with a co-facilitator present to observe and take notes. Discussion topics included: identification of subcategories of MSM and their characteristics; sexual practices of MSM and risks for HIV and STI transmission; practices for sexual history taking and sexual health examination with MSM; risk-reduction counselling for MSM; personal values and attitudes towards MSM; strategies to improve communication between HCWs and MSM patients. Most discussions were conducted in English, although participants were also encouraged to speak in Kiswahili depending on their preference and language skills. All discussions were audiotaped, transcribed and entered into NVivo. FGDs conducted in Kiswahili were translated into English.
Analyses of qualitative data followed the "framework approach" described by Ritchie and Spencer [15], which involves systematic coding to identify and define concepts emerging from the data, mapping the concepts, creating typologies, finding associations between concepts and seeking explanations from the data. Data were coded by two independent research team members to ensure that interpretations of quotes were consistent and that data analysis was rigorous and transparent. The main concepts emerging from the data included: secondary stigma, professional training and service barriers to MSM patients; types of and justifications for social discrimination towards MSM in Kenyan culture; invisibility and silence about homosexuality in Kenyan culture; and subjective theories about the origins and nature of homosexuality. Differences among coders were resolved by group discussion involving other members of the research team.
The study procedures were approved by the ethical review board at the Kenya Medical Research Institute. All participants provided written informed consent for the FGD.
Results
Discussion of MSM-related attitudes, beliefs and behaviours before training

Secondary stigma

For most participants, secondary stigma was a dominant concern. Secondary stigma refers here to negative judgements from peers and community members for being associated with MSM. Participants cautioned that treating MSM patients could expose providers themselves to community suspicion, as one explained: You know MSM, as he had mentioned, are regarded as outcasts. Therefore, if you offer to treat them in your clinic, the community will perceive it as . . . the clinicians are also MSM.
Owing to this fear, many HCWs described minimizing the amount of time spent with MSM patients. For example, one participant described having a basic willingness to serve MSM patients but allocating the shortest time possible: The fear of being associated, that's what is making us spend as little time with MSM clients when they come to our facilities. You will hurriedly clear him out.
However, fear of secondary stigma was not consistently expressed by all members of the discussion. A small subset of participants who had previous education and sensitization on MSM prior to the training reported comfort in attending to MSM patients. Consequently, these HCWs had become MSM patient advocates and educators in their clinics prior to engagement in this research study: I was trained . . . on issues to do with MSM. Last week, I met an MSM client who was HIV positive. It was in one of our departments and the nurse was like, '. . . you are the person who deals with these kind of clients'. I told her to refer the client in my office . . . Actually, I had to take [my colleague] for an MSM training. Her attitude has really changed and she is now a different person.
Inadequate professional training and resources

Participants acknowledged having little or no education about MSM health. Indeed, prior to the training programme, many HCWs expressed a sense of denial about the existence of MSM. For example, one reported that: I tend to reason differently when it comes to MSM. I sometimes tell myself, no, this doesn't exist; this is not possible.
Across multiple discussions, others questioned whether MSM are present in their local communities: Some of us are really green, we just hear stories on internet that some men are having sex with other men but we have never had an interaction with the MSM. MSM are unheard of in the place I come from.
HCWs who acknowledged the presence of MSM patients in their clinics described feeling inadequately prepared to provide services. Those with prior experience consulting MSM patients described specific challenges in diagnosing and treating rectal STIs, and argued for more appropriate guidelines: Of late, it's only a few individuals who have been trained in our facility. We don't have a guideline, yet we see them daily. We have no idea on how to manage infections affecting men who have sex with other men . . . Most of the medical personnel are not sensitized on issues to do with anal STIs and they are also not indicated in the STI charts. They only specify about urethral discharge, cervicitis, urethritis in men, PID [pelvic inflammatory disease] etc. It doesn't mention the anus.
Lacking the knowledge, skills and treatment guidelines for rectal STIs, HCWs often relied on guesswork and assumptions. Participants recognized the likelihood of underdiagnosing or misdiagnosing rectal infections transmitted through anal sex.
And when we are counselling or probing them about sex, we only ask them, 'Do you usually have sex?' When they say yes, we don't probe further to know the type of sex i.e., we just assume it is heterosexual. The medics are also not trained and if an individual comes with an anal complaint, they assume that it is haemorrhoids and refer them for surgery.
HCWs described how limitations in assessment forms reinforce the invisibility of MSM in their clinics. By not collecting information about same-sex behaviour or anal sex practices, these topics are reinforced as taboo issues that warrant silence and discomfort.
Most of the tools and the working conditions are not accommodative for this line of sexual orientation. I have never seen a tool in the CCC [comprehensive care centre] or the TB clinic asking for the clients' sexual orientation. So it's like, 'I don't need to know of what you do' . . . Therefore, the tools should be designed to capture the sexual orientation of a person so that the health workers can have a feel that it is a part of the health issues and not a gossip.
Additional resource limitations for treating MSM were discussed. HCWs reported on the inconsistent supply of lubricants for use during anal sex, and also described how the physical structure of the health facility hinders their ability to provide privacy and confidentiality for sexual health consultations.
The MSM usually come to the clinics and ask for the lubricants or condoms but you will find that the lubricants are not available; it's only the condoms. I think there is no confidentiality because of the way our health facilities have been structured, i.e., someone can bump in while you are attending to a client. You could be talking of sensitive issues but other staffs won't bother. They will sit on the other side and do their stuff. So the client might not be free to open up.
Personal and social homophobia

Many HCWs acknowledged holding prejudiced views towards MSM. A number of participants commented on how negative judgements towards MSM may influence the provision of services.
We perceive them negatively and feel that they don't deserve our services. Some health workers don't like to examine them. They claim that such infections are self-inflicted.
HCWs reflected on the influences of culture and religion on their treatment of MSM patients. When reminded of their professional obligation to provide effective services to all patients, they described internalized barriers that must be overcome.
I find it abnormal for a man to have sex with another man. It is both culturally and religiously unacceptable . . . Voices from religion or the community tell me that it is wrong. Professionally, I will have to handle that shock and look at possible ways of helping this person.
Participants reported a tendency to exhibit subtle forms of stigma and discrimination towards MSM patients, such as by maintaining body distance. Other times, HCWs explicitly showed disparaging treatment: When they seek medical assistance in our facilities, the same providers will shout, 'Look at him, he is telling me that he is having an anal STI; can you leave my room'. Instead of treating them with respect, they end up drawing their colleagues' attention.
However, some HCWs challenged those who expressed personal prejudice towards MSM. Participants who had prior exposure to MSM sensitization argued that HCWs have a professional duty and societal obligation to provide non-prejudicial services to MSM.
We as health workers feel that MSM issues need not to be discussed, they are regarded as outcasts. How then would we come up with a constructive discussion about people whom we feel should not be in the society at first place? In my opinion, I think this is the biggest obstacle. If we accept these people and treat them as our clients, then it will be of great help to the society.

Post-training discussion of HCWs' attitudes, beliefs and behaviours

Recognition of MSM in Kenya

A pervasive theme in post-training focus groups was the explicit recognition of MSM in Kenya. Many reflected on how their prior denial of MSM behaviour, and their previous belief that anal sex among men was negligible in Kenya, had inhibited their capacity to provide services. Participants felt "empowered" by the training to address HIV and other health needs of MSM, as one stated: I didn't ever believe that MSM were in existence but the training empowered me with a lot of knowledge and information on how to probe about issues of anal sex.
Participants described how the training enhanced their understanding of the complex interplay between homophobia, community denial of MSM and HIV transmission. Some advocated to local colleagues for the acceptance of MSM and educated them about the biological and behavioural circumstances that place MSM at heightened risk for HIV infection. One participant described: I went and gave the feedback to my colleagues immediately after the training and some were as if they have never heard such a . . . They used to hear about it but they were not sure whether it was a real, whether such people exist. Therefore, I had to make them understand that the practice is in existence and that's nature.
Professional responsibilities as a health provider

During follow-up focus groups, participants described their professional responsibility to treat all patients with equity and respect. They endorsed a basic value of professionalism and of treating MSM patients to the best of their ability. For many, this required a suspension of personal judgement in order to provide effective care: As a professional, I am not supposed to segregate them, whether I support homosexuality or have a different perception or judgment. As a clinician, my duty is to treat without imposing my values on the patient. That's the positive thing I got from [the training program] and it's what I'm doing now.

Some described witnessing discriminatory actions towards MSM in their facilities or observing breaches in patients' confidentiality. They reflected on how these experiences could foster distrust of HCWs and discourage MSM patients from seeking care when needed, thus perpetuating a cycle of HIV transmission. There was widespread consensus among group members that a concerted effort must be made to establish trusting rapport with MSM patients, and to take extra care to employ discretion at all times. As one participant articulated: I think the problem is that, the individuals we have attended to still want to see if they can trust us, if we can respect their privacy . . . As for now, it will take time because they are trying to internalize on our missions towards them and they will come out once they are convinced that you don't have an ill motive towards them.
During the follow-up focus groups, HCWs were asked to reflect upon and share their experiences, that is, work practices and attitudes towards MSM in their respective health facilities, and to reflect on strategies to change discriminatory actions towards MSM in their health facilities. Many participants stressed the importance of separating personal and religious values from professional ethics for the sake of HIV prevention in Kenya. While some felt the training had helped to normalize same-sex relations, others adamantly affirmed their aversion to MSM practices, but felt that they could compartmentalize their values to achieve the greater national public health goal.
The key message is almost the same. We are concentrating in breaking the transmission cycle among special groups, neglected groups. The bottom line is: we are not promoting but trying to help.
Sophisticated knowledge of risk in MSM
During the follow-up FGDs, participants exhibited a multifaceted understanding of the biological, behavioural and social influences that place MSM at risk for HIV. They described a better understanding of the processes through which unprotected anal sex contributes to HIV and STI transmission in both men and women, and the ways in which condoms and lubricants help to reduce risk. Moreover, many participants identified quality health education and counselling for MSM patients as integral to HIV prevention efforts in Kenya.
Participants generally recognized the societal pressures on MSM to conceal their sexual orientation, which MSM often mitigated by engaging in heterosexual relationships. They discussed the ways in which discrimination and lack of counselling and support services have hampered access to vital health services for MSM. The stigma endured by MSM in Kenya was consistently identified as an impediment to treatment, and many participants emphasized the need for HCWs to be thorough when examining MSM patients, who might not readily disclose their sexual practices: I think it is good to do an examination as far as STI is concerned. A client might tell you that he is having a problem in his private parts. Such a client will openly tell you the exact location of the problem when you take the initiative to examine him. Even if they go, they tend to be reluctant to disclose to clinicians that they are having anal infections. They end up getting the wrong medication and suffer in silence.
Ongoing challenges
Participants reflected on the challenges they will continue to face in affording appropriate health services to MSM. Many HCWs noted that time constraints and heavy workloads hinder their ability to deliver sensitive health services that MSM patients might require. Despite their desire to provide comprehensive health services to their MSM patients, some of the participants felt this was not always possible in practice: Sometimes, as much as you would like to give all the attention to the client, there is a workload issue as other patients will be waiting. You may want to give the best, but the patients and the workload are too much.
Secondary stigma was considered an ongoing challenge, and HCWs tasked themselves to confront discrimination and stigma towards MSM expressed by their professional peers. Education, institutional support and other monitoring mechanisms were mentioned as powerful means for mitigating the effects of secondary stigma on service delivery to MSM patients, but HCWs concurred that "it begins with openness, respect and understanding." HCWs emphasized the social challenges in targeting MSM for HIV preventative care. The marginalization of MSM, the belief that homosexuality runs contrary to cultural values and the fear of secondary stigma and resistance from fellow health professionals were regarded as impediments to the provision of care for MSM. As one participant stated: Personally, I can say that my values have changed, though not 100%. I am not sure of the exact percentage, but I have positively changed. As much as I would like to live and exercise my changed values, there are still so many challenges in the society. I would like to give comprehensive care to MSM, but the society is too negative about them. This is a very big blow, given the fact that I am the only changed person.
In light of this, many participants noted the need to replicate the training for HCWs not yet trained on MSM sensitivity issues. They unanimously remarked that the online sensitivity course is beneficial for skill development and that, in combination with follow-up group discussions, it allows trainees to interpret what they learn and connect it to daily practice.
All participating HCWs advocated for community-wide sensitization campaigns to reduce stigma and encourage awareness of HIV risk in MSM, expressing the need for the community at large to engage in ongoing and productive dialogue in the struggle against HIV in Kenya.
Discussion
This analysis provides qualitative insight into HCWs' attitudes and experiences with MSM prior to and following a computer-facilitated MSM sensitization training programme [15]; these findings can inform future revisions of the health workers' e-learning sensitization course. Primary concerns expressed at baseline included fear of secondary stigma, lack of professional education about MSM, and negative influences of personal and social prejudice towards MSM. The nature of discussions changed following the programme: participants acknowledged the presence of MSM in their clinics, endorsed the need to treat MSM patients to high professional standards, and demonstrated sophisticated awareness of the social and behavioural risks for HIV among MSM. HCWs advocated for continuing the training and inviting more health professionals to participate, but cautioned that exclusively targeting MSM in the programme title could deter participation. HCWs also commented on the need for ongoing community dialogue about MSM, but recognized that community-level change will take time.
The attitudes and beliefs expressed by participants before versus after the training reveal many of the challenges to service provision for MSM patients. In general, participants' personal beliefs about MSM and their endorsement of stigmatizing attitudes appear to have transformed following the programme. However, participants expressed ongoing concerns about secondary stigma and the influence of their professional peers' negative judgements towards MSM patients and, by association, towards themselves. Professional peers' negative and stigmatizing attitudes can potentially dilute the effects of the training on HCWs. Efforts to train larger cohorts of HCWs, establish networks of trained HCWs across different health clinics and change institutional norms towards MSM patients may be necessary to counter the effects of secondary stigma and achieve sustainable improvements.
Limitations to this research must be acknowledged. First, due to the nature of qualitative methodology, participants' responses might be influenced by social desirability and peer influences. Second, the findings reported here do not permit temporal, causal or quantitative inferences, although they correspond with the programme evaluation data reported in a related paper [14]. Third, due to the voluntary nature of participation, attitudes expressed by HCWs in this sample might not be representative of their peers and colleagues. Fourth, due to the active role of Kenyan health administrators in supporting this programme, the findings might not be replicable in areas where such support is lacking.
Conclusions
This is the first known qualitative evaluation study of an MSM sensitivity training in Africa, and it suggests that an online MSM sensitization training combined with group discussions can be a promising approach to improving health providers' awareness, attitudes and beliefs about the health needs of MSM patients. Quantitative evaluation results, which show similar findings, are reported in a companion paper [14]. Further research is needed to evaluate the programme in a controlled study and to examine the implementation processes associated with successful programme delivery. Perspectives and service delivery outcomes from MSM patients would enhance understanding of the impact of this training on patient interaction. A particular strength of the intervention was the incorporation of two complementary training modalities (computer-facilitated training and group discussions) to provide didactic content as well as opportunities for group reflection, feedback and support. In general, participants noted a transformation in their personal attitudes and endorsement of stigma towards MSM following the training. However, their comments revealed the continued challenges to providing services to MSM in the context of broader societal homophobia and secondary stigma among their peers; their comments also highlighted challenges in recruiting larger groups of HCWs into the training due to anxiety around secondary stigma. Findings reported here can inform further adaptations of the training, particularly those domains that might influence HCWs' willingness to participate and respond to the training (e.g., by emphasizing the professional responsibilities of all health providers) and that diminish the effects of secondary stigma (e.g., by providing opportunities for ongoing support among trained HCWs). Findings underscore the need to view HCWs as an integral, but not absolute, component in addressing HIV and other health adversities among Kenyan MSM. Trained HCWs might benefit from continued opportunities for peer support, to counter feelings of professional isolation and motivate engagement in best practices. As participants noted, multi-component programmes and long-term commitments are necessary to achieve the goal of providing appropriate, effective services to MSM.

Results: Crude HIV and syphilis prevalence estimates were 15.4% (RDS-weighted 12.5%, 95% confidence interval (CI): 7.3–17.8) and 5.3% (RDS-weighted 4.4%, 95% CI: 3.1–7.6), respectively. Ninety per cent (90.4%, unweighted) of HIV infections were reported as being previously undiagnosed. Participants were predominantly gay-identified (60.8%) or bisexually identified (36.3%); 50.7% reported recent concurrent relationships. Approximately half reported consistent condom use (always or almost always) with casual male partners, and proportions were relatively uniform across partner types and genders. The prevalence of perceived and experienced stigma exceeded 20% for almost all variables, 11.4% had ever experienced physical violence and 7% had ever been raped. Current age ≥25 years (RDS-weighted adjusted odds ratio (AOR) 3.9, 95% CI: 1.2–12.7), single marital status (RDS-weighted AOR: 0.3; 95% CI: 0.1–0.8) and age of first sex with a man <16 years (RDS-weighted AOR: 4.3, 95% CI: 1.2–15.0) were independently associated with HIV infection. Conclusions: Results demonstrate that MSM represent an underserved, at-risk population for HIV services in Malawi and merit comprehensive HIV prevention services.
Results provide a number of priorities for research and prevention programmes for MSM, including providing access to and encouraging regular confidential HIV testing and counselling, and risk reduction counselling related to anal intercourse. Other targets include the provision of condoms and compatible lubricants, HIV prevention information, and HIV and sexually transmitted infection treatment and adherence support. Addressing multiple levels of HIV risk, including structural factors, may help to ensure that programmes have sufficient coverage to impact this HIV epidemic among MSM.
Keywords: HIV; men who have sex with men (MSM); behavioural risks; stigma; Malawi; prevention.
Introduction
Recent years have witnessed an increased awareness of the high burden of HIV among men who have sex with men (MSM) across the globe [1–3]. Emerging research suggests that the transmission efficiency of HIV through receptive anal intercourse is approximately 18 times higher than that of penile-vaginal sexual contact, increasing the risk of HIV acquisition among MSM during sexual intercourse [4,5]. National HIV strategies and funding priorities, however, remain inequitable in many countries [6,7], particularly where structural factors, such as the criminalization of homosexuality, play critical roles in the level of research and programming available to MSM [8,9].
The HIV response in Malawi has focused on the prevention of heterosexual and vertical transmission of HIV to counteract the HIV incidence rates of 2–4% observed among adults in the 1990s. Today, the epidemic remains a generalized one, with an estimated 8.0% HIV prevalence among adult men [10]. As in neighbouring countries, assessments of specific risk factors for the acquisition and transmission of HIV, including transmission among MSM and other populations such as female sex workers, have been limited in the country [11]. Criminalization and stigmatization of homosexuality, as in other settings [8,12,13], are likely underlying factors for the limited targeted research and programming in the Malawian context. An earlier study documented HIV prevalence at approximately 21% [14], individual risk for HIV infection associated with increased participant age and inconsistent condom use [14], and high levels of violence and perceived stigma [15].
Another exploratory study examined socio-demographic and sexual behaviour characteristics among 97 MSM in central and southern Malawi. Although HIV prevalence was not assessed, the study found evidence of high-risk behaviours such as inconsistent condom use (32.5%), transactional sex (23.7%), low exposure to HIV messaging (17.5%) and a limited history of HIV testing (58.8% ever tested) [16]. Although these studies were the first and only ones to elucidate socio-behavioural factors among MSM in Malawi, they were rapid assessments and served to highlight areas for future research and prevention.
In response to the global epidemic of HIV among MSM, combination prevention packages have been put forth as a key method to curb the HIV epidemic among MSM [17,18]. To inform the content and scale of a combination HIV prevention intervention (CHPI) for MSM in Malawi, we conducted this study to estimate HIV prevalence, characterize associations of prevalent HIV infections, and evaluate barriers and facilitators to uptake of HIV prevention services among MSM in Blantyre, Malawi. Research was conducted in collaboration with a community-based organization, the Centre for the Development of People (CEDEP), and the Malawi College of Medicine, University of Malawi.
Study population and setting
This cross-sectional assessment was conducted from August 2011 to March 2012 in Blantyre, Malawi. Eligibility requirements for participation included being born male, being aged 18 years or older, being fluent in Chichewa or English, having reported anal sex with another man in the last 12 months, having no prior participation in this study, and providing informed verbal consent to participate. Study activities were conducted in private rooms of CEDEP's study site and implemented by staff from CEDEP, which provides HIV prevention activities for MSM in Malawi, and the Malawi College of Medicine. All staff members were trained in confidentiality and human subjects protection, qualitative and survey research and respondent-driven sampling (RDS) methods.
Recruitment method
Participants were recruited via RDS, a chain recruitment method often used to achieve more representative samples of hard-to-reach populations [19]. Recruitment began with 10 purposively selected seeds who were each provided with three study-specific coupons with which to recruit peer MSM from their social network into the study. Initiation of seeds was staggered over the duration of the study, taking into consideration potential propagation failure by some seeds and eventual die-out of the chains. Seeds were recruited from the pool of MSM who were involved in local HIV prevention programmes or had participated in prior formative research, and they were selected to represent a range of characteristics, including age, education, employment and sexual identity. Individuals who were recruited by seeds and enrolled in the study were then provided with three study coupons for further recruitment of peers. This process continued until the target sample size was reached. Participants were reimbursed K1500.00 (US$5.00) for transportation costs for participation in the study and K500.00 (US$1.50) for recruitment of each peer into the study. A full description of traditional RDS methodology can be found elsewhere [20]. Netdraw software (Analytic Technologies) was used to monitor RDS recruitment [21].
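As a rough illustration of the coupon mechanics described above, the toy simulation below propagates three coupons per enrolled participant until a target sample size is reached. All parameters are placeholders (the redemption probability is loosely based on the 48% coupon return rate reported in the Results), and the uniform-recruitment assumption is a simplification, not a property of the study.

```python
# Toy RDS coupon simulation; illustrative assumptions throughout.
import random

def simulate_rds(n_seeds=10, coupons=3, p_return=0.48, target=350, seed=1):
    random.seed(seed)
    enrolled, wave = n_seeds, 0
    frontier = n_seeds  # participants currently holding unredeemed coupons
    while frontier and enrolled < target:
        wave += 1
        # each frontier member hands out 'coupons'; each is redeemed
        # independently with probability p_return
        recruits = sum(1 for _ in range(frontier * coupons)
                       if random.random() < p_return)
        recruits = min(recruits, target - enrolled)
        enrolled += recruits
        frontier = recruits
    return enrolled, wave  # (total enrolled, waves needed)

print(simulate_rds())
```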
Sample size
The sample size calculation was powered on the assumed 85% effectiveness of condoms in preventing the transmission of HIV during intercourse [22]. Thus, we assumed that approximately 30% of the sample would be consistent condom users and that they would be 85% less likely to be living with HIV than the 70% who are not consistent condom users. Based on previous research, we estimated that the HIV prevalence in the population would be about 20%, equating to 27% among non-consistent condom users, 4% among consistent condom users and a 30% population prevalence of consistent condom usage. We used a design effect of 1.5 [23], power set at 80% and a two-sided significance level of 0.05, which resulted in an effective sample size estimate of 345 participants, for which we targeted 350 MSM.
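The arithmetic in this paragraph corresponds to a standard two-proportion sample-size calculation. The sketch below shows one normal-approximation version of it; since the paper does not spell out its allocation and correction choices, this hypothetical code illustrates the method rather than reproducing the 345 figure exactly.

```python
# Two-proportion sample-size sketch (normal approximation); illustrative only.
from math import sqrt
from scipy.stats import norm

def n_total(p1, p2, f1, alpha=0.05, power=0.80, deff=1.0):
    """Total N to detect prevalence p1 (in fraction f1 of the sample)
    vs. p2 (in the remaining 1 - f1), inflated by a design effect."""
    za, zb = norm.ppf(1 - alpha / 2), norm.ppf(power)
    pbar = f1 * p1 + (1 - f1) * p2                      # pooled prevalence
    v0 = pbar * (1 - pbar) * (1 / f1 + 1 / (1 - f1))    # null variance * N
    v1 = p1 * (1 - p1) / f1 + p2 * (1 - p2) / (1 - f1)  # alt. variance * N
    n = (za * sqrt(v0) + zb * sqrt(v1)) ** 2 / (p1 - p2) ** 2
    return deff * n

# 27% HIV prevalence among the 70% inconsistent condom users vs. 4% among
# the 30% consistent users, inflated by the stated design effect of 1.5:
print(round(n_total(0.27, 0.04, 0.70, deff=1.5)))
```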
Measures
Participation included a structured survey instrument and a biological assessment of HIV and syphilis. Trained interviewers administered surveys in the Chichewa language, following pilot testing. Measures included sociodemographic characteristics, substance use, mental health and depression symptoms, sexual relationships and disclosure of orientation or sexual practices to family and peers. Measures of sexual practices included practices with men and women, including anal, oral and vaginal sex; number of sexual partners and partner characteristics; concurrent relationships, defined as "two sexual partnerships at the same time or two ongoing sexual partnerships (male and/or female genders)"; and transactional sex (purchased or sold). We measured HIV knowledge and prevention, including aspects of condom and condom-compatible lubricant use; HIV testing and counselling exposures; and access to and uptake of health services. Human rights measures included experiences of physical and sexual violence, experienced and perceived stigma and history of imprisonment. Recall periods were lifetime, the last 12 months or both, and they are specified in the results tables. The development of survey questions, recruitment methods, coupons and study procedures was informed by formative research that was conducted in May–July 2011 [24].

Biologic specimens

Following completion of the interview, participants proceeded to HIV and syphilis testing. A trained nurse from the College of Medicine conducted HIV testing, blood specimen collection and pre- and post-test counselling. Blood-based rapid HIV tests were conducted simultaneously using the Determine HIV-1/2 and Uni-Gold rapid HIV tests (manufactured, respectively, by Inverness Medical, Chiba, Japan, and Trinity Biotech, Bray, Ireland). Participants received their HIV test results and post-test counselling within 15 minutes of collection. Separate specimens were collected for confirmatory testing of discrepant or indeterminate HIV rapid tests using Western blot, in accordance with Malawian National Guidelines [25]. Approximately 5 ml of whole blood was collected for TPHA (Treponema pallidum haemagglutination) syphilis testing (Bio-Rad, Hercules, CA, USA). Resource constraints prevented the use of a non-treponemal test, which would differentiate active from past syphilis infections. Confirmatory HIV and syphilis tests were analysed at the Malawi College of Medicine laboratory in Blantyre. Participants returned within one to two weeks to receive their syphilis test results. Participants testing positive for HIV and/or syphilis were referred to the local hospital or to the Johns Hopkins antiretroviral therapy and sexually transmitted infection clinic located at Queens Hospital. Participants were provided with information about local health centres that had, as part of the study, received training for the provision of services to MSM. One trained team member (EU) provided counselling services to MSM participants as needed.
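The testing algorithm just described (two simultaneous rapid tests, with Western blot reserved for discrepant or indeterminate results) can be summarized as a small decision rule. The sketch below is a schematic illustration only, not a clinical tool; the string labels are invented.

```python
# Schematic of the parallel rapid-test logic; illustrative, not clinical.
def classify_hiv(determine, unigold, western_blot=None):
    """determine/unigold in {'pos', 'neg', 'indeterminate'};
    western_blot is the confirmatory result, if already available."""
    if determine == unigold and determine in ("pos", "neg"):
        return determine  # concordant rapid tests suffice
    # discrepant or indeterminate: defer to confirmatory Western blot
    return western_blot if western_blot else "pending confirmation"

print(classify_hiv("pos", "pos"))   # -> 'pos'
print(classify_hiv("pos", "neg"))   # -> 'pending confirmation'
```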
Analysis
Johns Hopkins University conducted secondary analysis of the collected data. The principal outcome of interest was HIV diagnosis, with predictor variables that included demographics (education, age, number of children and marital status), socioeconomic variables, lifetime residence in urban or rural locations, recent sexual behaviours, human rights exposures, HIV prevention methods, health-seeking behaviour and laboratory markers of syphilis infection. Variable-specific individualized weights, which take into account estimates of individual degrees, were computed by a data-smoothing algorithm using RDS for Stata [26]. The estimated weights were used in univariate RDS-weighted analyses. HIV status individualized weights were used in the bivariate and multivariate RDS-weighted analyses. Bootstrapped 95% confidence intervals (CIs) were computed using 1000 iterations for the estimated descriptive statistics [23]. Homophily, a measure of the extent to which respondents prefer to recruit from their own group rather than at random, was estimated where appropriate and is presented in Table 1.
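Two quantities named in this paragraph, the 1000-iteration bootstrapped CIs and homophily, are illustrated generically below. The actual estimates were produced with RDS for Stata [26], so the function names, the simple resampling scheme and the unadjusted homophily index here are illustrative assumptions only.

```python
# Plain-Python analogues of the cited Stata computations; illustrative only.
import numpy as np

def bootstrap_ci(values, weights, n_iter=1000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for a weighted proportion (0/1 values)."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    rng = np.random.default_rng(seed)
    n, stats = len(values), []
    for _ in range(n_iter):
        idx = rng.integers(0, n, n)  # resample respondents with replacement
        stats.append(np.average(values[idx], weights=weights[idx]))
    return tuple(np.quantile(stats, [alpha / 2, 1 - alpha / 2]))

def homophily(recruiters, recruits):
    """Naive same-group homophily: 0 = random mixing, 1 = fully in-group.
    RDS software uses degree-adjusted refinements of this idea."""
    recruiters, recruits = np.asarray(recruiters), np.asarray(recruits)
    same = np.mean(recruiters == recruits)
    expected = sum(np.mean(recruiters == g) ** 2
                   for g in set(recruiters.tolist()))
    return (same - expected) / (1 - expected)
```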
To develop the statistical model, we first carried out bivariate analysis to assess the association of HIV status with the control variables (Table 4). Demographic variables were included in the multivariate logistic regression model regardless of the estimated strength of their bivariate association with HIV status. Selected non-demographic variables were included in the multivariate model if the chi-square p-value of their association with HIV status was ≤0.25. Some variables, such as HIV testing, were not included in the multivariate model due to collinearity. The final model, presented in Table 4, includes demographics and the variables retained after goodness-of-fit tests. All statistical analyses were conducted using Stata 12.1 [27]. Results provided in the text report RDS-weighted estimates (unless otherwise specified), while tables display unweighted and RDS-weighted estimates as well as 95% CIs for weighted estimates. Table 4 presents the results of bivariate and final multivariate analyses, including unweighted and RDS-weighted odds ratios (ORs) and adjusted ORs (AORs) for the final multivariate model.
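The covariate-screening rule in this paragraph (demographics always in; other covariates admitted only at chi-square p ≤ 0.25) is mechanical enough to state as code. The sketch below is a hypothetical rendering: the analysis was actually run in Stata 12.1, and the column names are invented.

```python
# Hypothetical screening step mirroring the rule described above.
import pandas as pd
from scipy.stats import chi2_contingency

def screen_covariates(df, outcome, demographics, candidates, p_cut=0.25):
    """Return covariates for the multivariate model."""
    kept = list(demographics)  # demographics enter regardless of significance
    for cov in candidates:
        table = pd.crosstab(df[cov], df[outcome])
        _, p, _, _ = chi2_contingency(table)
        if p <= p_cut:
            kept.append(cov)
    return kept

# e.g. screen_covariates(df, "hiv", ["age", "education"],
#                        ["rape_ever", "lubricant_type"])
```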
Human subjects
Research activities were reviewed and approved by the Malawi College of Medicine Ethics and Research Committee and for secondary analysis by the Johns Hopkins Bloomberg School of Public Health Institutional Review Board.
Results
A total of 338 MSM (including original seeds) were recruited via RDS and enrolled into the study, reaching 19 waves of recruitment. Of the 10 seeds, five recruited participants; one recruitment chain was responsible for the recruitment of 70% of the study population. Three recruitment chains reflect later seed initiation. A total of 706 coupons were distributed, with a return rate of 48%. The majority of participants reported recruitment by a friend (60.5%) or sex partner (32.3%). Median MSM network size was 8 (range 1 to 800). Figure 1 displays the RDS recruitment diagram, highlighted by HIV diagnosis. We used this method to monitor recruitment and to assess whether HIV diagnosis inhibited recruitment, which appeared not to be the case.
Participants' median age was 25.1 years (range: 18 to 49). Based on RDS-weighted estimates, 51% were unemployed and 21.6% had ever been in jail or prison. Eighty per cent identified their gender as male. Sixty-one per cent identified as gay or homosexual, and 36.3% reported a bisexual identity. Sixteen per cent were married to or cohabitating with a woman. Table 1 displays sociodemographic characteristics.
The crude prevalence of HIV infection in this population was 15.4%, with an RDS-weighted estimate of 12.5% (Table 1). The majority, 90.4% (unweighted), of these infections were previously undiagnosed; these participants had either self-reported as negative or reported never having been tested for HIV infection. Positive syphilis diagnosis was low at 4.4%. Table 2 presents sexual practices, partner characteristics and social exposures. Only 18.1% had ever disclosed sexual practices or orientation to their family, and similarly few (18.9%) had ever disclosed to a health provider. Participants reported a mean of four male partners (range: 1 to 50), and 31% reported having female partners in the last 12 months. Half of the population reported concurrent sexual relationships, and, among those in a relationship, 61.3% believed their partner was also involved in a concurrent relationship. Prevalence of perceived and experienced stigma and discrimination exceeded 20% of the population for almost all variables, 7.0% had ever been raped and 11.4% had ever experienced physical violence.
Responses to questions on knowledge of HIV risk, prevention methods and practices are reported in Table 3. Approximately half of the participants with casual male partners (n = 256) reported using condoms always or almost always with casual male partners; frequencies were approximately similar across partner type (e.g., casual or main) and partner gender. Approximately 44.3% had never been tested for HIV. Among those ever tested for HIV infection, 45.5% (unweighted) had not been tested within the last year. Several sociodemographic variables were associated with HIV infection in the bivariate analysis (Table 4). These included current age ≥25 years (RDS-weighted OR: 8.1, 95% CI: 2.9–22.2), single marital status (RDS-weighted OR: 0.2, 95% CI: 0.1–0.4) and having more than one child (RDS-weighted OR: 5.3, 95% CI: 1.8–15.6). Age <16 years at first sex with a man was positively but not significantly associated with HIV infection in bivariate analysis (RDS-weighted OR: 1.7, 95% CI: 0.4–7.5). Considering water-based lubricants to be the safest lubricant (RDS-weighted OR: 0.9, 95% CI: 0.2–3.6) and use of water-based lubricant (RDS-weighted OR: 0.6, 95% CI: 0.2–2.0) showed protective but non-significant associations. The final multivariate model included age, marital status, number of children, knowledge of risk related to positioning (insertive or receptive anal intercourse), lubricant type used, age of first sex with another man, history of rape, number of male anal or oral sex partners and other known confounders such as employment, education and syphilis diagnosis. Of these, current age ≥25 years (RDS-weighted AOR: 3.9, 95% CI: 1.2–12.7), single marital status (RDS-weighted AOR: 0.3, 95% CI: 0.1–0.8) and age of first sex with a man <16 years (RDS-weighted AOR: 4.3, 95% CI: 1.2–15.0) were independently associated with HIV infection.
Discussion
This cross-sectional study, the most comprehensive yet conducted among MSM in Malawi, describes the high prevalence of HIV infection as well as the limited uptake of HIV prevention, testing and care services among MSM in Blantyre, Malawi.
HIV prevalence was high among MSM, and nearly all HIV infections were among men who reported being unaware of their HIV status. Only slightly more than half of the population reported ever having been tested, and only half of those had been tested within the last year, potentially explaining this level of undiagnosed HIV infections. Knowing one's status is increasingly important for HIV prevention. Novel HIV interventions, including pre-exposure prophylaxis for HIV-uninfected men [28,29] and early treatment for people living with HIV [30], represent a new generation of HIV-status-dependent interventions. Awareness of one's HIV status has also been associated with decreased self-reported prevalence of high-risk sexual practices that are associated with HIV transmission [31]. Recent US Centers for Disease Control guidelines have suggested more frequent testing (every 3 or 6 months) based on individual assessment of sexual risk behaviours [32], a strategy that may also be relevant for MSM in Malawi. Young age at first sexual intercourse with a man (<16 years) was independently associated with HIV infection in this population, with almost four times greater odds of HIV infection compared to the referent group. This association may suggest biologic susceptibility during physical development, high-risk sexual behaviours and lack of access to or low use of condoms at a young age, and/or an association with duration of sexual activity [33,34]. Likewise, the association of prevalent HIV infection with older current age may be due to higher cumulative risk exposures for the acquisition of HIV.
However, estimating the duration of sexual activity is challenging, as sexual behaviours are not static but vary across the life course and as partnerships change [34]. While study-related factors such as low power and potential misclassification of behaviours may partially explain non-significant findings, broader factors such as the high background prevalence of HIV in the MSM population [35], biologic susceptibility of the rectal mucosa [36] and network-level characteristics may be more important drivers of HIV transmission and acquisition risks among these men [35,37]. Nonetheless, this study described a population reporting high-risk behaviours, suggesting the need to ensure accessibility of HIV prevention interventions across ages [38–40]. These behavioural risks, combined with the high proportion of undiagnosed HIV infection in this study, also suggest a high likelihood that someone with a high viral load within a sexual network is driving onward transmission [41,42]. Future research among MSM in Malawi to better characterize different risk strata among MSM, including reported sexual practices and sexual network characteristics, is needed to better tailor the content of interventions and enable the identification of infection. While addressing the unique needs of the individual is fundamental, stigma and discrimination have commonly been reported as structural barriers to the uptake of services [43,44]. Experienced and perceived stigma as well as physical and sexual violence were common among MSM in this study, consistent with earlier quantitative and qualitative studies in Malawi [15,24]. Stigma has been shown to limit health-seeking behaviours and use of HIV prevention methods, disclosure of sexual practices to health providers, and providers' liberty to provide services to MSM [14,15,24,45,46]. The need to keep male-male partnerships hidden may lead to more frequent, short-term relationships and increased high-risk behaviours [24]. Such responses to stigma and social pressures may explain the high prevalence of concurrency, the high-risk sexual practices reported in this study, the proportion of men who are married to or cohabitating with women and the protective effect of single marital status in this analysis. Addressing these social issues is a necessity for improving access to and uptake of effective HIV prevention interventions [8].
Taken together, these data demonstrate that MSM are an underserved and important population for targeted HIV prevention interventions; MSM may specifically benefit from the CHPI that we subsequently developed based on the quantitative results presented here. Mathematical models have shown that high levels of coverage among MSM (i.e., 60–80%) are required to change the trajectory of the HIV epidemic among MSM, and such findings are likely to be relevant in Malawi [2,47,48]. To address low coverage of prevention options among Blantyre MSM and the limitations of single interventions, comprehensive packages of interventions that include behavioural, biomedical and structural approaches may be the most effective approach to reducing HIV among MSM [17]. Such interventions may be feasible in Malawi and may have the same positive impact on sexual transmission that has been observed in other settings, including countries where same-sex practices are criminalized [30,47,49,50].
The method of intervention delivery is critical to the success of HIV prevention programmes in the context of complex social environments. The success of RDS recruitment suggests that interventions leveraging existing peer networks, which have demonstrated efficacy in other settings [51,52], may serve as a feasible approach to providing and supporting HIV prevention interventions for MSM in Malawi. Addressing stigma in healthcare settings may improve provider-patient relationships, facilitate disclosure and meaningful discussion of risk practices, and foster linkage to HIV testing and care [53]. While the subsequent feasibility assessment of the CHPI programme for MSM in Blantyre will be informative for understanding how a comprehensive package may address individual social and behavioural risks for HIV infection, broader social acceptance of MSM may take time and remains a crucial step towards improving the health status of MSM and thus all Malawians [8].
Limitations
The cross-sectional nature of this study limits the investigation of temporal associations and thus the causality of the exposures and HIV-related outcomes. Additional limitations relate to the ability to fully assess correlates of prevalent HIV infection through behavioural surveys, which may have limited the significance of the findings in this study. This may also be amplified by potential response bias related to asking sensitive questions of a highly stigmatized population. We attempted to address these limitations to the fullest extent possible, including using lifetime and recent recall periods, developing survey questions based on formative research and prior research studies among MSM, and taking measures to ensure the confidentiality and privacy of participants and to inform them of these privacy control measures. This study provides the equipoise needed for prospective cohorts of MSM to better characterize HIV incidence and, ultimately, for appropriately powered HIV prevention and implementation science studies to assess effective strategies for HIV risk reduction.
There are limitations associated with the use of RDS methodology [54]. Specifically, there is debate around appropriate interpretation of the measures of association and optimal strategies to handle variance in studies using RDS. For example, use of water-based lubricants appeared to be independently protective in the model that did not adjust for RDS, but this association is no longer significant with the introduction of the increased variance associated with RDS adjustment in the model. Despite these analytic challenges, RDS represents a relevant sampling strategy to obtain a diverse sample of a hidden population in the absence of a sampling frame or a sufficient number of established venues [19].
Conclusions
This study presents an assessment of individual, sexual-network and structural factors and their relationship with prevalent HIV infections among MSM in Blantyre, Malawi. The burden of HIV is high among these men, with the vast majority apparently unaware of their HIV status. Approaches rooted in engagement in the continuum of HIV care will be central moving forward in Malawi [55]. Addressing stigma and discrimination should also represent a core programmatic and policy element of the HIV response, to ensure that these efficacious approaches are translated into effective ones and to optimize the health of MSM living with HIV in Malawi while preventing onward HIV transmission. | 2016-05-12T22:15:10.714Z | 2013-02-12T00:00:00.000 | {
"year": 2013,
"sha1": "56542ec80c4f9caa9ef4d95cf2280e179bbfe98f",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.7448/IAS.16.4.18972",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "56542ec80c4f9caa9ef4d95cf2280e179bbfe98f",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
238758831 | pes2o/s2orc | v3-fos-license | Machine learning approaches for predicting geometric and mechanical characteristics for single P420 laser beads clad onto an AISI 1018 substrate
The final mechanical and physical properties should be predicted in tandem with the bead geometry characteristics for effective additive manufacturing (AM) solutions for processes such as directed energy deposition. Experimental approaches to investigate the final geometry and the mechanical properties are costly, and simulation solutions are time-consuming. Artificial intelligence (AI) systems are explored as an alternative, as they offer a powerful approach to predicting such properties. In the present study, the geometrical properties as well as the mechanical properties (residual stress and hardness) of single bead clads are investigated. Experimental data is used to calibrate multi-physics finite element models, and both data sets are used to seed the AI models. The adaptive neuro-fuzzy inference system (ANFIS) and a feed-forward back-propagation artificial neural network (ANN) system are utilized to explore their effectiveness in the 1D (discrete values), 2D (bead cross-sections), and 3D (complete bead) domains. The prediction results are evaluated using the mean relative error measure. The ANFIS predictions are more precise than those from the ANN for the 1D and 2D domains, but the ANN had less error for the 3D scenario. These models are capable of predicting the geometrical and the mechanical property values very well, including capturing the mechanical properties in transient regions; however, this research should be extended to multi-bead scenarios before a conclusive “best approach” strategy can be determined.
Additive manufacturing
Additive manufacturing is the process of building parts from a computer-aided design (CAD) model by successively adding material layer by layer, realizing the part with minimal excess material. Usually, a heat source is applied to melt or cure the raw materials as they are being formed into the final component shape. Conversely, conventional fabrication methods, which remove material via milling or other machining processes, introduce much waste, although no significant heat is introduced into the process. There are seven main categories of AM technologies: vat photopolymerization, material jetting, binder jetting, material extrusion, powder bed fusion, directed energy deposition, and sheet lamination [1]. The directed energy deposition method, which is the focus of this research, is one of the metallic additive manufacturing processes, in which a machine tool or a robot with a deposition nozzle traverses around an object and deposits metal powder onto existing surfaces. Material is melted using a laser, electron beam or plasma arc upon deposition [1].
Directed energy deposition additive manufacturing
Directed energy deposition (DED) is a subset of the additive manufacturing process family. It is a metal additive process in which blown powder or a wire is fed through a nozzle, a power source or energy type is introduced to melt the material, and the beads are deposited onto a layer or substrate. Components can be repaired as well as built up using DED processes. Laser clad overlay operations are in the DED domain and are usually utilized for coating surfaces to improve the performance of the surface or to repair components such as moulds. In this process, a laser beam melts the material while it is being distributed onto a surface. A thin layer on the surface of the substrate melts to form a bond between the clad and the substrate; this is the dilution zone (Fig. 1). This research focuses on single bead depositions of 420 stainless steel onto a mild steel substrate. The input parameters for a laser cladding operation play a significant role in the quality of the bead. As a result, selecting and controlling the input parameters to achieve the desired results is a concern for manufacturers. Each process parameter in the laser cladding process, including the power, travel speed, material feed rate, the contact tip to work piece distance, and the focal length, has a distinct effect on the geometry and the mechanical properties of the bead [2][3][4][5][6]. Several experimental investigations have been found in the literature that analyse the effect of process parameters on clad bead geometry and clad mechanical characteristics. Chen et al. investigated the effects of the process parameters, including laser power, scanning speed, pre-placed powder thickness, laser spot diameter, and multi-track overlapping ratio, on the quality characteristics of ceramic coatings on Ti6Al4V substrates. Using L27(3^13) orthogonal arrays designed with the Taguchi method, they conducted multi-track cladding experiments to investigate the geometric properties and microhardness of coatings [6]. Zareh and Urbanic investigated the effects of varying the percentage overlaps between multiple beads, ranging from 30 to 47%. Using experimental measurements, they showed that the percentage overlap impacts the hardness and the depth of the melt pool [7]. Zhao et al. conducted a single-factor experiment with 125 groups to investigate the impact of process parameters on the cross-sectional area of the YCF104 clad track. It was found that the height of the clad track is largely determined by scan speed, while laser power is the most significant factor for determining the width and depth of the heat-affected zone [8].
Understanding the process parameter to geometric relationships is important for process planning scenarios, but the mechanical and physical properties also need to be considered. Therefore, comprehensive prediction models are required for effective process planning. Due to the high thermal gradients and the rapid solidification rate, residual stresses of high magnitude can be generated. A high amount of residual stress leads to non-uniform plastic deformation of the substrate and the bead geometry. This is one of the most important issues when analysing the mechanical properties of a bead. Residual stresses could lead to cracks within the piece in addition to undesirable distortion; therefore, it is important to achieve a laser clad bead with a minimum amount of residual stress. Consequently, the process parameters play an important role in the magnitude of the induced residual stress, the distortion development, the final mechanical properties, and the shape of the bead. Using experimental approaches to investigate the bead geometry and mechanical properties is costly and time-consuming, and only provides data at specific data collection points. Typically, transient regions are not considered. Consequently, using experimental data to seed simulation models and machine learning strategies is the focus of this work.
Finite element analysis (FEA) and analytical models have been utilized to predict the mechanical properties as well. Mirkoohi et al. [9] proposed a thermomechanical analytical model to predict the in-process elastoplastic hardening thermal stress and strain, which can model the thermal stress of a single track either in powder bed systems such as laser powder bed fusion (LPBF) or in powder feed systems such as directed metal deposition (DMD). Nazemi and Urbanic [10] proposed a three-dimensional finite element model (FEM) for a powder-feed laser cladding process to predict the mechanical and physical properties, but the geometry needed to be predefined, and the simulation approaches were computationally costly even for simple single bead and linear multiple bead case studies. For more complex and realistic components, the simulations might take weeks or months of processing time. Several researchers have explored the residual stress formation and its pattern using FEA methods for the LPBF AM process [11][12][13]; however, the computational cost is considerable. Therefore, a hybrid approach, in which data from the experimental, simulation, and machine learning domains are fused, is being investigated for predicting the results of DED processes. Machine learning is a tool that uses data covering a variety of conditions to establish an implicit relationship between inputs and outputs. Thus, with the use of machine learning, the impacts of the process parameters on the mechanical and geometrical properties of the parts can be obtained directly, in a computationally efficient manner, without solving the mechanical equilibrium equations once the models are trained. Machine learning techniques are currently becoming popular in the fields of materials science and manufacturing.
Machine learning approaches have been widely implemented to investigate and predict the geometry of the bead. A range of process parameters implemented in various deposition methods are used as inputs to generate and train the mathematical models and predict the geometrical data [14,15].
Thermal profiles for a deposited part were predicted by Ren et al. in 2019. They implemented a recurrent neural network and a deep neural network to correlate the toolpath (laser scan pattern) with the thermal profiles. They used finite element simulations for the data generation and introduced a unique data set structure to train the neural network based on the geometry of the part and the laser scanning strategies [16].
Mechanical properties, including tensile and compressive stresses, were calculated using an artificial neural network as a tool to link process parameters such as layer thickness, orientation, raster angle, raster width, and air gap to predicted compressive and tensile stresses in specimens built by fused deposition modelling and metal arc welding [17,18]. Only discrete values were considered in this work.
Wu et al. in 2020 predicted residual stresses considering four process parameters including the arc power, scanning speed, substrate preheat temperature, and the substrate thickness in wire-arc additive manufacturing. In their approach, these four process parameters are the inputs, and the longitudinal residual stress at a centre point is the model output. Their solutions predict residual stresses with 97% accuracy [19].
Residual stress profiles in stainless steel pipe girth welds were predicted by developing artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models, and the performance of the models was evaluated. It was concluded that the ANN trained using Levenberg-Marquardt and the ANFIS based on a hybrid algorithm were far superior to the ANN model trained with resilient backpropagation and the ANFIS model using the backpropagation method [20].
Although some research has been performed in the welding domain, there is a lack of research related to performing a comprehensive analysis that considers the effects of the laser cladding input parameters on both the geometrical and mechanical characteristics of a laser clad bead simultaneously. The goal of this research is to evaluate the effectiveness of a machine learning (ML) approach for predicting the bead geometry (a discrete value) and selected mechanical and physical characteristics, which can vary throughout the bead. Residual stresses are emphasized in this research as they can lead to undesirable distortion and cracks. We need to (i) understand the residual stress characteristics, (ii) link them to the bead geometry and input parameters, and (iii) develop predictive models. Hardness is also considered. The types of effective predictive models need to be determined; therefore, two ML strategies are utilized. Data fusion approaches are applied to generate data sets for these analyses. This is described in the next section.
Research methodology
The research methodology consists of two main steps: (i) data collection and (ii) machine learning model development for predictive models. The process flow is shown in Fig. 2.

Fig. 2 The general process flow for this research

Geometrical, Vickers microhardness, and residual stress characteristics were collected for single laser clad beads of P420 stainless steel powder deposited onto low alloyed carbon steel plates for a wide range of process settings. A coaxial powder injection laser cladding process was employed for the experimental activities. In addition to the data from the experiment sets, calibrated simulation models were developed to seed the ML-based mathematical models. A multi-perspective analysis has been performed by using the ANN and the ANFIS models to predict geometric and mechanical properties. Both the ANN and ANFIS models are validated using the experimental and numerical data. The performance of the ANFIS and ANN approaches in predicting the residual stresses is also compared.
This study has comprehensively assessed ML prediction strategies for three different domains: (i) the 1D domain, in which discrete geometry and mechanical characteristics are predicted by ANN and ANFIS models; (ii) the 2D domain, where the residual stress and hardness along the middle cross-section of the laser clad bead and the substrate are predicted by the ANN and ANFIS models; and (iii) the 3D domain, where the residual stress and hardness are predicted throughout the bead, considering the entire bead geometry and the substrate, using the ANN and ANFIS models. The 1D and 2D models contain simplifying assumptions to provide an initial performance overview for predicting cladding bead characteristics. For the 1D approach, the average values within the bead are considered for hardness, and the maximum and minimum values are considered for the residual stress. This reduces the computational cost significantly, but it should be noted that the average value does not necessarily represent critical information. A 2D-based approach was considered to establish initial relationships between the variable residual stresses, locations, and the process parameters. However, this consideration is limited by the assumption that the thermal gradients occur in the depth direction only. In the 2D model, it is assumed that the thermal gradient is constant along the bead and has no effect on the induced residual stresses; in reality, however, the residual stress varies throughout the bead. Consequently, an extended approach considering the variable data along the bead length has been investigated. The 3D model explores a comprehensive big-data predictive solution, which increases the data collection time and computational cost. However, the residual stress (or hardness) can be predicted at each point within the bead, including the start-stop transient zones. Therefore, critical information can be predicted with confidence. Overall, deep learning methods are applied in all of the mentioned models, which reduces the computing time. Figure 3 illustrates the domains being considered for this research. The yellow dots indicate the measurement points for residual stress in a single laser clad bead.
Experimental setup
Single-pass bead sets of P420 were deposited onto AISI 1018. A comprehensive design of experiments approach was taken to explore five process parameters at five different levels [21]. A coaxial deposition head, mounted on a six-axis serial robot and employing a 4 kW diode laser, was used to deposit the clad beads. Three replicates were performed for each experiment. Argon gas was employed to protect the melt pool from the atmosphere and served as the conveying medium for the powder. The process parameters are listed in Table 1.
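To give a sense of the scale of such a five-factor, five-level study, the sketch below enumerates the full factorial design space in Python. The parameter names and level values are illustrative placeholders rather than the values of Table 1; fractional designs such as Taguchi arrays would sample only a subset of these runs.

```python
from itertools import product

# Five factors at five levels gives 5**5 = 3125 full-factorial combinations.
# The level values below are illustrative placeholders, not those of Table 1.
levels = {
    "laser_power_kW":   [1.0, 1.5, 2.0, 2.5, 3.0],
    "travel_speed_mms": [6, 8, 10, 12, 14],
    "feed_rate_gmin":   [10, 15, 20, 25, 30],
    "tip_distance_mm":  [8, 9, 10, 11, 12],
    "focal_length_mm":  [280, 290, 300, 310, 320],
}

runs = list(product(*levels.values()))
print(len(runs))   # 3125 runs for the full factorial
```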
The metallographic operations, i.e., grinding and polishing of the cross-sectional samples were done manually according to the Struers application notes for the stainless steel materials [21]. The observations were performed using a Leica Q5501W light microscope. The bead width, depth of penetration, and height of the beads were measured using Image-Pro Plus software. Figure 4 shows the details of the geometry measurements.
The microhardness of the beads was measured by a Buehler microhardness tester using a load of 200 g and a 12-s dwell time. The measurements were performed at the centre of the bead at a 100-μm interval from the top of the bead, through the dilution and HAZ, and into the substrate material. Two measurements were performed at a 250-μm distance, from each side of the first indentation. To measure the stress introduced to the beads, a Proto X-ray diffraction system (Lab 002/LXRD 06,024) was used for the first two samples presented in Table 2. Six points were taken along the centreline of the bead where 0 is located at the top of the bead. The measurements were calculated through the bead, the dilution zone, the heat-affected zone, and the substrate material. Data for the as-clad and post heat treatment conditions were collected and used to calibrate the simulation model described in the next section [10].
Simulation model
The laser cladding simulation was performed by a coupled thermal-metallurgical-mechanical analysis with the FEA software SYSWELD (version 15). The results of the thermal analysis were used as input to the mechanical and metallurgical analyses. In the FE model, the heat source was defined, and the boundary conditions were applied to the heat equation. The solver in the SYSWELD software solves a system of differential equations using a generalized trapeze method. The material chemical composition for the substrate and deposition materials is shown in Table 2. A three-dimensional moving heat source was applied. The thermal properties, such as the thermal conductivity, specific heat, and coefficient of thermal expansion, and the mechanical properties of the material, such as Young's modulus, Poisson's ratio, yield strength, and strain-hardening curves, are depicted in Fig. 5 [10].
The FE model was meshed using eight-noded hexahedron elements, four-noded surface elements, and two-noded linear elements for the clad lines. The mesh for the single-track cladded specimens consists of 16,364 elements. The size of the uniform element of the substrate was 0.5 × 0.5 × 2 mm [10]. The input parameters used to set up the different iterations of the FE models of the single bead are those used in the experimental setup, as shown in Table 1. It is noted that the measured bead geometry is used to create the mesh model for the FEA. Figure 6 demonstrates the geometry of the substrate used in the FE model.
The residual stress in the middle section of the clad bead was measured through the depth of the bead. Figure 8 shows the residual stress measurement through the depth of the single-track bead. The residual stress changes from tensile at the top surface of the bead to compressive through the depth towards the substrate, and then back to tensile.
Hardness correlates to yield strength and can be utilized for strength characterization. The Vickers microhardness was measured experimentally using a Buehler microhardness tester. A load of 200 g and a loading time of 12 s were applied vertically along the centre of the bead, with indentations spaced 150 μm apart, from the bead surface through to the substrate [10]. Figure 9 shows the hardness measurement through the depth of the single-track bead specimen. The hardness decreases moving from the top surface of the bead through the depth to the substrate.

Fig. 8 The measurement of residual stress in the FE model of the single bead clad and the substrate

Fig. 9 The measurement of hardness in the FE model of the single bead clad and the substrate
Development of mathematical models
Artificial neural networks, like biological neural networks, contain neurons and activators to learn from supervised data. Among the various methods that have been proposed for artificial neural networks, the feed-forward back-propagation method is well-suited to physical applications. This network normalizes the input domain, assigns weights to the inputs, and sends the sum of the inputs with their associated weights to the neurons of the next layer. The weight assigned to an input or neuron represents the importance of that input or neuron. The activator then maps the calculated value for each neuron to an interval between minus one and one. The weighted outputs of the neurons are finally sent to the last layer. In addition, this method uses biases to reduce disturbance and control the computation. The calculations start with an initial guess of the weights and biases, which are then modified by gradient-based optimization. The objective function, the difference between the actual output values and the predicted outputs, is minimized.
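As a rough illustration of the scheme just described (normalized inputs, weighted sums, a squashing activator, biases, and gradient-based updates), the following minimal NumPy sketch trains a small feed-forward network by back-propagation under a mean-squared-error objective. The 5-40-7 shape anticipates the 1D architecture described later; the data are random placeholders, not measurements from this study.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 5))   # normalized inputs (placeholders)
Y = rng.uniform(-1, 1, (200, 7))   # targets (placeholders)

W1, b1 = rng.normal(0, 0.1, (5, 40)), np.zeros(40)   # hidden layer weights/bias
W2, b2 = rng.normal(0, 0.1, (40, 7)), np.zeros(7)    # output layer weights/bias

lr = 0.01
for _ in range(1000):
    H = np.tanh(X @ W1 + b1)            # forward pass, tanh "activator"
    P = H @ W2 + b2                     # linear output layer
    E = P - Y                           # prediction error
    gW2 = H.T @ E / len(X)              # back-propagated gradients (MSE loss)
    gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1.0 - H ** 2)    # tanh derivative
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1      # gradient-descent updates
    W2 -= lr * gW2; b2 -= lr * gb2
```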
As a result, the neural network can specify an analytical mathematical model for correlating inputs to physical outputs. However, this method cannot identify the physically meaningful parameters that drive the outputs and provides a model based solely on the data the user supplies. Nevertheless, once the inputs are introduced, the neural network is able to correlate the inputs to determine the outputs. Also, there is no unique criterion for determining the number of layers and neurons that gives the best prediction.
The adaptive neuro-fuzzy inference system was also explored to predict residual stress and hardness in 1D, 2D, and 3D domains for the single laser clad bead. The ANFIS model combines the best features of a neural network system and a fuzzy system. The structure of an ANFIS model is demonstrated in Fig. 10. An ANFIS is used to map input characteristics to the output through the input membership functions, TSK-type fuzzy if-then rules, and output membership functions [22][23][24]. In the ANFIS model developed here, the input parameters which were used as an input to train the ANN model were employed.
At the computational level, ANFIS can be regarded as a flexible mathematical structure that can approximate a large class of complex nonlinear systems to a desired degree of accuracy [23,24].
To clarify, assume that the fuzzy inference system has two inputs, x and y, and one output f. For the first-order Sugeno fuzzy model, a typical set of two fuzzy if-then rules assumes the form:

Rule 1: if x is A1 and y is B1, then f1 = p1x + q1y + r1

Rule 2: if x is A2 and y is B2, then f2 = p2x + q2y + r2

where pi, qi, and ri are the linear output parameters.

• Layer 1: Every node in this layer contains membership functions described by the triangular function [25]:

μ(x) = max(min((x − ai)/(bi − ai), (ci − x)/(ci − bi)), 0)
where ai, bi, and ci are referred to as the premise parameters.
• Layer 2: Every node in this layer is a fixed node and calculates the firing strength of a rule by multiplication.
• Layer 3: Every node in this layer calculates the normalized weight; the outputs of this layer are called normalized firing strengths.
• Layer 4: The output of this layer is a linear combination of the inputs multiplied by the normalized firing strength.
• Layer 5: This layer is the summation of the layer 4 outputs.
Fig. 10 The ANFIS model structure based on Takagi-Sugeno [18]

The adjustment of the modifiable parameters is a two-step process. First, the consequent parameters are identified by least squares estimation, and then the premise parameters are updated by gradient descent [23,24].
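The five-layer evaluation can be made concrete with a small forward-pass sketch: a generic two-input, first-order Sugeno ANFIS with two triangular membership functions per input, giving four rules. The membership and consequent parameters below are untrained placeholders, not values from this study; a real ANFIS would fit them with the hybrid least-squares/gradient-descent procedure described above.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with premise parameters a <= b <= c."""
    return max(min((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def anfis_forward(x, y, mf_x, mf_y, consequents):
    mu_x = [tri(x, *p) for p in mf_x]                      # layer 1: grades
    mu_y = [tri(y, *p) for p in mf_y]
    w = np.array([mx * my for mx in mu_x for my in mu_y])  # layer 2: firing strengths
    wn = w / w.sum()                                       # layer 3: normalization
    f = np.array([p * x + q * y + r
                  for p, q, r in consequents])             # layer 4: f_i = p_i x + q_i y + r_i
    return float(np.sum(wn * f))                           # layer 5: weighted sum

# Untrained placeholder parameters; x and y are assumed normalized to [0, 1].
mf_x = [(0.0, 0.25, 0.6), (0.4, 0.75, 1.0)]
mf_y = [(0.0, 0.25, 0.6), (0.4, 0.75, 1.0)]
consequents = [(1.0, -0.5, 0.2)] * 4   # (p_i, q_i, r_i) for each of the 4 rules
print(anfis_forward(0.5, 0.5, mf_x, mf_y, consequents))
```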
The mean relative error (MRE), used for the performance comparison between the ANFIS and ANN models [24], is defined by the following formula, Eq. (4):

MRE = (1/N) Σ(i=1..N) |Xi(exp) − Xi(pred)| / |Xi(exp)| × 100%   (4)
In this equation, X(exp) denotes the actual data, X(pred) stands for the data predicted by the mathematical models, and N is the number of data points.
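A direct transcription of Eq. (4) in Python, assuming the percentage convention written above:

```python
import numpy as np

def mre(actual, predicted):
    """Mean relative error of Eq. (4), expressed as a percentage."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return 100.0 * np.mean(np.abs(actual - predicted) / np.abs(actual))
```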
Mathematical models in the 1D domain
To calculate the geometry, hardness, and residual stress in the 1D domain, the following feed-forward network architecture (an MLP network) was developed. The input parameters, including the powder feed rate (X1), laser power (X2), focal length of the lens (X3), laser speed (X4), and the contact tip to work piece distance (X5), were connected to the hidden layer.
The hidden layer used a tan-sigmoid activation function (40 neurons), and the output layer used a linear activation function (7 neurons). The predicted outputs were width, height, penetration, dilution, hardness, tensile residual stress, and compressive residual stress. Here, a 70-15-15 (training-testing-validation) division of the data is used to obtain the best prediction results. A schematic view of the proposed artificial neural network (ANN) is shown in Fig. 11.
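A minimal sketch of this 5-40-7 configuration and the 70-15-15 split using scikit-learn; the original implementation is not specified here (a MATLAB toolchain would be typical), so the solver and library are stand-ins, and the data are random placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (200, 5))   # placeholder process parameters X1-X5
Y = rng.uniform(-1, 1, (200, 7))   # placeholder targets (width, ..., stresses)

# 70-15-15 training/testing/validation split.
X_tr, X_tmp, Y_tr, Y_tmp = train_test_split(X, Y, test_size=0.30, random_state=1)
X_te, X_va, Y_te, Y_va = train_test_split(X_tmp, Y_tmp, test_size=0.50, random_state=1)

# 40 tanh hidden neurons; scikit-learn's regression output layer is linear.
model = MLPRegressor(hidden_layer_sizes=(40,), activation="tanh",
                     max_iter=5000, random_state=1)
model.fit(X_tr, Y_tr)
print(model.score(X_va, Y_va))     # R^2 on the held-out validation set
```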
Mathematical models in 2D domain
In this application, the model is trained using the input data from the finite element models and validated by comparing its output with the experimental data and the FEA results. As with the 1D model, the input layer, consisting of the five input parameters X1-X5, was connected to three hidden layers. The hidden neurons were then connected to the output layer. The back-propagation scheme used the Levenberg-Marquardt algorithm as the training function; the MRE was considered as the performance criterion during the training of this model. The data set used to train the model was divided into a training set (85% of the data) and a test and verification set (15% of the data). Figure 12 shows the structure of the neural network model. The model with a 20-10-10 architecture was selected as having the best performance for predicting the residual stress and hardness.
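A comparable sketch for the 2D configuration; scikit-learn provides no Levenberg-Marquardt solver (MATLAB's trainlm is the usual reference), so a quasi-Newton solver is substituted, and the extra per-point depth coordinate in the feature vector is an assumption made for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Inputs: five process parameters plus a depth coordinate locating the point
# in the cross-section (the coordinate input is an assumption here);
# target: residual stress at that point. All values are placeholders.
X2d = rng.uniform(-1, 1, (500, 6))
y2d = rng.uniform(-1, 1, 500)

# 20-10-10 hidden architecture; lbfgs is substituted for Levenberg-Marquardt.
model_2d = MLPRegressor(hidden_layer_sizes=(20, 10, 10), activation="tanh",
                        solver="lbfgs", max_iter=2000, random_state=0)
model_2d.fit(X2d, y2d)
```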
Mathematical models in the 3D domain
In regions that have been exposed to multiple heating and cooling cycles, the thermal history results in completely non-uniform residual stresses. Any symmetry assumptions could therefore lead to unrealistic results for residual stresses and hardness. Residual stress and hardness values change locally, and in order to avoid failure, the entire domain of a part should be simulated before manufacture to determine whether the part has the expected performance characteristics.
The purpose of this section is to explain how to analyse residual stresses and hardness at every single point in a single bead specimen using the ANN and ANFIS models. A comparison is made between the ANN and ANFIS models with regard to the residual stress and hardness predictions.
The feed-forward back-propagation algorithm is used in the current neural network. The input and output data, as well as the general architecture of the feed-forward back-propagation neural network, are shown in Fig. 13. The x, y, and z coordinates are three geometric features, and the laser speed and power are two process parameters for determining residual stress. However, the structure of the data available in this study dictated the selection of these variables; this does not mean that these are the only parameters that affect the residual stresses.
For the data collection, a three-dimensional finite element simulation of one bead was modelled for nine different sets of process parameters. This part contained 115,000 finite element cells, and each cell is regarded as one data sample. Laser speed and laser power are the two variables: three laser power settings (1.5, 1.8, and 2 kW) were combined with laser speeds of 8, 10, and 12 mm/s, resulting in nine distinct simulation results. As a result, the maximum number of samples is 115,000 × 9 = 1,035,000. However, 115,000 data points were selected from the bead and heat-affected zone to train the ANN and ANFIS models.
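The assembly of such a point-wise data set can be sketched as follows. The node coordinates and per-cell stress fields are random placeholders standing in for the SYSWELD exports; the five-column feature layout (x, y, z, power, speed) mirrors Fig. 13.

```python
import numpy as np
from itertools import product

powers = [1.5, 1.8, 2.0]   # kW
speeds = [8, 10, 12]       # mm/s

# Placeholders standing in for the FE exports: node coordinates and the
# per-cell residual stress field of each of the nine simulations.
n_cells = 115_000
nodes = np.random.rand(n_cells, 3)                                   # x, y, z
stress = {pv: np.random.randn(n_cells) for pv in product(powers, speeds)}

rows, targets = [], []
for p, v in product(powers, speeds):
    feats = np.column_stack([nodes, np.full(n_cells, p), np.full(n_cells, v)])
    rows.append(feats)
    targets.append(stress[(p, v)])

X3d = np.vstack(rows)          # shape (1,035,000, 5): x, y, z, power, speed
y3d = np.concatenate(targets)
```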
After a systematic assessment to determine the number of layers and neurons, it was found that, for this data structure, a neural network with one hidden layer is able to predict the residual stress and hardness with good performance. A summary of the type of network, the number of samples, and the features investigated for the 3D model is presented in Table 3.
1D modelling results
The 1D model performance results are shown in Fig. 14, where a regression plot of the network outputs against the target values is shown. The overall fitness of the network is equal to 0.976, which represents a very good fit. This is aligned with the performance characteristics observed when considering geometric characteristics only [5]. In Fig. 15, the actual values and the network outputs for the bead hardness are plotted. The network has been consistent in following the trends in the data. However, for some samples, there are minor differences between the actual data and the network output. The residual error is mostly below 50 Vickers (HV), as demonstrated in Fig. 16. Overall, the network has been able to provide a good correlation to the experimental hardness data, and more data should improve the results. As shown in Fig. 17, the ANN model has been able to generate relatively accurate predictions for the residual stress. In Fig. 18, the residual error for the residual stress has been plotted. The absolute error between the actual and the predicted stress values is below 5 ksi for most points.
The ANFIS model was used to predict the residual stress and the hardness as a multi-input single-output model. The performance of both the ANFIS and ANN models was compared by calculating the MRE (Tables 4 and 5).
The results show that the performance of the ANFIS model in predicting residual stress and hardness is better than that of the neural network. Figure 19 demonstrates the residual stress–laser speed diagram predicted by ANFIS and ANN alongside the actual data. Figure 20 shows the microhardness–laser speed diagram predicted by ANFIS and ANN alongside the actual data. The performance of both the ANFIS and ANN models was compared using the MRE (Table 4). The calculated MRE indicates that the ANFIS method has superior performance.
Although the maximum and minimum residual stresses could be predicted with a relatively high level of confidence, no location data is included. A 2D or 3D model is required to illustrate this.
2D modelling results
The 2D cross-section ANN and ANFIS results are explored for 10- and 26-sample data sets for selected curves. The residual stress results are shown in Fig. 21. The MRE is evaluated for the ANFIS and ANN models (Table 6).
For both the 10- and 26-sample data sets, it was observed that the MRE for the ANFIS model is lower than that for the ANN results. It was also observed that the ANFIS model converges in less time than the ANN model. The results predicted by these models agreed with the output of the finite element model and showed good prediction accuracy [26].
The regression plot is displayed to validate the network performance; it shows the network outputs with respect to the targets for the training, validation, and test sets.
As expected, the fits for the 10-sample data set (Fig. 22) are not as good as those for the 26-sample data set. For the 26-sample data set (Fig. 23), the fit is reasonably good for all data sets, with R > 0.91 in each case.
The microhardness values predicted by ANFIS and ANN, together with the experimental data for the 2D domain, are shown in Fig. 24.
The MRE values for the ANFIS and ANN models are summarized in Table 7. Less variation occurred for the hardness predictive modelling in the 2D domain, which differs from what is observed for the 1D domain MRE results. For both the residual stress and hardness results, the calculated MRE indicates that the ANFIS model has superior performance.
3D modelling results
This section discusses residual stress and hardness prediction in the 3D domain, followed by a sensitivity analysis of these two AI approaches. A feed-forward back-propagation configuration with one hidden layer was used for the ANN, and two membership functions were used to achieve acceptable predictions with the ANFIS model. However, in the 3D domain, there is a large data set with several near-zero values of residual stress. Therefore, using the MRE for data with near-zero values may result in a high error. For example, a 5 ksi stress prediction may occur for an actual value of 1 ksi, which results in a relative MRE of 400%. However, a prediction of 5 ksi for a 1 ksi data value is extremely good when the maximum residual stress in the part is 450 ksi. Therefore, for the purpose of calculating the MRE, we selected residual stress data whose absolute value exceeded 100 ksi to avoid misinterpretation of the models' performance. Because the hardness values are far from zero, all the data points are used to calculate the MRE. First, the effect of the number of neurons on the performance of the neural network is examined for transverse residual stress prediction (Figs. 25 and 26). Figure 25 shows that the least error occurs with a model with 9 neurons, which leads to a 17% error rate. Figure 26 compares the results for different numbers of neurons, along with the ANFIS results, to the actual numerical data in the middle section of the bead. The actual data are shown by discrete solid black circles. It can be seen that the green-coloured case (9 neurons) aligns well with the actual data. Therefore, for the rest of the results, one hidden layer with 9 neurons was used for the neural network models.
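The thresholding rule described above amounts to a small variation of the MRE helper; the 100 ksi cut-off follows the text, while the function name and interface are illustrative.

```python
import numpy as np

def mre_thresholded(actual, predicted, threshold=100.0):
    """MRE (%) restricted to points where |actual| exceeds `threshold`
    (here 100 ksi), so near-zero stresses do not inflate the error."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mask = np.abs(actual) > threshold
    return 100.0 * np.mean(np.abs(actual[mask] - predicted[mask])
                           / np.abs(actual[mask]))
```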
To better represent the outcome of the ANN and ANFIS models in the three-dimensional domain, the predicted residual stress and hardness values are shown for the middle section of the bead. The following two graphs compare the ANN and ANFIS predictions of the residual stress and hardness to the actual numerical results (Figs. 27 and 28). It is clearly illustrated that, right at the actual data points (emphasized by black circles), the ANN results are closer.
For the sensitivity analysis, the derivatives of the transverse residual stress with respect to the length of the bead (z-direction), the laser power, and the laser speed are considered (Figs. 29, 30, and 31). Figure 29 shows that the derivative of the residual stress with respect to the z-direction is zero or near zero for both the ANN and ANFIS. This means that the transverse residual stress does not vary along the length of the bead, so the two-dimensional assumption can be applied to analyse one-bead cases. However, in the case of Fig. 30, even as the heat energy changes slightly, the residual stress can vary by up to 110 ksi, and this sensitivity decreases as the applied heat energy rises. It appears that the ANFIS approach does not show predictable trends for sensitivity. More research needs to be done to determine whether patterns could emerge. Figure 31 demonstrates the sensitivity of the residual stress with respect to the laser speed. As expected from a physical point of view, transverse residual stresses are affected by the laser speed. The ANN results in this figure indicate that this sensitivity remains constant across different speed levels, while the ANFIS results show variations. The ANN and ANFIS methods can be used to predict characteristics throughout the bead, but the goodness of the prediction varies between the ANN and ANFIS approaches. The ANFIS model results show patterns similar to the collected data but do not align well with the collected data points. The ANN modelling approach does not appear to have this issue, but overshoot regions are observed. Data sufficiency may be the issue for both approaches. Other issues will arise when developing predictive models for complex multi-bead scenarios, but this research shows the potential of an AI predictive modelling strategy using experimental and simulation data.
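Sensitivities of this kind can be approximated numerically from any trained surrogate. The sketch below assumes a scikit-learn-style model exposing a predict() method and uses a central finite difference, which is one plausible way to generate the derivative curves discussed above.

```python
import numpy as np

def sensitivity(model, X, idx, h=1e-3):
    """Central finite-difference derivative of the model output with respect
    to input column `idx` (e.g. the z-coordinate, power, or speed column)."""
    X_hi, X_lo = X.copy(), X.copy()
    X_hi[:, idx] += h
    X_lo[:, idx] -= h
    return (model.predict(X_hi) - model.predict(X_lo)) / (2.0 * h)
```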
Summary and conclusions
Much experimental data must be collected to develop comprehensive prediction models for the DED process, and simulation approaches are computationally intensive. Therefore, machine learning approaches using data from the experimental and simulation domains have much potential. Using a data fusion approach, machine learning-based predictive models for a single laser clad bead in the 1D, 2D, and 3D domains are explored. Each solution has its unique model structure; therefore, the nature of the problem being considered influences the structure of the solution.
For the 1D domain, discrete geometry and properties are predicted. Averaged values are used for hardness, as previous analyses have shown that the hardness is consistent within the bead but changes in the dilution and heat-affected zones. The residual stress varies throughout the bead, which shows the need for a 2D or 3D approach for this property. Interestingly, with minimal data, the model predictions are generally accurate for all the parameters being assessed; however, the prediction data is limited in scope. More data will improve the model accuracy, but the limitations will remain.
When assessing the residual stress model in the 2D domain (the bead cross-sections), the ANFIS model generated less error. The prediction has good accuracy, but there is a chance of missing the maximum residual stress, since the analysis covers only one cross-section of the bead. It cannot be assumed that the residual stress patterns in the centre of the bead are consistent throughout. Therefore, the study is expanded to the 3D domain, where the residual stress values along the bead are predicted. This data set included variability throughout. Although the ANN and ANFIS models can predict results with very good accuracy, issues related to both solution approaches are raised. Data sufficiency is one issue, as properties vary between the nodes, as shown in Figs. 8 and 9. Training a neural network with non-dimensional geometry parameters could lead to more comprehensive results, and this is future work. This research will be expanded to include multiple bead scenarios with different percentage overlaps, tool path deposition strategies, and bead stacking, which introduces another level of complexity.
"year": 2021,
"sha1": "27fb0961a9e95844b5af426ae8993b37478f1884",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-682870/latest.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "2e4187468d681e4f87c34912f4f427c67d43df9a",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
49526354 | pes2o/s2orc | v3-fos-license | Molecular mechanism of SRP-dependent light-harvesting protein transport to the thylakoid membrane in plants
The light-harvesting chlorophyll a/b binding proteins (LHCP) belong to a large family of membrane proteins. They form the antenna complexes of photosystem I and II and function in light absorption and transfer of the excitation energy to the photosystems. As nuclear-encoded proteins, the LHCPs are imported into the chloroplast and further targeted to their final destination—the thylakoid membrane. Due to their hydrophobicity, the formation of the so-called ‘transit complex’ in the stroma is important to prevent their aggregation in this aqueous environment. The posttranslational LHCP targeting mechanism is well regulated through the interaction of various soluble and membrane-associated protein components and includes several steps: the binding of the LHCP to the heterodimeric cpSRP43/cpSRP54 complex to form the soluble transit complex; the docking of the transit complex to the SRP receptor cpFtsY and the Alb3 translocase at the membrane followed by the release and integration of the LHCP into the thylakoid membrane in a GTP-dependent manner. This review summarizes the molecular mechanisms and dynamics behind the posttranslational LHCP targeting to the thylakoid membrane of Arabidopsis thaliana.
Introduction and overview of LHCP transport to the thylakoid membrane
The capture of light energy is essential for biomass production through photosynthesis. In organisms ranging from green algae to vascular plants, photosystems I and II are associated with antenna complexes that consist of the light-harvesting chlorophyll a/b binding proteins (LHCPs) and are specialized for the harvesting and transfer of energy to the photosystems. LHCPs are integral thylakoid membrane proteins with three membrane-spanning regions and represent the most abundant proteins in this membrane system. LHCPs are encoded in the nucleus, translated in the cytosol, and targeted to the chloroplast via N-terminal transit sequences. Upon import into the chloroplast, which is mediated by two translocons in the outer and inner envelope membrane (TOC/TIC) (Jarvis 2008;Paila et al. 2015;Bölter and Soll 2016;Sjuts et al. 2017), the transit sequence is cleaved off (Richter and Lamppa 1999) (Fig. 1). The question of how the LHCPs are translocated through the stroma and subsequently inserted and assembled in the thylakoid membrane has been a subject of study for approximately three decades. In early studies, it was shown that a proteinaceous stromal factor is required for the formation of a soluble, stable ~ 120 kDa LHCP intermediate termed the transit complex, which traverses the stroma before thylakoid insertion (Fulsom and Cline 1988;Cline et al. 1989;Reed et al. 1990;Payan and Cline 1991). This factor was later identified as the so-called chloroplast signal recognition particle (cpSRP), which is located in the stromal fraction of the chloroplast (Li et al. 1995;Schünemann et al. 1998;Klimyuk et al. 1999). The cpSRP complex of higher plants is well characterized; it consists of two subunits, the conserved 54 kDa GTPase cpSRP54 and a unique chloroplast-specific 43 kDa protein, cpSRP43 (Franklin and Hoffman 1993;Schünemann et al. 1998;Klimyuk et al. 1999) (Fig. 1). CpSRP54 is homologous to cytosolic eukaryotic SRP54 and to the prokaryotic 54 homolog (Ffh) (Franklin and Hoffman 1993;Li et al. 1995), which are required for cotranslational protein transport to the endoplasmic reticulum and the plasma membrane, respectively (Akopian et al. 2013;Saraogi and Shan 2014;Voorhees and Hegde 2016). Consistent with the previous finding of a soluble LHCP intermediate, it has been demonstrated that complex formation between cpSRP and LHCP prevents aggregation of the hydrophobic LHCP in the aqueous milieu of the stroma and maintains it in an insertion-competent stage (Schünemann et al. 1998;Yuan et al. 2002;Goforth et al. 2004). The handover of the LHCP from the TOC/TIC import translocon to the cpSRP complex involves the ankyrin-repeat protein LTD (LHCP translocation defect), which is able to interact with the Tic machinery, LHCP, and cpSRP (Ouyang et al. 2011) (Fig. 1). Although cpSRP is sufficient to keep LHCP soluble and in an insertion-competent stage, the insertion of LHCPs into the thylakoid membrane requires additional factors. They comprise (i) the thylakoid membrane-associated SRP receptor cpFtsY (Kogata et al. 1999;Tu et al. 1999;Yuan et al. 2002), which is a homolog of the eukaryotic SRP receptor SRα and the prokaryotic FtsY, (ii) GTP, which is hydrolyzed by the SRP GTPases cpSRP54 and cpFtsY (Akopian et al. 2013) and (iii) the integral thylakoid membrane translocase Alb3 (albino 3) (Fig. 1).
Alb3 is a homolog of the bacterial YidC and mitochondrial Oxa proteins, which mediate the insertion, assembly, and folding of membrane proteins in the plasma membrane and inner mitochondrial membrane, respectively (Wang and Dalbey 2011;Saller et al. 2012;Hennon et al. 2015).
In this review, we summarize the molecular details of the individual steps of posttranslational cpSRP-dependent LHCP transport in plants, including cpSRP43/cpSRP54 heterodimerization, cpSRP/LHCP transit complex formation, docking of the transit complex at the thylakoid membrane, and insertion of LHCP into the membrane. We also discuss aspects of the regulation and dynamics of the transport machinery. For information on the evolution of this transport system and on the overlapping function of cpSRP pathway components in the cotranslational transport of plastidencoded proteins, we refer to previous reviews (Henry et al. 2007;Richter et al. 2010;Ziehe et al. 2017).
Formation of the cpSRP43/54 heterodimer in Arabidopsis thaliana
The chloroplast-specific cpSRP43 is a multidomain protein that consists of three chromodomains (CD1, CD2, CD3) and four ankyrin repeats (Ank1-Ank4) (Klimyuk et al. 1999;Goforth et al. 2004;Stengel et al. 2008). The N-terminal region of cpSRP43 harbors the first chromodomain (CD1), which is followed by 4 ankyrin repeats (Ank1-4) and two additional chromodomains (CD2, CD3) in the C-terminus (Fig. 2b). The second cpSRP subunit, cpSRP54, consists of an N-terminal N domain, a central G domain with GTPase activity and a methionine-rich M domain in the C-terminus (Franklin and Hoffman 1993) (Fig. 2c). In 2008, Stengel et al. published the first crystal structure of cpSRP43 (CD1-Ank4), revealing a unique arrangement of the chromodomains and the ankyrin repeats (Stengel et al. 2008) (Table 1). The crystal structure shows the characteristic helix-turn-helix motifs of Ank2 and Ank3 and the elongated nature of the Ank1 and Ank4 helices. CD1 is composed of three antiparallel β-sheets and a vertical α-helix that is oriented in the direction of the first ankyrin helix. Overall, the crystal structure reveals the elongated horseshoe character of the CD1-Ank4 region that is typical of ankyrin-repeat proteins.

Fig. 1 LHCPs are targeted to the thylakoid membrane via the posttranslational cpSRP-dependent transport pathway. LHCPs are imported posttranslationally into chloroplasts via the TOC/TIC translocon in the outer and inner envelope membrane. After import into the stroma, the transit peptide is cleaved off and the LHCPs are forwarded to the cpSRP complex by LTD. The transit complex consisting of cpSRP43, cpSRP54, and LHCP traverses the stroma and docks to the thylakoid membrane via interaction with cpFtsY and the Alb3 insertase. Alb3/cpFtsY are associated with the cpSecY translocase, which is, however, most likely not involved in the insertion process. GTP hydrolysis catalyzed by the SRP GTPases cpSRP54 and cpFtsY drives the dissociation of protein components

CpSRP43/cpSRP54 complex formation was intensively studied by several groups. Initially, the cpSRP54 M domain was identified as the main binding region for cpSRP43 (Jonas-Straube et al. 2001;Groves et al. 2001;Goforth et al. 2004). Later, a 10-residue segment within the C-terminal tail region of cpSRP54M (RRKRp10) was shown to be important for cpSRP43 binding. This segment contains the conserved positively charged cpSRP43 binding motif ARRKR comprising residues 535-539 of cpSRP54 (Funke et al. 2005;Dünschede et al. 2015) (Fig. 2c). The formation of the cpSRP complex is mainly accomplished by the interaction of the ARRKR motif with cpSRP43-CD2. CD2 consists of three-stranded antiparallel β-sheets with a perpendicular α-helix and thus has the characteristic chromodomain architecture (Holdermann et al. 2012) (Table 1). In contrast to CD1, which is tightly connected to the ankyrin repeats, CD2 does not participate in any tertiary interactions with the N-terminal domains of cpSRP43 and is therefore more flexible. Residues within CD2 form two aromatic cages that together present the binding interface for cpSRP54 (Holdermann et al. 2012) (Fig. 2b). Cage 1, which is formed by E268, W291 and D293, recognizes R537 of the cpSRP43 binding motif, whereas R536 is bound by the second aromatic cage, which consists of F267, Y269, and H304. Further detailed study revealed that the RRKRp10 peptide binds at the interface between CD2 and Ank4 and that in this complex CD2 is more closely positioned to Ank4 compared to free cpSRP43 (Holdermann et al. 2012).
The importance of the Ank4 region for the cpSRP43/cpSRP54 interaction was also indicated by the observation that the affinity of binding of full-length cpSRP43 to RRKRp10 (K d 0.39 µM) is significantly increased in comparison to that of cpSRP43 CD2 (Holdermann et al. 2012). Notably, full-length cpSRP54 and cpSRP54M bind cpSRP43 with even higher affinity (K d 2-95 nM) (Hermkes et al. 2006;Gao et al. 2015;Ziehe et al. 2016) (Table 2), suggesting that additional regions of cpSRP54 are required to support high-affinity cpSRP complex formation. Little is known about the dynamics of the formation of this complex in vivo, but current data indicate that most if not all of the cpSRP43 is complexed with cpSRP54 in the stroma (Schünemann et al. 1998;Klimyuk et al. 1999).
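The implication of such nanomolar affinities at (sub)micromolar stromal protein concentrations can be checked with the standard two-component binding equilibrium; a minimal sketch follows, in which the concentrations are illustrative assumptions rather than values from the cited studies.

```python
import numpy as np

def fraction_bound(a_tot, b_tot, kd):
    """Fraction of A present as the AB complex for A + B <-> AB, from the
    exact quadratic solution; all quantities share the same units (uM)."""
    s = a_tot + b_tot + kd
    ab = (s - np.sqrt(s * s - 4.0 * a_tot * b_tot)) / 2.0
    return ab / a_tot

# Illustrative only: 1 uM for each partner is an assumed stromal
# concentration, not a value reported in the cited studies.
print(fraction_bound(1.0, 1.0, 0.05))   # Kd = 50 nM -> ~0.80 bound
print(fraction_bound(1.0, 1.0, 0.002))  # Kd = 2 nM  -> ~0.96 bound
```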
CpSRP binds LHCP to form a soluble LHCP transport intermediate, the transit complex
As described in the introduction, the LHC proteins are bound by the cpSRP complex in a way that maintains their solubility and insertion competence. Several studies have aimed to identify the intermolecular contacts between LHCP and the cpSRP subunits within the transit complex as summarized below.
Using various LHCP truncation constructs, an 18-residue-long binding site between the second and third transmembrane domains, L18 (VDPLYPGGSFDPLGLADD), and a hydrophobic region following the L18 motif were shown to be crucial for transit complex formation with cpSRP (DeLille et al. 2000) (Fig. 2a). The L18 motif harboring the DPLG sequence is conserved among LHCPs (Stengel et al. 2008;Barros and Kühlbrandt 2009) and therefore seems to be an important feature of members of this protein family. Tu et al. identified cpSRP43 as the binding partner for the L18 motif, while a direct interaction of cpSRP54 with LHCP was not detected (Tu et al. 2000). Further studies mapped the binding interface between LHCP and cpSRP via a pepscan analysis and confirmed the cpSRP43/L18 interaction (Groves et al. 2001). Cross-linking studies with pea Lhcb1 and cpSRP43 or a cpSRP complex revealed the presence of direct contacts between the L18 motif of Lhcb1 and the first part of TMD3 with cpSRP43; no contacts between Lhcb1 and cpSRP54 were detected (Cain et al. 2011). The structure of the cpSRP43/L18 complex was resolved by Stengel et al. (2008) (Table 1). CpSRP43 forms two predominantly hydrophobic grooves on its concave surface. L18 binds to groove 1, which is formed by ankyrin repeats 2-4. The DPLG motif is compactly folded and wraps around Y204 of Ank3 (Fig. 2b), and it was shown that mutations in the DPLG motif or in Y204 of cpSRP43 impair the cpSRP43/L18 interaction (Stengel et al. 2008). Studies to quantitatively analyze the interaction of cpSRP43 with the L18 region of LHCP reported dissociation constants (K d ) ranging from 22 nM to 1.17 µM (Table 2). While the interaction between cpSRP43 and LHCP has been unambiguously proven, the question of whether cpSRP54 contacts LHCP directly is less clear. As mentioned above, binding of cpSRP54 to LHCP was not observed in some studies, but other studies have reported evidence for a cpSRP54/LHCP interaction. Initial reports revealed that cpSRP54 binds to residues within the third transmembrane domain (High et al. 1997;Groves et al. 2001) (Fig. 2c), emphasizing the importance of this transmembrane domain for transit complex formation (DeLille et al. 2000). Recently, it has been demonstrated that the absence of cpSRP54 or mutations within the M domain of cpSRP54 impair the formation of the cpSRP/LHCP transit complex (Dünschede et al. 2015;Henderson et al. 2016). Although the foregoing studies indicate that cpSRP54 plays an important role in transit complex formation, its precise contribution remains unclear. Apparently, it is not essential for the formation of soluble LHCP, as it was shown that cpSRP43 alone acts as an ATP-independent chaperone for LHCP and is sufficient to maintain its solubility (Falk and Sinning 2010b;Jaru-Ampornpan et al. 2010). Therefore, cpSRP54 probably acts as an optimizing element that maintains the transit complex in an ideal insertion-competent state, thereby rendering the transport process more efficient (see also below in 'Regulation and dynamics of the transport machinery').
Docking of the transit complex at the membrane and LHCP insertion
The cpSRP receptor cpFtsY binds peripherally to thylakoid membranes, and biochemical and genetic data prove that cpFtsY is linked to LHCP insertion (Kogata et al. 1999;Tu et al. 2000;Yuan et al. 2002;Tzvetkova-Chevolleau et al. 2007;Marty et al. 2009). Similar to cpSRP54, cpFtsY contains an NG domain that is necessary for GTP binding and hydrolysis (Fig. 2d). Crystal structures of various plant cpFtsY proteins reveal the characteristic four-helix bundle within the N domain and the five G motifs within the G domain (Stengel et al. 2007;Chandrasekar et al. 2008;Träger et al. 2012) (Table 1). Tethering of cpFtsY to the membrane is mediated via an amphipathic helix located at the N-terminus (Stengel et al. 2007;Marty et al. 2009) (Fig. 2d). Within this region, two conserved phenylalanine residues, F48 and F49, are crucial for membrane binding, and it was demonstrated that cpFtsY is only functional in LHCP insertion when it is attached to the thylakoid membrane (Marty et al. 2009). CpFtsY is able to bind cpSRP54, and complex formation is established by interaction between the homologous NG domains of the two proteins (Jaru-Ampornpan et al. 2007;Stengel et al. 2007;Chandrasekar et al. 2008) (Table 2). Furthermore, complex formation between cpFtsY and cpSRP54 is considerably stimulated by anionic phospholipids (Table 2).
In addition to cpFtsY, the integral thylakoid membrane protein Alb3 is involved in LHCP insertion. The characterization of Alb3 as the responsible insertase is based on the results of several studies that demonstrated specific inhibition of LHCP insertion by anti-Alb3 antibodies (Moore et al. 2003) and a direct interaction of Alb3 with components of the cpSRP transport pathway (Moore et al. 2003; Bals et al. 2010; Falk et al. 2010; Lewis et al. 2010; Walter et al. 2015) (see also below). Consistently, the alb3 null mutant in Arabidopsis thaliana displays an albino phenotype (Sundberg et al. 1997). Similar to bacterial YidC, whose crystal structure was recently solved (Kumazaki et al. 2014a, b), Alb3 is predicted to contain five conserved transmembrane helices and a structurally disordered C-terminus protruding into the stroma of the chloroplast (Falk et al. 2010) (Fig. 2e). Blue native PAGE indicates that Alb3 can form dimers (Fig. 1).
Several studies have described a direct interaction between Alb3 and cpSRP43 (Falk et al. 2010; Lewis et al. 2010; Liang et al. 2016). Two positively charged binding motifs within the C-terminus of Alb3, motif II (Falk et al. 2010) and motif IV (Falk et al. 2010), are important for cpSRP43 binding (Fig. 2e). Structural data revealed that motif IV binds to cpSRP43 CD3 (Horn et al. 2015) (Table 1). Biochemical data point to an interaction of motif II and cpSRP43 CD1-Ank4 (Liang et al. 2016). Furthermore, a binding site within the transmembrane region of Alb3 was described (Fig. 2e). These data led to the conclusion that the transit complex is recruited to Alb3 via the cpSRP43/Alb3 interaction. This docking model was further supported by the finding that cpSRP43 alone is able to keep LHCPs soluble (Falk and Sinning 2010b; Jaru-Ampornpan et al. 2010) and by the results of Tzvetkova-Chevolleau et al., who postulated an alternative LHCP transport pathway in Arabidopsis thaliana that bypasses cpFtsY and cpSRP54 but still requires cpSRP43 for LHCP targeting (Tzvetkova-Chevolleau et al. 2007). The latter authors demonstrated that the ffc/cpftsy double-knockout mutant lacking functional cpSRP54 and cpFtsY has a less severe phenotype and accumulates more LHCPs than the cpftsy single-knockout mutant. Therefore, these data provide support for an LHCP transport mechanism that depends on an efficient interaction between Alb3 and cpSRP43. However, the dissociation constant of the cpSRP43/Alb3 C-terminus interaction has been reported inconsistently. Whereas a K_d of ~90 nM, indicating high-affinity binding, was reported by Lewis et al. (2010), other reports point to a rather weak, transient interaction (K_d of 5-18 µM) (Falk et al. 2010; Falk and Sinning 2010a; Horn et al. 2015; Liang et al. 2016) (Table 2). Notably, the affinity of cpSRP43 for full-length Alb3 has not been determined yet. Therefore, the contribution of the Alb3/cpSRP43 interaction to the recruitment of the transit complex to the membrane remains unclear.
Other data support the existence of an alternative LHCP targeting mode in which the transit complex recruitment to Alb3 is accomplished primarily via an interaction between Alb3 and the cpSRP54/cpFtsY complex. The studies of Moore et al. (2003) indicate that Alb3 can bind the cpSRP54/cpFtsY complex even in the absence of cpSRP43 and LHCP, and a recent study reported that the Alb3 C-terminus binds the cpSRP54/cpFtsY complex with an affinity in the submicromolar range. Additional data suggested that motifs II and IV within the Alb3 C-terminus, which are responsible for the cpSRP43 interaction, might also play a role in binding the cpSRP54/cpFtsY complex (Fig. 2e).
Fig. 2 a LHCP contains three transmembrane domains (TMDs 1-3). The L18 region containing the crucial DPLG motif, which is responsible for cpSRP43 (Ank3) binding, is located between the second and third transmembrane domains. The binding region for cpSRP43 extends into LHCP's transmembrane domain three. It is discussed whether cpSRP54 binds to transmembrane domain three of LHCP. Furthermore, there is a direct interaction between Alb3's C-terminus and LHCP. b CpSRP43 comprises three chromodomains (CD1-CD3, red) and four ankyrin repeats (Ank1-Ank4, orange). The Ank2-Ank4 region with the conserved Y204 binds the LHCP L18 peptide. The interaction with the ARRKR motif of cpSRP54 is accomplished via a twinned aromatic cage located in CD2, which is formed by six residues. A stimulating effect of Ank4 on cpSRP54/cpSRP43 complex formation was demonstrated (depicted by a dashed line). CpSRP43 binds to Alb3's C-terminus (motifs II and IV) via CD2-CD3, whereby CD3 plays the major role (dashed and solid lines, respectively). c CpSRP54 is composed of an N-terminal NG domain (gray) and a C-terminal M domain (yellow) connected by a linker region (light gray). CpSRP54 binds to cpSRP43 with its C-terminal ARRKR motif (red) within the M domain. The NG domain binds to the homologous domain in cpFtsY. An acidic patch (dark shaded; E313, D314, E316, D317) next to the M domain forms an additional interaction site for cpFtsY. The M domain possibly also binds to the third transmembrane domain of LHCP. d Like cpSRP54, cpFtsY comprises an NG domain. A membrane targeting sequence (MTS, dark shaded) is located close to the N-terminus. As mentioned above, cpFtsY interacts with cpSRP54 via its NG domain. Additionally, it contains a basic patch (K191, K193, K203, R204, K235, K236, K240) as the counterpart for cpSRP54's acidic patch, providing an interaction via the complementarily charged regions. e Alb3 is predicted to have five transmembrane domains, which are summarized and depicted as the transmembrane (TM) region (dark blue). CpSRP43 binds to a motif within the TM region as well as to motif II and motif IV. Further data indicate that the binding to motif II and motif IV is mediated by CD1-Ank4 and CD3, respectively. The Alb3 C-terminus also binds the cpSRP54/cpFtsY complex, whereby the binding interface is probably provided by motifs II and IV. Additionally, LHCPs bind to the C-terminal region of Alb3.
Various studies have demonstrated that LHCP insertion is Alb3-dependent and independent of the thylakoid membrane cpSecY/E translocase (Mori et al. 1999; Moore et al. 2003). However, a direct association between Alb3 and the cpSecY translocase has been shown by coimmunoprecipitation experiments, double immunogold labeling and cross-linking studies, while there is no clear evidence for the presence of an uncomplexed pool of Alb3 (Klostermann et al. 2002). The Alb3/cpSecY translocase association was confirmed by Moore et al. (2003), who showed in diverse precipitation analyses that a stabilized complex consisting of cpFtsY and cpSRP can precipitate Alb3 and cpSecY from solubilized thylakoid membranes. Interestingly, recent data obtained in comigration and coimmunoprecipitation analyses of solubilized thylakoid membrane complexes indicate that cpFtsY and Vipp1 are additional components of the Alb3/cpSecY-containing complex in the thylakoid membrane (Walter et al. 2015). Therefore, it seems possible that the transit complex docks to a preformed cpFtsY/Alb3/cpSecY complex in the thylakoid membrane; however, the cpSec translocase does not appear to be involved in contact formation or the insertion process (Fig. 1).
Regulation and dynamics of the transport machinery
The nucleotide requirement for LHCP integration was examined by in vitro reconstitution assays in two main studies. Initially, Hoffman and Franklin (1994) showed that GTP is the only nucleotide required for integration and demonstrated an inhibitory effect of non-hydrolyzable analogs of GTP (Hoffman and Franklin 1994). The requirement for GTP hydrolysis in LHCP insertion was confirmed by Yuan et al. (2002). Notably, this study also described a stimulatory effect of ATP and of the non-hydrolyzable analog AMP-PNP, indicating that a yet unknown ATP-binding protein might be involved in LHCP integration. GTP is not required for formation of the transit complex (Yuan et al. 2002); however, it is important for triggering the GTPase cycle of the cpSRP54/cpFtsY complex at the membrane (Jaru-Ampornpan et al. 2007, 2009), which is a multistep process comprising assembly of the GTP-loaded cpSRP54/cpFtsY complex, reciprocal GTPase activation and dissociation of the complex. Interestingly, within the GTPase cycle, the cpSRP54/cpFtsY assembly step plays a crucial role in LHCP insertion, and GTPase activation enhances the insertion efficiency to some extent (Nguyen et al. 2011). Molecular dynamics simulations indicate that binding of GTP to cpFtsY is an important step in cpSRP54/cpFtsY complex formation because it induces conformational changes in cpFtsY that favor the formation of a complex with cpSRP54 (Yang et al. 2011), which is a kinetically fast interaction (Jaru-Ampornpan et al. 2007). In recent years, several mechanisms that regulate the GTPase activity of the individual SRP GTPases and of the cpSRP54/cpFtsY complex have been described. GTPase assays using the soluble recombinant cpSRP54/cpFtsY complex demonstrated that cpSRP43 and the C-terminus of Alb3 stimulate GTP hydrolysis by the complex and that the stimulatory effect of the Alb3 C-terminus is strictly coupled to the presence of cpSRP43 (Goforth et al. 2004; Lewis et al. 2010). A regulatory effect of the Alb3 C-terminus on GTP hydrolysis by the cpSRP54/cpFtsY complex was also described in a further study. However, in that case the Alb3 C-terminus had an inhibitory, cpSRP43-independent effect on GTP hydrolysis, which led to the hypothesis that this negative regulation might enable positioning of the transit complex on the translocase and transfer of LHCP to Alb3 before GTP hydrolysis occurs. The inconsistent findings are probably due to the use of different experimental conditions. GTPase assays performed in the presence of PG liposomes showed that the regulatory effect of the Alb3 C-terminus on GTPase activation depends on the presence of anionic phospholipids. The role of lipids in regulating the GTPase cycle is further supported by the finding that liposomes stimulate the basal GTP hydrolysis rate of cpFtsY (Marty et al. 2009). Although knowledge of the dynamics of the transport machinery is rather limited, some of the mechanisms (besides regulation of the GTPase cycle) involved in coordinating the order of events have recently been elucidated. A central role is assigned to cpSRP43 because it shows high interdomain dynamics, a feature that probably enables it to undergo flexible interactions with its several binding partners (Gao et al. 2015) (Fig. 2b). The binding of cpSRP54 to cpSRP43 reduces the flexibility of cpSRP43 (Gao et al. 2015) and induces a conformational change (Liang et al. 2016) that results in an enhanced (three- to sixfold) binding affinity of cpSRP43 for LHCP (Gao et al. 2015; Liang et al. 2016).
The affinity between the activated cpSRP43 and the L18 motif of LHCP was determined to be in the range of 100-300 nM (Gao et al. 2015; Liang et al. 2016) (Table 2). The release of LHCP from cpSRP is triggered upon interaction of cpSRP43 with the insertase Alb3, as it was shown that the addition of recombinant Alb3 C-terminus dissociates soluble cpSRP43/LHCP complexes (Lewis et al. 2010; Liang et al. 2016) and that this effect is coupled to the presence of the cpSRP43 binding motifs II and IV in the Alb3 C-terminus (Liang et al. 2016). Furthermore, it was observed that the Alb3 C-terminus weakens the interaction between cpSRP43 and the RRKR10p peptide, leading to the hypothesis that this might contribute to the release and transfer of LHCP to the insertase (Falk et al. 2010; Horn et al. 2015).
The thylakoid membrane as the site of cpSRP-dependent LHCP insertion and pigment loading
Approximately 30 years ago, it was demonstrated in in vitro experiments that LHCP is inserted into thylakoid membranes but not into envelope membranes (Cline 1986). In later studies, the in vitro insertion assay of LHCP into thylakoids was extended and successfully used by several groups to study the molecular details of this pathway (see above and Kuttkat et al. 1995). The thylakoid membrane as the site of cpSRP-dependent LHCP insertion is further supported by the exclusive localization of the Alb3 translocase in the thylakoid membrane (Gerdes et al. 2006). In vivo data support the important role of cpSRP-dependent LHCP transport; the ffc/chaos double-knockout mutant, which lacks cpSRP54 and cpSRP43, showed pale green leaves due to the loss of 85% of its chlorophyll, as well as a strong decrease in the levels of most LHCPs and a greatly reduced number of thylakoids (Amin et al. 1999; Hutin et al. 2002). For a detailed summary of the phenotypes of cpSRP pathway mutants, we refer to previous review articles (Schünemann 2004; Henry et al. 2007; Richter et al. 2010). The biogenesis of stable LHC complexes in the thylakoid membrane requires assembly with pigments (Kuttkat et al. 1995, 1997; Plumley and Schmidt 1995; Tanaka and Tanaka 2011). Because the soluble LHCP/cpSRP transit complex forms in the absence of pigments and in vitro experiments have demonstrated that inserted LHCP is complexed with pigments and assembled in trimers (Kuttkat et al. 1995), it is very likely that pigment loading occurs at the site of insertion in the thylakoid membrane. This is consistent with the finding that the chlorophyll (chl) b-deficient Arabidopsis thaliana cao-1 mutant can efficiently import LHCPs, while stable assembly with PSII is affected (Nick et al. 2013). Furthermore, studies using a chl b-deficient Chlamydomonas reinhardtii mutant point to an interconnection between pigment synthesis and LHCP biogenesis that occurs at the thylakoid membrane (Plumley and Schmidt 1995). However, Reinbothe et al. (2006) observed severely impaired LHCP import into chloroplasts from a chl b-deficient mutant of Arabidopsis thaliana (Reinbothe et al. 2006), and studies with Chlamydomonas reinhardtii mutants showed that the absence of chl b led to an accumulation of LHCPs in the cytosol and the vacuole (Park and Hoober 1997). Based on that evidence, a model of LHCII assembly was hypothesized in which chl b is incorporated into LHCP in the envelope membrane during import. Non-pigment-loaded LHCP would reenter the cytosol for degradation (Hoober et al. 2007). The transfer of the pigment-loaded LHCII from the envelope to the thylakoid was hypothesized to be mediated by vesicles. Indeed, in recent years, there has been increasing evidence for the presence of a vesicle transport system in chloroplasts. However, the question of whether or not proteins are transported in addition to lipids has not yet clearly been answered (Karim and Aronsson 2014; Lindquist et al. 2016). Tanz et al. (2012) suggested a vesicle-based transport of LHCPs predominantly in cotyledons. As described above, the ffc/chaos mutant shows a severely compromised phenotype but still contains residual levels of LHCPs, indicating that at least some members of the LHCP family can be transported in a cpSRP43/54-independent manner in these plants. Considering that the upregulation of stromal chaperones such as ClpC plays a role in compensating for the absence of cpSRP54 in ffc mutants (Rutschow et al.
2008) and that yeast mutants compensate for the loss of the SRP-pathway by a reduced growth rate and induction of heat shock proteins (Mutka and Walter 2001), it is likely that the cpSRP double mutant uses similar strategies to adapt to the loss of cpSRP. Taken together, the model of vesicle-mediated transport of (pigment-loaded) LHCP remains speculative, at least for chloroplasts of higher plants, whereas the current in vitro and in vivo data indicate that cpSRP-dependent LHCP transport, which includes insertion and pigment loading at the thylakoid membrane, plays a primary role in LHCP biogenesis in higher plants.
Conclusions
In recent decades, considerable progress has been made in understanding the molecular details of cpSRP/Alb3-dependent LHCP transport to the thylakoid membrane. However, several central issues need to be investigated in the future to get a more complete picture of LHCP transport. To further decipher this mechanism, it is important to obtain more structural information about single components and protein complexes. Here, it will be especially challenging to elucidate the structure of the transit complex, the Alb3 insertase, and finally the docking complex. In addition, little is known about the spatiotemporal coordination of LHCP transport, insertion, and pigment delivery/assembly. | 2018-06-30T01:37:22.306Z | 2018-06-28T00:00:00.000 | {
"year": 2018,
"sha1": "54434c11cce5833e6d58b5bf3e12179a1d967eaf",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11120-018-0544-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "54434c11cce5833e6d58b5bf3e12179a1d967eaf",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
243778556 | pes2o/s2orc | v3-fos-license | Implant Survival in Immediately Loaded Full-Arch Rehabilitations Following an Anatomical Classification System—A Retrospective Study in 1200 Edentulous Jaws
This retrospective study analyzed implant survival of immediate implant-supported fixed complete denture (IFCD) treatment options (TOs) as a function of the level of alveolar atrophy, expressed as anatomic classification categories (CCs). Records of 882 patients receiving a total of 6042 implants at one private referral clinic between 2004 and 2020 were considered. The mean follow-up period was 3.8 ± 2.7 years. Cumulative implant survival rates (CSRs) were analyzed as a function of CCs and TOs according to Mantel-Haenszel and Mantel-Cox. Hazard risk ratios for implant loss were compared using Cox regression. Confounding factors were identified using mixed Cox regression models. The 2- and 5-year CSRs were 98.2% and 97.9%, respectively. Maxillary 2- and 5-year CSRs were lower (97.7% and 97.3%) compared to mandibular CSRs (99.8% and 98.6%) (p = 0.030 and 0.0020, respectively). The CC did not influence CSRs of IFCDs in the mandible (p = 0.1483 and 0.3014, respectively) but only in the maxilla (p = 0.0147 and 0.0111), where CSRs decreased with increasing atrophy. TOs did not statistically differ in terms of survival rate for a given level of alveolar atrophy. The adaptation of IFCD treatments to the level of atrophy and patient-specific risk factors can result in high CSRs, even at different levels of bone atrophy.
Prolonged edentulism is associated with progressive resorption of the alveolar processes [11,12]. This may require adjustment of the implant restoration scheme, and many different treatment options for immediate IFCDs have been suggested as a result [10,13]. Additional augmentative procedures may also be considered, which may impact the long-term clinical prognosis [12,14]. The existing literature evidences that maxillary and mandibular edentulism may be treated successfully using alternative treatment approaches involving four, six, or more implants [15]. However, most of the available scientific literature on fixed rehabilitation of fully edentulous patients does not establish an association between implant survival rates and the level of bone atrophy.
The importance of diagnostics, treatment planning and the choice of an adequate rehabilitation scheme may be supported by clinical decision support systems (CDSS), which can be powerful tools to assist clinical treatment decisions based on patient-specific diagnostic findings [16]. Despite a potential demand, the use of such systems in the dental field has remained low to date [17,18]. Polakovska et al. provided an example of how a CDSS may help provide treatment suggestions based on the alveolar anatomic dimensions [19]. Different systems to classify the level of progressing atrophy associated with edentulism have been described in the literature [11,20]. Jensen et al. and Papadimitriou et al. described the first attempts to use such classifications to select a specific therapeutic scheme for IFCDs [21,22]. A more comprehensive CDSS for IFCDs was recently proposed [23]. This CDSS aims to standardize and propose restorative/regenerative treatment schemes depending on the level of alveolar atrophy from a list of well-established maxillary and mandibular IFCD implant schemes. The system considers a decision process based on the anatomic level and pattern of atrophy of the alveolar process and on patient-specific risk factors to select a specific treatment option from a set of predefined implant rehabilitation and surgical workflow schemes.
To evaluate a possible association between the CSRs of IFCDs and the level of alveolar atrophy, we retrospectively analyzed implant survival in a set of 882 patients treated with immediate IFCDs by applying the mentioned CDSS [23]. Implant survival was analyzed as a function of anatomic classifications and individual treatment options of the CDSS. This study also discusses the potential influence of confounding factors.
Treatment and Follow-Up Regimes
This retrospective study analyzed clinical records of a total of 882 patients that consecutively received routine immediate IFCDs following a recently published CDSS [23]. Treatments were provided in a private referral clinic (Implantology Institute, Lisbon, Portugal) from November 2004 until March 2020 under a certified quality management system and a standardized follow-up protocol. The patients were enrolled in a recall regime with visits every two weeks during the first twelve weeks after surgery, followed by regular recalls for professional oral hygiene every 4 months. Postsurgical recalls included removing the prosthesis and evaluating the implants at the two-week and twelve-week timepoints and in case of implant or prosthetic complications. A comprehensive medical re-evaluation of the rehabilitation and implant health status was performed every 4 months, with prosthesis removal yearly. In addition, the patients were instructed to immediately report any complications or adverse events related to their restoration. Follow-up information was recorded using dedicated software and employed to derive quality indicators for the operation and management of the center.
All the implants were placed conventionally according to the manufacturers' instructions by a single experienced surgeon (J.M.M.C.). Ancillary procedures like guided bone regeneration or sinus lift procedures were performed according to the predefined schemes of the CDSS [23]. Consistency in pretreatment diagnostics, patient assessment, and patient classification was supported by using an identical CBCT device with identical settings (0.20 mm voxel size, 80 kV, 15 mA, and an exposure time of 12 s; Planmeca Promax, Planmeca, Helsinki, Finland). Patients affected by systemic or local conditions that compromised postoperative healing or osseointegration were excluded from implant treatment.
All the implants were immediately restored with acrylic provisionals and finally restored with a porcelain-veneered zirconia, monolithic zirconia, metallo-ceramic or acrylic-metal hybrid prosthesis. An exception to the immediate loading protocol was found in nine patients involving a total of 21 implants. The sample of this analysis was a convenience sample determined using patient records displaying adequate diagnostic information with identical presurgical CBCT and fulfilling identical treatment and follow-up criteria.
Definitions
The interval between loading and failure defined the time to implant failure. Implants were considered failed if they presented signs and symptoms that led to implant removal or if the implant was put into a 'sleeping' state [24]. Removed implants comprised implants that failed due to a lack of osseointegration or due to mechanical failure. Early and late implant failures were defined as failures occurring before or after six months post-placement, respectively [25].
The following nominal and categorical factors were considered in the analysis of implant failure:
1. CDSS-related factors, i.e., anatomic categories (CCs), treatment options (TOs) and treatment categories (TCs). The applied CDSS defined five different maxillary and mandibular CCs with three TOs per CC [23]. CCs were defined on a hemi-mandibular treatment unit, i.e., at the quadrant level, based on the vertical and horizontal dimensions at three predefined positions of the alveolar process in the baseline CBCT scans. TOs (A, B, or C) were selected by the treating clinician based on the planned prosthetic design and under consideration of risk factors (systemic conditions, smoking, bruxism, etc.), socioeconomic factors, the ability for self-care and oral hygiene, as well as the patient's preferences. Individual TOs defined the characteristics of the implant restoration, i.e., the number, type, position and angulation of the placed implants, bone grafting, as well as the type of prosthetic restoration (fixed (A, B) vs. removable (C)). Treatment options as applied as a function of the anatomic classifications are schematically illustrated in Figure 1. Treatments deviated from the predefined scheme of the CDSS when considered necessary, including transitions from preexisting restorations and the restoration of preexisting implants; such preexisting implants were not considered in this analysis.
2. Patient-related factors, including gender, age at the time of implant placement, the presence and number of systemic comorbidities (cardiac arrhythmia, arthritis, diabetes type I or II, cardiovascular disease, hepatitis B, HIV, arterial hypertension, hyperthyroidism, osteoporosis and rheumatoid arthritis), as well as self-reported smoking habits and the associated daily cigarette consumption.
3. Implantation site-related factors, including jaw type (maxilla, mandible) and jaw location, categorized into anterior (incisor and canine) or posterior (premolar and molar) positions.
4. Procedure-related factors: implant system by brand, type, diameter and length, and the presence of regenerative bone graft procedures.
Data Collection and Statistical Analysis
A total of 882 patients and 6047 implants were included in the analysis. Data analysis was carried out in SPSS (SPSS software, version 24, SPSS Inc., Chicago, IL, USA) by an independent statistician. Descriptive characteristics were reported as means and standard deviations (SD), medians and interquartile ranges (IQR), and absolute ranges. Differences between descriptive values at the jaw, CC and TO levels were evaluated for statistical significance (p < 0.05) using Fisher's exact test.
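As an illustration only (the counts below are invented and do not come from the study), a comparison like those reported in Table 4 can be computed with SciPy's Fisher's exact test:

```python
# Hedged illustration of the descriptive comparisons: Fisher's exact test on a
# 2x2 table (rows: TO A / TO B, columns: smokers / non-smokers).
from scipy.stats import fisher_exact

table = [[12, 38],   # TO A: 12 smokers, 38 non-smokers (made-up counts)
         [ 4, 46]]   # TO B:  4 smokers, 46 non-smokers (made-up counts)
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"OR = {odds_ratio:.2f}, p = {p_value:.4f}")
```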
CSRs were determined by Kaplan-Meier analysis. Corresponding p-values for the comparison of survival curves were calculated using the Mantel-Cox test. Two- and five-year CSR values were statistically compared using the Mantel-Haenszel test. Hazard risk ratios for the implant loss outcome as a function of anatomic classification and treatment option were calculated using Cox regression with the effect of the patient as a random effect. The Firth correction was used when levels had zero events. Confounding factors were derived from individual Cox regression models using one factor as a fixed effect and the effect of the patient as a random effect. Mixed Cox regression models were used to identify overall risk factors for implant loss after eliminating covariate factors using backward selection of factors that displayed p < 0.20 in the one-to-one associations.
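The survival analyses described here can be reproduced in outline with open-source tooling. The following Python sketch (our addition, not the authors' code; the toy data, column names, and the use of the lifelines package are assumptions) illustrates Kaplan-Meier CSRs per jaw, a Mantel-Cox (log-rank) comparison, and a Cox model with patient-level clustering:

```python
# Sketch of the survival workflow: KM estimates, log-rank test, and a Cox
# model with cluster-robust errors by patient. The data below are invented.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import multivariate_logrank_test

df = pd.DataFrame({                       # one row per implant
    "time":    [2.1, 5.0, 3.3, 0.4, 4.8, 1.9, 2.7, 3.9],  # years of follow-up
    "failed":  [0,   0,   1,   1,   0,   0,   1,   0],     # 1 = implant lost
    "jaw":     ["max", "mand", "max", "max", "mand", "mand", "mand", "max"],
    "patient": [1, 1, 2, 3, 4, 4, 5, 5],
})

# Kaplan-Meier cumulative survival per jaw, read off at 5 years
for jaw, grp in df.groupby("jaw"):
    km = KaplanMeierFitter(label=jaw).fit(grp["time"], grp["failed"])
    print(jaw, "5-year CSR:", km.survival_function_at_times(5.0).iloc[0])

# Mantel-Cox (log-rank) comparison of the survival curves
print("log-rank p:", multivariate_logrank_test(
    df["time"], df["jaw"], df["failed"]).p_value)

# Cox regression; cluster_col forces a sandwich (robust) variance estimator,
# approximating the per-patient random effect used in the paper
cox_df = df.assign(is_mand=(df["jaw"] == "mand").astype(int))
cox = CoxPHFitter().fit(cox_df[["time", "failed", "is_mand", "patient"]],
                        duration_col="time", event_col="failed",
                        cluster_col="patient")
cox.print_summary()
```

A true frailty (random-effect) Cox model, as well as the Firth correction mentioned above, would require other tools (e.g., the R packages coxme and coxphf); the clustered sandwich estimator shown here is only a common approximation.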
Distribution of Follow-Up Times
Follow-up times for maxillary and mandibular TCs ranged from 2.6 ± 2.1 (IV B) to 4.1 ± 2.6 (II B) years and from 1.4 ± 0.0 (II C) to 8.5 ± 0.4 (V B) years, respectively.
Risk Factor-Related Characteristics
Average patient ages at implantation ranged from 59.2 ± 10.5 (III A) to 69 ± 9.6 (V B) years for maxillary and from 64.6 ± 10.1 (III A) to 74.7 ± 8.2 (V A) years for mandibular treatments, respectively. Differences between individual maxillary and mandibular CCs reached statistical significance (p < 0.0001).
Mandibular CC V displayed the highest patient age (73.3 ± 8.3 years). Except for maxillary CCs III and V and mandibular CC V, patient age was not statistically significantly different at the TO level. Interestingly, for the mandibular treatment subcohort, an increase in patient age tended to correlate with an increasing CC classification, i.e., level of atrophy.
Individuals in the maxillary and mandibular TCs displayed from 0.2 ± 0.4 (I B) to 0.6 ± 0.8 (I A) (p = 0.4282) and from 0.0 ± 0.0 (V B) to 0.7 ± 0.8 (III B) (p = 0.0983) comorbidities per patient, respectively. Differences did not attain statistical significance at the CC level but were statistically significant at the TO level for maxillary AC III (p = 0.0094) and mandibular AC II (p = 0.0140), III (p = 0.0026) and V (p = 0.0320), respectively.
The percentage of smoking individuals per TC ranged from 8% (I B) to 48% (III A) in the maxillary group and between 0% (IV B and V B) and 41% (II A) in the mandibular group. Differences at the CC level reached statistical significance (p = 0.0139 and p = 0.0002, respectively). Differences between individual TOs reached statistical significance for maxillary AC III (p = 0.0154) as well as for mandibular AC II (p = 0.0061) and III (p = 0.0003).
CDSS-Related Characteristics
The distribution of study characteristics and implant failures as a function of CDSS-related categories and p-values as derived using Fisher's exact test are listed in Table 4. Table 4. Descriptive statistics of study variables stratified by CCs and TOs of the CDSS. The number of treatments (n = 2596 treatment units) and number of implants (n = 6047) are reported as total numbers and percentages relative to individual CCs. Follow-up times are reported at the implant level as average values and SDs. Gender (ratio of female to male patients), patient age (average values and SDs), the number of systemic conditions (average values and SDs) and smoking habits (total number of smokers and relative percentage of smokers in the TC) are reported at the treatment level. Bracketed % values reported at the TO level refer to the % relative to the individual CCs. Bracketed % values reported at the CC level refer to the % relative to the total value at the maxillary or mandibular level; p-values related to differences at the TO and CC levels were obtained using Fisher's exact test; p-values indicating statistical significance are marked in bold. Abbreviations: SD, standard deviation.
Characteristics of the CDSS-Related Treatment Provision
Treatment provision-related aspects of the CDSS, schematically illustrated in Figure 1, were investigated by analyzing the distribution of study characteristics as a function of CCs and CC/TO combinations as reported in Figure 2 and Table 4, respectively.
As evidenced by the plot in Figure 2A, 521 (90%) and 534 (86%) of the treated maxillary and mandibular arches were classified as symmetric, displaying one type of AC in both quadrants.
Maxillary CCs with high and intermediate bone quantity (CCs I, II, III) were preferably treated with TOs A (89%), B (84%) and B (85%) (p < 0.0001 each), respectively. Cases with limited maxillary bone quantity (CC IV) were treated at comparable rates with TO A (46%) and TO B (54%) (p = 0.3458). Treatments of patients with strongly atrophied maxillae (CC V) were mainly provided as TO A (85%) (p < 0.0001). Except for the prominently used TO B in CC II with two implants per quadrant, maxillary restorations were usually provided with 3.0-3.2 implants per quadrant. As further evidenced by Figure 2F, bone grafting in CCs IV and V was markedly increased (>80%) compared to CCs I-III (<45%). Furthermore, a constant increase in short (<8 mm) and zygomatic (>21 mm) implants towards higher-classed CCs, along with reduced use of long implants (15-21 mm) in CCs IV and V, was apparent (Figure 2E). Mandibular treatments were characterized by a trend towards configurations with fewer and shorter implants per quadrant with increasing alveolar atrophy. Specifically, CCs II, III, IV and V displayed a clear shift to one specific TO (p < 0.0001 each), i.e., II B (80%), III B (63%), IV A (95%) and V A (83%). The preferred mandibular TOs required 2.0 implants per quadrant on average, compared to 2.8-3.0 implants for the corresponding alternative TOs in the corresponding CCs. TOs as part of CC I were comparably distributed between TO A (2.8 implants per quadrant) and B (2.1 implants) (p = 0.3924). Furthermore, mandibular treatments displayed a higher incidence of anterior implants (45% of the implants) compared to maxillary treatments (38% of the implants) (Figure 2D). Compared to maxillary procedures, mandibular procedures also involved a relatively low percentage of bone grafting (<34%).
Straumann bone level (Straumann Group, Basel, Switzerland) and Zimmer Biomet external hex (Zimmer Biomet, Warsaw, IN, USA) represented the most frequent implant types, with the corresponding brands accounting for up to 96% of the placed implants (Table 2). Compared to fixed prostheses (TOs A and B), removable options (TOs C) were only delivered in a total of four mandibular treatments in ACs II and III.
Implant Loss per Category of the CDSS
The distribution of failed implants in the patients displaying single and multiple (clustered) implant losses is illustrated in Figure 2G,H and reported in Table 4, respectively. A center-weighted distribution of absolute implant loss as a function of CCs that tailed towards higher CCs was identified in the maxilla. In contrast, mandibular absolute implant losses increased from CC I to CC IV. Differences in absolute numbers of lost implants between CCs in both arches were statistically significant (p < 0.0001).
Relative implant loss, i.e., the percentage of lost implants compared to the total number of placed implants, was statistically higher in the maxilla (2.3%) compared to the mandible (1.3%) (p = 0.0106). This parameter also displayed an apparent trend for higher relative failure rates in the higher-ranked CCs (CC IV) in both jaws. Differences between CCs were only statistically significant in the maxilla (p = 0.0098) but not in the mandible (p = 0.1627) (Table 4).
Except for mandibular CC III (p = 0.0044), no significant differences between individual TOs could be identified.
Figures 3 and 4 and Table 5 compare the Kaplan-Meier plots and the corresponding 2- and 5-year cumulative implant survival rates (CSRs) as a function of CCs and TOs, respectively. The total cohort's overall 2- and 5-year CSRs were 98.2% and 97.9%, respectively. Maxillary CSRs were consistently lower (97.7% and 97.3%) compared to mandibular CSRs (99.8% and 98.6%) after 2 and 5 years, respectively (p = 0.030 and 0.0020).
The maxillary implants displayed statistically significant differences in 2- and 5-year CSRs between individual CCs (p = 0.0147 and 0.0111, respectively), while differences between the corresponding mandibular CCs failed to reach statistical significance (p = 0.1483 and 0.3014, respectively). Overall, CSRs in the maxilla tended to decrease towards CCs with decreasing bone quantity (CC I to V). They ranged from 96.3% (CC III) to 99.1% (CC I) after 2 years and between 95.8% (CC III) and 99.1% (CC I) after 5 years, respectively. Individual values in the mandible ranged from 97.7% (CC IV) to 100% (CC V) and were identical for both endpoints.
At the TO level, none of the TOs within the individual CCs displayed statistically significant differences. Borderline differences were only identified between TO A (100%) and B (96.7%) in mandibular CC I (p = 0.0691 for both endpoints).
As reported in Table 6, the hazard risk ratio (HRR) for implant loss of the mandibular implants was 0.59 (CI 0.393-0.884) relative to the maxillary implants (p = 0.0106). At the CC level, HRRs further differed statistically significantly between the maxillary CCs (p = 0.0441) but not between the mandibular CCs (p = 0.2765) (Table 6). Maxillary HRRs tended to increase towards CCs with decreasing bone quantity and were highest for CC III (HRR = 1). CC I (HRR = 0.27, p = 0.0397) and CC II (HRR = 0.438, p = 0.0118) displayed statistically significantly lower HRRs.
At the mandibular level, no clear trend for the HRR as a function of CC could be identified. CC II displayed the lowest HRR (0.834) and CC IV the highest (2.104), with CC IV being the only CC displaying borderline significant differences (p = 0.0719).
Factors Influencing Implant Loss
The factors influencing the risk of implant loss were analyzed using individual and mixed Cox regression models before and after eliminating covariate factors. As evidenced by the listing in Table 7, the factors CC (p < 0.0001), age (p = 0.0040), cigarettes per day (p = 0.0202), as well as the number of implants (p < 0.0001) and implant length (p = 0.0004) were identified as the main factors influencing the risk of implant loss. Specifically, the anatomic classifications III and IV (HRR = 1 and 0.806, respectively) displayed significantly higher risks for implant loss compared to CCs I (HRR = 0.187, p = 0.0013), II (HRR = 0.367, p = 0.367) and V (HRR = 0.384, p = 0.0063). Furthermore, the analysis revealed an increase in the HRR by a factor of 1.026 per year of age (p = 0.0047), 1.027 per each additional cigarette consumed (p = 0.0229), 2.105 per additionally placed implant per jaw (p < 0.0001) and 1.072 per mm increase in implant length (p = 0.0004).
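The covariate elimination step can likewise be sketched in code. The function below (our addition; it reuses the data layout of the previous sketch and the p < 0.20 threshold quoted above, and is a simplified stand-in for the authors' mixed-model procedure) repeatedly refits a patient-clustered Cox model and drops the least significant covariate until all remaining factors pass the threshold:

```python
# Backward elimination over a patient-clustered Cox model: drop the covariate
# with the largest p-value until every remaining covariate has p < 0.20.
import pandas as pd
from lifelines import CoxPHFitter

def backward_select(df: pd.DataFrame, covariates: list[str],
                    p_drop: float = 0.20) -> CoxPHFitter:
    cols = list(covariates)
    while True:
        cox = CoxPHFitter().fit(
            df[["time", "failed", "patient"] + cols],
            duration_col="time", event_col="failed", cluster_col="patient")
        pvals = cox.summary["p"]
        worst = pvals.idxmax()             # least significant covariate
        if pvals[worst] < p_drop or len(cols) == 1:
            return cox                     # all survivors meet the threshold
        cols.remove(worst)
```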
Discussion
This retrospective study analyzed implant survival of immediately loaded full-arch reconstructions provided as part of a recently published clinical decision support system (CDSS). This classification system should be interpreted as an aiding tool for implant-supported full-arch reconstruction and not as a clinical decision tree. As part of the analysis, implant survival was analyzed as a function of the degree of osseous atrophy, the provided treatment schemes, and potential confounding risk factors. In the context of the classification, it should be considered that the corresponding restorative schemes were primarily prosthetically driven and were always based on detailed presurgical digital prosthetic planning. It should further be considered that although the applied CDSS defined TOs for both fixed (TOs A and B) and removable (TO C) restorations, removable options were only provided in a negligible portion of the analyzed sample (six implants in total). Consequently, the results presented in this analysis primarily apply to fixed implant-supported complete dentures.
To our knowledge, few publications have so far considered larger sample sizes for the retrospective analysis of implant survival of full-arch restorations [26,27]. This study was carried out at a private center focusing on oral rehabilitation with a nationwide referral basis. The long inclusion period of this study allowed studying implant survival over a wide range of varying parameters, including implant components, surgical and prosthetic protocols, and temporal changes in patient characteristics. However, this aspect also increased the number of potential influencing factors, rendering the analyzed data sample more inhomogeneous and its analysis more difficult. Despite these advantages and potential limitations, the study setup supports the extrinsic validity of the sample as being representative of the majority of edentulous patients.
The decision for individual treatment schemes was based on a patient-centered and risk-based approach. The oral health impact profile assesses oral health-related quality of life by a hierarchy of functional and psychological parameters. Patients' satisfaction and expectations towards an immediate fixed implant prosthesis delivered on the same day as the surgery were the main concern in the rehabilitation of this sample of patients. In a study on patient-centered outcomes of immediate full-arch screw-retained rehabilitations, Dierens et al. reported a significant increase in patient satisfaction and self-reported outcomes such as comfort, function and aesthetics compared to the baseline prior to the treatment [7]. Structured diagnosis and risk assessment represent crucial elements for a patient-centered concept and a comprehensive patient-centered treatment plan. Such a plan requires a structured diagnostic process, and all the patients in this sample were treated according to this perspective. Based on this concept, it is understandable that individual alveolar anatomy profiles may require more than one treatment option.
This analysis revealed an overall 5-year CSR of 97.9%. Maxillary values (97.3%) were significantly lower than the corresponding mandibular values (98.6%). Considering that the values in this analysis were obtained over a range of different levels of atrophy, patient conditions and treatment options, they were in good agreement with the corresponding values of 97-97.9% and 98-98.9%, respectively, that were reported as part of recent systematic reviews for comparable follow-up periods [1,15,28,29].
Further, the identified higher CSRs of immediate IFCDs in the mandible compared to the maxilla are also in line with other reports and have been attributed to the reduced bone quality and bone quantity in the edentulous maxilla [28,30-32].
To our knowledge, this is the first study that systematically investigated implant survival of immediate IFCDs as a function of the level of alveolar atrophy in a broader patient population. From this analysis, it became apparent that differences between the relative implant loss, CSRs and risk ratios between the maxilla and the mandible were most pronounced under moderately and severely atrophied conditions. This finding raises the question of whether the lower maxillary overall CSR values were associated with treatment provision and risk factor distribution, especially in the higher-ranked CCs (CCs III-V), which calls for a more thorough analysis.
Specifically, the 2-and 5-year mandibular and maxillary CSRs revealed a consistent decrease with an increasing level of atrophy, which reached significant levels only in the maxilla. This observation is in line with the general notion that implant placement in the maxilla is regarded as more challenging compared to the mandible [33]. This aspect may, at the same time, also suggest a potential limitation of the applied classification. Although bone quantity was the main criterion for defining the different levels of the classification system, bone quality (density) was not evaluated. Considering the lower bone density in the maxilla, this factor may help explain the identified differences [30].
Interestingly, in the mandible (except for CC V), CSR values in severely atrophic conditions were distinctly lower when compared to non- or moderately atrophied states. Implant losses and clustered losses in the corresponding CC IV were further mainly and exclusively associated with TO A. Chrcanovic et al. recently attributed such clustering to the presence of specific local and systemic risk factors [31]. Our analysis suggests that clustering is consistently associated with lower CSRs and higher risks for implant loss of specific TCs. However, for the specific mandibular CC IV, no potential bias associated with a disparate distribution of risk factors between the subcohorts TO A and B could be identified, suggesting a potential causal relationship with the corresponding therapeutic schemes. Specifically, TO A was based on four interforaminal implants, while TO B involved two additional distal short implants, resulting in a potentially more favorable anteroposterior load distribution.
Interestingly, mandibular 5-year CSRs tended to be higher or equivalent for the alternative treatment options of a given CC with six implants when compared to four implants, irrespective of the level of mandibular atrophy. Despite this consistent pattern, differences between mandibular TOs failed to reach statistical significance, which is in line with a recently reported systematic analysis on the effect of implant numbers on long-term implant survival [15]. While the placement of four interforaminal implants is generally well-established, the placement of six implants comprising short molar implants has recently been proposed [8,15,34,35]. Such restorations have been reported to prevent posterior cantilevers and potentially reduce marginal bone loss [35-37]. On the other hand, sufficient evidence suggests that prosthetic cantilevers may result in successful treatment options for fully edentulous patients with high long-term survival rates [38]. Further in this context, it needs to be considered that possible additional risk factors like parafunctional habits (bruxism) or the nature of the antagonist were not evaluated as part of this analysis, which might be considered a limitation of the performed analysis.
In the maxilla, implant survival, relative implant loss and associated risk levels were significantly associated with the level of alveolar atrophy. Specifically, CSRs distinctly decreased when comparing non-atrophic (99%) to moderately and severely atrophic conditions (96-97%). Regarding a potential limiting bias by associated risk factors, a distinct increase in the percentage of active smokers towards the higher-ranked CCs was noted, which appeared to correlate with lower implant survival. This aspect was also reflected by the identification of the level of cigarette consumption as a risk factor for implant loss in the mixed Cox regression model. This observation is in line with several systematic reviews that reported a significantly higher risk of implant failure in smoking patients than in non-smokers, with reported risk ratios for implant failure ranging between 1.87 and 2.38 [39,40].
Severe maxillary bone atrophy has been associated with a loss of available bone volume and an increased level of sinus pneumatization, which impede implant placement [13]. In our analysis, this was reflected by a markedly increased rate of bone grafting, the use of configurations with six or more implants and an elevated frequency of short and zygomatic implants in the highly atrophic CCs IV and V. Treatments in the moderately atrophied CC III were, by contrast, mainly provided as part of an all-on-four type restoration (TO B). This specific TC also involved a relatively moderate rate of bone augmentation, an average of 4.6 implants per jaw and a markedly higher percentage of anterior implants. Interestingly, this specific TC also displayed the lowest CSR values of all the applied TCs, slightly lower than the 97.7% reported in the literature for this configuration [41]. For this comparison, however, it must be considered that this implant configuration was exclusively applied in moderately atrophied conditions (CC III) as part of the applied treatment schemes.
From a biomechanical perspective, maxillary full-arch configurations with distal implants may distribute applied mechanical loads to the molar alveolar crest region more evenly when compared to all-on-four type configurations. However, their realization is often impeded by decreased trabecular bone density in the posterior atrophied maxilla [13,42]. In this context, it is interesting to note that the relative implant loss in the posterior maxilla was higher (2.6%) compared to anterior positions (1.7%) but appeared to be disproportionately high (5.2%) in CC III B when compared to CC III A (2.3%). This observation may suggest that the elevated biomechanical load on posterior implants in the four-implant configuration CC III B might not be adequately compensated by the anatomic osseous conditions of the CC. The specific use of TO B in CC III might need to be balanced against the corresponding alternative TO A to further improve this CC's CSR.
Implant loss in the severely atrophic maxillary CCs IV and V also displayed marked clustering of implant loss associated with lower CSRs of TO B and A, respectively. TC IV B was based on two distal pterygoid implants and displayed CSR values comparable to the literature-reported value [10,43]. Interestingly, in contrast to previous reports for this configuration, a disparity between anterior and posterior relative implant loss of 1.1% vs. 3.7% was noted [43]. Treatment in the severely atrophic maxillary CC V was mainly provided as TO A with six straight implants after sinus augmentation compared to TO B with zygomatic implants. Both TOs can be considered complex and required horizontal bone augmentation; still relatively high 5-year CSRs of 96.5% and 98.4%, respectively, were observed.
Finally, the risk factor analysis using mixed Cox regression models confirmed a significant and pronounced effect of the anatomic classification on implant loss. Interestingly, the highest hazard ratios were reported for CC III, while other CCs with lower or higher levels of atrophy displayed lower hazard ratios. Furthermore, in line with previous reports, smoking habits had a pronounced effect on implant loss [39,44]. This effect was previously associated with the potential alteration of physiological processes related to osseointegration and potential behavioral differences in oral health and maintenance habits between smokers and non-smokers [45-47].
Increasing patient age was found to modestly increase implant loss. Chrcanovic recently reported a decreased risk for implant loss with progressing age and related this to a lower incidence of bruxism and decreased masticatory forces in elderly patients [26,31]. By contrast, Sendyk et al. did not identify any significant difference in implant loss between younger and older patient populations, which might explain the weak effect of age on implant loss identified in this study [48]. Lastly, an increased implant length was also identified as a potential risk for implant loss, which may require further investigation.
Clinical studies have indicated that bone grafting can improve CSRs and implant stability in the edentulous atrophied maxilla [14]. As indicated in the treatment schemes in Figure 1, the applied CDSS suggested bone augmentation in the more atrophied maxillary and mandibular CCs [23]. However, bone augmentation was not identified as a confounding factor affecting implant loss in the performed analysis. Hence, any direct influence of bone augmentation on the present results remains speculative. Different bone substitutes may be applied for bone augmentation of the atrophied arches, including autologous bone, allografts or xenografts [49]. Following a patient-centered approach and recent reviews indicating that CSRs may not be affected by the type of bone graft material used, regenerative procedures in the analyzed patient cohort were performed with demineralized bovine bone grafts (DBBM) [50].
Implant survival and success have been associated with biological and technical complications. Specifically, prosthetic screw loosening has been reported to result in bone resorption [51]. In this study, 100% of the implant-supported full-arch reconstructions were screw-retained with multiunit abutments, which was particularly relevant as part of the applied treatment regimens to allow for a comprehensive routine medical evaluation of the rehabilitation, specifically regarding the implant status and prosthodontic complications, once a year. Related to peri-implant soft tissue health, it is also important to mention that all the implants were immediately restored with multiunit abutments following the one abutment-one time concept. This concept supported undisturbed healing of the surrounding peri-implant mucosal soft tissue and thus minimized marginal bone loss [52,53].
Future research should consider evaluating this CDSS prospectively. Such research should also consider the patient's interests and expectations and evaluate patient-reported outcomes, such as health-related quality of life (HRQL) measures, before and after an immediate implant-supported fixed complete restoration.
Conclusions
Based on the analysis of implant survival rates as a function of the level of dimensional alveolar atrophy and applied treatment options presented here and under consideration of the limitations related to its retrospective nature and the lack of control over potential influencing factors, the following conclusions might be drawn:
• The level of alveolar atrophy did not influence implant survival of IFCDs in the mandible, only in the maxilla.
• Implant survival rates in the maxilla tended to decrease with increasing levels of alveolar atrophy and were lower under moderate-to-severe atrophic conditions.
• Individual treatment options did not statistically differ in terms of survival rate for a given level of alveolar atrophy.
In conclusion, adapting the restorative implant scheme to the level of alveolar atrophy and patient risk factors according to the applied clinical decision support system delivered clinically acceptable implant survival rates for all alveolar atrophy levels.
Informed Consent Statement: Informed consent was obtained from each patient that was considered for data analysis.
Data Availability Statement: Data cannot be shared due to data protection obligations. | 2021-11-06T15:15:14.292Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "33ef1fef53b047aeaae27c7505fa880ef846d25c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-0383/10/21/5167/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f9aad88671615ccc236f85290fe6d49dd8815dfe",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259095738 | pes2o/s2orc | v3-fos-license | A variational approach to the eigenvalue problem for complex Hessian operators
Let $1 \leq m \leq n$ be two integers and $\Omega \Subset \C^n$ a bounded $m$-hyperconvex domain in $\C^n$. Using a variational approach, we prove the existence of the first eigenvalue and an associated eigenfunction which is $m$-subharmonic with finite energy for general twisted complex Hessian operators of order $m$. Under some extra assumption on the twist measure we prove H\"older continuity of the corresponding eigenfunction. Moreover we give applications to the solvability of more general degenerate complex Hessian equations with the right hand side depending on the unknown function.
Introduction
Let Ω ⋐ C^n be a bounded domain (open connected set) in C^n, µ ≥ 0 a positive Borel measure on Ω with positive finite mass 0 < µ(Ω) < +∞, and m an integer such that 1 ≤ m ≤ n. We consider the eigenvalue problem of finding λ > 0 and a nonzero m-subharmonic function u on Ω, vanishing on the boundary, such that
\[ (dd^c u)^m \wedge \beta^{n-m} = \lambda\,(-u)^m\,\mu \quad \text{in } \Omega, \]
in the weak sense of measures (the notation is recalled below).
When m = 1 and µ is the Lebesgue measure, this is the classical eigenvalue problem for the Laplace operator. When m = n and µ is a smooth positive volume form on Ω, this is the eigenvalue problem for the complex Monge-Ampère operator, which we studied in our recent paper [BaZe23].
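For orientation, these two borderline cases can be written out explicitly. The display below is our addition (it is not part of the source text) and only uses the identities σ_1(u) = (1/4)Δu and σ_n(u) = det(∂²u/∂z_j∂z̄_k) recalled further down; λ' denotes λ rescaled by the positive dimensional constant coming from σ_1:

```latex
\begin{aligned}
m=1:&\quad \Delta u = \lambda'\,(-u)\ \text{in }\Omega,\qquad u\le 0,\ u|_{\partial\Omega}=0,\\
m=n:&\quad (dd^c u)^n = \lambda\,(-u)^n\,\mu\ \text{in }\Omega .
\end{aligned}
```

In the first case, writing v := −u ≥ 0, the equation becomes −Δv = λ' v, so v is a nonnegative first eigenfunction of −Δ with Dirichlet boundary condition, recovering the classical problem.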
Let us recall some definitions and notation in order to explain the general case 1 ≤ m ≤ n and to state our main results.
Recall the usual operators d = ∂ + ∂̄ and d^c := i(∂̄ − ∂), so that dd^c = 2i∂∂̄. Given a real function u ∈ C²(Ω), for each integer 1 ≤ k ≤ n we denote by σ_k(u) the continuous function defined at each point z ∈ Ω as the k-th elementary symmetric polynomial of the eigenvalues λ_1(z), ..., λ_n(z) of the complex Hessian matrix (∂²u/∂z_j∂z̄_k)(z) of u, i.e.
\[ \sigma_k(u)(z) = \sum_{1 \le j_1 < \cdots < j_k \le n} \lambda_{j_1}(z) \cdots \lambda_{j_k}(z). \]
A simple computation shows that, pointwise on Ω and for 1 ≤ k ≤ m,
\[ (dd^c u)^k \wedge \beta^{n-k} = \frac{k!\,(n-k)!}{n!}\,\sigma_k(u)\,\beta^n, \]
where β := dd^c|z|² is the usual Kähler form on C^n (up to a constant). We say that a real function u ∈ C²(Ω) is m-subharmonic on Ω if σ_k(u) ≥ 0 pointwise on Ω for every 1 ≤ k ≤ m. In particular such a function is subharmonic on Ω.
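The normalizing constant above can be checked directly (this verification is our addition and is consistent with the special cases σ_1(u) = (1/4)Δu and σ_n(u) = det(∂²u/∂z_j∂z̄_k) recalled below): at a fixed point, choose coordinates diagonalizing the complex Hessian, so that dd^c u = 2i Σ_j λ_j dz_j ∧ dz̄_j and β = 2i Σ_j dz_j ∧ dz̄_j; expanding the wedge powers gives

```latex
(dd^c u)^k \wedge \beta^{\,n-k}
   = (2i)^n\, k!\,(n-k)!\;\sigma_k(\lambda)\;
     dz_1\wedge d\bar z_1\wedge\cdots\wedge dz_n\wedge d\bar z_n,
\qquad
\beta^{\,n} = (2i)^n\, n!\;
     dz_1\wedge d\bar z_1\wedge\cdots\wedge dz_n\wedge d\bar z_n,
```

and dividing the two expressions yields the constant k!(n−k)!/n!.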
Observe that the function u is 1-subharmonic on Ω (m = 1) if and only if it is subharmonic on Ω and σ_1(u) = (1/4)∆u, where ∆ is the Laplace operator associated to the standard Kähler metric on C^n, while u is n-subharmonic on Ω (m = n) if and only if u is plurisubharmonic on Ω and σ_n(u) = det(∂²u/∂z_j∂z̄_k). It was shown by Z. Błocki in [Bl05] that it is possible to extend the notion of m-subharmonic function to non-smooth functions using the concept of m-positive currents (see Section 2).
We denote by SH m (Ω) the positive cone of m-subharmonic functions on Ω. Moreover, identifying positive (n, n)-currents with positive Radon measures, it is possible to define the k-Hessian measure (dd c u) k ∧ β n−k when 1 ≤ k ≤ m for any (locally) bounded m-subharmonic function u on Ω (see section 2).
We will use a variational approach to solve this problem. We define two functionals on the convex positive cone E 1 m (Ω) of m-subharmonic functions with finite energy (see Section 2 and Section 3).
The first one is the energy functional E_m defined on E^1_m(Ω). This functional (up to the − sign) is a primitive of the Hessian operator on E^1_m(Ω), i.e. E_m'(φ) = −(dd^c φ)^m ∧ β^{n−m} on E^1_m(Ω) in the sense of Lemma 2.4, and we will see that it is convex (on affine lines) on E^1_m(Ω) (see Lemma 2.4). The second functional, I_{m,µ}, is attached to a positive Borel measure µ on Ω satisfying the integrability condition (1.5). This is again a convex functional on E^1_m(Ω), such that I_m'(φ) = −(−φ)^m µ for any φ ∈ E^1_m(Ω) in the sense of Lemma 2.4 below. This shows that the eigenvalue equation arises as the Euler–Lagrange equation of a combination of these two functionals. To state our main results, we make the following assumptions.
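Written out, with the normalization that makes the stated derivative formulas hold exactly (the 1/(m+1) factor is the standard choice in this setting), the two functionals are:

```latex
E_m(\varphi) \;=\; \frac{1}{m+1}\int_\Omega (-\varphi)\,(dd^c\varphi)^m \wedge \beta^{\,n-m},
\qquad
I_{m,\mu}(\varphi) \;=\; \frac{1}{m+1}\int_\Omega (-\varphi)^{m+1}\, d\mu .
```

A direct computation along affine paths φ + tψ, using integration by parts for the first functional, recovers E_m'(φ) = −(dd^cφ)^m ∧ β^{n−m} and I_m'(φ) = −(−φ)^m µ.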
Assumptions (H)
• Ω ⋐ C^n is m-hyperconvex, i.e. it admits a continuous negative m-subharmonic exhaustion (see Definition 2.2); • µ is a positive Borel measure such that 0 < µ(Ω) < ∞ which is strongly diffuse with respect to the m-Hessian capacity (see Definition 2.5). An important example is when µ := gβ^n, where 0 ≤ g ∈ L^p(Ω) with p > n/m and ∫_Ω g β^n > 0 (see Section 2 for more examples).
Here is our first main result.
When m = n and µ = gβ n with g ∈ L p (Ω) and p > 1, this result was proved in [BaZe23].
Our second main result is the following.
Let Ω ⋐ C n be a bounded domain and µ a positive Borel measure on Ω such that (Ω, µ) satisfies the assumptions (H).
In the real case similar results were obtained by K. Tso for the real Monge-Ampère operator on a bounded convex domain in R n and the Lebesgue measure on R n (see [T90]). For the real Hessian operators the existence of the first eigenvalue was proved by X.J. Wang (see [W94]). These authors used a parabolic approach.
Preliminaries
2.1. The m-subharmonic functions. Let Ω ⋐ C^n be a bounded domain in C^n. Recall the usual operators d = ∂ + ∂̄ and d^c := i(∂̄ − ∂), so that dd^c = 2i∂∂̄.
As recalled in the introduction, a real function u ∈ C 2 (Ω) is m-subharmonic on Ω if for any 1 ≤ k ≤ m, we have σ k (u) ≥ 0 pointwise on Ω (see formula (1.3)). In particular such a function is subharmonic on Ω.
Let us recall the general definition of m-subharmonic functions following Z. Błocki in [Bl05].
A smooth (1, 1)-form ω on Ω is said to be m-positive on Ω if for any z ∈ Ω, ω(z) ∈ Θ_m (here Θ_m denotes the cone of constant (1,1)-forms α satisfying α^k ∧ β^{n−k} ≥ 0 for k = 1, …, m). Definition 2.1. A function u : Ω → R ∪ {−∞} is said to be m-subharmonic on Ω if it is subharmonic on Ω (not identically −∞ in Ω) and the current dd^c u is m-positive on Ω, i.e. for any collection of smooth m-positive (1, 1)-forms ω_1, · · · , ω_{m−1} on Ω, the inequality dd^c u ∧ ω_1 ∧ · · · ∧ ω_{m−1} ∧ β^{n−m} ≥ 0 holds in the sense of currents on Ω.
We denote by SH m (Ω) the positive convex cone of m-subharmonic functions which are not identically −∞ in Ω. These are the m-Hessian potentials.
2.2. Complex Hessian operators. Following [Bl05], we can define the Hessian operators acting on (locally) bounded m-subharmonic functions as follows. Given u_1, · · · , u_k ∈ SH_m(Ω) ∩ L^∞(Ω) (1 ≤ k ≤ m), one can define inductively the positive (k, k)-current dd^c u_1 ∧ · · · ∧ dd^c u_k ∧ β^{n−k} := dd^c(u_1 dd^c u_2 ∧ · · · ∧ dd^c u_k ∧ β^{n−k}) on Ω. In particular, if u ∈ SH_m(Ω) ∩ L^∞_loc(Ω), the positive (m, m)-current (dd^c u)^m ∧ β^{n−m} can be identified with a positive Borel measure on Ω, called the m-Hessian measure of u. It is then possible to extend Bedford–Taylor theory to this context. In particular, Chern–Levine–Nirenberg inequalities hold, and the Hessian operators are continuous under local uniform convergence and under monotone convergence pointwise a.e. on Ω of sequences of functions in SH_m(Ω) ∩ L^∞_loc(Ω) (see [Bl05, Lu15]).
We will need the following definitions.
Definition 2.2. We say that the domain Ω ⋐ C^n is m-hyperconvex if there exists a bounded continuous m-subharmonic function ρ : Ω → ]−∞, 0[ which is exhaustive, i.e. for any c < 0, {z ∈ Ω ; ρ(z) < c} ⋐ Ω. Following Cegrell [Ceg98], we define E^0_m(Ω) to be the positive convex cone of negative functions φ ∈ SH_m(Ω) ∩ L^∞(Ω) such that lim_{z→∂Ω} φ(z) = 0 and ∫_Ω (dd^c φ)^m ∧ β^{n−m} < +∞. Then we define E^1_m(Ω) as the set of m-subharmonic functions u in Ω such that there exists a decreasing sequence (u_j)_{j∈N} in the class E^0_m(Ω) satisfying u = lim_j u_j in Ω and sup_j ∫_Ω (−u_j)(dd^c u_j)^m ∧ β^{n−m} < +∞.
For φ ∈ E^1_m(Ω), we define its m-energy E_m(φ) by the formula recalled in the introduction. For each constant C > 0, we define the convex set E^1_m(Ω, C) := {φ ∈ E^1_m(Ω) : E_m(φ) ≤ C}. Another concept which will be useful is the m-subharmonic envelope, defined for any Borel function h locally upper bounded on Ω by the formula P_{m,Ω}(h) := (sup{u ∈ SH_m(Ω) : u ≤ h in Ω})^*, where * means the upper-semi-continuous regularization. This construction is classical in Convex Analysis as well as in Potential Theory. It was introduced earlier in Pluripotential Theory by J. Siciak (see [Si81] and the references therein) and considered in this context in [Lu15].
A particular case of this construction, studied in this context in [Lu15] (see also [SaAb13]), is the so-called relative extremal function, defined as follows. Let K ⊂ Ω be a compact set. The relative extremal m-subharmonic function of the condenser (K, Ω) is h_{K,Ω} := P_{m,Ω}(−1_K). It satisfies several useful properties (see [Lu15]); in particular, the capacity of the condenser (K, Ω) is given by the total m-Hessian mass of its usc regularization. The operator P_m plays a fundamental role in the variational approach. This was highlighted by [BBGZ13] for the complex Monge-Ampère equation and extended to the Hessian equations in [Lu15].
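For the reader's convenience, the standard definitions from [Lu15] read:

```latex
h_{K,\Omega} \;=\; \sup\{\, u \in SH_m(\Omega) : u \le 0 \ \text{in}\ \Omega,\ u \le -1 \ \text{on}\ K \,\}
\;=\; P_{m,\Omega}(-\mathbf{1}_K),
\qquad
\mathrm{cap}_m(K,\Omega) \;=\; \int_\Omega \bigl(dd^c h_{K,\Omega}^{*}\bigr)^{m} \wedge \beta^{\,n-m},
```

where h^{*} denotes the upper-semi-continuous regularization.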
Lemma 2.4. The following properties hold: (1) The functional E_m is Gâteaux differentiable, and −E_m is a primitive of the complex m-Hessian operator, i.e. for any smooth path t ↦ φ + tψ we have (d/dt) E_m(φ + tψ)|_{t=0} = −∫_Ω ψ (dd^c φ)^m ∧ β^{n−m}; (2) for each C > 0, the set E^1_m(Ω, C) is compact for the L^1_loc(Ω)-topology. 2.4. Stability theorems. To investigate the continuity properties of the functional I_{m,µ,Ω} we will need to consider a more special class of Borel measures introduced in [K98] and considered in [CZ23].
The following definition was introduced in [CZ23] following a classical terminology in Potential Theory (see [Po16]).
Definition 2.5. 1) A positive Borel measure on Ω is said to be diffuse with respect to the capacity c_m = c_{m,Ω} if there exists a continuous increasing function Γ : R_+ → R_+ such that Γ(0) = 0 and for any compact subset K ⊂ Ω we have µ(K) ≤ Γ(c_m(K, Ω)). In this case we will say that µ is Γ-diffuse with respect to c_m.
2) A positive Borel measure on Ω is said to be strongly Γ-diffuse with respect to the capacity c_m if it is Γ-diffuse with Γ(t) = tγ(t), where γ is a nondecreasing continuous function satisfying a strong Dini-type integrability condition. We define M_m(Ω, Γ) to be the set of all positive Borel measures µ on Ω with finite mass which are strongly Γ-diffuse.
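In formulas (the Dini integral below is written in its standard form; the precise "strong" variant is the one of [CZ23]):

```latex
\mu(K) \;\le\; \Gamma\bigl(c_m(K,\Omega)\bigr)\quad\text{for every compact } K\subset\Omega,
\qquad\text{and}\qquad
\int_0^1 \frac{\gamma(t)}{t}\,dt \;<\; +\infty .
```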
The stability theorem of [CZ23] applies to Hessian equations with such measures as right-hand sides, understood in the sense of currents on Ω. The relevant sup-norm is then bounded in terms of a uniform constant C > 0 and a continuous function h on R_+ with h(0) = 0, both depending only on µ and Γ.
We will also need the following result which is a consequence of a combination of Theorem 1 and Theorem 2.2 in [CZ23].
Nonlinear Sobolev–Poincaré type inequalities
There are several versions of this type of inequality (see [AC20] and [WZ22]). Here we will give a different type of inequality, more suitable for our approach.
3.1. Integrability of finite energy potentials. To define the functional I_{m,µ,Ω} we need to consider measures that satisfy the integrability condition (1.5). We give sufficient conditions on a Borel measure µ to ensure integrability properties for functions in the class E^1_m(Ω). We will need the following result.
Then there exists a constant A > 0 such that for any φ ∈ E^1_m(Ω) we have ∫_Ω (−φ)^{m+1} dµ ≤ A E_m(φ), which is the inequality (3.1). In particular, E^1_m(Ω) ⊂ L^{m+1}(Ω, µ). Observe that for m = 1 the energy E_1 is comparable to the squared W^{1,2}-norm of the gradient; hence the inequality (3.1) in this case is the classical Poincaré inequality for functions in E^1(Ω) ⊂ W^{1,2}_0(Ω). Proof. To prove the inequality (3.1), we will need an estimate due to Z. Błocki for the complex Monge-Ampère operator, which can be proved using integration by parts m times (see [Bl93]).
To prove the estimate (3.1), it is enough to assume that φ ∈ E^0_m(Ω). Błocki's inequality then yields the required estimate with A := (m + 1)(m + 1)! ‖v‖^m_{L^∞(Ω)}.
We extend the previous result to a special class of diffuse measures.
Proposition 3.2. Let µ be a finite Γ-diffuse Borel measure, i.e. for any Borel set K ⊂ Ω, µ(K) ≤ Γ(c_m(K, Ω)). Assume that there exists r > 0 such that Γ satisfies the integrability condition (3.4). Then there exists a constant C = C(m, r, Γ, µ(Ω)) > 0 such that for any φ ∈ E^1_m(Ω) the L^r(Ω, µ)-norm of φ is controlled by its energy. In particular, E^1_m(Ω) ⊂ L^r(Ω, µ). Proof. Again the idea of the proof is classical (see [BJZ05]). Let φ ∈ E^1_m(Ω). By the Cavalieri–Fubini principle, ∫_Ω(−φ)^r dµ can be expressed through the sublevel sets {φ < −t}, and from the domination property we obtain an estimate in terms of the capacities c_m({φ < −t}). By [Lu15, Lemma 7.1], there exists a uniform constant B_m > 0 controlling these capacities in terms of E_m(φ)/s^{m+1} for any s > 0, and by (3.4) the resulting integral converges. Then φ̃ ∈ E^1_m(Ω) and E_m(φ̃) ≤ 1/B_m. Applying the previous inequality to φ̃, we get by homogeneity a uniform constant. This proves the required inequality.
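For reference, the Cavalieri–Fubini principle invoked at the beginning of the proof is the layer-cake formula:

```latex
\int_\Omega (-\varphi)^r \, d\mu \;=\; r\int_0^{+\infty} t^{\,r-1}\,\mu\bigl(\{\varphi < -t\}\bigr)\,dt ,
\qquad \varphi \in E^1_m(\Omega).
```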
Remark 3.4. It is easy to see that the previous sufficient condition for integrability is almost optimal. Indeed, assume that E^1_m(Ω) ⊂ L^r(Ω, µ). Then, arguing by contradiction as in [GZ07], it is easy to deduce that there exists a constant B > 0 such that the L^r(Ω, µ)-norm is bounded by the energy for any φ ∈ E^1_m(Ω). Let K ⊂ Ω be a compact set. Applying this inequality to the extremal function h_K = h_{K,Ω} = P_{m,Ω}(−1_K), and taking into account the properties of h_K, we deduce a necessary growth condition on Γ. The following corollary was proved in [AC20] by similar methods.
By the Hölder inequality, it follows that µ dominates the capacity with conjugate exponent q := p/(p − 1). This means that the domination condition (3.6) is satisfied with any τ < k(m, n, p) := n(p − 1)/(p(n − m)). The conclusion then follows from the previous Corollary.
In this case the functional I_{m,µ,Ω} is well defined. Now we investigate the continuity properties of I_{m,µ,Ω}.
2) Let (u j ) be a sequence in E 1 m (Ω, C) converging to u ∈ E 1 m (Ω, C) in the L 1 loc (Ω)-topology. Assume first that (u j ) is uniformly bounded in Ω. Then since the sequence (u j ) j∈N converges to u in L 1 loc (Ω; dV 2n ), taking a subsequence if necessary, we can assume that u j → u a.e. in Ω with respect to the Lebesgue measure in Ω. It follows from the Lebesgue convergence theorem that u j → u in L p (Ω, dV 2n ) for any p > 1. Hence u j → u in L m+1 (Ω, dV 2n ) and by Theorem 2.8, it follows that u j → u in L m+1 (Ω, µ). This implies the formula (3.9).
We now consider the general case. For fixed k, j ∈ N, we define u^{(k)} := sup{u, −k} and u_j^{(k)} := sup{u_j, −k}. We also define, for j, k ∈ N, h_j := (−u_j)^{m+1}, h_j^{(k)} := (−u_j^{(k)})^{m+1}, h := (−u)^{m+1}, and h^{(k)} := (−u^{(k)})^{m+1}. These are Borel functions in L^1(Ω, µ) and they satisfy the obvious inequalities of (3.10). For fixed k, the sequence (u_j^{(k)})_{j∈N} is a uniformly bounded sequence of m-subharmonic functions in Ω. Then, applying the first step, we see that for each k ∈ N the second term in (3.10) converges to 0 as j → +∞, while the third term converges to 0 by the monotone convergence theorem when k → +∞. It remains to show that the first term converges to 0 as k → +∞, uniformly in j. Indeed, for j, k ∈ N^* we have obvious estimates, and we claim that the sequence k ↦ ∫_{{h_j ≥ k^{m+1}}} h_j dµ converges to 0 uniformly in j as k → +∞. Indeed, for fixed j, k, we have (3.12). On the other hand, fix a Borel subset B ⊂ Ω. Since µ ∈ M_m(Ω, Γ), with Γ satisfying the strong Dini condition (2.7), it follows from [CZ23, Theorem 1] that there exists a function φ_B ∈ SH_m(Ω) ∩ C^0(Ω̄) such that φ_B = 0 on ∂Ω and (dd^c φ_B)^m ∧ β^{n−m} = 1_B µ in the sense of currents on Ω.
Therefore, as before, Błocki's inequality (3.2) yields a uniform bound for any j ∈ N. By Lemma 3.7 below, we have that ‖φ_B‖_{L^∞(Ω)} → 0 as µ(B) → 0. This implies that sup_{j∈N} ∫_B (−u_j)^{m+1} dµ → 0 as µ(B) → 0. We want to apply this result to the Borel sets B_{j,k} := {u_j ≤ −k}. To estimate the mass of µ on the sets B_{j,k}, we first observe, by using the inequalities (2.6), that for any j, k ∈ N we have µ({u_j ≤ −k}) ≤ Γ(c_m({u_j ≤ −k})). By (3.5), this mass decays uniformly in j as k → +∞ for any j, k ∈ N^*. This proves the claim and completes the proof of the Theorem. Now we prove the lemma used in the last proof.
Then for any Borel subset B ⊂ Ω, there exists a unique function φ_B ∈ SH_m(Ω) ∩ C^0(Ω̄) such that φ_B = 0 on ∂Ω and (dd^c φ_B)^m ∧ β^{n−m} = 1_B µ in the sense of currents on Ω.
Proof. The existence of φ B follows from [CZ23, Theorem 1]. It remains to prove the second part of the lemma.
We argue by contradiction. Assume that there exists a sequence of Borel sets (B_j)_{j∈N} with µ(B_j) → 0 for which ‖φ_{B_j}‖_{L^∞(Ω)} does not converge to 0. Without loss of generality we can assume that µ(B_j) ≤ 2^{−j} for any j ∈ N.
Assume first that (B_j)_j is a decreasing sequence and let B := ∩_j B_j. Then µ(B) = 0 and for any j ∈ N we have 1_{B_{j+1}} µ ≤ 1_{B_j} µ on Ω. This implies that the sequence (φ_{B_j}) is monotone and converges to some function φ. Since the Hessian operator is continuous w.r.t. monotone convergence and µ(B) = 0, it follows that (dd^c φ)^m ∧ β^{n−m} = 0. Since φ|_{∂Ω} = 0, it follows from the comparison principle that φ = 0 in Ω. Now we prove that ‖φ_{B_j}‖_{L^∞(Ω)} → 0 as j → +∞.
The general case is easily deduced from the first case by setting, for any j ∈ N, A_j := ∪_{k≥j} B_k. Then (A_j)_j is a decreasing sequence of Borel sets which decreases to a Borel set A. By sub-additivity we have µ(A_j) ≤ Σ_{k≥j} µ(B_k) ≤ 2^{1−j}. Applying the previous reasoning with A_j instead of B_j we obtain a contradiction.
The eigenvalue problem
4.1. The variational approach: Proof of Theorem 1.1. In this section we use a variational method to solve the eigenvalue problem stated in the introduction for a twisted complex Hessian operator.
By construction it follows that the sequence (w_j)_{j∈N} is bounded in energy. Moreover, from Lemma 3.1, it also follows that the sequence (w_j)_{j∈N} is bounded in L^{m+1}(Ω, µ). Extracting a subsequence if necessary, we can assume that (w_j) converges weakly to w ∈ SH_m(Ω) and a.e. in Ω, hence in L^1_loc(Ω). By lower semi-continuity of the energy functional, it follows that w ∈ E^1_m(Ω) and E_m(w) ≤ lim inf_j E_m(w_j). Since sup_j E_m(w_j) =: C < +∞, it follows from Lemma 3.1 that I_m(w_j) → I_m(w). Hence I_m(w) = 1 and w ∈ E^1_m(Ω) is an "extremal" function for the eigenvalue problem, i.e.
Since I_m(w) = 1, it follows that u_1 := w ≢ 0 in Ω. To prove that (λ_1, u_1) is a solution to the eigenvalue problem, consider the functional defined for φ ∈ E^1_m(Ω) by the formula Φ_m(φ) := E_m(φ) − λ_1^m I_m(φ), and observe that, by Lemma 2.4, Φ_m'(φ) = −(dd^c φ)^m ∧ β^{n−m} + λ_1^m (−φ)^m µ. This means that the eigenvalue equation is the Euler–Lagrange equation of the functional Φ_m on E^1_m(Ω). Therefore it is enough to minimize the functional Φ_m on E^1_m(Ω).
Let us prove the lemma used in the previous proof.
We define the corresponding functional Φ_{G,µ} on E^1_m(Ω) by adding to the energy E_m a term L_{H,µ} built from a primitive H of G. Formally, the Euler–Lagrange equation of the functional Φ_{G,µ} is precisely the Hessian equation (5.1), as we will see.
We will prove the following result using the variational approach.
Theorem 5.1. Assume that (Ω, µ, G) satisfies the assumptions (H0), (H1), and (H2) on G. Then the functional Φ_{G,µ} has the following properties: 1) the functional Φ_{G,µ} is well defined and coercive on E^1_m(Ω) and achieves its minimum at a function ϕ ∈ E^1_m(Ω, C); 2) the function ϕ is a critical point of the functional Φ_{G,µ}, hence it is a solution to the Hessian equation (5.1).
Here, saying that G has polynomial growth of degree m means that there exists a constant M_0 > 0 such that for µ-a.e. z ∈ Ω and any t < 0 we have |G(t, z)| ≤ M_0 (1 + |t|^m). We first prove the following lemma.
In particular, Φ_{G,µ} is bounded from below.
Proof. We already know by Lemma 2.4 that E_m is well defined and lower semi-continuous on E^1_m(Ω). It remains to study the functional L_{H,µ}, where H(t, z) is defined for any t < 0 and µ-a.e. z ∈ Ω as a primitive of G. Since E^1_m(Ω) ⊂ L^{m+1}(Ω, µ), it follows that the functional L_{H,µ} is well defined on E^1_m(Ω). Now we prove that L_{H,µ} is continuous on E^1_m(Ω, C) for the L^1_loc(Ω)-topology. Indeed, let (φ_j) be a sequence of E^1_m(Ω, C) which converges to φ ∈ E^1_m(Ω, C) in the L^1_loc(Ω)-topology. By Theorem 3.6, the sequence (φ_j) converges to φ in L^{m+1}(Ω, µ). We want to show that lim_{j→+∞} L_{H,µ}(φ_j) = L_{H,µ}(φ).
By the same reasoning as above we see that any subsequence of (φ j ) satisfies the same property which means that the sequence (L H,µ (φ j )) has a unique limit point equal to L H,µ (φ). This proves the required statement.
Now we can prove Theorem 5.1.
Question 1: Is the eigenfunction in Theorem 1.1 simple, i.e. unique up to a positive multiplicative constant?
Question 2: Is the solution provided by Corollary 5.3 unique?
Question 3: Is the solution provided by Corollary 5.4 unique?
When µ := gβ^n with 0 < g ∈ C^∞(Ω), it follows from [BaZe23] that the answer to the first and second questions is positive when m = n. We believe that this is still true in general. | 2023-06-08T01:15:52.363Z | 2023-06-07T00:00:00.000 | {
"year": 2023,
"sha1": "e1130ab6a572afe22a4843ae6e32569919e7db06",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "e1130ab6a572afe22a4843ae6e32569919e7db06",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
251564702 | pes2o/s2orc | v3-fos-license | Ultraviolet completion of pseudo-Nambu-Goldstone dark matter with a hidden U(1) gauge symmetry
We propose an ultraviolet completion model for pseudo-Nambu-Goldstone dark matter with a hidden $\mathrm{U}(1)$ gauge symmetry. Compared to previous studies, this setup is simpler, introducing fewer interactions. Dark matter scattering off nucleons is highly suppressed by the ultraviolet scale, and direct detection constraints can be easily evaded. The kinetic mixing between the hidden $\mathrm{U}(1)$ and the $\mathrm{U}(1)_\mathrm{Y}$ gauge fields would lead to dark matter decays. We find that the current bound on the dark matter lifetime implies that the ultraviolet scale should be higher than $10^{10}~\mathrm{GeV}$. The phenomenological constraints from the 125 GeV Higgs measurements, the dark matter relic density, and indirect detection of dark matter annihilation are also investigated.
I. INTRODUCTION
The cosmological abundance of dark matter (DM) can be naturally explained by neutral weakly interacting massive particles (WIMPs) which are thermally produced in the plasma and subsequently freeze out as the Universe expands [1][2][3]. The crucial ingredient in this argument is the weak interaction strength of WIMP annihilation into standard model (SM) particles at the freeze-out epoch, implying that WIMP dark matter is quite promising to probe in current direct detection experiments. Although the direct detection sensitivity has been tremendously improved in the recent two decades, no robust signal has been found, suggesting severe constraints on the WIMP-nucleon scattering cross section [4,5]. This situation makes the WIMP paradigm questionable.
Nonetheless, an annihilation cross section of weak strength does not necessarily result in a WIMP-nucleon scattering cross section of the same strength. It is possible to greatly suppress the scattering process in direct detection without affecting the annihilation processes at the freeze-out epoch. An elegant way to realize it is to assume that the WIMP is a pseudo-Nambu-Goldstone boson (pNGB) whose interactions are momentum-suppressed. Since direct detection experiments basically operate at zero momentum transfer, the WIMP-nucleon scattering cross section totally vanishes at tree level [6], evading the direct detection constraints.
The original pNGB DM model [6] introduces a complex scalar S, which is a SM gauge singlet. The Lagrangian respects a U(1) global symmetry S → e iα S, except for a quadratic term µ 2 S (S 2 + S †2 )/4, which softly breaks the U(1) symmetry into a Z 2 symmetry. After the U(1) spontaneous breaking, the imaginary part of S becomes a stable pNGB, which has a mass µ S and acts as the WIMP with a vanishing tree-level WIMP-nucleon scattering amplitude. Such a soft breaking term is ad hoc. Other soft breaking terms, such as a trilinear term ∝ S 3 + S †3 , would spoil the vanishing scattering amplitude. Therefore, it demands an appropriate ultraviolet (UV) completion to realize only this quadratic soft breaking term [6].
A possible UV completion is to gauge the U(1) symmetry with B − L charges [17,18]. Such a U(1) B−L gauge symmetry would be free from gauge anomalies if three right-handed neutrinos are introduced. Consequently, the WIMP-nucleon amplitude would not exactly vanish at tree level, but be suppressed by a UV scale, i.e., the breaking scale of the U(1) B−L gauge symmetry. In addition, the pNGB WIMP becomes unstable, and the constraint on its lifetime leads to a UV scale typically exceeding O(10 11 -10 13 ) GeV [17,18]. Such a high scale UV completion can be embedded into a grand unified theory [22,23].
In this work, we would like to decrease the UV scale, because a lower scale may be easier to probe in future indirect detection and collider experiments. For this purpose, we assume the pNGB WIMP arises from a hidden U(1)_X gauge symmetry, where all the SM fields do not carry U(1)_X charges. The gauge anomalies are canceled without introducing right-handed neutrinos, so fewer fields are involved in this setup. Since the U(1)_X gauge boson does not couple to SM fermions via any U(1)_X gauge interaction, the interactions inducing WIMP decays only come from the kinetic mixing between the U(1)_X and U(1)_Y gauge fields, relieving the lifetime constraint on the UV scale. This paper is organized as follows. In Section II, we construct a UV-complete model for the pNGB WIMP by extending the SM by a hidden U(1)_X gauge symmetry and two U(1)_X-charged scalar fields. In Section III, we discuss the phenomenology of this model regarding the WIMP-nucleon scattering in direct detection experiments, the WIMP lifetime, the related Higgs physics, and the WIMP annihilation relevant to the relic abundance and indirect detection experiments. Section IV presents results from a random scan in the parameter space. Section V gives a summary of the paper.
II. MODEL
We extend the SM with a U(1) X gauge symmetry accompanied with two complex scalar fields S and Φ, which are SM gauge singlets but carry U(1) X charges q S = 1 and q Φ = 2, respectively.
All the SM fields do not carry U(1)_X charges. We assume that S and Φ develop nonzero vacuum expectation values (VEVs) v_S and v_Φ with a hierarchy v_S ≪ v_Φ. Thus, v_Φ represents a UV scale that breaks the U(1)_X gauge symmetry into an approximate U(1)_X global symmetry. Beneath the lower scale v_S, the U(1)_X global symmetry is spontaneously broken, resulting in a pNGB WIMP.
A. Lagrangian
The SU(2)_L × U(1)_Y × U(1)_X gauge-invariant Lagrangian involves S, Φ, the SM Higgs doublet H, the U(1)_X gauge field X_µ, and the U(1)_Y gauge field B_µ, together with the covariant derivatives of the scalars and the field strengths of B_µ and X_µ. The B_µν X^µν term implies a kinetic mixing between B_µ and X_µ with a mixing parameter s_ε ≡ sin ε ∈ (−1, 1).
The parameter µ_SΦ can be made real and positive by redefining the phase of S, and thus we adopt µ_SΦ > 0 hereafter. The scalar fields can be decomposed into VEVs and fluctuations, where the SM Higgs field is expressed in the unitary gauge and v = 246.22 GeV. When minimizing the scalar potential, three stationary point conditions are obtained. The v_Φ contribution to the Φ†S² term leads to the quadratic term µ_S²(S² + S†²)/4 with µ_S² = 2µ_SΦ v_Φ. This is the quadratic term directly introduced in the original pNGB DM model [6] to softly break the U(1)_X global symmetry. In the limit v_Φ → ∞ and µ_SΦ → 0 with finite µ_S², the original model is recovered. For a finite v_Φ, there should be some phenomenological deviations from the original model, which will be explored below.
After the scalar fields obtain the nonzero VEVs, mass terms arise for the CP-even scalars (h, s, φ) and the CP-odd scalars (η_S, η_Φ), with mass-squared matrices as given in [17]. The two matrices can be diagonalized by two real orthogonal matrices U and V; V can be explicitly expressed through a rotation angle β. We have adopted the shorthand notations for the trigonometric functions, s_β ≡ sin β, c_β ≡ cos β, and t_β ≡ tan β. Such notations are used throughout the paper.
The relations between the interaction and mass bases follow accordingly. We define h_1 to be the SM-like Higgs boson with m_{h_1} = 125.10 ± 0.14 GeV [33], whose dominant component should be h, i.e., |U_11| > |U_21|, |U_31|. Similarly, the exotic Higgs bosons h_2 and h_3 with masses m_{h_2} and m_{h_3} are defined as s-like and φ-like, respectively. We further use positive U_11, U_22, and U_33 to fix the signs of the U matrix elements. χ̃ is the massless Nambu-Goldstone boson associated with the spontaneous breaking of the U(1)_X gauge symmetry, while χ is a pNGB WIMP with a mass squared set by µ_S² = 2µ_SΦ v_Φ, serving as a DM candidate. The typical range for m_χ would be O(GeV)-O(TeV).
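As an illustration of this diagonalization step, the orthogonal matrix U can be obtained numerically with a symmetric eigensolver; a minimal sketch in Python, where the matrix entries are placeholders rather than model predictions:

```python
import numpy as np

# Placeholder symmetric mass-squared matrix for the CP-even scalars
# (h, s, phi) in GeV^2; the actual entries depend on the couplings and VEVs.
M2_even = np.array([[1.6e4, 2.0e3, 1.0e2],
                    [2.0e3, 9.0e4, 5.0e3],
                    [1.0e2, 5.0e3, 1.0e8]])

# eigh returns eigenvalues in ascending order and an orthogonal matrix of
# eigenvectors (columns), so that U.T @ M2_even @ U = diag(m1^2, m2^2, m3^2).
masses2, U = np.linalg.eigh(M2_even)

# Fix the sign convention of the text: make the diagonal elements positive.
U = U * np.where(np.diag(U) < 0, -1.0, 1.0)

print("masses (GeV):", np.sqrt(masses2))
print("orthogonality check:", np.allclose(U.T @ U, np.eye(3)))
```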
If µ_SΦ = 0, the Lagrangian (1) respects two distinct U(1) global symmetries, one with S → e^{iα_1} S and the other one with Φ → e^{iα_2} Φ. Consequently, both η_S and η_Φ are massless Nambu-Goldstone bosons according to the Goldstone theorem [34,35]. Nonetheless, the existence of the µ_SΦ term merges the two U(1) symmetries into the U(1)_X global symmetry with q_S = 1 and q_Φ = 2. As a result, only χ̃ remains massless, while χ obtains a mass proportional to √µ_SΦ.
After the spontaneous breaking of the SU(2)_L × U(1)_Y × U(1)_X gauge symmetry, the gauge fields obtain mass terms with m_W² = g²v²/4. Considering both such a B_µ-W³_µ mass mixing and the B_µ-X_µ kinetic mixing, the physical neutral gauge fields (A_µ, Z_µ, Z′_µ) can be obtained through a linear transformation [36,37]. Here we denote ŝ_W ≡ sin θ̂_W and ĉ_W ≡ cos θ̂_W, where θ̂_W ≡ tan^{−1}(g′/g) is the weak mixing angle. ε is the angle related to the kinetic mixing, and ξ is a rotation angle determined by the equation given in [36].
The gauge fields (A_µ, Z_µ, Z′_µ) have canonical kinetic terms as well as diagonalized mass terms. The corresponding masses for the photon, Z, and Z′ bosons are m_A = 0 and the expressions given in [38], respectively. Define r ≡ m_Z²/m_{Z′}², and we can further derive the relations of [37]. If there is no kinetic mixing between B_µ and X_µ, we have ε = ξ = 0, and the Z and Z′ masses reduce to their unmixed values.
B. Interactions
In the basis of the mass eigenstates χ and h_i, the scalar trilinear couplings can be expressed in terms of the mixing matrices U and V, and the Yukawa couplings are rescaled accordingly, where f denotes the SM fermions.
The neutral current interactions arising from the SU(2)_L × U(1)_Y × U(1)_X gauge symmetry are given in terms of the fermionic currents, where T³_f is the third component of the weak isospin for a SM fermion f, and Y_{f,L} and Y_{f,R} are the weak hypercharges for left- and right-handed fermions.
In terms of (Â_µ, Ẑ_µ, X_µ), the neutral current interactions can be expressed in a familiar form, where Q_f is the electric charge of f in units of e = gg′/√(g² + g′²).
The relation between (Â_µ, Ẑ_µ, X_µ) and the mass basis then determines the neutral current interactions in the mass basis [37,39]. Note that the electromagnetic current interactions A^µ j_µ^EM remain in the SM form. For ε = 0, we have A_µ = Â_µ, Z_µ = Ẑ_µ, and Z′_µ = X_µ, and the Z couplings to the fermions are the same as in the SM, while the Z′ boson only couples to j^µ_X. The existence of the kinetic mixing makes Z couple to j^µ_X and Z′ couple to the SM fermions. The resulting couplings break the Z_2 symmetry χ → −χ, inducing decay processes of the pNGB WIMP χ. In order to be a viable DM candidate, χ should have a sufficiently long lifetime.
The best measured electroweak quantities are the fine-structure constant α(m_Z) in the MS̄ scheme, the Fermi constant G_F, and the Z boson pole mass m_Z. From these quantities, it is conventional to define the "physical" weak mixing parameters s_W² and c_W² ≡ 1 − s_W² via the tree-level SM relation [36,40]. Then the hatted parameters ŝ_W² and ĉ_W² can be determined as in [38]. The values of g and g′ are settled by g = e/ŝ_W and g′ = e/ĉ_W with e = √(4πα).
There are 10 free parameters in the model. For a UV completion of the original pNGB DM model, we are particularly interested in the regime r ≪ 1, and hence |ξ| ≪ |ε|. Therefore, the value of ŝ_W would be very close to s_W for any value of s_ε.
The kinetic mixing contributes to the electroweak oblique parameters S, T, and U [41,42] at tree level. For a small ε, explicit expressions can be found in [43]. Because S, T, and U are all highly suppressed by r, we expect that the constraint on the oblique parameters from the global fit of electroweak precision measurements [44] would not constrain our interested parameter regions.
III. PHENOMENOLOGY
In this section, we discuss the phenomenological consequences of the model.
A. WIMP-nucleon Scattering
In the original pNGB DM model with a directly introduced soft breaking parameter µ 2 S , the WIMP-nucleon scattering amplitude exactly vanishes at tree level in the zero momentum transfer limit [6]. Our UV completion gives µ 2 S a dynamical origin, but inevitably introduces the χ-χ-φ coupling, leading to a nonvanishing χ-nucleon scattering amplitude. Nonetheless, we expect that the amplitude is significantly suppressed by a high UV scale v Φ , since the v Φ → ∞ limit recovers the original model.
The spin-independent (SI) χ-nucleon scattering is induced by the χ-quark scattering via t-channel exchanges of the CP-even Higgs bosons h_1, h_2, and h_3. In the zero momentum transfer limit, the tree-level χ-quark scattering amplitude takes the form displayed above, where u(k_1) and ū(k_2) denote the plane-wave spinor coefficients for the incoming and outgoing quarks q of 4-momenta k_1 and k_2. It is equivalent [6,14] to express the amplitude in the interaction basis (h, s, φ), whose couplings to χ are G = (g_{hχ²}, g_{sχ²}, g_{φχ²}). Expanding the scattering amplitude in powers of 1/v_Φ, the amplitude is suppressed by v_Φ^{−2}, as expected. Based on effective field theory [45], we derive the resulting SI χ-nucleon scattering cross section, which is suppressed by v_Φ^{−4}. Here, f^N_{u,d,s} are the nucleon form factors for light quarks [46]. In Fig. 1(a), we plot the χ-nucleon scattering cross section σ^SI_χN as a function of m_χ for v_Φ = 10^5 and 10^7 GeV, with the other related parameters fixed to be v_S = 1 TeV, m_{h_2} = 300 GeV, m_{h_3} = 0.1 v_Φ, λ_HS = 0.03, and λ_HΦ = λ_SΦ = 0.01. For m_χ ≫ m_N, Eq. (56) shows that σ^SI_χN is proportional to m_χ². Therefore, as m_χ increases by one order of magnitude, σ^SI_χN in Fig. 1(a) increases by two orders of magnitude. v_Φ = 10^7 GeV leads to cross sections smaller than those for v_Φ = 10^5 GeV by eight orders of magnitude, because σ^SI_χN ∝ v_Φ^{−4}. Note that v_Φ = 10^5 GeV results in σ^SI_χN much smaller than the 90% confidence level (C.L.) upper limits from the recent LZ direct detection experiment [5], and even beyond the reach of the future DARWIN experiment with a 200 t·yr exposure [47]. Fig. 1(b) displays σ^SI_χN as a function of v_Φ for m_χ = 100 GeV and 1 TeV, demonstrating an obvious σ^SI_χN ∝ v_Φ^{−4} behavior.
B. WIMP Lifetime
The Z-χ-h_i and Z′-χ-h_i couplings induce χ decays. Six years of Fermi-LAT γ-ray observations of dwarf galaxies imply a conservative constraint on the WIMP lifetime, τ_χ ≳ 10^27 s [48], corresponding to a bound on the total WIMP decay width, Γ_χ ≡ 1/τ_χ ≲ 6.6 × 10^{−52} GeV. In the v_Φ → ∞ limit, all the χ decay channels are forbidden, and χ becomes stable. Thus, the constraint on the χ lifetime is expected to give a lower bound on the UV scale v_Φ. Therefore, the total decay width of the pNGB WIMP χ should be carefully calculated. When m_χ > m_{h_i} + m_Z (i = 1, 2), the 2-body partial decay width of χ → h_i Z is given by the standard two-body formula, where the λ function is defined as λ(x, y, z) = x² + y² + z² − 2xy − 2xz − 2yz. When m_χ > m_{h_i} + 2m_f (i = 1, 2), we should consider the 3-body decays χ → h_i f f̄. If m_{h_i} + 2m_f < m_χ < m_{h_i} + m_Z, both the Feynman diagrams mediated by the off-shell Z and Z′ bosons contribute to χ → h_i f f̄. However, once the 2-body decay channels χ → h_i Z open, the χ → h_i f f̄ decay diagrams mediated by the Z boson should be discarded to avoid double counting. When m_Z + 2m_f < m_χ < m_{h_i} + m_Z (i = 1, 2), the 3-body decays χ → Z f f̄ mediated by h_i should be included. Since the fermion couplings to h_i are commonly suppressed by m_f/v, the dominant contributions to χ → Z f f̄ come from the heaviest SM fermions t, b, τ, and c. If 2m_W + m_Z < m_χ < m_{h_2} + m_Z, the 3-body decays χ → W^+W^−Z and χ → ZZZ mediated by off-shell h_i bosons may happen. Nonetheless, our calculation shows that their contributions are negligible compared to χ → Z f f̄ and χ → h_i f f̄. If all 2- and 3-body decay channels are kinematically forbidden, the 4-body decays χ → f f̄ f′ f̄′ should be taken into account.
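The λ function above is the standard Källén function. A minimal Python sketch of a generic two-body width built from it follows; the model-specific couplings enter through the spin-averaged squared matrix element, which is only a placeholder here:

```python
import math

def kallen(x, y, z):
    """Kallen lambda function, as defined in the text."""
    return x**2 + y**2 + z**2 - 2*x*y - 2*x*z - 2*y*z

def two_body_width(m_parent, m1, m2, msq_avg):
    """Generic two-body decay width Gamma(X -> 1 2):
    Gamma = |M|^2 * p / (8 pi M^2), with the daughter momentum
    p = sqrt(lambda(M^2, m1^2, m2^2)) / (2 M). msq_avg is a placeholder
    for the model-specific spin-averaged |M|^2 (here in GeV^2)."""
    if m_parent < m1 + m2:
        return 0.0  # channel kinematically closed
    lam = kallen(m_parent**2, m1**2, m2**2)
    p = math.sqrt(lam) / (2.0 * m_parent)
    return msq_avg * p / (8.0 * math.pi * m_parent**2)

# e.g. chi -> h1 Z with a hypothetical, tiny |M|^2 reflecting v_Phi suppression:
print(two_body_width(500.0, 125.1, 91.19, 1e-40))
```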
For calculating the 3-body partial decay widths, we derive analytic expressions and perform numerical integration. Since the Feynman diagrams and integrals for the 4-body decays are too complicated to be dealt with by hand, we utilize the Monte Carlo tool MadGraph5_aMC@NLO [49] to automatically evaluate the 4-body partial decay widths. In the latter approach, FeynRules [50] is used to implement the model.
C. Higgs Physics
In this model, the properties of the SM-like Higgs boson h_1 deviate from the SM prediction. The tree-level h_1 couplings to SM particles can be parametrized by the modifiers κ_W, κ_Z, and κ_f to the couplings with W, Z, and fermions. The SM corresponds to κ_W = κ_Z = κ_f = 1, while this model gives modifiers determined by the mixing matrix U. In addition, exotic Higgs decay channels may exist. If m_{h_1} > 2m_χ, the invisible decay channel h_1 → χχ opens, leading to an invisible decay width. If m_{h_1} > m_χ + m_Z, there is a semi-invisible decay channel h_1 → χZ with a corresponding partial decay width. If m_{h_1} > 2m_{h_2}, the decay h_1 → h_2 h_2 also contributes. We utilize the numerical tool Lilith 2 [51,52] to constrain the model parameter space with the LHC Higgs measurements based on the Lilith database of version 19.09, including ATLAS and CMS Run 2 data of integrated luminosity 36 fb^{−1}. The important results sensitive to this model come from the measurements of h_1 → γγ [53], h_1 → ZZ [54], and h_1 → W^+W^− [55], the search for invisible Higgs decays [56], and the combined measurements of the Higgs couplings from several channels [57]. For each parameter point, Lilith constructs an approximate likelihood function from the measurements of the Higgs signal strengths. The corresponding p-value is required to be larger than 0.05, ensuring that each viable parameter point is consistent with the experimental results at 95% C.L.
D. WIMP Annihilation
The relic abundance of the pNGB WIMP χ is essentially determined by the χχ annihilation cross section at the freeze-out epoch, ⟨σ_ann v⟩_FO. Potential χχ annihilation channels include f f̄, W^+W^−, ZZ, and h_i h_j (i, j = 1, 2). The number of related Feynman diagrams is enormous. We make use of the MadGraph5_aMC@NLO plugin MadDM [58] to automatically generate and calculate all tree-level annihilation diagrams, and to solve the Boltzmann equation for predicting the relic abundance Ω_χ h².
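For orientation, a minimal sketch of the standard single-species freeze-out computation that MadDM automates, in the usual Y = n/s, x = m/T variables; the g_* values and the cross section below are illustrative assumptions, not outputs of this model:

```python
import numpy as np
from scipy.integrate import solve_ivp

M_PL   = 1.22e19   # Planck mass in GeV
G_STAR = 100.0     # effective relativistic d.o.f. (illustrative)
G_CHI  = 1.0       # internal d.o.f. of chi

def relic_abundance(m_chi, sigma_v):
    """Solve dY/dx = -(lam/x^2)(Y^2 - Yeq^2) for an s-wave <sigma v>
    in GeV^-2 and return Omega h^2 (textbook treatment)."""
    s_m = (2 * np.pi**2 / 45) * G_STAR * m_chi**3    # entropy density at T = m
    H_m = 1.66 * np.sqrt(G_STAR) * m_chi**2 / M_PL   # Hubble rate at T = m
    lam = sigma_v * s_m / H_m                        # constant for s-wave
    yeq = lambda x: 0.145 * (G_CHI / G_STAR) * x**1.5 * np.exp(-x)
    rhs = lambda x, Y: [-(lam / x**2) * (Y[0]**2 - yeq(x)**2)]
    sol = solve_ivp(rhs, (1.0, 1000.0), [yeq(1.0)],
                    method="Radau", rtol=1e-8, atol=1e-18)
    return 2.755e8 * m_chi * sol.y[0, -1]   # Omega h^2 = 2.755e8 (m/GeV) Y_inf

# 3e-26 cm^3/s corresponds to about 2.57e-9 GeV^-2; m_chi = 100 GeV:
print(relic_abundance(100.0, 2.57e-9))
```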
χχ annihilation would still occur at the present day, inducing potential γ-ray signals in indirect detection experiments. A combined search for such γ-ray signals in the dwarf galaxies from the Fermi-LAT space experiment and the MAGIC Cherenkov telescopes [59] has given important constraints on the DM annihilation cross section. We further utilize MadGraph5_aMC@NLO [49] to evaluate the χχ annihilation cross section at a typical average WIMP velocity of 2 × 10^{−5} (in units of the speed of light) for dwarf galaxies, ⟨σ_ann v⟩_D. Thus, the Fermi-MAGIC result can be used to constrain the model.
The induced couplings λ H , λ S , λ Φ , and g X are further required to range from 10 −3 to 1. We select the parameter points that satisfy the phenomenological requirements below.
• The signal strengths of the 125 GeV Higgs boson h 1 are consistent with the Higgs measurements after LHC Run 2 at 95% C.L. according to the Lilith calculation.
We project the selected parameter points onto the v_S-m_{h_2}, v_Φ-m_{h_3}, v_Φ-m_{Z′}, and m_{Z′}-|ξ| planes in Figs. 4(a), 4(b), 4(c), and 4(d), with color axes corresponding to λ_S, λ_Φ, g_X, and |s_ε|, respectively. Since we focus on the parameter region with v_Φ ≫ v, v_S, m_χ, the mass-squared m_{h_3}² essentially scales as λ_Φ v_Φ², which is clearly shown in Fig. 4(b). More precisely, this plot demonstrates that m_{h_3} is proportional to v_Φ and positively correlated to λ_Φ. For |λ_HS| ≪ 1, Eq. (6) implies that m_{h_2}² is controlled by λ_S and v_S². Nonetheless, such positive correlations of m_{h_2} to v_S and λ_S do not totally manifest in Fig. 4(a). The exceptions should be due to large |λ_HS|.
According to Eqs. (18) and (20), Fig. 4(c) illustrates the positive correlations of m Z to v Φ and g X , while Fig. 4(d) displays the negative (positive) correlation of |ξ| to m Z (|s ε |). From Figs. 4(b) and 4(c), we find that the lower limit of the UV scale v Φ is down to ∼ 10 10 GeV, given by the Fermi-LAT constraint on the WIMP lifetime.
In Fig. 5(a), we demonstrate the selected parameter points in the Γ_{h_1}-(1 − κ_Z) plane, with colors denoting 1 − U_11. For v_Φ, m_{Z′} ≫ v, v_S, Eq. (60) becomes κ_Z ≃ U_11 = κ_W = κ_f, and thus the color axis is basically identical to the vertical axis in Fig. 5(a). There is an obvious curve constituted by parameter points in this plot, indicating the positive correlation between the h_1 total decay width Γ_{h_1} and κ_Z (or U_11, equivalently). The exceptional parameter points have larger Γ_{h_1}, which is contributed by the exotic Higgs decays h_1 → χχ, h_1 → χZ, and h_1 → h_2 h_2.
We can see that the constraints from current LHC Higgs measurements give 1 − κ_Z ≲ 0.1 (or U_11 ≳ 0.9) and 3.3 MeV ≲ Γ_{h_1} ≲ 5 MeV.
The expected precision of the CEPC experiment on κ_Z and Γ_{h_1} would allow a large fraction of the selected parameter points to be properly tested.
Moreover, the CEPC project could also constrain the invisible Higgs decay branching ratio BR_inv down to 0.3% at 95% C.L. [61]. In this model, we have nonzero BR_inv = Γ(h_1 → χχ)/Γ_{h_1} for m_χ < m_{h_1}/2. Figure 5(b) displays the selected parameter points projected onto the m_χ-BR_inv plane. Current LHC data allow the parameter points with BR_inv ≲ 14%, while the CEPC experiment could probe most of the parameter points with m_χ < m_{h_1}/2.
In Fig. 6(a), the selected parameter points are presented in the Ω_χh²-⟨σ_ann v⟩_FO plane, with a color axis indicating the pNGB WIMP mass m_χ and colored regions corresponding to the 1σ, 2σ, and 3σ ranges of the relic abundance Ω_DM h² = 0.1200 ± 0.0012 measured by the Planck experiment [60]. The majority of the parameter points gather around the standard annihilation cross section ⟨σ_ann v⟩_FO ∼ 2 × 10^{−26} cm³/s. The rest of the points, with nonstandard freeze-out annihilation cross sections, should arise from resonance or threshold effects of specific annihilation channels that lead to velocity-dependent cross sections [64]. (Figure 6 caption: in the left panel, the colored regions show the 1σ, 2σ, and 3σ ranges of the Planck measured relic abundance Ω_DM h² = 0.1200 ± 0.0012 [60]; in the right panel, the green dashed line denotes the upper limits from the Fermi-MAGIC γ-ray observations of dwarf galaxies at 95% C.L. [59], while the blue dotted line corresponds to m_χ = m_{h_1}/2.)
The selected parameter points are further shown in the m_χ-⟨σ_ann v⟩_D plane in Fig. 6(b), where the color axis denotes the ratio of ⟨σ_ann v⟩_D to ⟨σ_ann v⟩_FO. ⟨σ_ann v⟩_D ≃ ⟨σ_ann v⟩_FO means that χχ annihilation is s-wave dominated, corresponding to the standard case. The velocity dependence induced by the resonance or threshold effects would make ⟨σ_ann v⟩_D differ from ⟨σ_ann v⟩_FO. It is obvious that the parameter points with ⟨σ_ann v⟩_D ≠ ⟨σ_ann v⟩_FO around the m_χ = m_{h_1}/2 line in Fig. 6(b) are caused by the h_1 resonance effect, while the h_2 resonance effect leads to ⟨σ_ann v⟩_D ≠ ⟨σ_ann v⟩_FO for some of the remaining parameter points. The green dashed line in Fig. 6(b) indicates the 95% C.L. upper limits on ⟨σ_ann v⟩_D from the Fermi-MAGIC observations of dwarf galaxies assuming a b b̄ annihilation channel. These limits can be approximately used to constrain the model. We find that only a small fraction of the selected parameter points have been excluded.
V. CONCLUSIONS AND DISCUSSIONS
In this paper, we have constructed a UV-complete model for pNGB dark matter with a hidden U(1)_X gauge symmetry. Two complex scalar fields S and Φ carrying U(1)_X charges of 1 unit and 2 units are introduced. The development of the Φ VEV v_Φ at a high scale breaks the U(1)_X gauge symmetry into an approximate U(1)_X global symmetry, which is softly broken by the µ_SΦ term, leading to the desired pNGB WIMP DM setup. As a result, the tree-level WIMP-nucleon scattering is suppressed by the UV scale v_Φ. We have found a scaling relation σ^SI_χN ∝ v_Φ^{−4}, and hence v_Φ ≳ 10^5 GeV is high enough to escape direct detection. Compared to the UV completion with the U(1)_{B−L} gauge symmetry [17,18], our model does not need to introduce right-handed neutrinos for anomaly cancellation. Moreover, since the SM fermions do not carry U(1)_X charges, the interactions leading to WIMP decays are reduced. Specifically, the interactions that induce WIMP decays only originate from the kinetic mixing between the U(1)_X and U(1)_Y gauge fields. This relatively relieves the WIMP lifetime constraint on the UV scale v_Φ.
A random scan in the parameter space has been carried out to obtain the parameter points satisfying the phenomenological constraints from the WIMP lifetime, the 125 GeV Higgs measurements, the observed DM relic abundance, and indirect detection of WIMP annihilation. We have found that the WIMP lifetime bound from the Fermi-LAT γ-ray observations sets a lower limit on the UV scale, v_Φ ≳ 10^10 GeV, which is indeed looser than v_Φ ≳ O(10^11-10^13) GeV in the U(1)_{B−L} case estimated in Refs. [17,18]. The parameter points satisfying current LHC Higgs measurements have U_11 ≳ 0.9, 3.3 MeV ≲ Γ_{h_1} ≲ 5 MeV, and BR_inv ≲ 14%. A large fraction of these parameter points could be properly tested by future Higgs factories.
Additional constraints on this model come from direct searches for the h 2 boson at the 13 TeV LHC from decay channels such as h 2 → ZZ [65,66], h 2 → W + W − [67,68], h 2 → tt [69], and h 2 → h 1 h 1 [70]. By reinterpreting these constraints, some parameter points remaining in our scan may have been excluded. Nonetheless, the h 2 couplings to the W and Z bosons and to the top quark are highly suppressed by the mixing parameter U 12 . Thus, we expect most of the parameter points are still available. | 2022-08-16T01:16:00.432Z | 2022-08-13T00:00:00.000 | {
"year": 2022,
"sha1": "832095a36dbeda8854d48fbbd88476df95e1c7ba",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "832095a36dbeda8854d48fbbd88476df95e1c7ba",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
67347210 | pes2o/s2orc | v3-fos-license | Smart Garage Implementation and Design Using Whatsapp Communication Media
Introduction
Research into the use of a Raspberry Pi and an Arduino on front doors has been done, especially for guests arriving whenever the owner is not home, where the guest can leave a text or voice message for the owner of the house. The system is completed by an LCD (liquid crystal display) to identify the guest, to display the voice message, and to show other important information regarding the house. With the LCD, the house owner can also leave messages to be displayed should he/she please. Once a voice message has been recorded, the Raspberry Pi camera can additionally be used for security and to provide information such as images of arriving guests to the house's owner [1].
The utilization of the IPv6 internet protocol for house automation has also been studied, treating the home as a single network layer and covering all relevant aspects of house automation. This rests on the basic insight that supporting internet infrastructure eases the development of applications and of user interactions, with a concept that benefits both the vendor and the user. Clearly, internet technology is a viable option for an automated home system [2].
In contrast, a smart home system based on a Raspberry Pi and an Android smartphone, equipped with wireless router access, has provided comfort and convenience in monitoring as well as controlling the house and its environment, and has also raised electrical energy efficiency in accordance with the needs of the user [3]. Meanwhile, according to [4], new standards in wireless communication with higher coverage areas provide low energy consumption.
Different power patterns can be created by wireless and mobile data communication. Maximal sparse linear arrays have been used to create a new approach [5] to the synthesis of shaped patterns. Here, Compressive Sensing (CS) theory is used, with optimization of the parameters that may affect CS performance. The initial constraints may be fulfilled by different power patterns. Several other applications of interest can use the same procedure; examples include the design of a factorable pattern in the shaped zone or a one-dimensional simple reconfigurable array. Radiation performance near the ultimate physical limitation is achieved in such a scenario. Moreover, based on [6], rigorous delay specifications make the performance targets of a communication scheme difficult to achieve. There, a fast deterministic procedure is presented to fulfill the requirements of a fixed power pattern and directivity by using the feeds' size as a degree of freedom of the problem. As a result, different field levels are produced on their apertures even though the feeds have the same power but different sizes: a smaller feed produces a larger field level on its aperture [7].
By taking advantage of the rapid development in electronics and telecommunication technology, and using a Raspberry Pi, an Arduino, wireless routers, and motor drivers, we design and implement a home automation system that uses an Android-based smartphone. The purpose of this research is to use an Android smartphone with WhatsApp to help drivers open and close garage doors without having to come into physical contact with them.
The Raspberry Pi, acting as a Single Board Computer (SBC) or local server, and the microcontroller are able to communicate, i.e. to exchange data with each other, through a wireless link. In the system designed and implemented in this paper, wireless and mobile data communication are used as the communication media. The SBC is placed inside the house and the microcontroller is placed near the garage door. When the SBC receives data from the user, the SBC communicates with the microcontroller, and the data is processed by the microcontroller.
Research Method
In an effort to implement a smart garage as part of the Internet of Things, the research methodology starts from the system design, which includes the Raspberry Pi, access point, Arduino, smartphone, and end devices. The second step is realization, which consists of source code and prototype design. The third step is the performance test, with attention to distance, protocol, word length, and connection type. The final step is results analysis with several performance parameters such as delay, CPU load, and response time. The research methodology is as shown in Figure 1.
System design
The initial stage of this research is the design phase, or system design. At this stage, several important components are used: a Raspberry Pi, an access point, and an Arduino. The Raspberry Pi is a small single-board computer that can run computer programs like an ordinary computer. On the Raspberry Pi, we can configure anything according to our needs, which eases users in doing configuration and in controlling it [8]. The Raspberry Pi serves as a server to receive commands from WhatsApp. The access point serves to transmit information obtained from the Raspberry Pi to the Arduino.
The Arduino Uno is a microcontroller board based on the ATmega328. This series has 14 digital input/output pins (of which 6 can be used as PWM outputs), 6 analog inputs, a 16 MHz crystal oscillator, a USB connection, a power jack, and a reset button. The board has all that is needed to support the microcontroller; it only needs to be connected to a computer through a USB cable, or powered from an AC-DC adaptor or a battery [9]. The Arduino serves to receive commands from the Raspberry Pi, which are later executed directly by the motor driver.
The L298N motor driver is a driver module that uses an ST L298N chip and can control two motors simultaneously [10]. The nRF24L01 wireless module is a wireless communication module that uses the 2.4 GHz band; it communicates over the SPI interface and runs on a voltage of 5 V [11]. In this design, the motor driver is used as the module to drive the DC motors, and the smartphone executes commands through the WhatsApp application.
Realization
In the realization stage, prototypes of the system design were made. In addition, the system was configured and the source code was created. On the Raspberry Pi, the authors wrote the source code for receiving WhatsApp messages, which are then sent to the Arduino as commands. Meanwhile, the Arduino source code receives commands from the Raspberry Pi, to then be executed by the motor driver, as described in the block diagram of the system in Figure 2.
The detailed process of Figure 2 starts with smartphones or user devices (laptops, gadgets) that can connect to the internet and can send messages, with the number of words sent determined by need — at least one word, such as open or close. In addition, the WhatsApp server is needed as the carrier of the messages made by the user. The local server, through the access point, can communicate with the other subsystem, the smart controller device, where the Raspberry Pi functions as an interface that runs programs for both configuration and control needs, like a computer. Every message that comes from the smartphone is processed by the Raspberry Pi and then forwarded to the Arduino Uno R3 via the wireless module, the nRF24L01. A 220 V AC power supply, through the power module, distributes power, in particular a 9 V DC voltage for the Arduino controller. The motor driver moves the wheels of the garage door in accordance with the Arduino controller's command, either to open or to close the garage door. The source code for sending WhatsApp messages from the Raspberry Pi serves as an interface with the WhatsApp server: it handles ACKs, initializes the layers, starts the program, and performs message delivery. The source code for receiving WhatsApp messages on the Raspberry Pi checks the messages received from WhatsApp and saves them into variables, including handling ACKs, and then decrypts the received messages. In the source code for sending data from the Raspberry Pi to the Arduino, there are commands that set the payload to be sent, the channel to be used, the CRC length, and the data rate to be used.
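A minimal sketch of the Raspberry Pi → Arduino transmit configuration just described, assuming the TMRh20 RF24 Python bindings; the pin numbers, channel, pipe address, and 32-byte payload size are illustrative choices, not the exact values used in this work:

```python
from RF24 import RF24, RF24_PA_LOW, RF24_250KBPS, RF24_CRC_16

# CE on GPIO22, CSN on SPI bus 0, device 0 (illustrative wiring).
radio = RF24(22, 0)
radio.begin()
radio.setChannel(76)               # 2.4 GHz channel shared by both ends
radio.setDataRate(RF24_250KBPS)    # data rate, as set in the source code
radio.setCRCLength(RF24_CRC_16)    # CRC length, as set in the source code
radio.setPALevel(RF24_PA_LOW)
radio.setPayloadSize(32)           # payload size sent per packet
radio.openWritingPipe(b"garag")    # 5-byte pipe address shared with the Arduino
radio.stopListening()              # put the module into TX mode

def send_command(text):
    """Forward a WhatsApp command ('open'/'close') to the Arduino."""
    payload = text.encode("ascii")[:32].ljust(32, b"\x00")
    return radio.write(payload)    # True if the hardware ACK was received

send_command("open")
```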
Performance test
At the performance test stage, several system testing schemes are performed. In the distance testing scheme, measurements are made at several distance samples to determine how much distance affects the system. In the connection type testing scheme, the use of WiFi is compared with the use of mobile data, and their effect on the delay is assessed. In the word length testing scheme, the number of characters in the words sent is measured for several samples, in order to measure how many resources are used by the system. In the protocol testing scheme, the MQTT protocol and the HTTP protocol are compared to determine which protocol is the most efficient for IoT use.
Results and Analysis
In the analysis stage, we analyze the effect of distance changes on the delay, the protocol used, the connection type, and the effect of character length on Raspberry Pi resource usage.
Delay measurement
Delay measurement is performed through different media, namely a wireless communication medium and a mobile data communication medium.
Using a wireless communication medium
During the first measurement, the authors used a wireless communication medium in the form of a wireless router on the same network as the one used by the yowsup service. A smartphone with WhatsApp installed is connected to the wireless router inside the home, under the assumption that the signal from the wireless router can reach the smartphone.
Figure 3. Delay measurement vs distance by using a wireless communication medium
Figure 3 shows the result of delay measurement with the Wireshark software and with a stopwatch against the change in distance in meters. This measurement had the purpose of seeing how fast the response from the Raspberry Pi, as the controller, is towards a smartphone user who sends messages to it. From the figure, it can be observed that up to 15 meters, the delay is relatively constant at around 6 to 7 seconds. The delay measured by stopwatch is larger than the result from Wireshark. The delay continues to increase at distances above 15 meters, approaching 8 seconds. Using a wireless communication medium through a wireless router, the signal received by the smartphone decreases if the distance is more than 15 meters, and this affects the performance of the system.
Using a mobile data communication medium
In this measurement, the authors used the network connection of the smartphone, such as GSM and LTE, and did not use the connection from the wireless router. There is a difference between the network used by the user and the one used by the yowsup service. Figure 4 is a graph of the results of the delay trials using a mobile data communication medium. Based on the acquired results, using the Wireshark software and the stopwatch as measuring tools, the delay does not always increase with distance. This means that with mobile data, the signal received by the smartphone affects how fast or slow the response of the controlled object is — in this case, the motor-driven wheels commanded by the Raspberry Pi controller. The minimum delay is 6 seconds while the maximum is 7.5 seconds. The measurement by stopwatch is larger than that by Wireshark. Therefore, using a mobile data communication medium, the delay is not linearly affected by the change in distance but is instead affected by the mobile data signal strength reaching the smartphone.
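Alongside Wireshark and a stopwatch, the delay can also be logged programmatically on the Raspberry Pi. A minimal sketch follows; the `receive_whatsapp_command` and `wait_for_arduino_ack` hooks are hypothetical placeholders for the message-handling and radio routines described earlier:

```python
import time
import statistics

def measure_delay(receive_whatsapp_command, wait_for_arduino_ack, n=10):
    """Time the interval between a received WhatsApp command and the
    Arduino acknowledgement, repeated n times."""
    samples = []
    for _ in range(n):
        receive_whatsapp_command()          # blocks until a command arrives
        t0 = time.monotonic()
        wait_for_arduino_ack()              # blocks until the radio ACK
        samples.append(time.monotonic() - t0)
    return statistics.mean(samples), min(samples), max(samples)
```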
CPU and RAM Load Measurement
The next measurement concerns the CPU and RAM load, from an idle state to processing data sent from the smartphone. The amount of data from the smartphone is raised in phases in order to acquire the CPU and RAM characteristics of the Raspberry Pi used, as well as to understand the load increase experienced by the CPU and the RAM. Figure 5 is a graph depicting the result of the CPU load measurement against the amount of data sent. In the idle condition, CPU usage is 19% while RAM usage is 32%. Meanwhile, when receiving data one word large with a length of 4 characters, CPU usage experienced a rise to 33% while RAM usage became 38%. When receiving data with 11 words and 50 characters, CPU usage rose by 1% to 32% and RAM usage stayed at 38%. While receiving data 82 words long with 500 characters, CPU usage rose by 1% to 33% and RAM usage was 38%. When receiving data 196 words long with 1431 characters, CPU usage rose by 6% to 39% while RAM usage stayed at 38%. A significant rise happened when receiving data 518 words long with 3858 characters: CPU usage rose by 7% to 46% while RAM usage stayed at 38%. This shows that when data is sent through WhatsApp from a smartphone, only the CPU load rises, while the RAM does not, in line with the function of RAM as a multitasking medium.
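The CPU and RAM figures above can be sampled with the cross-platform psutil library; a minimal sketch, where the sampling interval and duration are illustrative choices:

```python
import psutil

def sample_load(duration_s=30, interval_s=1.0):
    """Print CPU and RAM usage once per interval while a test message
    of a given word/character length is being processed."""
    for _ in range(int(duration_s / interval_s)):
        cpu = psutil.cpu_percent(interval=interval_s)  # % over the interval
        ram = psutil.virtual_memory().percent          # % of total RAM in use
        print(f"CPU: {cpu:5.1f}%   RAM: {ram:5.1f}%")

sample_load()
```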
Protocol comparison between HTTP and MQTT
Figure 6 is a graph comparing the response time against distance for the MQTT protocol and the HTTP protocol. The data were taken ten times, at distances of 1, 5, 10, 15, and 20 meters respectively. Based on the graph of the results of this implementation, the HTTP protocol has a smaller response time value compared to the MQTT protocol. This is because the microcontroller used communicates with only one device. Therefore, the use of the HTTP protocol is more efficient.
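A minimal way to reproduce the response-time comparison, assuming the `requests` and `paho-mqtt` (1.x API) libraries; the broker host, URL, and topic below are placeholders:

```python
import time
import requests
import paho.mqtt.client as mqtt

def http_response_time(url="http://192.168.1.10/garage?cmd=open"):
    t0 = time.monotonic()
    requests.get(url, timeout=5)                 # simple request/response
    return time.monotonic() - t0

def mqtt_response_time(host="192.168.1.10", topic="garage/cmd"):
    client = mqtt.Client()
    client.connect(host, 1883, 60)
    client.loop_start()
    t0 = time.monotonic()
    info = client.publish(topic, "open", qos=1)  # QoS 1: broker must ack
    info.wait_for_publish()                      # block until the PUBACK
    dt = time.monotonic() - t0
    client.loop_stop()
    client.disconnect()
    return dt

print("HTTP:", http_response_time(), "s   MQTT:", mqtt_response_time(), "s")
```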
Conclusion
This research designed and implemented a smart garage using WhatsApp as the communication medium, covering system design, realization, and performance measurement. The results show that when using the wireless communication medium, the largest delay was at 20 meters and the smallest delay was at 1 meter. When using mobile data as the communication medium, the largest delay was at 10 meters and the smallest delay was at 1 meter. The HTTP protocol has a smaller response time than the MQTT protocol; therefore, the use of the HTTP protocol is more efficient, and it is generally more familiar in implementations, especially on the internet.
The CPU measurement results show that the largest CPU load happened when sending data of 518 words with 3858 characters, and the smallest rise in CPU load happened when sending data of 11 words with 50 characters. During the RAM measurement, a rise in RAM usage happened only when switching from an idle condition to a running condition. After the program is running, the amount of data received does not affect RAM performance. This is in line with the purpose of RAM as a medium for multitasking.
Figure 4. Delay measurement vs distance by using mobile data communication medium
Figure 5. CPU and RAM load characteristic with change of data received
Figure 6. Response time vs distance between HTTP protocol and MQTT protocol
"year": 2018,
"sha1": "e5cb750011fecf64734b5f4c596df5f85c2c320e",
"oa_license": "CCBYSA",
"oa_url": "http://journal.uad.ac.id/index.php/TELKOMNIKA/article/download/8063/4835",
"oa_status": "HYBRID",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "e5cb750011fecf64734b5f4c596df5f85c2c320e",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Conceptual Framework of Antecedents to Trends on Permanent Magnet Synchronous Generators for Wind Energy Conversion Systems
Wind Energy Conversion Systems (WECS) play an inevitable role across the world. A WECS consists of many components and pieces of equipment, such as turbines, the hub assembly, the yaw mechanism, electrical machines, power-electronics-based power conditioning units, protection devices, the rotor, blades, main shaft, gearbox, mainframe, and transmission systems. These machinery and device technologies have developed gradually and steadily. The electrical machine used to convert mechanical rotational energy into electrical energy is the core of any WECS. Many electrical machines (generators) have been used in WECS; among them, Permanent Magnet Synchronous Generators (PMSGs) have gained special focus and, connected to wind farms, have become the most desirable due to their enhanced efficiency in converting wind turbine energy. This article provides a review of the literature and highlights the updates, progress, and revolutionary trends observed in WECS-based PMSGs. The study also compares geared and direct-driven conversion systems. Further, the classifications of electrical machines utilized in WECS are discussed. The literature review covers the analysis of design aspects by taking various topologies of PMSGs into consideration. In the final sections, the PMSGs are reviewed and compared for further investigations. This review article predominantly emphasizes the conceptual framework that sheds insight on the research challenges present in conducting the proposed works, such as analysis, suitability, design, and control of PMSGs for WECS.
Introduction
Energy is predominantly the driving factor of human life and of the economies of countries worldwide. Hence, research in this area is highly critical and deserves substantial time for in-depth study [1,2]. Due to the fast depletion of conventional natural resources, sustainable alternative energy sources, for instance tidal wave, solar, wind, biogas/biomass, and hydro energy, must be tapped together for developmental activities. Therefore, there is currently a tremendous increase in the search for sustainable and alternative energy sources to generate electricity. Wind energy seems to be a promising and potential alternative renewable energy source with its enhanced sustainability and eco-friendly nature. According to 'Global Energy Outlook and the Increasing Role of India', in the year 2040, the electricity generation capacity of India will be equivalent to what is produced by today's European Union [3].
Figure 1 shows a summary of electricity generation by selected region and the projected electricity generation by 2040. The Global Wind Report (GWP, 2018) mentioned that wind energy is one of the cheapest forms of electricity in a number of markets, as it is a cost-effective option for countries with ever-growing power demands and distribution challenges in centralized grid systems [3]. The Global Wind Energy Council (GWEC) suggested that the wind energy sector (both onshore and offshore) will supply 300 GW of wind power capacity to come online by 2024 for global consumption. Global wind energy capacity increased by 51.3 GW in 2018. Although this is about 4.0% less than in 2017, it is still a good achievement in wind energy capacity addition. Since 2014, around 50 GW of capacity has been added every year, though some markets behave differently. Wind energy contributed about 34,046 MW to electricity generation in India, which was 49.3% of the renewable energy mix at the end of 2018. By the year 2030, wind power capacity is expected to reach 2300 GW, fulfilling 22% of global electricity demand. The report published by Global Wind Energy Outlook 2018 [4] predicted the future of the wind energy industry until 2050. In 2018, 50,100 MW was added, which was less than 2017's capacity addition of 52,552 MW. 2018 was another consecutive year of increased new installations, accounting for 9.1% growth, which is lower than the previous year's 10.8% growth in 2017. The wind turbines installed by 2018 met about 6% of global electricity demand. Figure 2 shows the cumulative production based on wind sources for the year 2018, along with the newly added capacity for that year [4].

Figure 3 presents the overall baseline information of various settings, such as the new policies, moderate, and advanced scenarios. A global status report, published at the end of 2018, reported that the global installed wind capacity was approximately 590 GW, with Asia topping the regional market scale for the 9th consecutive year. Asia accounts for a whopping 48% of the added capacity (a total that exceeds 235 GW by the end of the year 2019), followed by Europe (over 30%), North America (14%), and Latin America and the Caribbean (almost 6%). In the case of new installations, China retained the top position, though there was a contraction for two years. This was followed by the US, Germany, the UK, and India in respective positions.

Globally, the energy demands were 282.5 GW and 318.105 GW in the years 2012 and 2013, respectively. This denotes a strong market growth of more than 19% and 12.5% in the years 2012 and 2013, respectively. However, this seems to be the lowest growth rate, i.e., 22% and 21% of global electricity, when compared with the annual average growth rate of the past decade. This is predicted to increase in the range of 8%-12% by the year 2020. The wind penetration level increased up to 10% in the year 2016, in alignment with the guidelines of international agreements on environmental commitments. By the years 2030 to 2035, the predicted saturation level is about 1.9 × 10^9 kW. The work by the International Renewable Energy Agency (IRENA) titled 'Global energy transformation: A roadmap to 2050 (2019 edition)' inferred that by the year 2050, electricity would be the central energy carrier, growing from its current 20% share to 50% of final consumption. This would double the gross electricity consumption. Renewable-based power will meet 86% of the power demand across the globe. Overall, two-thirds of final energy will come from renewable energy [5]. According to the literature [6], the current study focuses on subjects such as Wind Energy Conversion System (WECS) history, the transformation of Permanent Magnet Synchronous Generators (PMSGs), Finite Element Method (FEM) leveraging, Soft Computing (SC) applications, and the upgradation of Computer Aided Design (CAD), which looks to be a novel perspective as a first step. Generally, the wind turbine is moved by the wind pressure in a step-like manner, though its design differs. In wind energy production, low (cut-in) and abundant (cut-out) wind speeds are labelled as risk potentials. The risk potential of every turbine is decided on the basis of its size and design parameters. Generally, a wind turbine yields electricity in the wind speed range of 3 to 25 m/s, whereas high generation is observed once the wind speed crosses 10-15 m/s. Each turbine has cut-in as well as cut-out values that are contingent on size and design parameters [7]. Therefore, the wind turbine design plays an important role in energy production. Dai et al. (2019) stressed that, in recent years, wind turbine generators such as the Permanent Magnet Synchronous Generator (PMSG) and the Doubly Fed Induction Generator (DFIG) have been commonly incorporated, with the former predominantly utilized in wind energy conversion systems since it is cost-effective, highly reliable, and flexible in control [7]. This paper aims to address the technical issues and fitness of WECS components and their integration with the electrical grid. Furthermore, it will explore comprehensive comparisons of the PMSG with other generator topologies. In addition, this paper will shed insight on the gaps in research and areas for further enhancement, in the context of WECS.
A Brief Review of WECS
A 2004 article discussed wind engineering in general and wind power meteorology, with special reference to turbine and generator technology, and further discussed the economics involved [1]. In a study conducted in 2007, the researchers stressed that the conversion of wind electricity has become a green technology factor due to (1) structural design improvements, (2) design and manufacturing of blades, and (3) efficient power processing techniques based on power electronics, followed by new generator designs, to achieve variable-speed operation [8]. In 2013, [9] discussed a list of possible changes in the methodology for the implementation of utility-scale wind energy into the power grid, following up on updated research with the available alleviation techniques. Figure 4 shows the growth in size of wind turbines since 1980 and the predicted future prospects. Scaling up turbines to lower cost has been effective so far, but it is not clear that the trend can continue forever [10].
In 2012, [11] developed a 5 MW baseline design in the DeepWind concept, a Darrieus-type floating wind turbine system for water depths of more than 150 m. In that research article, the technologies used in previous works employing various generator types, and the manufacturers of large-power direct-drive wind turbines, were detailed. In Figure 4, the developments that have occurred in towers, blades, rotor diameters, power ratings, and wind turbine hub heights are illustrated. Amongst the available turbines, the 7.5 MW turbine was the most powerful, with a 126 m rotor diameter. The Global Wind Report published in 2012 cited the new Alstom Haliade 6 MW turbine as the world's largest turbine, with a 150.8 m rotor diameter [12]. In the future, next-generation wind turbines are predicted to reach 20,000 kW capacity with a 250 m rotor diameter.
In 2010, [13] investigated the power output density functions of different WECS for a variety of operating wind regimes with the help of a probabilistic approach. In 2007, [14] conducted a review of global wind energy scenarios, the performance and stability of wind turbines, wind turbine sizes, wake effects, evaluation of wind resources, site selection, wind turbine aerodynamics, and the challenges faced in wind turbines, followed by wind turbine technology, inclusive of control systems, design, loads, blade behavior, generators, transformers, and grid connection. In 2014, a review of the notable technical as well as environmental impacts of wind farms, wind power resource assessment techniques, control strategies, and grid integration techniques was conducted [15]. A comparative investigation was conducted using Maximum Power Point Tracking (MPPT) control [16] between the optimized configurations of passive wind turbine generators and active ones that operate at optimal wind power.
Wind Turbine, Types, and Generator Technologies
In the past decade, there has been tremendous growth in wind turbine technologies, which has resulted in the development of new-age wind turbine concepts. With developments in wind generator systems, cost-effectiveness has become the new mandate. In a wind power generator system, a tower supports the rotating as well as the stationary parts. The nacelle houses the generator, power converter, grid-side step-up transformer, and monitoring and control equipment in the stationary part. In 2014, [17] developed a summary of compact and lightweight wind turbines along with their technical hindrances, with special reference to Horizontal Axis Wind Turbines (HAWT). There are two broad categories of wind turbine technology at present: the HAWT and the Vertical Axis Wind Turbine (VAWT). The HAWT main rotor shaft rotates in alignment with the wind direction, whereas the VAWT rotor shaft is perpendicular to the ground, with the generator, transformer, converters, and other equipment placed near the ground.
In the HAWT, the nacelle is placed at the top of the tower. HAWTs show better aerodynamic performance than VAWTs, due to which the former are largely deployed in large-sized offshore wind farms [17]. According to [18], there are approximately 8000 different components present in a typical wind turbine. This information is based on a REpower MM92 turbine with a blade length of 45.3 m and a tower height of 100 m.
Figure 5 shows the major components of a wind turbine and each part's share of the overall wind energy system cost. A direct-drive radial flux permanent magnet generator was checked for its suitability [19] to act as a drive-train runner. FEM software was used to test the generator's fitness based on the structural design (in other terms, the stability of the air gap between the rotor and the stator) of the PMSG, so as to deduce the variations in flux density and force along the periphery of the rotor. In this study, the researchers used a simple analytical model; further, 2D magneto-static simulations were carried out with FEM software to check the validity of the analytical model [19].
According to the literature [20], induction and synchronous generator models are the general candidates used to convert wind energy to electrical energy. In 2009, [20] listed the Danish wind power status and the various topologies of other wind farm configurations. A classification was made by [21] to differentiate the wind turbine technology schemes. To be specific, the different categories are Full Rate Converter Wind Turbine (FRCWT), PMSG, Fixed Speed Wind Turbine-Squirrel Cage Induction Generator (FSWT-SCIG), Variable Speed Wind Turbine-Direct Drive Synchronous Generator (VSWT-DDSG), Squirrel Cage Induction Generator-Wind Turbine (SCIG-WT), Full Rate Converter Induction Generator (FRCIG), Direct Drive Synchronous Generator (DDSG), Variable Speed Wind Turbine-Doubly Fed Induction Generator (VSWT-DFIG), Squirrel Cage Induction Generator (SCIG), Fixed Speed Wind Turbine-Permanent Magnet Synchronous Generator (FSWT-PMSG), Fixed Speed Wind Turbine (FSWT), Doubly Fed Induction Generator (DFIG), and Variable Speed Wind Turbine-Full Rate Converter Induction Generator (VSWT-FRCIG) [21].
This segregation is done on the basis of power level, working principle, application type, and usage in a number of commercial applications. Research and development in this area is still ongoing, and various novel configurations and advanced applications are in the testing stage. A 2006 study compared different classification types and explained them in detail [22]. In general, based on the working principle, three main types of electric generators are considered: induction machines, synchronous machines, and parametric machines, the last of which are associated with magnetic anisotropy and permanent magnets. The study further mentioned that parametric generators can in most cases be called doubly salient electric generators [22], since they are mostly equipped with doubly salient magnetic circuit structures. When classified according to magnetic flux penetration, there are three types of permanent magnet generators: transversal-flux, axial-flux, and radial-flux machines [22].
Since they provide better efficiency, most high-power direct-driven wind power applications prefer low-speed, high-torque PMSGs [23]. These are applied in a wide range of applications due to cost-effective Permanent Magnets (PMs). According to the literature [23], permanent magnets can provide high power densities, higher efficiency, and the possibility of compactness, which eventually results in a reduction of turbine size. The advantages of permanent magnet generators are that they exclude the exciter field winding, slip rings, and brushes, in association with the capability to self-excite, so as to achieve good efficiency as well as a high power factor. In a standalone system, the PMSG has overloading and full-torque capability, a highly competitive feature that makes it unique compared to other traditional electrical machines. The PMSG is capable of self-excitation, another attractive feature which makes it the best option for operating at higher power factors and efficiencies. Further, PM machines possess the ability to deliver full torque and overload at zero speed, as well as at lower speeds [24]. Standalone power systems, specifically, are utilized in isolated areas, where this is inevitably effective compared with traditional electrical machines.
In 2009, [25] studied the prospective site matching of direct-drive wind turbine models on the basis of electromagnetic design optimization of PM generator systems. In this study, a three-phase radial-flux PM generator was developed with a back-to-back power converter. The study had a total of 45 PM generator systems which were designed, optimized, and grouped as combinations of five rated rotor speeds in the 10-30 rpm range and nine power ratings in the range of 100 kW to 10 MW. Following this, the study also determined the rotor diameter and the rated wind speed of a direct-drive wind turbine under the optimum PM generator, on the basis of the maximum wind energy capture design principle. This study also calculated the Annual Energy Output (AEO) with the help of the Weibull density function. Finally, at eight potential sites, the maximum AEO Per Cost (AEOPC) of the optimized wind generator systems was calculated, with yearly mean wind speeds ranging between 3 and 10 m/s [25].
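As a sketch of the quantity involved (the exact formulation of [25] is not reproduced here), an AEO estimate of this kind weights the turbine power curve by the Weibull wind speed density:

\[
f(v) = \frac{k}{c}\left(\frac{v}{c}\right)^{k-1} e^{-(v/c)^{k}}, \qquad
\mathrm{AEO} \approx 8760 \int_{v_{\mathrm{in}}}^{v_{\mathrm{out}}} P(v)\, f(v)\, \mathrm{d}v,
\]

where k and c are the Weibull shape and scale parameters, P(v) is the turbine power curve, v_in and v_out are the cut-in and cut-out speeds, and 8760 is the number of hours in a year.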
In 2008, [26] developed a concept of permanent magnet generator design. In this study, the researcher discussed geared as well as direct-driven PM generators, classified the direct-driven PM generators, and dealt with the various topologies, design aspects, and unique features of PM generators [26]. In 2012, [27] conducted a techno-economic evaluation of the basic assembly and magnetic topographies of the Salient Pole Synchronous Machine and the Permanent Magnet Synchronous Machine. The study also provided economic analyses of the machines that accompany wind turbines.
Various Aspects of Comparison for PMSGs
The design of electrical machines is important for any kind of application. The basic design of an electrical machine involves certain procedures and analytical strategies: calculation of the magnetic circuit, electrical circuit, efficiency, insulation type, slot/pole combinations, winding dimensions, cogging torque analysis, control strategies, usage of materials, product cost, thermal and structural design, manufacturing techniques, etc. Finite Element Analysis (FEA) software can provide support for design and optimization tools to determine the best performance parameters. In 2008, [28] elaborately described and then used deterministic global mathematical optimization, which has become a vital tool in the design process. Several mathematical models and optimization techniques can handle such multi-faceted design problems. Figure 6 describes the complex range of ideas and the significance of parameters for electrical machine design, analysis, and characteristics studies; it has been simplified with partial adoption from [28]. Studies conducted so far in this research area have established various viewpoints [28].
In 2012, [29] conducted a general as well as magnetic analysis of various parameters, such as size, topology, voltage, air-gap magnetic flux, weight, torque, losses, and efficiency, between Permanent Magnet Synchronous Machines (PMSMs) and Conventional Salient Pole Synchronous Machines (CSPSMs) with the help of FEM. In Figure 7, the weights of active material and the costs are compared and analyzed. Based on the comparison, it is observed that the total weight of active material in the PMSM is 6.55% less than in the conventional salient pole machine. In Figure 8, the losses at full load are presented [27].
With the same output power generated, the permanent magnets used in the machine reduce the machine weight, which eventually makes it lighter to produce and increases the efficiency. Once the investigation was complete, it was observed that the CSPSM showed less efficiency compared to the PMSM. Further, with the advancement of magnet and semiconductor expertise, PMSMs reaped a cost-based benefit. Therefore, when designing electrical machines, it is advised to follow this strategy in terms of machine efficiency and efficient use of energy [29].
Seven types of variable-speed constant-frequency (VSCF) wind generator systems, PMSG_DD, PMSG_1G, PMSG_3G, DFIG_3G, DFIG_1G, EESG_DD (electrically excited synchronous generator, direct-driven), and SCIG_3G (squirrel cage induction generator with three-stage gearbox), have been compared. In this comparative study, the researcher made optimization designs for different wind generator systems in the range of 0.75, 1.5, 3.0, 5.0, and 10 MW [30,31]. The results inferred that the PMSG_DD was cost-effective compared to EESG_DD systems due to the lower generator system cost and the enhanced Annual Energy Production (AEP) per cost. As the wind turbine size increases, the cost of the direct-drive wind generator decreases. Moreover, as the rated power increases, the PMSG_DD system exhibits enhanced performance compared to the EESG_DD system.
The following describes the single-stage gearbox drive-train concept. Due to the low-cost generator system and high AEP per cost, the focus shifted to the DFIG_1G system, which seems to be the best alternative. Further, when viewed from the AEP per cost perspective, the DFIG_1G system seems to be the most cost-effective close to 1.5 MW. Next is the three-stage gearbox drive-train concept. Due to the lowest generator system cost and high AEP per cost, the DFIG_3G system was considered the best solution among the three wind generator systems. Additionally, in terms of the AEP per cost aspect, more emphasis is given to the PMSG_3G system compared to the SCIG_3G system [31]. Figure 9 compares all five wind generator systems of the respective manufacturers across a wide range of aspects.
When compared in terms of cost, a multi-hybrid PM wind generator system loaded in a single-stage arrangement seems to be more cost-effective than the direct-drive concept. As the size of the wind turbine increases, the adopted gear ratios may also vary widely. Based on the rated power levels, the optimum gear ratio may vary from 4:1 to 10:1. In the case of larger power ratings, the literature [17] suggests that making use of higher gear ratios would give better performance. A 2014 study mentioned that PMSGs are predominantly employed by manufacturers such as GE Energy, Vestas, Siemens, Gamesa, and Goldwind. The stator of the PMSG is wound, while the rotor carries the PM pole system and may possess salient or cylindrical poles. Most of the time, low-speed synchronous machines are of the salient-pole type with numerous poles. One can develop a direct-drive system based on a synchronous generator with an ideal number of poles (a multi-pole PMSG). Some common types are the transversal flux machine, the axial flux machine, and the radial flux machine. The PMSG has higher efficiency than an induction machine, since its excitation is provided without an external energy supply. However, PMs are difficult to manufacture and costly to procure [17].
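As a worked illustration of why direct-drive machines need so many poles (a generic synchronous machine relation, not a figure taken from the cited sources), the electrical frequency f of a synchronous generator with p pole pairs turning at n rpm is

\[
f = \frac{p\, n}{60},
\]

so a direct-drive rotor at, say, n = 15 rpm needs p = 200 pole pairs (400 poles) to generate 50 Hz directly, whereas a geared machine running the generator faster can use far fewer poles.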
A long-unaddressed issue is the mandate to maintain the rotor temperature below the magnets' threshold temperature. This is further influenced by the magnetic material's Curie point and, in the case of powder metallurgy composites, by the binding material's thermal criterion. In turn, the synchronous operation raises issues concerning start-up, synchronization, and voltage regulation [32]. In 2011, Sandra Eriksson et al. performed an excellent comparison of direct-driven PMSGs; a total of six generators of different ratings were compared with each other [33].
Figure 10 clearly depicts the considerations of various factors with respect to the fixed and variable parameters for the different ranges of generators. In 2013, [34] compared three configurations, the gearless-drive permanent magnet induction generator (PMIG) WECS, the gearless-drive PMSG, and the geared-drive squirrel-cage induction generator (SCIG); in the index, every system was assigned a number (1, 2, or 3) to rank it against the other two systems. According to Table 1, the geared-SCIG system is prominent in 61.5% of the indexes, while 38.5% of the indexes are dominated by the gearless-PMSG system. There was a 60% similarity in advantages between the gearless-PMIG and the gearless-PMSG. Therefore, the geared-SCIG system leads in the number of indexes. However, the gearless-PMSG dominates in three top-priority indexes: generation efficiency, operation and maintenance (O&M) cost, and duration of failure. Further, the geared-SCIG dominates in four top-priority indexes: kWh production at low speed, frequency of failure, generator O&M cost, and capital cost. In order to achieve results with the best accuracy, the weight of each index should be considered in order. Among the different configurations considered, the results concluded that the gearless-drive PMSG-based and geared-drive SCIG-based systems are the most desirable solutions. From Table 1, it is identified that the gearless PMSG is the only machine with the best option in efficiency, as there are no gearbox or copper losses [34]. In 2010, [35] used the field-circuit method for rapid calculation of the load characteristics of stand-alone PM synchronous generators (PMSGs) developed with various rotor structures, and the study results were compared with load characteristic calculations and measurements. The field-circuit method was defined and utilized to determine the load characteristics of PMSGs with surface-mounted, inset, or interior-mounted permanent magnets and with inner or outer rotors [35]. In a comparative study conducted in 2013, two PM generator types, the radial flux PM generator (RFPMG) and the axial flux PM generator (AFPMG), were compared. To compare generator performance during mechanical energy storage, the study measured the output powers of both the RFPMG and the AFPMG [36]. The results shown in Figures 11 and 12 conclude that the RFPMG exhibited better performance when the machines' electrical parameters were very similar, in relatively small power applications. It was inferred that the RFPMG has lower copper, core, and rotor losses with respect to varying generator and wind speed when compared to the AFPMG [36].
Figure 12. Electromagnetic loss according to generator speed [33]. R: Radial Flux PM Generator, A: Axial Flux PM Generator.

Different Design Perspectives

Designing a PMSG involves several challenges that make it more complicated than the conventional machine design procedure. The slot and pole combination poses various further challenges, which include reducing the eddy current losses and the cogging on the permanent magnets. In 2013, [37] proposed a technique to improve the apparent air-gap power transferred under the constraint of tangential stress, using analytical optimization algorithms. The optimization process derived the expressions relevant to the design of the main variables, the external derivations, and the operational restrictions for the formulation of the mathematical derivations.
In general terms, the optimum PMSG design includes various mandatory requisites: improving profitability and mitigating material utilization to reduce cost and weight [38]. In addition, the design considerations should take into account availability, high reliability, and ease of serviceability and maintainability for the wind class TC Ia [38]. Furthermore, the utilization of gearless or semi-geared drive machines improves the efficiency and reliability of wind power generators; such requisites are associated with compactness in terms of weight and dimensions. In addition, during the design of a PMSG, the mechanical forces and voltage waveforms are quite imperative in several applications [38].
The design of machines is generally concerned with the electric and magnetic circuits; however, there are several losses which are measured using empirical equations [39]. In 2011, [39] explored the various design aspects concerned with radial- and axial-field synchronous machines with permanent magnets. In addition, three fractional-slot concentrated-winding permanent magnet synchronous machine topologies, suited especially for specific applications, were analyzed [39]. A study [40] explored the performance of wind power generators fitted with external permanent magnet rotors. The authors analyzed the FEM and electromagnetic results that examined the turbine characteristics and the variations of the nominal wind speeds; various systematic methods had been employed in previous research. For the calculation of the electrical characteristics, such as the synchronous inductance, the electromotive force (EMF) constant, and the phase resistance, an electromagnetic analytical and magnetic field distribution method was applied. In this study, a d-q model coordinate transformation theorem was employed for the analysis of performances, as sketched below. In addition, FEM and curve fitting were used for the analysis of core losses [40].
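For reference, one common (amplitude-invariant) form of the abc-to-dq0 (Park) transformation is shown below; this is the standard textbook convention and not necessarily the exact formulation used in [40]:

\[
\begin{bmatrix} x_d \\ x_q \\ x_0 \end{bmatrix}
= \frac{2}{3}
\begin{bmatrix}
\cos\theta & \cos\left(\theta - \frac{2\pi}{3}\right) & \cos\left(\theta + \frac{2\pi}{3}\right) \\
-\sin\theta & -\sin\left(\theta - \frac{2\pi}{3}\right) & -\sin\left(\theta + \frac{2\pi}{3}\right) \\
\frac{1}{2} & \frac{1}{2} & \frac{1}{2}
\end{bmatrix}
\begin{bmatrix} x_a \\ x_b \\ x_c \end{bmatrix},
\]

where θ is the rotor electrical angle and x stands for the phase voltages, currents, or flux linkages being transformed.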
Furthermore, a dissertation [41] presented a technique for the optimization and design of machines mounted with Surface Mount Permanent Magnets (SMPM), as impacted by mechanical loads, the energy source, thermal effects, and state-of-the-art developments in manufacturing and material capabilities. A method was proposed for the design and development of cage rotor induction machines that can be optimized for better performance; both a genetic algorithm (GA) and particle swarm optimization (PSO) were used for optimization of the machines, and different integrated methods were applied, with an electromagnetic-thermo-mechanical method used for the fabrication of SMPM machines [41]. An ironless brushless permanent magnet machine was proposed and designed in 2013 [42] for generator applications. The proposed approach constituted a dimensioning technique involving comprehensive geometric, electrical, and magnetic methods, followed by a detailed 3-D finite element (FE) analysis. In addition, circular and rectangular machine configurations were compared against each other, as were the performance of ironless stator design configurations and the effectiveness of the materials used [42]. Tangential magnetic flux and stator concentric windings were incorporated into wind power generators in 2009 [43], with rotation frequencies of 75-300 rpm, and the parameters associated with the developed generators were reported. The intention of that research was to analyze the working of synchronous generators fitted with permanent magnets, in line with the concept of mitigating the problem of magnetic field distribution, which was studied separately using FEM. During the development of such models, the parameters given in Tables 2 and 3 should be considered and varied to obtain the synchronous machines, and the corresponding parameters should be considered for mathematical simulation. In 2012, [44] examined and designed a PMSG using FEM simulation software, involving low-speed three-phase generators with external rotors. The aim of that paper was to obtain sinusoidal voltages induced in the stator windings, which depend on the magnetization and arrangement path of the permanent magnets within the rotor structure [44]. Again in 2012, [45] used a multi-physics approach for the design and development of a 10-MW doubly fed induction generator (DFIG); the optimal design and analyses were considered for direct-drive operation of wind turbines with a conversion system of reduced size. In 2005, [45] performed a study of PMSGs used in small wind power generation systems. The output voltage was examined using FEM under both no-load and load conditions, and the influence of magnet shapes and dimensions was examined. That novel study analyzed FEM outcomes which revealed that the PMSG's cogging torque frequency is influenced by the number of poles and stator slots, while the performance is influenced by factors such as magnet dimensions, air-gap length, and cogging torque magnitude [46]. Research conducted in 2008 [47] depicted the design, prototyping, and analysis of a relatively small and cheap axial-flux three-phase coreless permanent magnet generator; the FEM approach was used for the measurement of equivalent circuit inductances, while the end-winding inductance and the equivalent resistance of the eddy-current loss were calculated using traditional methods. In 2002, [48] proposed a method for performance improvement using soft magnetic composite interpoles in direct-drive permanent magnet machines; several factors, such as suitable pole arc shapes, the influence of magnet dimensions, material usage efficiency, and labor costs, were considered. In 2011, [49] examined the design considerations of double-rotor radial-flux permanent-magnet wind generators in terms of the mechanical and electromagnetic behavior of non-overlapping air-cored (ironless) stator windings; the developed model was examined using finite-element analysis, and the results revealed that the electromagnetic design determines the mass, cost, rotor yoke dimensions, and leakage flux paths. In 2012, [50] examined axial-flux PM generator performance using the wind turbine characteristics and the electromagnetic field; the analytical approach can mitigate the analysis time compared to three-dimensional FEM and can be used for performance calculation in the preliminary design phase. In 2010, [51] proposed and developed an optimally designed high-speed DC generation system using a slotless PM machine; the researcher used a soft magnetic composite (SMC) stator yoke and a controlled rectifier fitted to the stator winding [51]. In 1997, [52] further examined the multi-pole PMSG with a radial field. PMSG machines have been used as direct-coupled grid-connected generators with ratings between 100 kW and 1 MW; that research revealed that pole counts between 100 and 300 render better performance in terms of efficiency and reactance. The stator and rotor section design determines the suitable pole and power number; standard ferrite magnet wedges are used in the rotor sections, while the stator sections are made up of E-cores with a single rectangular coil in each core. The researcher also developed a lumped-parameter magnetic model that permits rapid calculation of the machine parameters [52]. In 2007, [53] examined a direct-coupled Axial Flux PMSG (AFPMSG) that is appropriate for a wind turbine system.
the researcher used horizontal-axis and vertical-axis wind turbine generator systems. FEM analysis was undertaken to analyze the AFPMSG magnetic flux density distribution. The results were compared with the proposed machine configuration, wherein the output line voltage was found to be of sinusoidal pattern, and the AFPMSG design feasibility was confirmed using a prototype generator [53]. In 2010, [54] further presented an axial-flux permanent-magnet generator for induction heating gensets, whereas ([55], 1997) and ([56], 1994) proposed a straightforward approach for the design of brushless permanent-magnet machines, supported by several analytical results. The main differences between sine-wave and square-wave motors are detailed in terms of EMF, self-inductance, flux density, and so on, and a stage-by-stage computer-aided design method is elaborated in detail. That research detailed information such as torque, shape, magnet poles and phases, slots, teeth, energy and co-energy, magnetic circuit concepts, yokes, basic relationships, magnetic materials, flux linkage and inductance, the influence of stator slots, tooth flux, back-EMF, the need for field-analysis-based design with FEM, cogging torque, series and parallel connections, and loss modeling [56]. However, assumptions such as an infinitely permeable machine core operating under unsaturated conditions and deep rectangular slots are not appropriate for the design of today's electrical machines with non-linear materials. The machine's performance should be predicted with great accuracy by solving non-linear equations expressed in terms of the magnetic vector potential, and the irregular machine geometry makes analytical methods challenging to configure. Hence, there is a need to use appropriate field computation and modeling techniques utilizing electromagnetic fields, such as energy minimization. These include differential/integral formulations, the variational method, discretization, shape functions, the stiffness matrix, 1D and 2D planar and axial symmetry problems, and the computation of electric and magnetic field intensities, capacitance and inductance, force, torque, and energy for basic configurations of electrical machines [57].
In Figure 13, various electromagnetic analysis methods are illustrated. Each method has a set of advantages as well as disadvantages; in this scenario, finite elements were found to be the most robust for conducting general electromagnetic analyses [57].
Consideration of Losses Calculation for PMSGs
One of the important design factors discussed in this study is the determination of losses in PMSGs. In 2010, [58] published a model with an elaborate loss computation method and updated analytical loss calculations. In this model, conventional losses, for instance stator core iron losses, ventilation losses, I²R losses, and other detailed losses such as stator end-region losses, were discussed. However, the cooling losses caused by bearing friction and the losses incurred via the excitation system were not considered, as these belong to separate equipment. The loss components are discussed here in detail [58], and a summary sketch of how they might be aggregated follows the lists below.
(a). Iron Losses
Excluding the stator and rotor windings, eddy current losses and some further losses appear in all metallic parts; these can be segregated as follows.
1. Iron losses in the stator core (teeth and yoke), including the impact of rotating fields and harmonics.
2. Eddy current losses on the pole shoe surface because of tooth-ripple pulsation and the stator winding armature reaction magneto-motive force.
3. Eddy current losses in the stator clamping plates.
4. Eddy current losses in the stator clamping fingers.
5. Eddy current losses in the stator core end laminations.
6. Eddy current losses in external metallic air guides.
(b). Winding Losses
Winding losses include various types of losses in the stator, rotor, and damper windings:
1. Stator winding copper I²R losses.
3. Eddy current losses in the stator winding due to the tangential slot leakage field.
4. Eddy current losses in the stator winding due to the radial slot leakage field.
5. Circulating current and eddy current losses in the winding overhang due to the end leakage field.
6. Damper winding losses due to tooth-ripple pulsation and the stator winding armature reaction magneto-motive force.
7. The remaining losses, calculated with basic equations on the basis of statistical measurements.
(c). Ventilation Losses
Ventilation losses are segmented further into the following parts:
1. Friction losses of rotating parts.
2. Air friction losses of the forced cooling airflow.
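To make the above decomposition concrete, the following Python sketch aggregates representative loss components for a single PMSG operating point. It is illustrative only: the Steinmetz-style iron-loss coefficients (k_h, k_e), the phase resistance, and the windage constant are hypothetical placeholders, not values from [58].

```python
# Minimal sketch of aggregating the main PMSG loss components listed above.
# All coefficients are illustrative assumptions, not data from [58].

def iron_loss(f_hz, b_peak_t, mass_kg, k_h=0.02, k_e=5e-5):
    """Steinmetz-style core loss: hysteresis term + classical eddy-current term."""
    return mass_kg * (k_h * f_hz * b_peak_t**2 + k_e * (f_hz * b_peak_t)**2)

def copper_loss(i_rms_a, r_phase_ohm, n_phases=3):
    """Stator winding I^2*R loss summed over phases."""
    return n_phases * i_rms_a**2 * r_phase_ohm

def ventilation_loss(omega_rad_s, k_friction=1e-4):
    """Friction/windage of rotating parts, roughly proportional to speed cubed."""
    return k_friction * omega_rad_s**3

if __name__ == "__main__":
    losses = {
        "iron": iron_loss(f_hz=50.0, b_peak_t=1.5, mass_kg=800.0),
        "copper": copper_loss(i_rms_a=400.0, r_phase_ohm=0.01),
        "ventilation": ventilation_loss(omega_rad_s=20.0),
    }
    total = sum(losses.values())
    for name, p in losses.items():
        print(f"{name:12s} {p / 1e3:8.2f} kW ({100 * p / total:5.1f} %)")
```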
In 2011, a study of high quality was conducted [59] on the electromagnetic losses incurred in direct-driven PMSGs. Using an electromagnetic model, the solutions were obtained from FEM, and the simulations were performed with a MATLAB-driven model. The results inferred that the iron and copper losses depend strongly on the rated voltage and rated current. For a fixed output power, a larger machine volume was obtained with an increase in rated voltage; further, higher frequency and increased iron loss were observed in parallel with decreased rated current and reduced copper losses. During the simulations, the generator losses were determined for various wind speeds, from which the loss distribution was calculated. Furthermore, the authors tested an analytical model to predict the eddy current losses in PMSG rotor magnets when feeding a rectifier load; the eddy current losses obtained during time stepping showed coherence between 2-D FEM and the coupled-circuit approach. In 1997, an experiment was conducted to design a 1 MW machine model prepared in alignment with the parasitic losses, namely stator back-iron reluctance, rotor and stator slotting, rotor reluctance, stator module weld loss, rotor eddy-current loss, stator beam loss, the polygon effect, and stator structure cage loss [60]. In 2014, [61] experimented on eddy current losses in the PMs of surface-mounted magnet synchronous machines. This study introduced a true analytical method based on the magneto-dynamic problem of a conductive ring, and the results were compared with information retrieved from 3-D FEM analysis. In the analytical model, the effect of magnet width on magnet loss was considered, while the axial effect was considered via a correction coefficient. The comparison included the impact of circumferential segmentation, instantaneous losses, the effect of frequency on magnet losses, and induced current density. By stressing the criticality of the skin effect and the magnetic reaction due to magnet currents, this analytical model yielded an accurate measurement of magnet eddy current losses [61].
Faults and Protection
When designing PMSGs, researchers must consider the chance of fault occurrence and the available protection schemes. In 2013, [62] listed the influence of asymmetrical magnet faults upon PMSG rotors. Mechanical looseness, eccentricity, and damage in any one magnet are the most commonly found attributes that result in rotor faults. Further, rotor eccentricity is caused by an unequal static, dynamic, or mixed air-gap distribution. In the presence of static eccentricity, the minimum air-gap is fixed in position with respect to the stator. On the contrary, in the case of dynamic eccentricity, the rotor's center does not coincide with the center of rotation, so the minimum air-gap position rotates in line with the rotor. Notable causes of eccentricity include looseness, incorrect assembly, load unbalance, misalignment, and sometimes bending of the rotor. The study analyzed both series- and parallel-connected windings, and defined a fault severity factor to quantify the demagnetization in a single magnet. From the investigations, one can conclude that, for a generator where all windings are series-connected, the induced EMF value decreases due to the demagnetization of a single magnet; likewise, if the load is resistive, the current may also decrease. Therefore, one may not be able to identify the frequency components associated with the fault, and can observe only the decreased total flux linked to the windings [62]. In 2012, Rodrigues et al. expressed their ideas on direct or indirect lightning strokes after thoroughly reviewing over-voltages and electromagnetic transients [63]. The transient behavior of the lightning protection of wind turbines can be explained accurately with a modified version of EMTP (Electro-Magnetic Transients Program). In this study, the researchers adopted a case-study model in which two interconnected wind turbines were used to study a direct lightning stroke to the blade or lightning strikes in the soil near a building; a holistic computer simulation was also conducted in addition to EMTP-RV [63]. An investigation in 2011 [64] evaluated the fault conditions and identified efficient fault ride-through and protection schemes in the electrical systems of both small-scale (land) and large-scale (offshore) wind farms. The researchers considered two variable-speed generation systems, PMSGs and DFIGs. After discussing the protection issues associated with DFIGs, the research proposed a new protection scheme as well. Following this proposal, the protection scheme options for fully rated converter and direct-driven PMSGs were analyzed and the simulation results were compared.
The development in magnetic materials and its impact on electric machine design was investigated in 2007 [65]. In addition, a few potential faults were selected using a fault-tolerant system design. Two fault types may occur in the system, of which the electromagnetic faults are as follows:
1. Winding short circuit at terminals;
4. Turn-to-turn fault in a phase.
The power converter faults listed include DC link capacitor failure.
One should focus on the development of a fault-tolerant system if operation needs to continue even in the presence of faults. In this design, every phase should have a stand-alone single-phase PWM inverter, giving a modular system in which the modules are isolated from every phase fault.
When a module has little thermal interaction or electrical/magnetic interaction with the others, the system is likely to continue operating with the faulty phase excluded [65]. In 2012, [66] introduced a rotor core design and FEA simulation to diminish the mechanical stress put upon the core bridge. After considering rotor speed variations, the researcher performed a mechanical transient analysis. Experimental results were presented for the S-N curve (deduced from material test data) of the rotor core material so as to assure the validity of the model against fatigue failure [66].
Damping and Oscillation
To handle damping and oscillation, the PMSG-based stability issues in WECS should be taken into consideration. In 2011, a torque compensation strategy was devised [67] based on the DC-link current of the converters, after the stability challenges faced in PMSG-WECS were studied. In general, instability arises when generators are directly connected to the wind turbine: speed oscillations occur because of the lack of a damper in the design and because of torsional vibration. To reduce the oscillation amplitude and enhance system stability, one can use a generator torque controller; however, due to its limited ability, it may impact the WECS's power response. The torque compensation strategy, deployed with the sole purpose of enabling positive damping of the oscillations, may lead to enhancements in the small-signal and transient stability of the WECS [67]. In 2018, [68] discussed the influence of grid-connected wind farms on system oscillations. The study focused on the contribution of wind farms to the damping of power system oscillations and the effect of inner wind turbine oscillations upon several aspects of power system behavior, including stability. Power system stability is connected with electro-mechanical interactions and the behavior of the generators already connected to the grid; therefore, the influence of wind power penetration on the power system becomes a key challenge to tackle. In [68], an elaborate investigation was conducted on oscillations in power systems, their influence, and control schemes in wind farms for various wind turbine technologies [68]. The growing technologies that focus on magnetic gears were the primary theme of a study conducted in 2011 [69]. Magnetic gears have the advantage of an inherent overload capability surpassing that of mechanical gears; however, they have less torsional stiffness than their mechanical counterparts. This leads to oscillations during transient changes in load and speed alike, and damper windings are utilized in synchronous generators to alleviate the oscillations that occur due to such transients [69]. In a study conducted in 1996, the researchers demonstrated the damping of PMSG power-angle oscillations for wind turbine applications [70]. The small pole pitch of the generator allows it to work at very low speed when coupled to the wind turbine, so a direct electrical grid connection is maintained. In this paper, an alternative damping system was proposed in which the stator, connected to the wind turbine structure through a spring and mechanical damper, is allowed a confined rotational movement. This method enables higher damping of power-angle oscillations compared with conventional damper windings. The design's efficiency is illustrated via the generator's response to step changes in driving torque, and, to showcase the new design's viability, the generator's behavior on synchronization and in operation under varying wind is described [70].
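As a minimal illustration of why a compensating torque damps power-angle oscillations, the sketch below integrates a generic one-mass swing equation J·δ'' + D·δ' + K·δ = 0. The inertia J, stiffness K, and damping D are hypothetical values chosen for demonstration, not parameters from [67] or [70]; the compensating torque is modeled simply as the D·δ' term.

```python
# Hedged sketch: added damping D > 0 makes power-angle oscillations decay,
# while D = 0 leaves them sustained. Generic illustrative parameters.

def simulate(J=500.0, K=2.0e4, D=0.0, dt=1e-3, t_end=5.0, delta0=0.2):
    """Integrate J*delta'' + D*delta' + K*delta = 0 with semi-implicit Euler."""
    delta, omega, t = delta0, 0.0, 0.0
    while t < t_end:
        omega += -(D * omega + K * delta) / J * dt  # angular acceleration step
        delta += omega * dt                          # angle step with new speed
        t += dt
    return delta

print("undamped final delta:", simulate(D=0.0))    # oscillation persists
print("damped   final delta:", simulate(D=800.0))  # oscillation has decayed
```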
Short Circuit
In 2011, a study [71] stressed the behavior of large PM machines under sudden short circuit, denoting the differences in short-circuit behavior between the wound-field generator and the PMG. With the help of FEM analysis, the researchers calculated the sub-transient reactance and time constants of the PMG and used them in a typical circuit-theory simulation of the short-circuit fault. The FEM was then used in the risk evaluation of magnetization loss in the magnets. Being complex, the transient magnetic field calls for transient non-linear circuit-coupled FEA in 3-D in association with voltage-source excitation. Various calculation methods were summarized in this paper, with further discussion of the implications for future design and PMG application, after considering the attributes relevant to the application of standard tests and specifications [71].
Several Aspects of Cost Factor
In the study conducted by Salem Alshibani et al. (2014) [72], the high CAPEX (Capital Expenditure) issue was taken into account since, at the beginning of a project, it is always a hindrance for such technologies, especially in the case of PMSGs. The study proposed a method that was utilized to assess typical PMSG designs reported in that article, and the results were compared with those of traditional methods. The results inferred that life cycle assessment (LCA) seemed to favor the gearless PMSGs that incur high CAPEX. Further, when lifetime cost is included in the design optimization, the resulting machines can yield significantly higher lifetime revenues than the extra CAPEX required [72]. Figures 14 and 15 compare the CAPEX values of geared and gearless PMSGs across a range of power ratings in percentage terms. To conclude, it can be inferred that wind turbines with higher power ratings are the most preferred ones for reducing development and maintenance time and eventually increasing the energy yield [72].
Soft Computing Technique Based Optimization Used for PMSGs
There are two critical issues that influence an electrical machine's optimal design when FEM is used: the computation time of the FEM simulation, and the number of different parameters of the electrical machine. In the present-day scenario, the use of soft-computing-based optimization has gained momentum owing to the use of statistical analysis with multiple correlation coefficients and moving least squares (MLS) approximation, as proposed in 2007, which are compatible with electrical machines [73]. In general parlance, the process of optimization includes several parameter-dependent computations; the effort of computation is very small when compared to the time that is saved. Such a method was assessed by application to a synchronous machine's optimal design, and the results revealed an increase in the torque-per-weight ratio by 13% when compared with the results acquired from traditional optimization techniques [73]. In 2010, [74] used a fuzzy and FEM method for a comparative analysis that included the leakage field analysis of an electrical generator. The leakage field analysis was performed by developing a fuzzy model of the generator with the adaptive neuro-fuzzy inference system (ANFIS) technique; a comparative evaluation of the fuzzy model and the FEM model found a good correlation between them [74]. Furthermore, a study in 2008 [75] revealed new approaches for automating manual optimization processes and examined the implementation obstacles witnessed by the engineering community. Engineering design optimization was classified from the viewpoints of design-evaluation effort and degrees of freedom, and the researchers presented a holistic view of the various design optimization approaches. The major challenge witnessed was the scalability of the design optimization techniques considered in the study: large-scale optimization requires effective algorithms such as swarm intelligence and considerable computing power [75]. In 2001, [76] proposed the use of a neural network in comparison with Finite Element Technique (FET) based sensitivity analysis for the optimization of permanent magnet generators. In 2012, [77] further identified the challenges witnessed during design optimization in minimizing or maximizing the fitness function that encodes the design purpose. The population-based genetic algorithm incorporated in the optimization does not by itself consider certain quantities, such as the magnet, copper, magnetic laminations, and other raw active materials; the fitness function to be minimized is based on the cost of the energy generated by the system, which further accounts for the uncertain variables [77]. In 2009, [25] further explored direct-drive PM wind generation system optimum design models, wherein a 500 kW direct-drive PM generator was designed using an enhanced genetic algorithm; the minimization of the active material cost improves the design optimization effectiveness.
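A minimal sketch of the kind of GA-based design loop described above is given below, assuming a hypothetical two-variable design space and surrogate cost/energy models (none of the numbers come from [25] or [77]); the fitness approximates a cost of energy, i.e., active-material cost divided by annual energy production.

```python
# Hedged GA sketch: minimize a surrogate cost-of-energy fitness over two
# hypothetical design variables. All models and bounds are illustrative.
import random

BOUNDS = [(0.5, 2.0), (50, 300)]  # e.g., (air-gap diameter in m, pole count)

def fitness(x):
    d, poles = x
    material_cost = 4000 * d**2 + 15 * poles        # surrogate active-material cost
    annual_energy = 2.0e6 * d * (1 - 80.0 / poles)  # surrogate AEP, rises with poles
    return material_cost / max(annual_energy, 1.0)  # ~ cost of energy (guarded)

def ga(pop_size=40, gens=60, pm=0.2):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]              # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]  # arithmetic crossover
            if random.random() < pm:                          # Gaussian mutation
                i = random.randrange(len(child))
                lo, hi = BOUNDS[i]
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.1 * (hi - lo))))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = ga()
print("best design:", best, "fitness:", fitness(best))
```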
Furthermore, [78] (2007) proposed a novel approach for the design of electrical rotating machines wherein a rational pre-design solution was obtained by integrating exact global optimization algorithms and an analytical model. However, prior to developing an extensive prototype, validation of such solutions should be performed using FEM. The purpose of that research was to extend the exact global optimization algorithm through the introduction of an automatic numeric tool; such a technique is used to solve design problems rationally, and several examples were evaluated to examine its effectiveness [78]. In 2008, [79] further established a new hybrid machine with a 36/24-pole outer-rotor permanent magnet (PM) directly coupled with a wind power generator. For effective flux control, the hybridization of two excitations (PMs and DC field windings) in the double-layer stator is utilized; the resulting constant output over a wide range of speeds and varying load was examined. In 2001, [80] further used genetic algorithms: a new algorithm, the orthogonal genetic algorithm with quantization, was applied to global numerical optimization with continuous variables. A quantization technique and orthogonal design were used to develop a new crossover operator; this crossover operator generates a small but representative sample of points as potential offspring. The proposed algorithm solves 15 benchmark problems with 30-100 dimensions and numerous local minima [80]. In 2005, new dimensions were reached in the research area of evolutionary computation and structural design [81]. Furthermore, [82] (2008) examined soft computing (SC) techniques associated with engineering design. Through an inspection of soft computing methods, techniques, and their competence to address highly complex issues and design tasks, the researcher reviewed fuzzy logic (FL), artificial neural networks (ANN), and genetic algorithms (GA) [82]. In 2012, [83] further made an overview comparing research conducted to optimize the parametrization of the machining processes of modern and conventional machining. The most important techniques used are: genetic algorithm (GA), particle swarm optimization (PSO), simulated annealing (SA), artificial bee colony (ABC) algorithm, and ant colony optimization (ACO); among these, GA is the most widely applied in the literature [83]. In 2004, [84] proposed a new solution called the multi-agent genetic algorithm (MAGA), an integration of genetic algorithms and multi-agent systems, to solve the problem of global numerical optimization. In 2008, [85] further examined opposition versus randomness in the various SC techniques. In 2011, [86] further reviewed the state-of-the-art research developments associated with the use of soft computing techniques for the optimization of problems associated with design, planning, and control in the field of sustainable and renewable energy. Furthermore, several soft computing methods were reviewed and presented
regarding the current state of the art in computational optimization methods applied to renewable and sustainable energy, wherein a vivid visualization of state-of-the-art research progress was proposed [86]. It is important to generate random numbers well in soft computing methods, as random numbers are used at the beginning of the estimation and during the processes of learning and searching. When the simultaneous consideration of randomness and opposition was compared with pure randomness, the former was revealed to be better, per recent results acquired from evolutionary algorithms, neural networks, and reinforcement learning. Opposition-based learning was revealed to further boost the performance of soft computing algorithms; it was experimentally and mathematically proven to have clear merits when applied to improve differential evolution (DE) [86]. In 2010, [87] also presented the genetic algorithm (GA) with a memetic algorithm and MADS (Mesh Adaptive Direct Search) for the optimal design of an electric machine. To acquire an effective optimal design of an electric machine in the presence of long computation times and many local optima, that research proposed a hybrid algorithm to reach the global optimum, aimed at maximizing Annual Energy Production (AEP). In 2006, [88] classified the modelling and optimization techniques for process problems as shown in Figure 16, which displays the conventional and non-conventional optimization techniques and tools used in this regard.
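The following sketch shows the core idea of the opposition-based learning mentioned above, applied to population seeding: for each random candidate, its "opposite" point within the bounds is also evaluated, and the better half of the merged set is kept. The bounds and the toy sphere objective are illustrative assumptions, not the DE setup of [86].

```python
# Sketch of opposition-based initialization: evaluate each candidate and its
# opposite (lo + hi - x per dimension), keep the best n of the 2n points.
import random

def sphere(x):  # toy objective to minimize
    return sum(v * v for v in x)

def opposition_init(n, dim, lo=-5.0, hi=5.0):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    opposites = [[lo + hi - v for v in x] for x in pop]  # opposite points
    merged = pop + opposites
    merged.sort(key=sphere)
    return merged[:n]  # best n of the 2n candidates

print(min(sphere(x) for x in opposition_init(20, 3)))
```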
To conclude, it was deemed that MADS combined with GA is an effective computation-time-reduction method for optimal PM wind generator design and is preferred over other parallel computing methods [89]. A type of multidisciplinary design and optimization (MDO) of a diffuser for an incompressible and steady magneto-hydrodynamic (MHD) method was further offered; the design problem is resolved using a GA-based program coupled with an FEM-based MHD simulation technique, for which a least-squares FEM was used and developed in later research [89]. In 2017, [90] presented Multiple Criteria Decision Making (MCDM) concepts used for economic analysis; similarly, this concept can be used for lifecycle cost analysis of machine design [90]. In 2002, [91] presented the non-dominated sorting GA to mitigate performance-related problems, wherein the performance was analyzed through a comparison of the results from four other algorithms. A further discussion was given of solving the multi-objective optimization process using evolutionary algorithms, wherein the findings revealed effectiveness on analytical and electromagnetic problems [91]. Furthermore, [92] (2001) displayed an approach used to design PM generators for wind power applications, made up of two phases: a preliminary design stage and an optimization stage. In 2008, [93] further examined the use of Differential Evolution (DE) and Particle Swarm Optimization (PSO) algorithms with technical analysis. It was ascertained that the Artificial Bee Colony (ABC) algorithm could be used as an innovative swarm optimization algorithm with fine results in numerical optimization.
Furthermore, [94] (2011) proposed an enhanced algorithm called the fast mutation artificial bee colony algorithm (FMABC). In 2012, [95] proposed an improved ABC algorithm for solving numerical optimization issues, which further improved the exploitation capability of the ABC algorithm; an alternate search mechanism and a varying probability function were proposed, and seven numerical optimization problems were tested on the enhanced ABC algorithm [95].
In 2012, [96] further utilized a genetic algorithm (GA) to achieve an optimal design for an axial-flux PMSG (AFPMSG). In 2009, [97] proposed an approach based on a numerical optimization algorithm wherein a generalized receding-horizon control of fuzzy systems was proposed; to resolve the optimal control problem of generic fuzzy dynamic systems, a numerical method achieving fine optimization was developed.
The researchers provided a thoughtful account of soft computing techniques for electrical engineering applications, with integrated pseudo-code operational summaries [98]. In 2010, [99] considered a population-based algorithm and its application to solving numerical optimization problems. In certain cases, computing search problems is complex because of the high dimensionality of the search spaces; unless appropriate approaches are employed, a search process can lose effectiveness and increase cost. The use of nature-inspired algorithms can tackle such difficulties; for example, fish schools tend to increase mutual survivability since a large number of constituent individuals are deployed.
In 2008, [100] was the first to introduce a method for searching high-dimensional spaces that takes into account behaviors observed in fish schools. The derived algorithm, Fish-School Search (FSS), is made up of three operators: feeding, breeding, and swimming. Cumulatively, these operators afford the evoked computation: (i) wide-ranging search abilities, (ii) the automatic capability to switch between exploitation and exploration, and (iii) a self-adaptable search process for global guidance [100]. S.L. Ho et al. (2006) examined the use of particle swarm optimization (PSO) methods, considering several variables such as age; new strategies were devised to exploit the optimum particle solutions, the original formula for velocity updating, and the integration of an intensified search phase into the enhanced PSO method. The findings revealed that the proposed method has a refined ability to perform a pinpointing search, and the overall global ability improved when compared to traditional PSOs [101].
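For reference, the canonical PSO velocity-update rule that such enhanced variants build on can be sketched as follows; the inertia weight w and acceleration constants c1, c2 are typical textbook values, not the settings tuned in [101], and the sphere objective is a stand-in.

```python
# Canonical PSO sketch: v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x).
import random

def pso(f, dim=2, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]          # each particle's personal best
    gbest = min(pbest, key=f)           # swarm's global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
        gbest = min(pbest, key=f)
    return gbest

print(pso(lambda x: sum(v * v for v in x)))
```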
Reference [102] offered the use of a support vector machine (SVM) classifier for the detection of broken bars in electrical induction machines. Furthermore, the researchers also analyzed Gaussian, linear, and quadratic kernel functions with respect to the error rate and the number of support vectors. The findings revealed the successful detection of broken bars in different situations, with fast, precise, and robust behavior under load changes, which qualifies such techniques for real-time online applications in industrial drives.
Furthermore, in 2002, [103] proposed a tabu-search algorithm to identify the Pareto solutions of multi-objective optimal design problems, with an accompanying algorithm utilized to assess them. At the start of each iteration cycle, identification of new current points, a fitness-sharing function, and ranking-selection approaches are introduced. A detailed explanation of the numerical results is displayed in that research to highlight the power of the proposed algorithm and to ensure uniform sampling of the Pareto-optimal front of multi-objective design problems; effective execution strategies for the proposed algorithm were also displayed [103].
In 2011, [104] further displayed the Improved Discrete Particle Swarm Optimization (IDPSO) searching technique, applied to the head of an electromagnet for the optimization of the magnetic field gradient. In that research, COMSOL software was used for the computation of the magnetic forces and field. The aim of the optimization algorithm is the refined search of the optimal pole-shape geometry, which results in the distribution of a homogeneous magnetic field with the desired holding force in the specific area of interest [104].
Furthermore, [105] (2007) displayed an innovative recursive fuzzy logic classification (R-FL-C) strategy for the PM generator design approach, utilized to reduce the search space and to expel local minima during the optimization process. In that research, finite-element state-space models are used to examine the design-space database with knowledge acquired off-line.
In 2012, [106] assessed numerical functional optimization wherein the use of artificial bee colony optimization led the researchers to derive their approach from the same bee-swarm foraging behavior. The ABC's efficacy was found to be high when compared with the genetic algorithm (GA), ant colony optimization (ACO), and particle swarm optimization (PSO). Though the ABC technique is important and efficient during exploration, its exploitation capacities are poor, with convergence-speed issues in several instances. To mitigate this, the researchers introduced the improved ABC algorithm, or I-ABC, which uses inertia weight and acceleration coefficients, defined as functions of fitness, during the refined search process.
In addition, [107] (2012) provided a heuristic structural optimization for the surface-mounted PMSG. Structural optimization is the process of identifying the optimal material distribution in every machine part; this technique is very prevalent in mechanical engineering, and its use can also be witnessed in electrical engineering. Compared with the other reported methods, which deploy continuous models for the elaboration of the material properties, the Heuristic Search Algorithm [107] gives a solution to structural optimization issues.
By 2019, [108] proposed an identification method for voltage sags based on K-means singular value decomposition and a least-squares support vector machine, together with simulations based upon an annealing algorithm for multi-objective optimization, to obtain the Pareto solutions in a significant manner. The approach is completely Pareto-based and can successfully be introduced in addition to parameter- and objective-space strings. The method proposed in that study also addresses the stop criterion, a new rank formula, fitness-sharing functions, and other such enhancements. To validate the proposed method's robustness, the study considered two numerical examples [108].
In 2001, [109] proposed an enhanced tabu-search algorithm and practically applied it to finding optimal designs for electromagnetic devices. In parallel, the study also considered TEAM workshop benchmark problems and mathematical test functions. Based on the numerical results, it was inferred that a significantly smaller number of iterations is achieved by the proposed method when compared with simulated annealing and other such algorithms.
In 2008, [110] proposed a novel methodology based on PSO to identify a parametrically non-linear model structure. In that study, an existing PMSM dq-model was used to identify the parameters, and both the disturbance load torque and the motor stator resistance were established for a PMSM variable-frequency drive system application. To assess the efficiency of the identification method, the study conducted a simulation and provided experimental results, which showed excellent precision for the time-varying parameters when the PSO algorithm was used [110].
In a study conducted in 2000, an auto-learning simulated annealing algorithm was proposed. This algorithm was developed by combining simulated annealing with the characteristics of the domain elimination method. The study utilized standard mathematical functions to assess the algorithm, in addition to the practical optimization of a power transformer end region [111].
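A plain simulated-annealing loop of the kind the auto-learning variant extends might look as follows; the geometric cooling schedule, step size, and toy objective are generic assumptions, and the domain-elimination learning of [111] is not reproduced here.

```python
# Hedged sketch of plain simulated annealing with geometric cooling.
import math
import random

def anneal(f, x0, t0=1.0, alpha=0.95, steps=2000, step=0.5):
    x, fx, t = x0[:], f(x0), t0
    for _ in range(steps):
        cand = [v + random.gauss(0, step) for v in x]  # random neighbor
        fc = f(cand)
        # Always accept downhill moves; accept uphill moves with
        # Boltzmann probability exp(-(fc - fx) / t).
        if fc < fx or random.random() < math.exp(-(fc - fx) / max(t, 1e-12)):
            x, fx = cand, fc
        t *= alpha  # geometric cooling schedule
    return x, fx

print(anneal(lambda x: sum(v * v for v in x), [3.0, -2.0]))
```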
In 2005, [112] demonstrated single- as well as multi-objective optimization by experimenting on a PMSM with the help of a genetic algorithm. The extensions of this work include the implementation of core losses computed with the help of the Steinmetz approach. A few other up-front changes are modifications to the tooth shape (especially the base), the addition of voltage constraints, and changes in the volume expression to account for end turns.
In a study conducted by [113] (2005), an improved Ant Colony Optimization algorithm was proposed for use in electromagnetic device design. The experiment deployed the algorithm on an inverse problem along with a mathematical function, where its performance was contrasted with other well-designed methods.
A comparative study was conducted in 2007 between the performance of ABC in the optimization of numerical functions and swarm-intelligence and population-based algorithms such as PSO, GA, and the Particle-Swarm-Inspired Evolutionary Algorithm (PS-EA) [114]. To explore the performance of the ABC, a total of five high-dimensional, multi-modal benchmark functions were deployed. From the simulation results, the authors made a strong recommendation that the proposed algorithm is capable of escaping local minima and can be used well in multi-variable, multi-modal function optimization. The scope for future research in this study was the investigation of the influence exerted by the control parameters on the convergence speed and performance of ABC [114].
In 2009, a comparative study was conducted to assess the performance of the ABC algorithm against Evolution Strategies (ES), DE, GA, and PSO using a large set of unconstrained test functions. From the results, it was concluded that the ABC algorithm exhibited superior performance compared to the other algorithms, even though it uses fewer control parameters, thereby efficiently solving multi-dimensional and multi-modal optimization problems [115].
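To fix ideas, the basic ABC cycle compared in these studies can be sketched as below: employed bees perturb food sources, onlookers revisit sources with probability proportional to fitness, and scouts re-initialize exhausted sources. The colony size, trial limit, and toy objective are illustrative choices, not the settings of [114] or [115].

```python
# Hedged sketch of the basic ABC loop (employed / onlooker / scout phases).
import random

def abc(f, dim=3, n_sources=15, iters=200, limit=30, lo=-5.0, hi=5.0):
    xs = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_sources)]
    trials = [0] * n_sources  # stagnation counter per food source

    def try_neighbor(i):
        k = random.choice([j for j in range(n_sources) if j != i])
        d = random.randrange(dim)
        cand = xs[i][:]
        cand[d] += random.uniform(-1, 1) * (xs[i][d] - xs[k][d])
        cand[d] = min(hi, max(lo, cand[d]))
        if f(cand) < f(xs[i]):       # greedy selection
            xs[i], trials[i] = cand, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_sources):   # employed bee phase
            try_neighbor(i)
        fits = [1.0 / (1.0 + f(x)) for x in xs]  # assumes f(x) >= 0
        total = sum(fits)
        for _ in range(n_sources):   # onlooker phase (fitness-proportional)
            r, acc = random.uniform(0, total), 0.0
            for i, fi in enumerate(fits):
                acc += fi
                if acc >= r:
                    try_neighbor(i)
                    break
        for i in range(n_sources):   # scout phase: abandon exhausted sources
            if trials[i] > limit:
                xs[i] = [random.uniform(lo, hi) for _ in range(dim)]
                trials[i] = 0
    return min(xs, key=f)

print(abc(lambda x: sum(v * v for v in x)))
```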
A beneficial design procedure was proposed in 2012 for the controller utilized in the frequency converter of a variable-speed wind turbine (VSWT)-driven PMSG, using GA and RSM [116]. A mesh-less technique was recommended by a study conducted in 2004, which focused on connecting radial basis functions (RBFs) with wavelets. The new method leverages the advantages of both RBFs and wavelets; to maintain linear independence and consistency, bridging scales were utilized so as to safeguard the mathematical properties. To validate the proposed method, a numerical example was utilized [117].
A hybrid Genetic Algorithm (GA) was proposed in 2003 [118] to optimize electromagnetic topology. Taking a 2-D encoding technique into account, the geometrical topology was first applied to the electromagnetic topology; in the later stages, the study utilized a 2-D geographic crossover for the crossover operator. To enhance the convergence features, the study used a novel local optimization algorithm, otherwise called the on/off sensitivity method, hybridized with the 2-D encoded GA. Once the algorithm was verified with different case studies, the results were published [118].
Novel Topology Development in PMSGs
In 2012, [119] presented an assessment of a low-maintenance slip-synchronous PM wind generator, which was developed from the concept of the PM induction generator. The PMIG (Permanent Magnet Induction Generator) concept, upon which the slip-synchronous permanent magnet generator (SS-PMG) was constructed, was introduced in 1926. In the generator design, there exists an induction-machine cage-rotor and a traditional stator winding, along with an add-on second free-rotating PM-rotor. The second PM-rotor runs at synchronous speed while the cage-rotor operates at a relative slip speed with respect to the PM rotor and the rotating synchronous stator field. This is a gearless wind turbine generator that is connected with the grid directly, i.e., no power electronic converter is required in the drive train. In the summary developed by [17] (2014), a comparison was performed between large-sized wind turbines, which can produce more electricity at less cost, and small-sized turbines. This comparison was executed since the costs involved in experimental set-up and maintenance do not scale with the size of the machine. Therefore, more than 7 MW of output power is being achieved from today's wind generators; for instance, since 2011, Enercon has manufactured the E-126/7500 wind turbine with 7.5 MW power capacity. At the time of writing, Sway Turbine and Windtec Solutions were in the process of developing 10 MW wind turbine generators which might hit the commercial markets in 2015 [17]. Figure 17 shows the voltage ratings of seven various models of common wind turbine generators with respect to the turbine power, which clearly depicts the model performance.
An innovative model with a Surface-Inserted Permanent Magnet Synchronous Generator was proposed in 2011, with adjustable air slots in the rotor. This model removes a disadvantage present in PMSGs, i.e., the fluctuation of the regulating voltage. When a comparison was performed between conventional machines and superconducting machines, it was found that the latter exhibited novel advantages such as efficiency, compactness, light weight, and significantly stable operation in power systems [120].
In 2007, [121] proposed an eccentricity topology promising enhanced power density and made use of it in the design, development, and testing of an eight-pole superconducting rotating machine. Further, the study discussed the results retrieved for the magnetic scalar potential from a Coulomb formulation by the Markov Chain Monte Carlo (MCMC) method; additionally, the flux density was calculated by derivation from the regularization method. To reduce the computation time, the MCMC method was deployed to perform the magnetic scalar calculations in specific regions of the discrete geometry. By using YBaCuO high-temperature superconducting (HTS) bulk plates and low-temperature superconducting NbTi wires, a high magnetic field was generated. To improve the cooling operation, the superconducting machine contains a stationary superconducting inductor and a rotating armature wound with copper wires [121].
A detailed differentiation study was conducted in 2012 [122] on the differences in the development and placement of active materials between transversal-flux machines and radial- and axial-flux ones. Lower stator copper losses were gained through increased winding space without any impact on the space available for flux in the transversal-flux machine. As the electromagnetic structure is sophisticated, transversal-flux machines tend to be costly [122].
A novel low-cost methodology was proposed in 2012 to develop wind turbine electric generators from burnt-out squirrel-cage induction motors. The author first detailed the list of properties generally required of a wind turbine generator, following which the methodology described the PMG, its workability, multi-pole structure, and low speed. The study conducted cost and performance comparative analyses based on the test results achieved from a 500 W generator run at 900 RPM and a 1500 W generator at 650 RPM [123].
The efficiency of an air-cored PMSG was estimated in a study conducted in 2011 [124] using finite elements and equivalent-circuit modelling. Emerging trends show that air-cored machines are increasingly used in wind energy systems. Instead of iron, the magnets are held captive between mild-steel rotor discs. At no-load, the flux path of the two-sided, axial-flux, air-cored machine can be seen as a stable magnetic flux that crosses axially from a magnet on one rotor to the facing magnet on the opposite rotor. Further, the study stated that the coil is held by the stator on a plane in the middle of the two magnet sets [124].
In 2012 [125], an alternative viable solution to traditional PMSGs at the MW level in direct-drive wind turbine applications was proposed via a Halbach array. It is a must to optimize the machine dimensions in order to achieve the maximum benefit of the Halbach array. The article provides an overview of calculating the Halbach array application using analytical equations prevalent in earlier studies. The study recorded extraordinary performance by making a few modifications to the existing PMSG design while maintaining a constant magnet volume. In the comparison, the conventional array seemed to be more valuable than the Halbach array when considering the critical rotor radius; when the number of poles was increased, the critical radius shifted to larger sizes, thus allowing positive leverage of the Halbach array at the MW level. The analytical findings were verified using FEA simulation [125].
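As a rough, hedged aside on why such an array can pay off with size: for an ideal interior (dipole) Halbach cylinder, the bore flux density follows the textbook relation B = B_r · ln(r_o/r_i), so the gain grows with the radius ratio. The snippet below evaluates this formula for illustrative values; it is not the analytical model of [125].

```python
# Textbook ideal Halbach cylinder (dipole case): B = B_r * ln(r_o / r_i).
# B_r and the radius ratios below are illustrative values only.
import math

def halbach_bore_field(b_r, r_outer, r_inner):
    """Bore flux density of an ideal interior dipole Halbach cylinder."""
    return b_r * math.log(r_outer / r_inner)

for ratio in (1.1, 1.5, 2.0):
    print(f"r_o/r_i = {ratio:.1f}: B = {halbach_bore_field(1.2, ratio, 1.0):.2f} T")
```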
In 2008 [126], a Halbach magnet array was designed with the help of a numerical optimization method, which in turn relied upon finite element analysis. The magnetization direction of every element was designated as the design variable. To enhance the repulsive, attractive, and tangential magnetic forces present between the magnet layers, the researcher investigated optimal magnet arrays composed of two and three linear magnet layers. Two and three magnet rings together form a torsional spring, which receives the tangential force maximized by the magnet array. In this study, the researcher employed optimization techniques such as adjoint variable methods and sequential linear programming in 2-D finite element analysis [126].
In 2005, [127] developed a theoretical study of the magnetic circuit for a longitudinal-flux PM synchronous linear generator. To assess the machine performance, the researcher used a coupled field-and-circuit model solved using the time-stepping finite-element technique [127].
In 2008 [128] and 2010 [129], comparisons were conducted of different configurations of an axial-flux nine-phase concentrated-winding PMSG for a direct-drive wind turbine.
Various prototypes were investigated by [130] (2012), one of which demonstrated that the active mass of the PMG unit in an SS-PMG can be curtailed considerably. The evaluation was also performed for different slip-PMG concepts. Specifically, it is feasible to achieve a notable minimization of active and PM mass for the new brushless-DC winding slip-PMG in comparison with existing non-overlap winding configurations. Further, it is projected that the copper can be replaced by aluminum with no need to increase the mass of the slip-PMG and without changing the machine's cost performance [130].
A low-speed three-phase generator with high induced voltage, low harmonic distortion, and high efficiency was considered in 2014, and optimal generator parameters such as the pole-arc to pole-pitch ratio and the stator-slot-shoe dimension topology were investigated. In order to obtain sinusoidal induced voltages in the stator windings, the researcher arranged the PMs in the rotor structure and adopted the magnetization direction in an appropriate manner [131]. An insight was published in 2006 about the basis behind the development of a novel hybrid PMSG, the Hybrid Excitation Permanent Magnet Synchronous Generator (HEPMSG). It was developed through the insertion of an exciting winding in the rotor or stator [132].
In 2008, [133] developed the Flux Reversal Machine (FRM), a doubly salient stator permanent magnet machine with flux-linkage reversal in the stator concentrated winding. The study conducted a comparative design analysis of the Full Pitch Winding Flux Reversal Machine (FPFRM) and the Conventional Concentrated Stator Pole Winding FRM (CSPFRM). The results revealed that the FPFRM exhibited higher power density than the CSPFRM [133].
In order to replace the standard claw-pole alternator in automobile applications, a single-phase FRM was introduced. It has several advantages, such as a simple construction process, high power density, and low inertia. Reference [134] (2010) investigated and proposed a distributed winding for the FRM. A high power density is provided by the FPFRM, and it enhances the efficiency as well. Being a doubly salient permanent magnet machine with concentrated windings, the FRM has the advantages of both Switched Reluctance Machines and Permanent Magnet (PM) machines. FEM analysis was carried out in order to obtain the induced EMF, winding inductances, and flux linkages. The winding function approach yielded the inductances of both machines, which were compared with the FEM results. On the basis of a fabricated 'electrical gear', the power densities of the CSPFRM and FPFRM were compared with a PMSM. Gear ratios were provided for various FRM configurations. As the designs of the CSPFRM, FPFRM, and PMSM are similar with respect to outer dimensions, magnet volume, and rotor speed, the comparison of the three machines was graphically represented in Figure 18. From the graphical representation, it is observed that the FPFRM requires a very high compensating kVAr compared to the CSPFRM and PMSM. However, as far as active weight/kVA is concerned, the FPFRM is lower than the other two [134].
In 2007, [135] developed low-revolution magneto-electric generators custom designed for wind power engineering applications. The best and most efficient way to diminish the machine's own drag torque is to incorporate a magnetic rake, so that the EMF does not exhibit a significant decrease and the adaptability of the magneto-electric machine design to manufacturing is preserved. Among the available options, the best alternative is the one with magnetic rakes situated outside and inside the rotor inductors, with a width equal to that of the spline-way slots found inside and outside the stator.
An optimal design method was proposed in 2009 [136] for a double-layer permanent magnet (PM) Dual Mechanical Port (DMP) machine for wind power applications with random low wind-turbine speed input and stable synchronous speed output. The torque of the outer rotor and the inner rotor was compared. Further, the THD variations with the pole-arc coefficient were also compared for the inner rotor and the stator winding [136].
With the purpose of overcoming the potential barriers of dimension, cost, and reliability, a multi-generator architecture was recommended in 2011 [137]. They suggested that two PMSGs share one turbine-driven shaft. The outputs of the two PMSGs are rectified and connected in series with an intermediate DC chopper, while the back-end inverter is provided with a similar option [137].
In 2012, [138] developed an investigation of a novel form of transverse and axial-flux magnetic fields for the PMSG. With the novel machine configuration, under rotation the main flux flows in the transverse direction. A novel Outer Rotor-Permanent Magnet Vernier (OR-PMV) machine was introduced in 2010 [139] for low-speed direct-driven wind power generation. With this machine, wind power can be easily captured, and it exploits a high-speed rotating field design in order to enhance the power density [139].
In 2003, [140] developed an operating principle called the Consequent-Pole Permanent-Magnet (CPPM) Machine. In addition to finite element analysis and sizing analysis, experimental results were obtained for the prototype machine. There are many advantages associated with the CPPM machine, one of which is control of the air-gap flux level without the demagnetization risk of the magnet pieces. The control action can be executed through low-reluctance iron poles. In addition to a low field ampere-turn requirement, a wide range of air-gap flux control was also achieved, and this could be leveraged to either increase or diminish the air-gap flux.
In 2012, [141] took winding function theory as the basis and detailed the inductances of a multi-phase synchronous machine with a PM or wound-field rotor. From three magneto-static simulation results produced from simple geometric machine models, the permeance function is easily determined. The method was used for the calculation of the stator phase inductances, the phase-to-field inductance, and the PM flux linkage. For accurate incorporation into numerical machine models used in dynamic simulations, the study expressed the machine inductances by a Fourier-series expansion.
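As a minimal sketch of the winding-function idea referenced above (a generic textbook formulation, not the specific model of [141]): the mutual inductance of windings a and b follows from integrating the product of their winding functions against the inverse air-gap function, L_ab = mu0 r l over the integral of n_a(phi) n_b(phi) / g(phi). All numerical values below are assumptions for illustration:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def wfa_inductance(n_a, n_b, inv_gap, radius_m, stack_len_m, n_points=3600):
    """Winding-function inductance:
    L_ab = mu0 * r * l * integral_0^{2pi} n_a(phi) * n_b(phi) / g(phi) dphi,
    evaluated by a simple Riemann sum over one full mechanical revolution."""
    phi = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    dphi = 2.0 * np.pi / n_points
    return MU0 * radius_m * stack_len_m * np.sum(n_a(phi) * n_b(phi) * inv_gap(phi)) * dphi

# Assumed example: sinusoidally distributed 100-turn winding, uniform 1 mm air gap.
N, g0 = 100, 1e-3
n_a = lambda phi: (N / 2.0) * np.cos(phi)
La = wfa_inductance(n_a, n_a, lambda phi: np.full_like(phi, 1.0 / g0),
                    radius_m=0.08, stack_len_m=0.12)
print(La)  # ~0.095 H, matching the closed form mu0*r*l*pi*(N/2)**2/g0
```

A non-uniform `inv_gap` (e.g., modulated by rotor saliency) reproduces the position-dependent inductances that the Fourier-series expansion mentioned above captures.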
A novel interpolating strategy was proposed in 2011 [142] for air gaps with an antiperiodic boundary condition applied to an AFPMSG. With the help of coupled-circuit, finite-element, and time-stepping analysis, the performance of the AFPMSG under an isolated load was investigated. The short-circuit performance was also investigated. In order to produce accurate analysis results, the researcher used second-order serendipity quadrilateral elements [142].
A novel three-phase 12/8-pole doubly salient permanent magnet (DSPM) machine was investigated in 2006 [143] for use in wind power generation. This was used to design and study the recommended DSPM generator, i.e., a novel machine structure that yields high efficiency, high power density, and high robustness in system operation. In order to obtain the static characteristics of the proposed generator, the researcher used FEM [143].
In 2017, [144] executed a complete model, design, and development of a novel slip-synchronous permanent magnet (PM) wind generator for direct-drive, direct-grid connection. The proposed generator differs from the traditional PM induction generators mentioned in the literature. A non-overlap winding was used in the proposed model for the very first time. For the generator design to be effective, a mixture of analytical and finite-element calculation and optimized design methods was employed. The critical design parameters, such as load torque ripple and no-load cogging torque, were minimized as far as possible in the design optimization stage. The model was verified, and the design was completed, with measurements of a prototype wind generator system.
Under linear conditions, a 2007 study [145] compared the predictions of two methods for the calculation of the electromagnetic torque and inductance of a synchronous reluctance machine. In the methods mentioned in the literature, the stator winding connections, stator slot effects, and the rotor geometry were considered. The WFA simulation results were contrasted with those of 2-D FEA, and the results were the same. Under magnetically linear conditions, the winding function method is quick, incurs less computational cost, and is quite simple.
In 2009, [146] analyzed the Hybrid Excitation Synchronous Generator (HESG) together with its unique operating principles and structure. The main output is generally produced by the PM generator, while the terminal voltage is regulated by a homopolar inductor alternator. In order to compute the EMF and analyze the performance of the HESG, 3-D FEM was utilized.
Control Mechanisms for WECSs
Numerous control mechanisms are employed in WECSs. When designing a generator, it is important to consider vital parameters such as aerodynamic efficacy, statistical wind distribution, and the control system, since these decisive factors are used in performance evaluation. A study considered high overload capacity generators for this specific application and concluded that generator optimization is a must to diminish losses and achieve the highest overload capability. Wind power generation systems act to achieve the sole aim of harnessing the maximum amount of wind energy and converting it to electrical energy. One can achieve this with the help of a control structure that covers the operating range, together with an ideal algorithm for a stable system with MPPT (Maximum Power Point Tracking). The objective of MPPT is the harnessing of maximum energy by adjusting the operating point of the system in order to extract the full energy from the wind. A control structure was proposed in 2013 [147] with specific reference to PMSG-based wind energy systems. With the purpose of enhancing reliability and robustness, the study determined the optimal structure using speed and torque control. This was derived from the analysis of conventional control structures used in variable-speed, fixed-pitch wind energy generation systems [147].
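The MPPT idea can be made concrete with the standard optimal-torque law (a generic textbook scheme, not the specific controller of [147]): with turbine power P = 0.5 * rho * pi * R^2 * Cp(lambda) * v^3 and tip-speed ratio lambda = omega*R/v, commanding the generator torque T* = k_opt * omega^2 with k_opt = 0.5 * rho * pi * R^5 * Cp_max / lambda_opt^3 drives the rotor to the tip-speed ratio that maximizes Cp without measuring wind speed. All turbine parameters below are assumptions:

```python
import math

RHO = 1.225        # air density (kg/m^3)
R = 35.0           # rotor radius (m) -- assumed
CP_MAX = 0.48      # peak power coefficient -- assumed
LAMBDA_OPT = 8.1   # tip-speed ratio at CP_MAX -- assumed

K_OPT = 0.5 * RHO * math.pi * R**5 * CP_MAX / LAMBDA_OPT**3

def mppt_torque_ref(omega_rad_s: float) -> float:
    """Optimal-torque MPPT: T* = k_opt * omega^2. On this curve the turbine
    settles at lambda_opt for any (unmeasured) wind speed."""
    return K_OPT * omega_rad_s**2

print(mppt_torque_ref(2.0))  # demanded generator torque (N*m) at 2 rad/s, ~3.7e5
```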
A time-stepping FEA was offered in 2006 [148] for a variable-speed synchronous generator together with its rectifier. Bi-directional alternator speeds are maintained by this model; the application is a linear generator for ocean wave energy conversion [148].
In a study conducted in 2014 [149], the authors described the output power fluctuations faced in a wind farm and the related problems created in the power system. The study compared the fluctuations occurring in the output power of conventional schemes with those of the proposed method. The proposed scheme tracked the optimal rotational speed in such a manner that the output power was smoothed. A fuzzy PID controller was used instead of traditional vector control, which resulted in the tracking of the turbine's optimal rotational speed and the smoothing of the wind farm's output power [149].
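The cited controller is a fuzzy PID, whose rule base is not given here; the sketch below shows only the conventional discrete PID core that such a scheme adapts (in a fuzzy PID, the gains kp, ki, kd would be rescheduled online by fuzzy rules). The gains and sampling step are placeholders, not values from [149]:

```python
class PID:
    """Minimal discrete PID controller for rotor-speed tracking."""

    def __init__(self, kp: float, ki: float, kd: float, dt: float):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self._integral = 0.0
        self._prev_error = 0.0

    def step(self, speed_ref: float, speed_meas: float) -> float:
        """Return the torque correction for one sampling period."""
        error = speed_ref - speed_meas
        self._integral += error * self.dt
        derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        return self.kp * error + self.ki * self._integral + self.kd * derivative

ctrl = PID(kp=800.0, ki=50.0, kd=5.0, dt=0.01)  # placeholder gains
print(ctrl.step(1.8, 1.6))  # torque command driving the rotor toward the optimal speed
```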
In 2009, [150] presented a system design model and its control approaches for a 2 MW direct-driven PMSG fed through parallel-connected, full-power, back-to-back PWM converters. Both the electromagnetic FE analysis and the optimal generator design were executed for this wind generation application [150].
An elaborate design and an experimental approach were proposed in 2010 [151] for a completely passive wind turbine system without active electronic parts (power and control). The efficiency of such devices predominantly relies on the system's design parameters being mutually adapted using an integrated optimal design methodology. This methodology ensures the simultaneous optimization of wind power extraction and of the losses of the global system for a specific wind speed profile. In this way, the weight of the wind turbine generator was decreased. Based on the approaches discussed in earlier studies, an optimal PMSG was obtained for the critical features of passive wind turbines, such as geometric and energetic features [151]. Figure 19 illustrates the general representation of a typical variable-speed, direct-driven PMSG wind turbine connected to the distribution grid.
The study conducted by [152] developed a holistic model of direct-driven, PMSG-based, grid-connected wind turbines together with control schemes for the interface converters. Two distinguishable control schemes were designed in this configuration for the generator-side and grid-side converters.
Conclusions
This review article developed a conceptual framework with an overview of research challenges, using which the proposed work on the analysis, suitability, design, and control of PMSGs for Wind Energy Conversion Systems (WECSs) can be carried out to meet future needs and developments in the wind sector. The predicted influence and the preliminary results will further progress beyond the threshold set by the envisioned state-of-the-art research. In the literature, WECSs and their classification according to wind turbine and generator schemes, as well as the types of PMSG technologies, were discussed elaborately. In addition, the study also summarized the solutions found for optimization problems using field computation techniques. This study of WECS development provided an advanced interdisciplinary view of the technical parts and compared the pros and cons of previous studies. The information provided in this article can also be helpful in improving WECSs. It also reviewed the soft computing (SC) techniques that have been applied to the optimal design methodologies of PMSGs. Exploring the literature unraveled mysteries and unexplored areas to take forward in future developments.
Figure 1. Electricity generation by selected region up to 2040. Source: International Energy Agency [3].
Figure 2. Cumulative installed capacity of wind energy in the world end-of-year by 2018 and newly added capacity by different country in 2018 [4].
Figure 3. Global total breakdown of cumulative capacity up to 2030. Source: Global Wind Energy Outlook [4].
Figure 4. Growth in size of wind turbines since 1980 and future prospects [10].
Figure 5. Main components of a wind turbine and their share of the overall cost [9].
Figure 6. Electrical machine design parameters for analysis and characteristics studies.
Figure 7. Active material weights and cost comparison of PMSM and conventional machines [29].
Figure 8. Losses comparison for PMSM and conventional machines at full load conditions [27].
Figure 10 clearly depicts the considerations of various factors with respect to the fixed and variable parameters for different ranges of generators.
Figure 10. Characteristics of rated speed and power for stationary simulations [33].
Figure 11. Measured generator efficiency comparison [34].
Figure 12. (1) Losses in a generator; (2) diagram of magnets and reserve coefficient over magnets; (3) static overloading in generators; (4) inductive scattering resistances along longitudinal and cross axes; (5) generator's parameters under loading and short circuit; (6) generator's performance (external and no load).
Figure 14. CAPEX (Capital Expenditure) comparison of geared and gearless PMSGs at a range of power ratings, with the percentage difference in cost shown at each power level [72].
Figure 15. Cost comparisons of the machines with lifetime losses cost added and gear cost calculated twice. The percentage difference in cost is shown at each power level [72].
Figure 17. Summary of the voltage ratings of a few common wind turbine generators [15].
Table 2. Parameters need to be considered for acquiring synchronous machines [43].
Table 3.
Parameters need to be considered for mathematical simulation. | 2019-04-16T13:22:19.113Z | 2019-07-08T00:00:00.000 | {
"year": 2019,
"sha1": "b2b57f2c6495b9947760bc3b656711e055e9c6cb",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1073/12/13/2616/pdf?version=1562585854",
"oa_status": "GREEN",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "987cbaa713b7f7c87a0e8736931a523f63772edb",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
150384549 | pes2o/s2orc | v3-fos-license | Fundamentals of Sulfate Species in Methane Combustion Catalyst Operation and Regeneration — A Simulated Exhaust Gas Study
Emission regulations and legislation inside the European Union (EU) have a target to reduce tailpipe emissions in the transportation sector. Exhaust gas aftertreatment systems play a key role in low emission vehicles, particularly when natural gas or bio-methane is used as the fuel. The main question for methane operating vehicles is the durability of the palladium-rich aftertreatment system. To improve the durability of the catalysts, a regeneration method involving an efficient removal of sulfur species needs to be developed and implemented on the vehicle. This paper tackles the topic and its issues from a fundamental point of view. This study showed that Al2(SO4)3 over Al2O3 support material inhibits the re-oxidation of Pd to PdO, and thus hinders the formation of the low-temperature active phase, PdOx. The presence of Al2(SO4)3 increases the light-off temperature, which may be due to a blocking of active sites. Overall, this study showed that research should also focus on support material development, not only active phase inspection. An active catalyst can always be developed, but the catalyst should have the ability to be regenerated.
Introduction
Current and future emission regulations and legislation on fossil fuels inside the European Union aspire to decrease the tailpipe emissions of transportation. The exhaust gas aftertreatment system plays a key role in low emission vehicles, and one of the main issues is its durability. Exhaust gas aftertreatment systems of heavy-duty applications in Europe already have a durability requirement, that is, 700,000 km or seven years maximum [1].
Natural gas and bio-methane will be the next-generation alternative fuels in the transportation sector, generating low overall emissions. The fuels can be stored in liquid form, thus increasing their energy capacity, which generates interest in the sector. In addition, these gases emit less CO2 per energy equivalent compared to regular diesel fuel, which decreases their carbon footprint. However, a small amount of un-burnt methane, the main constituent of natural and bio-gas, always slips from an engine to its exhaust gas. Due to the higher global warming potential of methane compared to CO2, its emissions must be converted with a catalyst.
The catalyst, at a reasonably low operation temperature, in an aftertreatment system of a natural gas or bio-methane application is palladium-rich and supported on Al2O3 if high methane conversion activity is needed [2][3][4][5][6]. However, palladium-rich lean-burn methane combustion catalysts are known to be sensitive to sulfur poisoning. Sulfur originates from lubricant oil and natural gas, and it oxidizes during the burning process to SO2; on a catalyst, SO2 reacts further with oxygen to SO3. In the presence of water vapor, it accumulates over a catalyst as PdSO4 and Al2(SO4)3 [7][8][9]. Attempts have been made to overcome the disadvantage of sulfur poisoning on methane conversion activity by modifying washcoat materials [2,[10][11][12]] and by varying the noble metal content and combinations [13][14][15]. Fundamental studies of poisoned and aged methane oxidation catalysts have been conducted to understand the formation of the inactive form of the catalyst. In general, the formation of PdSO4 has been concluded to be the reason for the deactivation of the catalyst [8,[16][17][18]]. Aluminum oxide has been known to be the best support for the palladium-rich methane combustion catalyst, because it hinders the poisoning of the catalyst by forming Al2(SO4)3 [2,16,19]. The latest results show that, in fact, PdSO4 itself is not a poison, but it sensitizes the catalyst to water inhibition [20].
Deactivation of the exhaust gas aftertreatment system is still a challenge, and thus solutions are needed to meet the future durability requirements [21][22][23]. A possible solution to increase the lifetime of the exhaust gas aftertreatment system could be a regeneration of the catalyst. However, the number of studies conducted in the field of regeneration so far is low, which has also been noted in a recent review article [24]. Based on the published research, sulfur removal in the presence of excess oxygen requires a high temperature, at least 650 °C, and it has been concluded to be always incomplete [3,25]. The regeneration of the sulfur-poisoned catalyst by heating under vacuum [17] or treating with hydrogen gas occurs at a remarkably lower temperature [26]. A small improvement in methane conversion activity has been achieved already after regeneration under hydrogen at 350 °C [25]. However, a better response in methane conversion activity has been achieved when the poisoned catalyst has been treated at 600 °C under the same gas atmosphere [27]. An alternative reductive method to regenerate a sulfur-poisoned catalyst, besides thermal or hydrogen treatments, has been presented by Arosio et al. [28]. They successfully used reductive methane pulses to partially regenerate the catalyst at 550 °C; such a temperature could also be achieved in a real engine, but complete regeneration was achieved at 600 °C, which requires additional thermal energy and may cause a fuel penalty. A reason for the partial regeneration of the sulfur-poisoned catalyst has been proposed in a recent study [29], where PdSO4 has been observed to decompose under a reductive atmosphere to Pd4S. Hence, small quantities of sulfur will always remain in the regenerated catalyst. They also concluded that alternating reductive (rich) and oxidative (lean) pulses result in better sulfur removal in a regenerated catalyst compared to rich-only conditions.
This study focuses on the decomposition of sulfur-poisoned methane combustion catalysts using model catalysts. This paper answers the following research questions: How does the presence of Al2(SO4)3 affect the decomposition of PdSO4? What is the state of palladium after the regeneration?
Methane Conversion Activity and Regeneration of Model Catalysts under Simulated Exhaust Gas
The activities of the model catalysts were evaluated in powder form with simulated exhaust gas in methane combustion. The effect of Al2(SO4)3 (AS) on the performance of PdSO4 (PS) in complete methane oxidation is presented in Figure 1, together with the sulfur-poisoned modern methane combustion catalyst as a reference [20]. The activity of the commercial reference catalyst lies between the activities of the PS/Al2O3 and 0.25 AS + PS/Al2O3 catalysts. The similarity of the CH4 conversion curves of the commercial reference catalyst and the model catalysts justifies the use of the model catalysts in the further fundamental studies, where the regeneration and texture of the catalysts are examined. The shape analysis of the curves revealed that the methane oxidation reaction was temperature-controlled for the PS/Al2O3 model catalyst.
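As a concrete illustration of how such light-off curves are reduced to comparable numbers, the following minimal Python sketch (not from the original study; the data arrays are placeholders for a 2000 ppm CH4 feed) computes the fractional CH4 conversion from inlet and outlet concentrations and interpolates a light-off temperature T50:

```python
import numpy as np

def ch4_conversion(c_in_ppm: float, c_out_ppm) -> np.ndarray:
    """Fractional CH4 conversion from inlet and outlet concentrations."""
    return (c_in_ppm - np.asarray(c_out_ppm, dtype=float)) / c_in_ppm

def light_off_t50(temps_c, conversion) -> float:
    """Interpolated light-off temperature T50 (50% conversion).
    Assumes conversion increases monotonically with temperature."""
    return float(np.interp(0.5, conversion, temps_c))

# Placeholder data for illustration only.
temps = np.array([300, 350, 400, 450, 500])
outlet = np.array([1950, 1600, 1000, 500, 200])
print(light_off_t50(temps, ch4_conversion(2000, outlet)))  # 400.0 °C
```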
An addition of Al2(SO4)3 into the catalyst led to a loss of active sites [30]. Hence, the temperature that is required to initiate methane conversion increased if Al2(SO4)3 was added into the model catalyst.
It was decided to perform regeneration at 500 °C due to the threshold temperature observed by several researchers [25,29]. Simulated exhaust gas contains reducing agents such as CH4, CO and NO, which can be expected to decrease the decomposition temperature of the sulfur species. The best regeneration was observed in the case of the PS/Al2O3 model catalyst, for which the methane conversion at 500 °C doubled to 60% due to the regeneration (Figure 2). The co-existence of PdSO4 and Al2(SO4)3 was observed to have a disadvantageous effect on the regeneration of the sulfated methane oxidation catalyst. After regeneration, the catalysts PS + 1.0 AS/Al2O3 and 1.0 AS + PS/Al2O3 show higher methane conversion activities than the catalysts PS + 0.25 AS/Al2O3 and 0.25 AS + PS/Al2O3. The reason lies in the higher sulfur content of these samples, resulting in bulk-like sulfates in the catalysts, which decompose at a lower temperature during regeneration [31]. In fact, the PS + 0.25 AS/Al2O3 model catalyst even showed a slight decrease in activity, even though sulfur species were at least partially decomposed during the regeneration. The decrease in activity after regeneration may indicate that PdSO4 does not decompose during the process, or that the active phase does not form after the regeneration. However, SO2 release was detected in all cases, and thus the decomposition of inactive PdSO4 to metallic Pd and Pd4S can be taken to have occurred. Thus, a potential reason for the decrease in activity could be the lack of re-formation of the active phase (PdOx) of the low-temperature methane combustion catalyst. Overall, the simulated exhaust gas regeneration results allow us to deduce the following hypothesis: "After regeneration, Al2(SO4)3 inhibits Pd re-oxidation to PdO, which leads to an activity decrease in low temperature methane oxidation when lean operation conditions are returned."
Palladium State after Regeneration under Simulated Exhaust Gas
As shown in Figure 2, the presence of Al2(SO4)3 in the model catalysts decreased the methane conversion activity after regeneration under realistic operation conditions. Inspection of the crystalline palladium state after the regeneration treatment was carried out by powder X-ray diffraction in order to support the hypothesis. The powder X-ray diffraction method may be used in this case because regeneration affects the surface of the less active PdSO4. Thus, the regenerated active Pd/PdO may form on top of the less active PdSO4.
The powder X-ray diffractograms of the catalysts in Figure 3 show that the peak of crystalline metallic Pd is pronounced if the catalyst contained Al2(SO4)3. Closer inspection of the peak data, shown in Table 1, reveals that the presence of Al2(SO4)3 results in a high amount of metallic Pd. The results support the conclusions that PdSO4 decomposed during the regeneration process, and that the lower activity after regeneration could be due to the formation of the less active metallic Pd phase. Thus, Al2(SO4)3 may hinder the re-oxidation of metallic Pd after regeneration, inhibiting the formation of active PdOx.
The observation can be further supported and confirmed with TPO re-oxidation measurements. The re-oxidation of metallic Pd to active PdO can be observed as a downward peak in Figure 4, in a temperature range between 470 °C and 700 °C. The possible presence of Pd4S should be borne in mind, as detected in the latest experiments in which the catalyst was heated under hydrogen gas [29,31]. Because steam reforming and water-gas shift reactions form hydrogen during regeneration in simulated exhaust gas, a Pd4S structure may form during the regeneration period. The most feasible re-oxidation products of Pd4S are PdSO4, PdO and metallic Pd. Due to the stoichiometry of the Pd4S structure, with respect to Pd atoms, re-oxidation may form one PdSO4 unit, while three Pd units may form PdO or remain in the metallic Pd state. The quantitative oxygen uptake was determined by integrating the peaks, and the values are presented in Table 2.
Individual PdSO4 supported on Al2O3 (PS/Al2O3) had the highest oxygen uptake, 19.2 µmol gcat−1, corresponding to an O:Pd mole ratio of 0.43. This means that regenerated metallic Pd oxidizes only partially into the active PdO form. The oxygen uptakes of the model catalysts including Al2(SO4)3 were lower than that of PS/Al2O3, between 9.0 and 15.7 µmol gcat−1, corresponding to O:Pd mole ratios of 0.22-0.36, revealing that Al2(SO4)3 hinders the oxidation of metallic Pd back to the active PdO form. The addition of Al2(SO4)3 increases the re-oxidation temperature of metallic Pd, and thus the formation of Al2(SO4)3 might be undesirable in the CH4 catalyst, even though conclusions in the literature have been contradictory [2,6]. The results confirm the above hypothesis that Al2(SO4)3 hinders the re-oxidation of metallic Pd to active PdO. The inhibition of the re-oxidation was strongest for the PS + 0.25 AS/Al2O3 model catalyst, which explains the observed decrease in activity after the regeneration.
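The O:Pd bookkeeping in Table 2 can be sketched schematically. The helper below is a hedged sketch, not the authors' calculation: it assumes the TPO uptake is reported as O2 (so each mole supplies two O atoms; drop the factor of 2 if the uptake is already atomic O), and the Pd inventory used for normalization (nominal loading versus only the reduced, re-oxidizable fraction) is a modelling choice that strongly affects the ratio:

```python
PD_MOLAR_MASS = 106.42  # g/mol

def pd_inventory_umol_per_g(pd_wt_percent: float) -> float:
    """Micromoles of Pd per gram of catalyst implied by a weight-percent loading."""
    return pd_wt_percent / 100.0 / PD_MOLAR_MASS * 1e6

def o_to_pd_ratio(o2_uptake_umol_per_g: float, pd_umol_per_g: float) -> float:
    """O:Pd atomic ratio from a TPO uptake reported as O2 (two O atoms per O2)."""
    return 2.0 * o2_uptake_umol_per_g / pd_umol_per_g

# With the nominal 4 wt.% Pd loading (~376 umol/g), an uptake of 19.2 umol/g
# gives O:Pd ~ 0.10; ratios normalized by a smaller, re-oxidizable Pd fraction
# come out correspondingly larger.
print(o_to_pd_ratio(19.2, pd_inventory_umol_per_g(4.0)))  # ~0.10
```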
To summarize the results, the relation between the regenerated CH4 conversion and three indicators is illustrated in Figure 5: the O:Pd mole ratio (Table 2), the oxygen uptake of the catalyst (Table 2) and the PdO(101):Pd(111) ratio (Table 1). The trends of all three indicators show that the addition of Al2(SO4)3 into the PdSO4-containing catalyst hinders the formation of the active phase, such as PdO, after regeneration, and thus the activity in CH4 conversion decreases compared to the case of the individual PdSO4 model catalyst. Thus we conclude that the formation of Al2(SO4)3 is not beneficial for the low-temperature methane combustion catalyst, because it causes a decrease in the methane conversion activity of the catalyst after regeneration. The unforeseen correlation of the methane combustion activity of the model catalysts with the PdO(101):Pd(111) ratio could be explained, at least partially, by the formation of a regenerated, active Pd/PdO structure on top of the less active PdSO4. The regeneration studies were done at 500 °C, which may be too low to re-oxidize the regenerated metallic Pd to PdOx. A higher regeneration temperature may enhance re-oxidation, but it exposes the catalyst to sintering and may thus decrease its methane conversion activity.
To summarize the results, the relation between regenerated CH4 conversion is illustrated in Figure 5, together with three indicators: O:Pd mole ratio (Table 2), oxygen uptake of the catalyst (Table 2) and PdO(101):Pd(111) ratio (Table 1).The trends of all three indicators show that addition of Al2(SO4)3 into the PdSO4-containing catalyst hinders the formation of the active phase, such as PdO after regeneration, and thus activity in CH4 conversion decreases, compared to the case of the individual PdSO4 model catalyst.Thus we conclude that the formation of Al2(SO4)3 is not beneficial for the low temperature methane combustion catalyst, because it causes a decrease in the methane conversion activity of the catalyst after regeneration.Unforeseen correlation of methane combustion activity of the model catalysts, together with the PdO(101):Pd(111) ratio could be explained, at least partially, by the formation of a regenerated, active Pd/PdO structure on top of less active PdSO4.Regeneration studies were done at 500 °C, which may be low to re-oxidize regenerated metallic Pd to PdOx.However, higher regeneration temperature may enhance re-oxidation, but it exposes the catalyst to sintering, and thus may decrease its methane conversion activity.
Catalysts
The modern commercially available methane combustion catalyst contains 0.97 wt.% of sulfur after the sulfur poisoning treatment [20]. A modern sulfur-poisoned methane combustion catalyst was used as a reference in the activity experiments for the model catalysts to justify their similar performance and behavior after sulfur poisoning treatment. The catalyst was provided by Dinex Finland Oy. To model and mimic the catalyst composition, a series of catalysts was prepared (Table 3 and Scheme 1) using PdSO4 (Sigma Aldrich, Saint Louis, MO, USA, CAS: 13566-03-5), Al2(SO4)3 × 18H2O (Merck, Darmstadt, Germany, CAS: 7784-31-8), and Al2O3 (Sasol) as starting materials. Bulk PdSO4 and Al2(SO4)3 compounds were used in the model catalyst preparation to quantitatively control the amount of sulfur and the structure of the sulfate. If the sulfation were done in the gas phase with SO2 gas, the formed sulfate species and amounts could be hard to control. Impregnation of PdSO4 and Al2(SO4)3 was carried out in cold water by mixing for at least 2 h. After impregnation, the solid was dried at room temperature, and to finalize the catalyst it was heated at 90 °C under air. Detailed preparation procedures are described in our previous work [31]. The amount of added PdSO4 corresponds to 1 wt.% of sulfur and 4 wt.% of palladium loading, whereas X indicates the amount of sulfur in the Al2(SO4)3-containing catalysts, and it is 0.25 or 1.0 wt.% of sulfur. The sulfates are abbreviated as follows to clarify the names of the catalysts: PS refers to PdSO4, and AS refers to Al2(SO4)3.
Characterization Techniques
Methane conversion activities were measured for five PdSO4-containing model catalysts. Regeneration experiments were carried out under steady-state conditions with a laboratory reactor at 500 °C in the presence of simulated exhaust gas. A Gasmet™ DX-4000 Multigas FTIR (Gasmet Technologies, Helsinki, Finland) was used as the detector in both the light-off and regeneration experiments. An amount of 0.2 g of model catalyst powder was used in the experiments. The exhaust gas composition used in the experiments was 2000 ppm of CO, 2000 ppm of CH4, 500 ppm of C3H8, 500 ppm of NO, 10 ppm of SO2, 6% of CO2, 10% of O2 and a balancing amount of N2. The total gas flow rate of 1180 cm3 min−1 corresponded to a space velocity of 354,000 cm3 gcat−1 h−1 through the model catalyst powder. Regenerations were carried out by replacing the oxygen in the exhaust gas with N2 in order to maintain a constant gas flow rate. Otherwise, the composition of the gas mixture remained the same.
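The quoted space velocity follows directly from the stated flow rate and catalyst mass; a one-line check using the values from the text:

```python
flow_cm3_per_min = 1180.0  # total simulated exhaust gas flow
catalyst_mass_g = 0.2      # model catalyst powder loading

ghsv = flow_cm3_per_min * 60.0 / catalyst_mass_g
print(ghsv)  # 354000.0 cm^3 g_cat^-1 h^-1, matching the stated value
```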
Powder X-ray diffractograms of the catalyst samples were recorded with a Bruker-AXD D8 Advance diffractometer (Bruker, Karlsruhe, Germany) using Cu Kα radiation. The diffraction pattern in the 2θ range from 15° to 85° was recorded with a scanning speed of 0.11° min−1 and a step size of 0.04°. Bragg-Brentano geometry was utilized in the experiments. TOPAS software was utilized for estimating the palladium and palladium oxide crystallite sizes of the model catalysts and the peak areas [32].
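The study itself used TOPAS (a Rietveld-type refinement) for the crystallite sizes; as a simpler, standard stand-in, the Scherrer relation D = Kλ/(β cos θ) gives a first estimate from a single peak. The sketch below assumes Cu Kα radiation, as in the measurements; the shape factor K ≈ 0.9 and the example peak values are illustrative, not data from the paper:

```python
import math

CU_KALPHA_NM = 0.15406  # Cu K-alpha wavelength (nm)

def scherrer_size_nm(fwhm_deg: float, two_theta_deg: float, k: float = 0.9) -> float:
    """Scherrer crystallite size D = K * lambda / (beta * cos(theta)),
    with beta the peak FWHM in radians and theta half the diffraction angle."""
    beta = math.radians(fwhm_deg)
    theta = math.radians(two_theta_deg / 2.0)
    return k * CU_KALPHA_NM / (beta * math.cos(theta))

# Illustrative only: a Pd(111) reflection near 2-theta = 40.1 deg with 0.5 deg FWHM.
print(scherrer_size_nm(0.5, 40.1))  # ~16.9 nm
```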
The re-oxidation of the model catalysts was studied with a Quantachrome Autosorb iQ device using a temperature-programmed oxidation (TPO) hysteresis technique. A sample of 100 mg was heated from room temperature to 1000 °C with a heating rate of 10 °C min−1 under a continuous flow of 10% O2/He gas. The gas flow rate was 20 mL min−1. To observe re-oxidation, the sample was cooled down to 250 °C under the same gas atmosphere. No pretreatment was done prior to the measurement, and a cold trap was used in the measurement.
Elemental analyses were carried out with an Elementar varioMICRO cube device (Elementar, Langenselbold, Germany). Sulfanilamide was used both for calibrating the device and as a reference compound for sulfur in the measurements. The mass of the samples varied between 10 mg and 30 mg.
Conclusions
The role of Al2(SO4)3 in the regeneration of the low-temperature methane combustion catalyst was studied in the presence of simulated exhaust gas. This study showed that the presence of Al2(SO4)3 over the Al2O3 support material inhibits the re-oxidation of metallic Pd back to its active form, PdOx. Hence, the low-temperature activity of the regenerated catalyst decreases, and does not necessarily increase after regeneration. The outcome was supported by powder X-ray measurements and finally confirmed with the TPO re-oxidation method. These aspects should be taken into account when developing a regeneration method for the aftertreatment system of natural gas or bio-methane fueled engines. From the catalyst development point of view, this study shows that we should also focus on support materials, not only on the active phase, because a good catalyst can always be developed, but the catalyst should have the ability to be regenerated.
Figure 1. Methane conversion curves of (a) PS/Al2O3, PS + X AS/Al2O3, and (b) X AS + PS/Al2O3 (X = 0.25 or 1.0) model catalysts, together with the sulfur-poisoned commercial reference [20], before regeneration under simulated exhaust gas (indicated with an orange dashed line in the figures). See Scheme 1 for details about the catalysts.
Figure 2. Methane conversion during steady-state operation and regeneration at 500 °C under simulated exhaust gas for (a) PS/Al2O3, PS + X AS/Al2O3 and (b) X AS + PS/Al2O3 (X = 0.25 or 1.0) model catalysts. See Scheme 1 for details about the catalysts.
Figure 3. Powder X-ray diffraction patterns of PS/Al2O3, PS + X AS/Al2O3 (X = 0.25 or 1.0) and X AS + PS/Al2O3 (X = 0.25 or 1.0) model catalysts after regeneration at 500 °C under simulated exhaust gas. See Scheme 1 for details about the catalysts.
Figure 4. Re-oxidation of metallic Pd to the active PdO form after thermal decomposition. Re-oxidation was studied by decreasing the temperature after thermal decomposition of the model catalysts under a gas blend of 10% O2 in He. See Scheme 1 for details about the catalysts.
Figure 5. Regenerated CH4 conversion under simulated exhaust gas as a function of (a) the O:Pd mole ratio, (b) the oxygen uptake of the catalyst and (c) the PdO(101):Pd(111) ratio. See Scheme 1 for details about the catalysts.
Table 1. Pd and PdO peak areas and crystallite sizes of the model catalysts after regeneration under simulated exhaust gas. 1 See Scheme 1 for details about the catalysts. 2 Peak areas and crystallite sizes were measured for the catalysts after regeneration under simulated exhaust gas. 3 The PdO(101):Pd(111) ratio was calculated from the corresponding peak areas of the X-ray diffraction patterns; alumina signals were used as internal references.
Table 2. Quantitative O2 uptake of the catalysts during cooling down. 1 See Scheme 1 for details about the catalysts. 2 CH4 conversion is the maximum that was achieved after the regeneration procedure (Figure 2). | 2019-05-06T13:25:23.872Z | 2019-05-03T00:00:00.000 | {
"year": 2019,
"sha1": "7f750a658c916f92e199da7c40c0950b4c7d250f",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4344/9/5/417/pdf?version=1556868963",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "7f750a658c916f92e199da7c40c0950b4c7d250f",
"s2fieldsofstudy": [
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
261588274 | pes2o/s2orc | v3-fos-license | A Multilayered Imaging and Microfluidics Approach for Evaluating the Effect of Fibrinolysis in Staphylococcus aureus Biofilm Formation
The recognition of microbes and the extracellular matrix (ECM) is a recurring theme in the humoral innate immune system. Fluid-phase molecules of innate immunity share regulatory roles in the ECM. On the other hand, ECM elements have immunological functions. Innate immunity is evolutionarily and functionally connected to hemostasis. Staphylococcus aureus (S. aureus) is a major cause of hospital-associated bloodstream infections and the most common cause of several life-threatening conditions, such as endocarditis and sepsis, through its ability to manipulate hemostasis. Biofilm-related infection and sepsis represent an unmet medical need due to the lack of treatments and the high resistance to antibiotics. We designed a method combining imaging and microfluidics to dissect the role of elements of the ECM and hemostasis in triggering S. aureus biofilm formation, highlighting an essential role of fibrinogen (FG) in adhesion and biofilm formation. Furthermore, we ascertained an important role of the fluid-phase activation of fibrinolysis in inhibiting S. aureus biofilm and facilitating an antibody-mediated response aimed at pathogen killing. The results define FG as an essential element of hemostasis in S. aureus biofilm formation and establish a role of fibrinolysis in its inhibition, while promoting an antibody-mediated response. Understanding the host molecular mechanisms influencing biofilm formation and degradation is instrumental for the development of new combined therapeutic approaches to prevent the risk of S. aureus biofilm-associated diseases.
Introduction
An interplay between hemostasis and inflammation is essential in the host defense against pathogens [1,2]. The activation of coagulation and fibrinolysis occurs during both acute and chronic bacterial infections [3][4][5]. Coagulation and fibrin formation exert direct antimicrobial functions by physically entrapping bacteria or encapsulating bacterial foci within infected tissue, thus limiting dissemination [6], or by regulating the local inflammatory response [7]. Components of the humoral innate immunity system affect the hemostatic response [8,9]. The inflammation-induced activation of coagulation pathways is initially beneficial, even contributing to antimicrobial defense [10], but when deregulated, coagulation may lead to widespread microvascular thrombosis and tissue damage [11].
The involvement of the extracellular matrix (ECM) elements in the innate immune response is a recurring theme. Indeed, although ECM and coagulation molecules are not considered part of innate immunity, the evasion of pathogens from host defense includes mechanisms mediated by their interaction with the ECM, as well as hemostasis [12][13][14]. On the other hand, elements of the ECM display immunological functions, such as acting as opsonins for certain microbial species [12,14,15]. This suggests a mutually dependent functionality between the ECM and innate immune system [16].
Staphylococcus aureus (S. aureus) is a pervasive Gram-positive bacterium, a common cause of bacteremia and responsible for several diseases, with a case-fatality rate of 20-25% [17]. S. aureus-related infections range from minor skin infections to serious, life-threatening conditions, such as endocarditis, pneumonia and sepsis [18]. The emergence of antibiotic-resistant strains of S. aureus, such as methicillin-resistant S. aureus and vancomycin-resistant S. aureus, has renewed the interest in better defining mechanisms of pathogen virulence and host defense [19,20]. The pathogenic potential of S. aureus includes immune evasion strategies based on the interaction with elements of the ECM and hemostasis, through the expression of a variety of surface proteins and specific proteases. Different S. aureus virulence factors specifically affect the host's hemostasis [21,22]. In blood, S. aureus coagulases are essential to forming a mechanical barrier to protect S. aureus from recognition by opsonins and phagocytes [23,24] and act as crucial determinants for dissemination [22,25]. In tissue, S. aureus staphylokinase interacting with plasminogen (PLG) plays a key role in dissemination, causing multi-organ dysfunction syndrome [26]. Immunization against these molecules protects against disease in mice [27]. Moreover, S. aureus interacts with elements of the ECM, such as fibronectin (FN), which allows for invasion into different cell types via the α5β1 integrin [13].
A biofilm refers to a community of bacteria in which surface-exposed proteins, called microbial surface components recognizing adhesive matrix molecules, initiate attachment to biotic or abiotic surfaces [28][29][30]. Biofilm is composed of a self-secreted matrix of extracellular polymeric substances, including polysaccharides and extracellular DNA (eDNA), forming an ECM that encloses bacteria and anchors them to the surface of an implant [31]. The interaction of S. aureus with host molecules present in the blood and ECM also influences biofilm formation by promoting adhesion and aggregation [32]. In particular, several studies have shown that in situ fibrin formation is a constituent of the biofilm matrix, and S. aureus-induced coagulation through the action of coagulase is important in the initiation stages of the process and in biofilm establishment [28,33]. S. aureus coagulase-mediated biofilm exhibits increased resistance to immune recognition and antimicrobial treatment [34].
S. aureus biofilm-related infections represent a medical need, given the extensive use of indwelling medical devices (such as prosthetic heart valves, orthopedic implants and intravascular catheters) in modern medicine [35]. Microorganisms that grow attached to the surface of an implant or medical device are estimated to be responsible for 60-70% of all hospital-acquired infections, and most of them are related to S. aureus or Staphylococcus epidermidis [28]. Biofilms can negatively interfere with device function, damage surrounding tissues, cause inflammation and eventually colonize adjacent body sites [29]. Infections related to biofilms are particularly difficult to treat due to their structure, allowing them to evade the immune response and favoring antibiotic resistance. Protocols for preventing biofilm formation, including the use of antibacterial coatings and nanostructured materials, have been applied [28,29], but the development of new approaches to the prevention, treatment and management of biofilm-related infections remains crucial.
S. aureus-driven molecular mechanisms underlying processes leading to biofilm formation have been extensively investigated [31]. However, the role of the engagement of host molecules by S. aureus is not exhaustively described. Evidence points to a functional relationship between ECM and hemostasis in the initiation of biofilm [32]. The addition of fibrinogen (FG) to coagulase-positive S. aureus cultures promotes biofilm formation acting on early stages of adhesion and clotting [36].
Microfluidic devices have emerged as a powerful tool for mimicking in vivo hydrodynamic conditions in biofilm-related studies [37]. They enable long-term assays and real-time dynamic analyses [38] and offer more precise control over relevant parameters (such as fluid flow and surface properties), as close as possible to actual clinical conditions in patients. Moreover, the use of geometrical confinement in microfluidic channels has yielded valuable insights into the behavior of microbes at the single-cell level [39]. Studies based on different microfluidics tools are essential in deepening our understanding of biofilm formation and potential strategies for their management [40].
The present study is aimed at expanding the knowledge of the underlying mechanisms that lead to S. aureus biofilm formation by dissecting the complex process. For this purpose, we combined Live Cell Imaging, microfluidics and data analysis in order to investigate the role of elements of the ECM and hemostasis in the different phases leading to S. aureus biofilm formation. Using different microfluidics approaches, we defined FG as an essential molecule in the S. aureus activities during adhesion, coagulation and matrix assembly, biofilm formation and constitution. We also described a role of fibrinolysis in interfering with biofilm formation and in promoting immunoglobulin (IgG)-mediated immune responses that lead to pathogen killing in plasma from septicemic patients. The results provide a better understanding of the mechanisms at the basis of S. aureus biofilm formation, mediated by the interaction between S. aureus and host molecules, and are instrumental in the development of new combined therapeutic strategies for preventing S. aureus biofilm-associated infections and sepsis.
Ethics Statement and Clinical Samples
Acid-citrate-dextrose (ACD)-plasma of patients was collected after a positive bacteriological diagnosis for S. aureus infection by the clinical personnel of the Intensive Care Unit (ICU) in Humanitas Research Hospital under Ethic Statement Approval n° 820/18. One patient with septic shock from osteomyelitis undergoing multiple surgeries complicated by infective endocarditis of the mitral valve (Pz 1) and a patient with bacteremia and sepsis from an epidural abscess (Pz 2) were included for the sample collection. Informed consent was obtained from all subjects involved in the study. The levels of FG in the ACD-plasma vs. serum of normal donors (n = 3) were measured by ACL TOP® 750 CTS (Werfen, Milan, Italy).
IgG Depletion
The ACD-plasma of patients was collected in BD Vacutainer® and maintained on ice during the procedures of depletion to avoid the activation of the complement. IgG depletion was obtained by passing human plasma-citrate at 10% diluted in TSB (3 mL) on a protein-G Sepharose™ Fast Flow (GE Healthcare, Uppsala, Sweden) column, as indicated by the manufacturer's instructions. Bound IgG were eluted with 0.1 M Glycine-HCl pH 2.8 and measured using the Pierce™ Coomassie (Bradford reagent) protein assay kit (ThermoFisher Scientific, Waltham, MA, USA). The actual depletion of IgGs was evaluated by Western blot analysis after loading 1 µL/lane ACD-plasma on SDS-PAGE (10-12% acrylamide-bis; Bio-Rad Laboratories, Milan, Italy) and the use of horseradish peroxidase (HRP)-conjugated goat anti-human IgG (1 µg/mL; Jackson ImmunoResearch, West Grove, PA, USA).
IgG Titration
An indirect ELISA method with a 96-well plate coated with S. aureus lysate was used. S. aureus was cultured in TSB until O.D. = 0.6 at A600nm, corresponding to 1 × 10⁸ CFU/mL. A total of 200 µL of the culture was resuspended in lysis buffer (150 mM Tris-HCl pH 7.5 containing 2 mM EDTA, 2 mM EGTA, 1% Triton X-100, all from Sigma/Merck, Germany, and a complete protease inhibitor cocktail from Roche, Basel, Switzerland). Lysate was obtained after three cycles of freezing and thawing. S. aureus lysate was diluted 1:100 in carbonate buffer (pH = 9.6, 35 mM NaHCO3, 15 mM Na2CO3) and incubated at 4 °C overnight for adsorption. The blocking of non-specific binding to plastic wells was performed with washing buffer containing 0.5% vol/vol Tween-20 in PBS++ pH 7.4 (0.9 mM CaCl2, 0.49 mM MgCl2, 137.9 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, and 1.8 mM KH2PO4; Sigma-Aldrich, Darmstadt, Germany) with 5% dry milk (w/v) for 2 h at room temperature (r.t.). ACD-plasma was serially 10-fold diluted in washing buffer containing 100 µg/mL of purified goat IgGs (026202; #804535A, Invitrogen, Waltham, MA, USA) and then incubated on S. aureus lysate for 2 h at r.t. An anti-human IgG was used for the detection of specific anti-S. aureus IgGs (secondary antibody; goat anti-human IgG HRP-conjugated, 1:5000 dil.; A18817 #9363092322, Invitrogen, Waltham, MA, USA). The addition of purified goat IgGs in the washing buffer avoids the IgG-binding activity of protein A of S. aureus. The results are expressed as [log] O.D. at A405nm.
Microfluidic Device
Two different polydimethylsiloxane (PDMS) microfluidic devices were employed. PDMS is a commonly used material in microfluidics due to its flexibility, transparency, and ease of fabrication. Moreover, it can be a suitable substrate for studying biofilm formation and bacterial adhesion in various medical contexts [42]. The first microfluidic device was composed of 12 straight channels (16 mm long, 100 µm high and 800 µm wide) and was used to run experiments using different plasma and treatments after bacterial adhesion. The second device (micro-pillars chip) was composed of eight straight channels (40 mm long, 40 µm high and 1 mm wide) with five isolated micro-pillars (with a diameter of 50 µm) placed along the channel at 6 mm from each other and slightly shifted with respect to the midline of the channel. The presence of an isolated pillar acts to divert the flow from a rectilinear path, inducing secondary vortices that trigger the reproducible formation of filamentous biofilm structures known as streamers [43,44]. This device was used for experiments of S. aureus adhesion in flow and biofilm streamer formation. The devices were fabricated using soft lithography and rapid prototyping, as previously described [45]. Briefly, master molds were fabricated by patterning the negative photoresist SU-8 (Kayaku Advanced Materials, Inc., Westborough, MA, USA) on silicon wafers. Positive replicas of the microfluidic channels were obtained by pouring PDMS (Sylgard 184, Dow Corning, Midland, MI, USA) and a curing agent 10:1 (w/w) on the master and degassing in a vacuum chamber to remove bubbles. PDMS was thermally cured on a heat plate at 120 °C. Cured PDMS was peeled off, and connecting holes (inlets and outlets) were created using a biopsy puncher (1.5 mm). The PDMS channels were irreversibly bonded to a glass slide upon treatment with oxygen plasma. The devices were sterilized by UV irradiation before each experiment.
Microfluidic Experimental Workflow and Live Cell Imaging
Microfluidic experiments were performed using TSB and different human plasma, serum and proteins, as described in Section 3 and the figure legends. ACD-plasma depleted of FG (FG -; FG-DP, #DP1-0097), Factor X (FX -; FX-DP, #DP10-0139), Factor VII (FVII -; FVII-DP, #DP7-0153), plasminogen (PLG -; PG-DP, #DP21-0025) and plasma control (NP; VisuConF UFNCP0125 #0019-73FCP) were acquired from Stago (Leiden, The Netherlands). Normal human serum (HS; NHS, #47c) was acquired from Complement Technology (Tyler, TX, USA). The recombinant (from NSO cells) purified human urokinase plasminogen activator (uPA; #HKY0921101) and human tissue-type plasminogen activator (tPA; #DATN032304) were acquired from R&D Systems (Minneapolis, MN, USA). Fibrinogen (FG; human native fibrinogen plasminogen-depleted, 341578, #D00160002, Calbiochem, San Diego, CA, USA), fibronectin (FN; human native fibronectin; 341635, #3156097, Calbiochem, San Diego, CA, USA), hyaluronic acid (HA, hyaluronan from human umbilical cord; 385902, #B66144, Calbiochem, San Diego, CA, USA) and type I (#12CSP01A, Nutacon BV, Leimuiden, The Netherlands) and type IV collagen (from equine tendon, 311501c, #L296/1, Biolife Italiana, Milan, Italy) were also used in specific experiments. Before each experiment, microfluidic channels were carefully filled with sterile TSB in order to prevent the entrance of air bubbles and to prime the observation chamber in the system. For each experimental condition, a glass syringe (1 mL, inner diameter 4.78 mm, BD Luer-Lok™) containing a different medium with or without bacteria was connected to the channel through 21 G needles (BD Microlance, 304432, BD, Milan, Italy) and Tygon tubing (inner diameter 508 µm, outer diameter 1.524 mm, #AAD04103, Saint-Gobain, France) and injected into the observation chamber. The flow was driven by a syringe pump (NE 1800, New Era Pump Systems, Farmingdale, NY, USA), using a flow rate of 0.5 µL/min. Syringes were kept on ice throughout the acquisition in order to avoid S. aureus growth in the syringe and to preserve the enzyme cascade efficacy (coagulation and fibrinolysis). As indicated, in specific experiments aimed at measuring S. aureus adhesion, microfluidic devices were precoated with the ECM components and incubated at 37 °C, 5% CO2 for 2 h. Microchannels were then injected with TSB containing S. aureus (1.7 × 10⁶ CFU/mL). In post-adhesion experiments using the straight 12-channel device, S. aureus (1.7 × 10⁷ CFU/mL) was initially seeded inside the microfluidic channels for 30 min at r.t. before starting the flow of different conditions of the medium and imaging acquisition. In experiments using a micro-pillar device, syringes containing S. aureus (1.7 × 10⁶ CFU/mL) and different conditions of the medium were used.
Propidium Iodide (PI) (#P4170, Sigma-Aldrich, Darmstadt, Germany) was added to a final concentration of 1 µg/mL to all the conditions and throughout the experiment as an elective probe for the detection of the eDNA associated with biofilm formation [44,[46][47][48]. When indicated, the flow of plasma was trimmed up to 50 µL/min, and images were acquired every 5 min in order to evaluate the mechanical response of the biofilm colonies and the consequent S. aureus detachment from the surface of the device. All the experiments were performed under climate control (37 °C, 5% CO2 in a humidified atmosphere; Okolab, Naples, Italy). Images were acquired using two DMI8 Leica microscopy systems equipped with a 20× air objective (20×/0.40NA HC PL FLUOTAR L). For each time frame, at least three to four consecutive but non-overlapping images for each channel and condition were acquired after sequential illumination with the Lumen 200 Fluorescence System (Prior Scientific Inc., Rockland, MA, USA) and the collection of the signal contribution for GFP (Em. 495/517 nm), PI (Em. 550/580 nm) and bright field contrast (BF) using an ORCA-Flash 4.0 V3 Digital CMOS camera (C13440-20CU; Hamamatsu, Milan, Italy). Images were acquired every 5 or 10 min up to 5 h using Leica Application Suite X software (LASX; v 3.5.5.19976) or Metamorph (v 7.10.1.161).
Analysis
Images were extracted from .lif files, renamed and processed using a custom ImageJ/Fiji pipeline [49]. During the pre-processing steps, the images were downscaled to 1024 × 1024 pixels, and time-lapses were visually inspected in order to exclude critical acquisitions (e.g., bubble formation) and to ensure that images were in focus for the duration of the experiment. In general, measurements were carried out in regions of interest (ROIs) belonging to the central area of the channel in order to avoid borders where debris adhere and accumulate, and S. aureus grows unevenly as large clumps. For each experiment, a subset of at least three to four GFP images from different time points was used to train a pixel classification model using Ilastik (v 1.3.3 [50]) in order to distinguish GFP-positive (GFP+) regions from the background. The model was applied to segment all GFP images, obtaining eight-bit masks. A custom pipeline using ImageJ macros and R [51] was used to perform all the subsequent analysis. The mask was slightly enlarged (dilation of two pixels in each direction) to close small gaps before measuring the median fluorescence intensity (MFI) of the GFP raw signal inside the mask itself. For adhesion experiments, PI MFI was measured inside the mask at each time point, while in flow experiments, it was calculated above the manually thresholded background. In experiments using a micro-pillar device, a signal contiguous to the pillar was included in the analysis, and specific ROIs were created by hand using ImageJ. A minimal size of 200 µm² was considered for the analysis. In the adhesion experiments, in order to reduce differences due to the initial seeding in the channels, as well as different background signals, both the GFP and PI signals were normalized to the initial MFI (relative MFI, rMFI).
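To make the mask-based quantification concrete, the following is a minimal sketch of the MFI/rMFI computation described above. It is written in Python with NumPy and SciPy rather than the ImageJ/R pipeline the authors actually used; the array names, the random placeholder frames and the thresholding used to stand in for the Ilastik masks are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import binary_dilation

def masked_mfi(raw: np.ndarray, mask: np.ndarray, dilate_px: int = 2) -> float:
    """Median fluorescence intensity (MFI) of `raw` inside a slightly
    dilated segmentation mask, mirroring the two-pixel dilation used
    in the paper to close small gaps in the GFP mask."""
    grown = binary_dilation(mask.astype(bool), iterations=dilate_px)
    return float(np.median(raw[grown]))

def relative_mfi(series: list[float]) -> list[float]:
    """Normalize a time series of MFI values to the first time point
    (rMFI), reducing differences due to initial seeding and background."""
    baseline = series[0]
    return [v / baseline for v in series]

# Hypothetical usage on a time-lapse: one GFP frame and one mask per time
# point (random arrays here stand in for real frames and Ilastik masks).
gfp_frames = [np.random.rand(1024, 1024) for _ in range(5)]
masks = [frame > 0.8 for frame in gfp_frames]
mfi_over_time = [masked_mfi(f, m) for f, m in zip(gfp_frames, masks)]
rmfi = relative_mfi(mfi_over_time)
```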
In order to quantify the morphological differences between S. aureus colonies dependent on different growing conditions, we associated with each image its Circularity Index (CI) (4π × area/perimeter², range of 0-1). CI was calculated with the Analyze Particles plugin on the original non-dilated GFP mask, on colonies with sizes between 10 and 2000 µm² and, typically, for up to 150 min in order to avoid the merging of adjacent colonies. Prism (GraphPad v. 9.5.1) was used to plot the data. The GFP+ area was calculated inside each ROI only on unmasked pixels. Statistical analysis was performed using non-parametric one-way ANOVA and a post hoc pairwise multiple comparisons test (Rstatix package kruskal_test and dunn_test functions). Non-parametric one-way ANOVA (Kruskal-Wallis test) on the different conditions was performed at each time point, and Dunn's test was used to perform pairwise comparisons between selected groups. Initial time points (t < 100 min) were excluded from the statistical analysis on GFP MFI and PI MFI since, in most of the experiments, the increase in these quantities was negligible.
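As a worked illustration of the morphometry and statistics above, the sketch below computes the Circularity Index for labelled colonies and runs a Kruskal-Wallis test followed by Dunn's pairwise comparisons. It uses Python (scikit-image, SciPy and the scikit-posthocs package) in place of the Fiji/rstatix tools named in the text; the pixel-based size filter and the beta-distributed example samples are assumptions, not the authors' data.

```python
import numpy as np
import scipy.stats as st
import scikit_posthocs as sp
from skimage.measure import label, regionprops

def circularity_indices(mask, min_area=10, max_area=2000):
    """CI = 4*pi*area/perimeter^2 for each colony in a binary mask, keeping
    colonies within the stated size window (in pixels here; the paper
    filters in um^2 after pixel-size calibration)."""
    cis = []
    for region in regionprops(label(mask)):
        if min_area <= region.area <= max_area and region.perimeter > 0:
            cis.append(4 * np.pi * region.area / region.perimeter ** 2)
    return cis

# A single square "colony" gives a CI close to (but below) 1.
mask = np.zeros((64, 64), dtype=bool)
mask[20:30, 20:30] = True
print(circularity_indices(mask))

# Hypothetical CI samples for three growth conditions at one time point.
np_10, np_3, np_1 = (np.random.beta(8, 2, 30), np.random.beta(7, 2, 30),
                     np.random.beta(6, 3, 30))
h_stat, p_value = st.kruskal(np_10, np_3, np_1)   # non-parametric one-way ANOVA
dunn = sp.posthoc_dunn([np_10, np_3, np_1])        # pairwise p-value matrix
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3g}")
print(dunn)
```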
Fibrinogen Is Essential in the Different Phases of S. aureus Biofilm Formation
The interaction of S. aureus with host molecules present in blood and ECM is reported to influence biofilm formation by promoting adhesion and aggregation [52]. We employed a multilayer method to define the relevance of ECM and hemostasis molecules in biofilm formation and evaluate the different stages that include attachment, colony formation and maturation, and irreversible attachment or detachment (dispersal of S. aureus colonies), leading to the colonization of other sites [53][54][55].
In the first series of experiments (n = 2), S. aureus adhesion was measured over time (0-200 min) on the glass bottom surface of straight channels of a microfluidic device previously adsorbed with purified FG, FN, HA and type I and type IV collagen (all at 100 µg/mL) (Figure 1 and Movies S1-S6). Among the molecules tested, FG was shown to be essential in inducing S. aureus adhesion to the surface, as ascertained by the quantification of GFP MFI (FG, 282 ± 91 vs. ctrl, 171 ± 1 at t = 180 min; n = 6, 2; p = 0.02) and the GFP+ area (FG, 53.5 ± 46% vs. ctrl, 0.4 ± 0.2% at t = 180 min; n = 6, 3; p = 0.02) (Figure 1A and Movie S1) compared with the control (Figure 1A and Movie S6). In the same experimental settings, no relevance was observed with the use of FN (Figure 1A and Movie S2), HA (Figure 1A and Movie S3), type I (Figure 1A and Movie S4) and type IV (Figure 1A and Movie S5) collagen-coated surfaces in S. aureus adhesion and growth; therefore, subsequent efforts in defining the role of host molecules in the stages leading to S. aureus biofilm formation focused on FG.
In a different series of experiments (n = 4), S. aureus was seeded on the surface of a microfluidic device, and the effect of FG in the assembly of the biofilm matrix was assessed by flowing (with a flow rate of 0.5 µL/min) normal human plasma (NP) in comparison with FG-depleted ACD-plasma (FG -). As established in a previous setup (Figure S1A,B and Movies S7-S11), the use of NP ensured the effective visualization and measurement of biofilm compared to human serum (HS) used at an equal percentage (PI rMFI: 10% NP, 2.4 ± 0.4 vs. HS, 1.0 ± 0.1 at t = 180 min; n = 5, 3; p = 0.03) (Figure S1A,B and Movies S7 and S9). This suggests the relevance of FG and of an active coagulation cascade in biofilm formation, since these elements are consumed during serum preparation [56] (n = 3 NP, levels of FG = 230.7 ± 53.5 mg/dL vs. undetectable in the corresponding HS; not shown). Similar S. aureus growth associated with biofilm detection was observed when percentages of 1, 3 and 10 NP were used (GFP rMFI: 10%, 7.8 ± 1.8; 3%, 6.0 ± 0.2; 1%, 7.0 ± 1.1, at t = 180 min; PI rMFI: 10%, 2.1 ± 0.1; 3%, 2.29 ± 0.2; 1%, 1.5 ± 0.1, at t = 180 min; n = 8, 2, 3) (Figure S1A,B and Movies S9-S11). However, S. aureus colonies showed typical characteristics when grown in 10% plasma compared to the other conditions, and, as ascertained from morphometric measurement parameters (Figure S1C), they appeared more expanded and rounded (CI: 10%, 0.78 ± 0.04; 3%, 0.76 ± 0.01; 1%, 0.68 ± 0.02, at t = 100 min; p = 0.03 NP 10% vs. NP 1%; n = 8, 3), indicating a mature state of S. aureus colonies associated with the highest biofilm measurement.
The staphylokinase-dependent activation of PLG prevents biofilm formation in a murine model of catheter infection [25]. Therefore, a possible relevance of active PLG in interfering with biofilm formation was evaluated using PLG-depleted plasma (PLG -). S. aureus showed similar growth and biofilm formation in PLG - and NP (GFP rMFI, 6.1 ± 1.0 vs. 6.0 ± 0.2; PI rMFI, 1.9 ± 0.1 vs. 1.7 ± 0.1, at t = 180 min; n = 3) (Figure 3A,B and Movies S17 and S19), whereas, as expected, S. aureus growth and biofilm were abolished in FG - (GFP rMFI, 2.1 ± 0.1; PI rMFI, 1 ± 0.1, at t = 180 min; n = 3; p = 0.02 vs. PLG -) (Movie S18). A similar morphology and structure of S. aureus colonies were also observed in PLG - and NP (CI, 0.75 ± 0.02 vs. 0.81 ± 0.01, at t = 180 min; n = 3) (Figure 3B,C and Movies S17-S19); thus, in our experimental approach, no relevance of PLG in altering the biofilm process was observed. In order to corroborate the results obtained by further mimicking a pathological context of infection occurring in an indwelling medical device [57], experiments (n = 3) using the same conditions were performed by flowing S. aureus into a microfluidic device equipped with micro-pillars (Figure 4). In NP, S. aureus bacteria in flow adhere to the inner surfaces of the microfluidic channels, aggregate and form a coagulative state close to the pillar, which is essential in supporting the formation of filamentous biofilm structures known as "streamers" (Figure 4A-C and Movie S20) [44,58]. In FG -, S. aureus was unable to perform the first phases involving adhesion, aggregation and coagulation (GFP MFI: FG -, 882 ± 668 vs. NP, 1311 ± 724 at t = 180 min; n = 6, 10; 2 fields with a detectable signal in FG -); therefore, biofilm formation was not detected under these conditions (Figure 4B,C and Movie S21). Interestingly, the reconstitution of FG (400 µg/mL) alone in FG - rescued S. aureus's ability to adhere to the pillar, aggregate (GFP MFI: FG - + FG, 1212 ± 571, at t = 180 min; n = 15; 15 fields with a detectable signal), coagulate and form biofilm (PI MFI: FG - + FG, 986 ± 187, vs. FG -, 809 ± 148 at t = 180 min; n = 15, 4; p = 0.03) (Figure 4B,C and Movie S22).
Therefore, as assessed by employing two different microfluidic methods, FG plays an essential role in triggering the early phases of S. aureus biofilm formation by driving adhesion and aggregation to the surface. S. aureus-induced fibrin polymerization is critical in the assembly and organization of the matrix, leading to biofilm constitution.
In pillar-based devices, similar results were obtained, with tPA showing a higher efficiency than uPA, in line with its high specificity for activating fibrinolysis in the fluid phase compared to the cell-mediated fibrinolysis at tissue sites [62]. Indeed, in the presence of tPA, uPA and NP, the PI MFI values were 965 ± 230, 1391 ± 466 and 1542 ± 628, respectively (t = 220 min; n = 10; p = 0.04) (Figure 6A,B and Movies S27, S29 and S30). The use of tPA resulted in S. aureus's inability to trigger the initial actions critical in biofilm formation, such as adhesion, aggregation (GFP MFI, tPA, 2639 ± 1328 vs. NP, 5644 ± 1870, at t = 220 min; n = 10; p = 0.002) and clot formation essential in the continuation of the process (Figure 6A,B and Movies S27, S29 and S30). In both methods, FG - was used as a reference control for the growth, biofilm production and morphology of S. aureus colonies (Figures 5A-D and 6A,B and Movies S24 and S28).
The results indicate a role of fibrinolysis in inhibiting S. aureus biofilm formation, possibly by acting on the degradation of fibrin. In agreement with the results obtained in FG-deficiency, the activation of fibrinolysis alters S. aureus adhesion and growth, thus interfering from the earliest stages in the biofilm constitution.
The Reactivation of Fibrinolysis in S. aureus-Induced Sepsis Favors an IgG-Mediated Immune Response
In S. aureus biofilm, a fibrin scaffold provides high resistance to antimicrobial treatments and immune cell recognition [63]. In blood, S. aureus coagulases are essential to forming a mechanical barrier to protect S. aureus from recognition by opsonins [23]. In S. aureus-induced septicemic patients, an impairment of fibrinolysis contributes to disseminated intravascular coagulation (DIC) [11,[64][65][66]. Therefore, we evaluated the effect of fibrinolysis reactivation on the immune response in S. aureus-induced septic patients.
In this regard, S. aureus that had previously adhered to the bottom of straight channels was flushed with 10% plasma obtained from patients with a bacteriological diagnosis of blood S. aureus infection and a different titre of specific anti-S. aureus IgGs, high in ACD-plasma from patient 1 (PzP 1) and low in ACD-plasma from patient 2 (PzP 2) (Figure S3). As shown in Figure S1, the method and analysis were revised in detail in order to specifically distinguish the detection of biofilm formation from pathogen killing, as expected from an incubation of S. aureus in a context of specific immunoresponsiveness. Actually, it was possible to obtain an unbiased measure of S. aureus biofilm formation in NP vs. S. aureus killing (vancomycin-mediated in Figure S1D-F). Therefore, at increased PI MFI, GFP MFI correspondingly decreased due to bacterial death (Figure S1E), as ascertained by the relative GFP MFI values (NP, 7.8 ± 1.8, vs. vancomycin, 1.8 ± 0.3, at t = 180 min; n = 8, 4; p = 0.007) of S. aureus growth and the relative PI MFI (NP, 2.2 ± 0.5, vs. vancomycin, 4.8 ± 1.1, at t = 180 min; n = 8, 4; p = 0.007) (Figure S1D,E).
Discussion
The participation of ECM elements in the innate immune response is an ongoing topic [16]. Although generally considered to be separate from the innate immune response, the evasion of pathogens from the host defense involves mechanisms that are mediated by their interaction with the ECM as well as by the manipulation of elements of hemostasis [66]. In specific contexts, the interaction of pathogens with the same molecules is a disadvantage for the onset of infection. In fact, ECM elements act as an integral part of inflammatory and innate immune responses [67,68], have antimicrobial functions by acting as recognition elements and behave as opsonins [15,69]. On the other hand, fluid-phase molecules of innate immunity play essential roles in tissue repair and healing through remodelling the ECM by interacting with its elements [12] or regulating the activities of the cells involved [70]. A mutually functional relationship between hemostatic and innate immune responses is consolidated [1,2].
In the present study, we investigated the complexity of interactions among the ECM and hemostatic system in the biofilm formation of S. aureus, a leading cause of hospital-associated bloodstream infections and the most common cause of several life-threatening conditions, such as endocarditis and sepsis [18,19,21]. S. aureus accounts for the majority of medical device-associated infections and sepsis, which are difficult to treat because of the biofilm structure that allows them to evade the immune response and disfavor the efficacy of antibiotic treatment [28].
By investigating the underlying mechanisms of these processes, we sought to develop insights that could contribute to improving medical treatments. To this end, we have used Live Cell Imaging, microfluidics and data analysis to mimic an in vivo pathological context of S. aureus infection on an indwelling medical device. Specifically, we exploited different microfluidic channels to capture the spatiotemporal dynamics of S. aureus biofilm formation and to evaluate the effect of ECM and hemostasis molecules in this process. This parallel experimentation enabled efficient data collection and reduced experimental bias, as it allows for the comparison of experimental conditions within the same analysis. In addition, we used a newly developed microfluidic device whose basic unit is a straight channel with isolated micropillars located along its length [37,43]. The pillars serve as nucleation sites for the formation of streamers [43,44]. The induction of streamer formation provides a valuable approach to probing the composition and mechanical properties of a biofilm [43].
S. aureus-driven molecular mechanisms at the various phases leading to the formation of the biofilm have been extensively investigated [71]. On the other hand, there is scattered evidence on the role of the host's molecules in influencing the biofilm process. Surface-associated host proteins on implants are reported to mediate the adhesion of S. aureus [52], and evidence points to a functional relationship between S. aureus, hemostasis elements and ECM in the initiation of biofilm [13,36]. The broad-spectrum approach of the different ECM molecules used in our study allowed for consolidating the essential role of fibrinogen in promoting S. aureus adhesion to the device surface, thus allowing for the initiation of the process of biofilm formation, with respect to the other molecules considered in the study. Moreover, in line with reported evidence [24,25], fibrin conversion is essential in assembling a stable matrix supporting biofilm formation and providing the attachment and resistance of colonies to fluid flow. At this stage, we demonstrated a functional exclusivity of S. aureus-driven coagulation, since no effect was observed in experiments using human plasma depleted of coagulation elements.
The formation of a stable and impenetrable protective coagulative matrix underlies the altered recognition by immune system molecules and the ineffective treatment of biofilm-associated S. aureus [33,72,73]. Evidence suggests an effect of fibrinolytic agents (e.g., streptokinase, nattokinase) in preventing S. aureus biofilm formation in vitro and enhancing the efficacy of antimicrobials in S. aureus device-related infections in vivo [59,60]. A prophylactic use of tPA, as a coating of devices, prevented S. aureus adhesion and increased susceptibility to the treatment in vitro [61]. Using two different microfluidics systems, we defined the role of uPA- and tPA-mediated fibrinolysis activation in plasma in the inhibition of biofilm formation at the different stages that include attachment, colony formation and maturation, thus expanding the understanding of the mechanisms underlying the activity of these mediators. In the multichannel platform, an inhibitory effect of uPA-mediated fibrinolysis activation had never been described before. Using the pillar-based device, the activation of tPA-mediated fibrinolysis resulted in interference in S. aureus adhesion, aggregation and coagulation and hence in the continuation of the process of biofilm formation.
Hereafter, the same rationale prompted us to evaluate the effect of fibrinolysis in promoting S. aureus recognition and immune effector functions, favored by the remodeling of the fibrin matrix interacting with S. aureus adhering to device surfaces. In S. aureus-induced septicemic patients, the hemostatic system is activated in a dysregulated manner due to the alteration of the pro/anticoagulant system and the impairment of fibrinolysis (the so-called fibrinolytic shut-down), contributing to multiorgan damage and mortality [11,64,65]. Indeed, plasminogen activator inhibitor-1 (PAI-1) is a crucial regulator of fibrinolysis [74]. Increased levels of PAI-1 in sepsis predict disease severity and mortality [75], thus indicating a role of fibrinolysis in the disease outcome by inhibiting the disseminated thrombotic process in the microcirculation, a major cause of multiple-organ dysfunction in sepsis [76,77]. Thus, our results obtained through approaches that mimic the in vivo condition indicate that, despite the presence of antibodies in septicemic individuals [78], a reduction in fibrinolysis that promotes biofilm formation can alter an immune-mediated recognition and clearance of S. aureus. We observed that the activation of fibrinolysis is associated with the prevention of S. aureus biofilm formation and the enhancement of IgG-mediated pathogen killing, most likely associated with fibrinolysis activity in degrading S. aureus-associated fibrin. This translational part deserves extensive insights and will be extended to an increased number of patients and to associative analyses related to the clinical outcome by evaluating markers of hemostasis, inflammation and disease severity. Future studies will also be aimed at evaluating a synergistic effect between fibrinolysis activation and innate defense systems, such as the complement system and other fluid-phase mediators of innate immunity, in response to biofilm-associated bacteria.
Conclusions
S. aureus-forming biofilm is a cause of critical infections, and this represents a medical challenge given the lack of therapeutic approaches. Our study defines the importance of host-derived hemostasis factors in the different stages leading to S. aureus-driven biofilm formation, highlighting a central role of fibrinolysis in preventing biofilm initiation and formation. Actually, fibrinolysis is effective in unmasking surface-associated S. aureus and, in turn, in reactivating recognition and effector functions of the immune system, leading to pathogen clearance. The results are therefore instrumental in the development of new combined therapeutic approaches for the clinical management of S. aureus biofilm-associated infections and sepsis.
Figure 2 .
Figure 2. Role of fibrinogen in the assembly and formation of S. aureus (SA) biofilm. (A-D) S. aureus (1.7 × 10⁷ CFU/mL) previously adhered on the bottom surface of straight microfluidic channels. Conditions include TSB, 10% of normal (NP) and fibrinogen-depleted (FG -) ACD-plasma diluted in TSB, and FG - ACD-plasma diluted in TSB added with human purified FG (400 µg/mL) and FG (400 µg/mL) in TSB. (A) S. aureus growth and biofilm formation were evaluated, respectively, as relative GFP (upper) and PI (lower) MFI. Values are represented as functions of time and normalized over the first time point. The PI signal was considered superimposed to the GFP-positive mask, as described in Materials and Methods. * p < 0.05, ** p < 10⁻³ FG - vs. NP at t = 180 min, n = 12, 15, Dunn's test. (B) Images of bright field (BF), GFP and PI at representative time points (t = 30, 100 and 180 min), referring to one experiment. At t = 100 min, BF close-up images representing S. aureus morphology are also shown. Bar, 50 µm. (C) Mean CI of S. aureus colonies. *** p < 10⁻⁴ FG - vs. NP at t = 100 min. (D) Relative GFP MFI over time after the flux increase (at t = 200 min, 10 µL/min for 30 min to 50 µL/min until the end of the experiment). *** p < 10⁻⁴ FG - vs. NP, n = 9, 12, Dunn's test. (A,C) Mean ± SE of 9 to 15 ROIs from three experiments out of four performed with similar results. (D) Mean ± SE of 3 to 12 ROIs from two experiments out of four performed with similar results.
Figure 3 .
Figure 3. Plasminogen does not affect S. aureus (SA) biofilm formation. (A-C) The same experimental setting and analysis as those in Figure 2 were used; 10% of PLG-depleted (PLG -) or FG - and NP ACD-plasma diluted in TSB were used. (A) Relative GFP (upper) and PI (lower) MFI ± SE over time. * p < 0.05 PLG - vs. FG -, t = 180 min, Dunn's test. (B) BF, GFP and PI images at t = 30, 100 and 180 min, representative of one experiment. Bar, 50 µm. (C) Mean CI of S. aureus colonies. (A,C) Each point refers to the Mean ± SE of three ROIs from one experiment of three performed.
Figure 4 .
Figure 4. Role of fibrinogen in the adhesion, assembly and formation of biofilm by S. aureus (SA) in flow. (A-C) A pillar-based microfluidic device was used. S. aureus (1.7 × 10⁶ CFU/mL) was injected with a flow rate of 0.5 µL/min at the same experimental conditions as in Figure 2. (A) Representative BF images, which refer to one experiment of six performed, showing the different initiation phases (adhesion, coagulation, fibrin matrix assembly) that lead to biofilm formation around the micropillar. (B) S. aureus growth as GFP MFI (×10³) and biofilm detection as PI MFI (×10³) as a function of time. Each point refers to the Mean ± SE of 6-15 ROIs from three experiments. Missing values in FG - correspond to time points where less than two pillars (over six) had a detectable signal, as described in the Section 2. * p < 0.05 FG - + FG vs. FG - at t = 180 min, Dunn's test. (C) GFP, PI and contrast images are shown at t = 30, 100 and 180 min. Bar, 50 µm. (A,C) Arrow indicates the flow direction.
Figure 5 .
Figure 5. Triggering fibrinolysis interferes in the formation of S. aureus (SA) biofilm. (A-D) Straight microfluidic channels, with S. aureus (1.7 × 10⁷ CFU/mL) previously adhered on the bottom surface, were used in the experiments (n = 2); 10% of NP with or without the addition of recombinant purified uPA (0.4 µg/mL) or tPA (0.4 µg/mL) and FG - were used. (A) S. aureus growth as relative GFP MFI ± SE (upper) and biofilm formation as PI MFI ± SE (lower) over the initial time. (B) BF, GFP and PI images referring to one representative experiment are shown at t = 30, 100 and 180 min. Bar, 50 µm. (C) CI of S. aureus colonies expressed as the Mean ± SE. (D) S. aureus detachment as an expression of the relative GFP after the flow rate was increased up to 10 µL/min for 30 min and later to 50 µL/min until the end of the acquisition. Each point is the Mean ± SEM of three to eight ROIs from one experiment out of two performed with similar results. * p < 0.05 NP vs. NP + uPA at t = 180 min (A), t > 100 min (C), t = 240 min (D), Dunn's test.
Figure 6 .
Figure 6. Triggering fibrinolysis interferes in the initial phase, leading to biofilm formation by S. aureus (SA) in flow. (A,B) A pillar-based microfluidic device and the same experimental conditions as those in Figure 4 were used; 10% of NP with or without the addition of recombinant purified uPA (0.4 µg/mL) or tPA (0.4 µg/mL) and FG - were used. (A) S. aureus growth (left) as GFP MFI (×10³) and biofilm detection (right) as PI MFI (×10³) as a function of time. Each point refers to the Mean ± SE of 8-10 ROIs from two experiments. * p < 0.05, ** p < 0.005 NP vs. NP + tPA, n = 10, t = 220 min. (B) GFP, PI and contrast images (t = 30, 100 and 180 min) of one representative experiment. Bar, 50 µm. The arrow indicates the flow direction. | 2023-09-08T15:21:22.580Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "47bf8d41c19a5e7d0b378850cf255b0240d8ad6b",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-0817/12/9/1141/pdf?version=1694054044",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "bf57474e1c0936c1960a7a64e6a31f5689b7cbcb",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
255826912 | pes2o/s2orc | v3-fos-license | Myopia and its associated factors among pregnant women at health institutions in Gondar District, Northwest Ethiopia: A multi-center cross-sectional study
Background Myopia is the most common form of uncorrected refractive error, with a growing burden worldwide. It is the principal complaint of most women during pregnancy. Although myopia has led to several consequences across the standard life of pregnant women, there is no previous study in Ethiopia regarding this topic. Thus, this study determined the prevalence of myopia and identified its associated factors among pregnant women attending antenatal care units at governmental health institutions in Gondar City District, Northwest Ethiopia. Methods An institution-based cross-sectional study design was conducted from 08 February to 08 April 2021. From the selected health centres, study participants were recruited by a systematic random sampling technique. A pre-tested, structured, interviewer-administered questionnaire consisting of socio-demographic, obstetric and clinical-related variables was used to collect the required data. Non-cycloplegic refraction was performed using trial lenses, trial frames, and retinoscopy in a semi-dark examination room. EpiData 3 and STATA 14 were used for data entry and statistical analysis, respectively. Both bivariable and multivariable binary logistic regression analyses were executed to identify factors associated with myopia. Variables with a p-value ≤0.05 in the multivariable logistic regression analysis were declared as statistically significant with myopia. Model fitness was checked by the Hosmer and Lemeshow goodness-of-fit test (at p > 0.05). Results A total of four hundred and twenty-three pregnant women participated, with a 100% response rate, in this study. The overall prevalence of myopia among pregnant women was 26.48% (95% CI: 22.48–30.91). Eighty-eight (20.81%) and eighty-four (19.85%) of the study participants had myopia in their right and left eyes, respectively. The prevalence of myopia was significantly associated with age (AOR = 1.17; 95% CI: 1.09–1.28), the third trimester of gestation (AOR = 2.05; 95% CI: 1.08–3.90), multi- and grand multipara (AOR = 3.15; 95% CI: 1.59–6.25), and history of contraceptive use (AOR = 3.30; 95% CI: 1.50–7.28). Conclusion The findings of our study show that there is a high prevalence of myopia among pregnant women in our study area. Further prospective analytical studies regarding visual systems among pregnant women, particularly as a result of pregnancy, are strongly recommended.
Introduction
Myopia (defined as "the spherical equivalent of objective refraction is ≤ −0.50 diopter in either eye or both") is the most common form of uncorrected refractive error. Myopia is the chief cause of visual impairment across the globe, irrespective of age and sex, affecting about 30% of the world's population (1,2). Nowadays, myopia is a frightening pandemic refractive problem affecting about 2.5 billion people worldwide (3). As a recent systematic review and meta-analysis has suggested, about 34% of the global population became myopic by 2020, and half (49.8%) of the world's population may be affected by myopia by 2050 (4).
According to studies carried out in different countries of the world, adult females are more prone to developing myopia than males (9)(10)(11)(12). In Europe, 42.3% of women are affected by nearsightedness (6). Myopia is detected in 27.5% of females according to a study in Israel (13). The prevalence of myopia among Chinese women is about 45.4% (8). A study in Saudi Arabia estimated that the prevalence of myopia among women is 18.1% (14). The prevalence rate of myopia among female medical students, based on another study in Saudi Arabia, is 34.6% (15). Based on a study in Ethiopia, the prevalence of myopia among female school-age children is about 27%, while it is only 12% among male students (16).
Metabolic and hormonal changes during pregnancy can upset the normal visual functions of women's eyes. Myopia is the principal complaint of most women during pregnancy. This problem is due to either physiological changes during pregnancy or exacerbations of pre-existing medical conditions (17, 18). Most myopic changes that happen during pregnancy are transient, but occasionally, they may lead to permanent complications which will interfere with the usual health of the women (17, 19). Based on a study in Iran, myopia is observed in 11.77% of pregnant women and is more aggravated in the third trimester of gestation (18). A study in India revealed that 65% of pregnant women have myopia (20). The prevalence of myopia among pregnant women was reported as 77.50%, based on a study in South India (21). A study in Nigeria shows that myopia is the most prevalent type of refractive error among pregnant women, accounting for 57% (17). According to a recent institution-based cross-sectional study in Ethiopia, 35.66% of pregnant women have refractive errors (22).
The global burden of myopia is increasing over time and influences the quality of life of individuals by way of poor vision (low vision and blindness), low productivity, and reduced social interactions (1,4,23). Myopia is also a cause of retinal degenerative changes (retinal detachment), which may lead to intra- and post-partum ophthalmological complications during pregnancy (24). It can also increase the risk of social loneliness and depression, lead to the inability to perform tasks alone, and increase the risk of fall-related injuries and of sexual violence and abuse (25).
Although myopia has led to several consequences across the quality of life of pregnant women, there is no previous study in Ethiopia regarding this topic. Thus, this study aimed to determine the prevalence of myopia and associated factors among pregnant women attending antenatal care units at selected governmental health institutions in Gondar district, Northwest Ethiopia. Information on the prevalence of myopia among pregnant women can help clinicians and policymakers to design appropriate prevention strategies.
Methods and materials
Study design, setting, and population
An institution-based cross-sectional study design was conducted from 08 February to 08 April 2021. The study was conducted at selected governmental health institutions in Gondar District. Gondar is a historical city in Ethiopia located 727 km from the capital city, Addis Ababa, in the Northwest direction. It has 12 sub-cities with 12 urban and 10 rural kebeles. In Gondar district, there are eight health centres and one teaching referral hospital providing antenatal care (ANC) services for about 41,000 pregnant women annually. This study was conducted among pregnant women of the 15-49 age group. All pregnant women who visited ANC services of the selected health institutions were included in the study, whereas those with congenital eye problems and eye trauma during the study period were excluded.
Sample size determination and sampling procedure
A single population proportion formula was used to calculate the required sample size, as sketched below. A 0.5 proportion of the population with myopia was taken to estimate the minimum sample size because there was no study in the same study area. A 5% margin of error, a 95% confidence interval, and a 10% non-response rate were also considered to calculate the sample size. Hence, the total sample size became 423. A simple random sampling method was used to select health institutions for the study. Four health centres from the district were randomly selected by lottery methods. From the selected health centres, study participants were recruited by a systematic random sampling technique. To improve the representativeness of the sample size to the source population, proportional allocation was performed for each health institution (Figure 1).
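For concreteness, the single population proportion calculation can be reproduced as follows. This is a minimal sketch in Python; the z-value, proportion, margin of error and non-response rate come from the text above, while the rounding convention (inflating the unrounded estimate by 10% before taking the ceiling) is an assumption chosen because it reproduces the reported sample size of 423.

```python
import math

def single_population_proportion_n(p: float, d: float, z: float = 1.96,
                                   nonresponse: float = 0.10) -> int:
    """n0 = z^2 * p * (1 - p) / d^2, then inflate by the expected
    non-response rate and round up to the next whole participant."""
    n0 = (z ** 2) * p * (1 - p) / (d ** 2)   # 384.16 for p = 0.5, d = 0.05
    return math.ceil(n0 * (1 + nonresponse))  # 384.16 * 1.1 = 422.58 -> 423

print(single_population_proportion_n(p=0.5, d=0.05))  # -> 423
```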
Study variables
The dependent variable was myopia, which was dichotomized as "yes" or "no". We classified a study participant as "yes" (myopic) if the spherical equivalent of objective refraction was ≤ -0.50 diopter in either eye or both, and as normal ("no") otherwise, i.e., if the spherical equivalent of objective refraction was > -0.50 diopter in both eyes.
The independent variables were age, residence, occupation, educational level, parity, gestational age, history of diabetes mellitus (DM), gestational diabetes mellitus (GDM), history of hypertension (HTN), pregnancy-induced hypertension (PIH; preeclampsia and eclampsia), history of medication, regular use of smartphones and computers or watching TV, history of contraceptive use, sleep disturbance, and family history of vision problems.
Operational definitions
Myopia: the spherical equivalent of objective refraction is ≤ -0.50 diopter in either eye or both. The severity of myopia is categorized as mild myopia: spherical equivalent = −0.50 to −2.99 D; moderate myopia: spherical equivalent = −3.00 to −6.00 D; high myopia: spherical equivalent worse (more negative) than −6.00 D (2, 48, 49); see the sketch after these definitions.

Regular use of computers or television: Reading or watching computers or television at least once a day for not less than 2 h (50).
Regular use of smartphones: Using smartphones at least once a day for more than 2 h (50).
Medication History: Taking anti-rheumatic, anti-psychiatric & anti-thrombotic drugs in the last 30 days.
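The severity cut-offs above translate directly into a small classifier. A minimal sketch; the function name and the handling of values falling exactly on the stated boundaries are our assumptions:

```python
def classify_myopia(se: float) -> str:
    """Classify a spherical equivalent (in diopters) using the study's cut-offs."""
    if se > -0.50:
        return "not myopic"       # SE > -0.50 D in the examined eye
    if se > -3.00:
        return "mild myopia"      # -0.50 to -2.99 D
    if se >= -6.00:
        return "moderate myopia"  # -3.00 to -6.00 D
    return "high myopia"          # worse than -6.00 D

print(classify_myopia(-1.25))  # mild myopia
```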
Data collection tools, procedures, and quality management
A pre-tested, structured, interviewer-administered questionnaire consisting of socio-demographic, obstetric, and other clinical variables was used to collect the required data. Presenting visual acuity was determined using Snellen's illiterate "E" chart in a well-illuminated room, at a distance of 6 meters from the chart. Non-cycloplegic refraction was performed for all study participants using trial lenses, trial frames, and retinoscopy in a semi-dark examination room. Data were collected by two BSc midwives and two optometrists. Training was given to the data collectors and the supervisor on the objectives of the study, data collection techniques, and ethical issues. Strict supervision was undertaken during the process of data collection. The study participants received counselling and referral depending on the ocular findings.

Data processing and analysis procedure

The collected data were entered into EpiData 3.1 and exported into STATA 14 for statistical analysis. Descriptive measures such as median, frequency, and interquartile range (IQR) were calculated. Bivariable binary logistic regression analysis was used to select the candidate variables for the final model: those variables with a p-value of <0.2 in the bivariable analysis were entered into the multivariable binary logistic regression. Multivariable binary logistic regression analysis was executed to identify factors associated with myopia. The measure of association was the adjusted odds ratio (AOR) with a 95% confidence interval. In the final model, variables with a p-value ≤0.05 were declared statistically associated with myopia. Model fitness was checked by the Hosmer-Lemeshow goodness-of-fit test (at p > 0.05) and multi-collinearity was tested by the variance inflation factor (VIF).
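A minimal sketch of the described two-stage analysis (bivariable screening at p < 0.2, then one multivariable model reported as AORs with 95% CIs), written here with Python's statsmodels rather than STATA; the variable names and data frame layout are placeholders:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def screen_and_fit(df: pd.DataFrame, outcome: str, candidates: list):
    """Bivariable screening at p < 0.2, then one multivariable logistic model."""
    y = df[outcome]                            # binary outcome: myopia yes=1 / no=0
    kept = []
    for var in candidates:
        res = sm.Logit(y, sm.add_constant(df[[var]])).fit(disp=0)
        if res.pvalues[var] < 0.2:             # candidate for the final model
            kept.append(var)
    final = sm.Logit(y, sm.add_constant(df[kept])).fit(disp=0)
    aor = np.exp(final.params)                 # adjusted odds ratios
    ci = np.exp(final.conf_int())              # 95% CIs on the odds-ratio scale
    return kept, aor, ci
```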
Ethical approval and consent to participate
Prior to the study commencement, all ethical issues were addressed. Ethical clearance was obtained from the Institutional Review Board (IRB) of the University of Gondar with the reference number 1828/2012. A permission letter was obtained from the Gondar district health office before data collection. This study was done in accordance with the relevant guidelines and regulations of the Declaration of Helsinki. After the study participants were adequately briefed about the study, written informed consent was taken from each participant. Privacy and confidentiality of information were properly maintained. Study participants who had moderate or high myopia at the time of data collection were referred to the Department of Ophthalmology at the University of Gondar Comprehensive Specialized Hospital for further diagnosis and management.
Socio-demographic characteristics of the pregnant women
A total of four hundred and twenty-three pregnant women participated in this study, a 100% response rate. The age of the participants ranged from 16 to 46 years. The majority of the pregnant women (82.51%) were from urban residences. Of the study participants, 37.12% had a college or university level of education and 33.10% were housewives by occupation (Table 1).
Lifestyle, clinical, and obstetric-related characteristics
The majority of the study participants were nulliparous or primiparous (61.70%), and 64.78% were in the third trimester of gestation. Thirty-nine (9.22%) pregnant women had a history of vision problems (myopia). The majority of the study participants (61.70%) had used smartphones for more than 2 h per day. Two hundred and forty (56.74%) of the study participants had a history of contraceptive use prior to their current pregnancy (Table 2).
Prevalence of myopia and its associated factors
In this study, the overall prevalence of myopia among pregnant women was 26.48% (95% CI: 22.48-30.91). Eighty-eight (20.81%) and eighty-four (19.85%) of the study participants had myopia in their right (Rt) and left (Lt) eyes, respectively. The spherical equivalent (SE) refractive error in the right and left eyes ranged from −14.0 D to +4.0 D and from −12.0 D to +4.0 D, respectively. In both eyes, the majority of the study participants had an SE of 0.0 D (two hundred fifty-five (60.28%) in the right eye and two hundred sixty-two (61.94%) in the left eye) (Figures 2 & 3). The median spherical equivalent in both eyes (Rt & Lt) was 0.0. The spherical equivalent results showed that 61.46% of the women were emmetropic, 12.06% were hyperopic, and the remaining 26.48% were myopic (Table 3).
Among all variables entered into the bivariable binary logistic regression, age, residence, educational status, occupation, parity, gestational age, history of DM, GDM, history of HTN, PIH, family history of vision problems, history of contraceptive use, and history of medication were associated with myopia at p-value <0.2. However, in the final model, only age, parity, gestational age, and history of contraceptive use were significantly associated with myopia at p-value ≤0.05. The odds of myopia increased by 1.17 times (AOR = 1.17; 95% CI: 1.09-1.28) for each one-year increase in the age of the pregnant woman. Pregnant women in the third trimester had 2.05 times (AOR = 2.05; 95% CI: 1.08-3.90) higher odds of myopia than those in the first and second trimesters. Multiparous and grand multiparous women were 3.15 times (AOR = 3.15; 95% CI: 1.59-6.25) more likely to have myopia than nulliparous and primiparous women. The odds of myopia among pregnant women with a history of contraceptive use prior to their current pregnancy were 3.3 times (AOR = 3.30; 95% CI: 1.50-7.28) higher than among non-users (Table 4).
Discussion
Pregnancy is a normal physiological condition that is often characterized by both physiological and pathological changes in all organ systems of the body, including the visual system. Most of the changes during pregnancy are transient responses to the hormonal and metabolic modifications needed to accommodate the gestational product, but there are also critical pathological complications that may persist beyond the postpartum period in women of reproductive age (53, 54). Refractive errors are the most common type of ocular alteration among pregnant women, of which myopia is the largest variety; however, to the best of our knowledge, very little is known about the magnitude of myopia among pregnant women in Ethiopia. Thus, this study (the first of its kind in Ethiopia) tried to offer insight into the magnitude of myopia and its significant factors among pregnant women at governmental health institutions in the Gondar District.
In our study, the overall prevalence of myopia among pregnant women was 26.48% (95% CI: 22.48-30.91), which is comparable with a study in Israel (27.5%) (13). However, our finding is lower than the studies conducted in India (65%) (20), South India (77.5%) (21), Nigeria (57%) (17), and the USA (25%-50%) (40). This discrepancy might be a result of differences in study settings and study design. For instance, we applied an institution-based cross-sectional design, while most of the previous studies used observational prospective designs. Another possible reason for the variation is the cultural and socio-economic characteristics of the study population: the Ethiopian population, including women, has less exposure to potential risk factors, such as digital device use and environmental hazards (industries), compared with the populations of developed countries.
The prevalence of myopia among pregnant women in this study is higher than in other previous studies in Saudi Arabia (18.1%) (14), Iran (11.77%) (18), and South Africa (2.9%) (55). This variation might be attributable to differences in the study population: our study population comprised only pregnant women, whereas the compared studies above included non-pregnant women. Many previous studies around the world have revealed that the prevalence of myopia increases during pregnancy because of metabolic and hormonal changes (18, 26, 49, 54, 56). In the course of pregnancy, the increased levels of estrogen and progesterone cause fluid retention in the cornea. This leads to corneal edema, increased corneal thickness and curvature, and amplified lens thickness, which subsequently increase the refractive power of the eye and end up in myopia (18, 20, 21, 26, 54, 57). Myopia can also be associated with neuro-ophthalmic and other pre-existing conditions precipitated by gravidity (26).
A one-year increase in maternal age was significantly associated with myopia, which is in line with other studies in South India (58), the United States of America (59), China (11, 48), and Sri Lanka (60). The increased likelihood of myopia with age might be due to an increased risk of age-related diseases of the eye. With increasing age, the nature and functions of the lens and cornea gradually decline, strongly affecting the normal focusing of light onto the retina (26).
Myopia was 2.05 times more likely to occur in women in the third trimester of gestation, which is consistent with other studies in South India (21), Turkey (61), Iran (62), and Nigeria (17). As reported by previous studies, the reason for this might be the metabolic and hormonal fluctuations of gestation, which may lead to increased corneal thickness and greater refractive power of the lens, finally bringing about myopia among pregnant women (21, 26, 63).
The odds of myopia were 3.15 times higher among multiparous and grand multiparous pregnant women than among nulliparous and primiparous women. This result is in line with previous studies in South India (21) and China (34). This is probably due to repetitive ocular changes caused by hormonal influences over the successive pregnancies of women with higher parity. With increasing parity, corneal edema, thickness, and curvature might become more pronounced, disturbing the normal refractive power of the eyes.
The odds of myopia among pregnant women with a history of contraceptive use before their current pregnancy were 3.3 times higher than among non-users. Our finding is similar to previous studies in India (64), Egypt (47), and Greece (65). This may be because using contraceptives (oral and injectable) as a family planning method causes corneal edema and an increase in corneal thickness and curvature, associated with the hormonal effects of estrogen and progesterone, which lead to myopia (66, 67).
A perfect response rate (100%) was a strength of our study. However, the study was cross-sectional and therefore could not establish cause-effect relationships between the independent variables and myopia. We also did not perform a cycloplegic refraction test, assuming that the procedure is exhausting and that its overall effect on the outcome variable would be very small, since most of our study participants were adults (26-35 years old).
Conclusion
The findings of our study show a high prevalence of myopia among pregnant women in the study area. Myopia was significantly associated with maternal age, the third trimester of gestation, multiparity and grand multiparity, and a history of contraceptive use before the current pregnancy. Further prospective analytical studies on the visual system of pregnant women, particularly on changes attributable to pregnancy, are strongly recommended. We also recommend that health professionals perform routine initial evaluation, promotion, and prevention for the visual health and pre-existing conditions of pregnant women.
Data availability statement
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.
Ethics statement
The studies involving human participants were reviewed and approved by Institutional Review Board (IRB) of the University of Gondar. Written informed consent to participate in this study was provided by the participants' legal guardian/next of kin. Publisher's note All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 2023-01-16T14:22:51.987Z | 2023-01-16T00:00:00.000 | {
"year": 2022,
"sha1": "8034d532119f5496aebf7780fe114031f838b773",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "8034d532119f5496aebf7780fe114031f838b773",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
88523367 | pes2o/s2orc | v3-fos-license | Estimation of component reliability in repairable series systems with masked cause of failure by means of latent variables
In this work, we propose two methods, a Bayesian and a maximum likelihood model, for estimating the failure time distribution of components in a repairable series system with a masked (i.e., unknown) cause of failure. As our proposed estimators also consider latent variables, they yield better performance results compared to commonly used estimators from the literature. The failure time model considered here is the Weibull distribution, but the proposed models are generic and straightforward to adapt to any probability distribution. Besides point estimation, interval estimation is presented for both approaches. Using several simulations, the performance of the proposed methods is illustrated, and their efficiency and applicability are shown based on the so-called cylinder problem.
Introduction
Recently, Zhang et al. (2017) proposed a method to estimate the failure time distribution of cylinders (components) from a diesel engine (system). The engine structure is in series, that is, it fails as soon as the first of the 16 cylinders fails. A failed cylinder is replaced by an identical functioning one in the corresponding socket. An advantage of the Bayesian model is the possibility of incorporating expert knowledge and/or past experience as a prior distribution, besides considering the statistical inference under the Bayesian paradigm.
The Weibull distribution is considered for the components' lifetime distributions, and thus, each socket represents a Weibull Renewal Process. However, it is quite simple to extend the work to other distributions.
Section 2 describes the Weibull Renewal Process and the data structure. Sections 3 and 4 present the maximum likelihood and Bayesian approaches in more detail. Both methods are evaluated by means of simulation studies, and the corresponding results are given in Section 5. Section 6 shows the applicability of the methodology to the cylinder dataset, and Section 7 concludes this work.
Data structure and model
Consider a system with m components operating in m sockets. Once a component fails, it is replaced by a new one in the same socket. In the following, we will define quantities for a single socket and hence omit the socket indices.
Weibull Renewal Process
Let Y_l denote the lifetime of the component before replacement l, for l = 1, 2, . . .. Under the assumption that the components' failure times are independent and identically distributed (i.i.d.), let f(·) and R(·) be the density and reliability functions of the component failure time. The distribution considered here is the Weibull distribution, which enables modeling changes in both distribution shape and hazard rate: increasing, decreasing, and constant failure rates are all possible within the Weibull family (Rinne, 2008).
The Weibull reliability function is defined as

R(y | θ) = exp{−(y/η)^β},   (1)

for y > 0, with parameter vector θ = (β, η), in which β > 0 and η > 0 are the shape and the scale parameters, respectively.
Let Z k be a positive random variable that denotes the time of occurrence of the k-th failure in the socket.
Thus, Z_k = Σ_{l=1}^{k} Y_l, k ≥ 1, and {Z_k} is a Weibull Renewal Process (WRP), that is, each socket in the system represents a WRP.
The mean and variance of Z_k are given by

E(Z_k) = k η Γ(1 + 1/β)  and  Var(Z_k) = k η² [Γ(1 + 2/β) − Γ²(1 + 1/β)],

which follow directly from the i.i.d. Weibull assumption on the Y_l.
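A minimal numerical sketch of the reliability function (1) and of the moments of Z_k above; the parameter values are illustrative only:

```python
import math

def weibull_reliability(y: float, beta: float, eta: float) -> float:
    """R(y | theta) = exp(-(y / eta) ** beta), for y > 0."""
    return math.exp(-((y / eta) ** beta))

def zk_moments(k: int, beta: float, eta: float):
    """Mean and variance of Z_k, the sum of k i.i.d. Weibull lifetimes."""
    m1 = eta * math.gamma(1 + 1 / beta)                   # E(Y_l)
    v1 = eta ** 2 * (math.gamma(1 + 2 / beta)
                     - math.gamma(1 + 1 / beta) ** 2)     # Var(Y_l)
    return k * m1, k * v1

print(weibull_reliability(5.0, beta=2.0, eta=7.0))
print(zk_moments(k=3, beta=2.0, eta=7.0))
```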
Superposed Renewal Process and Data Structure
Since a system has m independent sockets, each system-level set of failure times forms a superposed renewal process (SRP). Let T_k be the k-th failure time of the system, in which T_1 = min{Y_11, Y_21, . . . , Y_m1} and Y_j1 denotes the first component failure time in the j-th socket, j = 1, . . . , m.
Let T = (t 1 , t 2 , . . . , t r , τ) denote the observed event history of a single SRP with event times t 1 < t 2 < . . . < t r , and end-of-observation time τ with τ > t r . A data set will consist of n independent SRPs corresponding to the n systems in the fleet.
In summary, the assumptions made here are: (a) the component distribution function is the same for all sockets and systems over time, (b) the failures within a socket are independent, (c) all sockets within one system have the same end-of-observation time τ, and (d) the n systems in the fleet are independent.
Maximum likelihood approach
Consider a sample of n systems. Let t_i = (t_1i, t_2i, . . . , t_{r_i}i) be the vector of the r_i observed failure times for the i-th system and τ_i the end-of-observation time, with i = 1, . . . , n. Let d_i = (d_1i, d_2i, . . . , d_{r_i}i) be the vector that indicates the cause of failure, in which d_ki = j if component j causes the k-th failure in the i-th system, for j = 1, . . . , m, k = 1, . . . , r_i and i = 1, . . . , n.
Let us first assume that d_i is observed. As an example, consider a system i with m = 16 components for which r_i = 3 failures, with d_1i = d_3i = 1 and d_2i = 13, were observed. According to Zhang et al. (2017), the likelihood contribution of this system is

L_i(θ | t_i, d_i) = [f(t_1i) f(t_3i − t_1i) R(τ_i − t_3i)] × [f(t_2i) R(τ_i − t_2i)] × R(τ_i)^14,   (3)

in which the first factor corresponds to socket 1 (two failures followed by censoring), the second to socket 13 (one failure followed by censoring), and the last to the 14 sockets with no observed failure. Note that the likelihood contribution of system i takes the form (3) in a situation where the causes of failure are observed. In a masked cause of failure scenario, the actual failure positions d_i of system i are not observable. Hence, there are V_i = m^{r_i} = 16³ = 4,096 possible configurations of likelihood contributions for this system, in which V_i is the number of possible data configurations of system i with r_i failure times in m components.
Based on Zhang et al. (2017), the likelihood contribution of the i-th system is given by

L_i(θ | t_i) = Σ_{v=1}^{V_i} L_iv(θ | t_i),

in which L_iv is the likelihood contribution of the v-th configuration for system i. Considering that a fleet of n independent systems is observed, the likelihood function for θ is

L(θ | t) = ∏_{i=1}^{n} L_i(θ | t_i),   (4)

where t = (t_1, . . . , t_n). Zhang et al. (2017) propose the maximization of the likelihood function given in (4).
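To make the combinatorial structure of (4) explicit, the following brute-force sketch enumerates all V_i = m^{r_i} socket assignments of one system, assuming the renewal-process factorization illustrated in (3); it is infeasible for large r_i, which is precisely why the EM algorithm below is used:

```python
import itertools
import math

def weibull_logpdf(y, beta, eta):
    return math.log(beta / eta) + (beta - 1) * math.log(y / eta) - (y / eta) ** beta

def weibull_logsf(y, beta, eta):
    return -((y / eta) ** beta)  # log R(y)

def config_loglik(times, tau, d, m, beta, eta):
    """Log-likelihood of one configuration d, where d[k] is the socket of the k-th failure."""
    last = {j: 0.0 for j in range(m)}     # last renewal time of each socket
    ll = 0.0
    for t, j in zip(times, d):
        ll += weibull_logpdf(t - last[j], beta, eta)
        last[j] = t
    for j in range(m):                    # every socket is censored at tau
        ll += weibull_logsf(tau - last[j], beta, eta)
    return ll

def masked_lik(times, tau, m, beta, eta):
    """L_i(theta | t_i): sum over all m**r_i configurations."""
    return sum(math.exp(config_loglik(times, tau, d, m, beta, eta))
               for d in itertools.product(range(m), repeat=len(times)))
```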
In the masked cause of failure scenario, d i is a vector of latent variables. A suitable approach for estimating the parameter values, which maximize the likelihood function, is to consider an expectation-maximization (EM) algorithm. The latter is presented in the following subsection.
EM algorithm
The EM algorithm is an iterative method with Expectation (E) and Maximization (M) steps (Dempster et al., 1977). The E-step evaluates the expectation of the full log-likelihood function and the M-step tries to find the parameter configuration, which maximizes the expectation found within the E-step.
The augmented likelihood function (i.e., the likelihood function with latent variables) of θ is given by

L(θ, d | t) = ∏_{i=1}^{n} L_i(θ | t_i, d_i).   (5)

The form of L_i(θ | t_i, d_i) depends on the number of failures r_i. For this reason, a general form is presented in the following.
Given d_i, let Γ_i be the set of the v_i component indexes that cause at least one failure for system i. In a situation in which no failure is observed, v_i = 0. Let x_ilk be the k-th failure time caused by the l-th element of Γ_i, with l = 1, . . . , v_i and k = 1, . . . , n_l. As an example, for system i with r_i = 3 observed failures and d_1i = d_3i = 1 and d_2i = 13, we have Γ_i = {1, 13}, v_i = 2, n_1 = 2 and n_2 = 1, with x_i11 = t_1i, x_i12 = t_3i and x_i21 = t_2i. Thus, Σ_{l=1}^{v_i} n_l = r_i.
The likelihood contribution of the i-th system can be written as

L_i(θ | t_i, d_i) = { ∏_{l=1}^{v_i} [ ∏_{k=1}^{n_l} f(x_ilk − x_il(k−1)) ] R(τ_i − x_il n_l) } R(τ_i)^{m − v_i},

with x_il0 = 0. Thus, the logarithm of the augmented likelihood in (5) can be written as

ℓ(θ, d | t) = Σ_{i=1}^{n} { Σ_{l=1}^{v_i} [ Σ_{k=1}^{n_l} log f(x_ilk − x_il(k−1)) + log R(τ_i − x_il n_l) ] + (m − v_i) log R(τ_i) }.   (6)

Let θ^r be the value assumed by θ in the r-th iteration of the algorithm. The (r + 1)-th E-step consists of calculating the expectation of (6), that is,

Q(θ | θ^r) = E[ ℓ(θ, d | t) | t, θ^r ].   (7)

Unfortunately, there exists no analytical expression for the expectation in (7). Instead, it can be approximated by Monte-Carlo simulations. Consider that L random samples d_i^(1), . . . , d_i^(L) are simulated based on f(d_i | t), i.e., the density function of d conditional on T = t, i = 1, . . . , n (see Subsection 3.1.1). Thus, the E-step results in calculating

Q̂(θ | θ^r) = (1/L) Σ_{s=1}^{L} ℓ(θ, d^(s) | t).   (8)

The M-step maximizes (8) with respect to θ, resulting in θ^{r+1}. The optimization method considered within this work is the Nelder-Mead algorithm (Nelder & Mead, 1965). The E- and M-steps are alternated until the difference of estimates between two consecutive iterations is less than 10^−4. The estimate of θ, say θ̂, is obtained when the convergence criterion is reached. In this work, we consider L = 1,000.
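A compact sketch of the Monte-Carlo EM iteration just described, with Nelder-Mead (via scipy) as the M-step optimizer; the sampler for d | t and the augmented log-likelihood are abstracted as callables, and the starting values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def mc_em(systems, sample_d, aug_loglik, theta0, L=1000, tol=1e-4, max_iter=200):
    """Monte-Carlo EM.

    systems    : list of (times, tau) tuples, one per system
    sample_d   : callable(system, theta) -> one draw of the latent vector d_i
    aug_loglik : callable(system, d, theta) -> log L_i(theta | t_i, d_i)
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        # E-step: draw L latent configurations per system under theta_r
        draws = [[sample_d(s, theta) for _ in range(L)] for s in systems]

        def q(th):  # negative of the Monte-Carlo approximation (8) of (7)
            return -sum(np.mean([aug_loglik(s, d, th) for d in ds])
                        for s, ds in zip(systems, draws))

        res = minimize(q, theta, method="Nelder-Mead")   # M-step
        if np.max(np.abs(res.x - theta)) < tol:          # convergence criterion
            return res.x
        theta = res.x
    return theta
```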
Conditional distribution of d given T=t
For a fixed i, f(d_i | t_i) can be written as the product of the sequential conditionals

f(d_i | t_i) = f(d_1i | t_i) f(d_2i | t_i, d_1i) · · · f(d_{r_i}i | t_i, d_1i, . . . , d_{(r_i − 1)}i).

As an example, consider r_i = 3 and t_i = (t_1i, t_2i, t_3i). Thus, d_1i | t_i ~ M(1, p_1i), with p_1i = (p_11i, . . . , p_1mi) and p_1ji = 1/m, j = 1, . . . , m. Note that in this special case, the multinomial distribution equals a discrete uniform distribution.
Similarly, the distribution of d_2i | (t_i, d_1i = j) is multinomial with probabilities proportional to the likelihood contributions of the corresponding configurations: the probability that the second failure also occurred in socket j involves the density of the inter-failure gap, f(t_2i − t_1i), whereas the probability that it occurred in any of the other m − 1 sockets involves f(t_2i). For the conditional distribution of d_3i, one has to consider the following two cases: (i) d_1i = d_2i = j, in which the contribution of socket j is evaluated at the gap t_3i − t_2i and that of the remaining m − 1 sockets at t_3i; and (ii) d_1i = j ≠ d_2i = j′, in which the contributions of sockets j and j′ are evaluated at the gaps t_3i − t_1i and t_3i − t_2i, respectively, and those of the remaining m − 2 sockets at t_3i.
Asymptotic Distribution
The asymptotic distribution of the maximum likelihood estimator θ̂ can be approximated by a multivariate normal distribution with mean θ and variance-covariance matrix I_θ(θ)^−1, where I_θ(θ) is the observed information matrix for θ. As demonstrated by Louis (1982), I_θ(θ̂) is the sum of three matrices: the conditional expectation of the complete-data observed information, minus the conditional expectation of the outer product of the complete-data score, plus the outer product of the conditional expectation of the score, all given t and evaluated at θ̂. The first of these matrices, I_1(θ | θ̂), can be estimated by its Monte-Carlo counterpart, averaging the complete-data information over the samples d^(1), . . . , d^(L) drawn from f(d | t). Detailed information on the development of I_θ(θ̂)^−1 is given in the appendix.
Thus, an asymptotic γ% confidence interval for θ_j (CIγ%) is given by

θ̂_j ± z_{(1+γ/100)/2} √(I_jj),

in which I_jj denotes the j-th element of the main diagonal of I_θ(θ̂)^−1 and z_q denotes the q-quantile of the standard normal distribution.
Confidence intervals for functions of θ can be obtained by the delta method (Casella & Berger, 2002).
Bayesian Approach
The posterior distribution of θ can be written as

π(θ, d | t) ∝ L(θ, d | t) π(θ, d),   (9)

where L(θ, d | t) has the same form as (5), in which d is now treated as a parameter, and π(θ, d) is the prior distribution of (θ, d).
In real-world settings, it is possible that the prior distributions can be influenced by expert knowledge and/or past experiences on the functioning of the components. In this work, no prior information about the functioning of the components is available, which is the reason for the choice of non-informative prior distributions. The priors of Weibull parameters are considered to be independent gamma distributed with mean 1 and variance 100. Besides, d li follows a Multinomial distribution M(1, p li ), where p li = (p l1i , . . . , p lmi ) and p l ji = 1/m, with j = 1, . . . , m.
Given that the posterior density in Equation (9) does not have a closed form, statistical inferences about the parameters can rely on Markov-Chain Monte-Carlo (MCMC) simulations. Here, we consider the Metropolis within Gibbs algorithm (Tierney, 1994), since some of the parameters can be sampled directly from their conditional distributions while others cannot. The algorithm works in the steps presented in Algorithm 1.
Algorithm 1. The Metropolis within Gibbs algorithm.

Discarding burn-in (i.e., the first generated values are discarded to eliminate the effect of the initial values assigned to the parameters) and jump samples (i.e., gaps are left between the generated values in order to avoid correlation problems), a sample of size n_p from the joint posterior distribution of (θ, d) is obtained. The sample from the posterior distribution can be expressed as (θ_1, θ_2, . . . , θ_{n_p}). Posterior quantities of θ can be easily obtained (Robert & Casella, 2010). For instance, the posterior mean of θ is

θ̄ = (1/n_p) Σ_{s=1}^{n_p} θ_s.

The sample from the posterior distribution of g(θ) can be expressed as (g(θ_1), g(θ_2), . . . , g(θ_{n_p})) and posterior quantities of g(θ) can be obtained. For instance, the posterior mean of the reliability function is

R̄(y) = (1/n_p) Σ_{s=1}^{n_p} R(y | θ_s),

in which R(· | θ) has the form presented in (1).
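As the individual steps of Algorithm 1 are only summarized above, the following is a minimal sketch of one possible Metropolis-within-Gibbs cycle consistent with that description: the latent vector d is refreshed from its discrete conditional, while β and η receive random-walk Metropolis updates under the stated independent gamma priors with mean 1 and variance 100 (shape 0.01, scale 100); the proposal scale, initial values, and the abstracted samplers are assumptions:

```python
import numpy as np
from scipy import stats

def log_post(theta, d, systems, aug_loglik):
    """Log of the unnormalized posterior (9) for fixed latent causes d."""
    beta, eta = theta
    if beta <= 0 or eta <= 0:
        return -np.inf
    # independent gamma priors with mean 1 and variance 100 (shape 0.01, scale 100)
    lp = (stats.gamma.logpdf(beta, a=0.01, scale=100.0)
          + stats.gamma.logpdf(eta, a=0.01, scale=100.0))
    return lp + sum(aug_loglik(s, di, theta) for s, di in zip(systems, d))

def mwg(systems, sample_d, aug_loglik, n_iter=20000, step=0.1, seed=1):
    """Metropolis within Gibbs: Gibbs step for d, Metropolis steps for (beta, eta)."""
    rng = np.random.default_rng(seed)
    theta = np.array([1.0, 1.0])                       # initial values (assumed)
    chain = np.empty((n_iter, 2))
    for it in range(n_iter):
        d = [sample_d(s, theta) for s in systems]      # Gibbs update of d | t, theta
        for j in range(2):                             # one Metropolis step per parameter
            prop = theta.copy()
            prop[j] += step * rng.standard_normal()
            log_r = (log_post(prop, d, systems, aug_loglik)
                     - log_post(theta, d, systems, aug_loglik))
            if np.log(rng.uniform()) < log_r:
                theta = prop
        chain[it] = theta
    return chain[10000::10]                            # burn-in 10,000, jump 10 -> 1,000 draws
```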
Note that E(Z k ) and Var(Z k ) are functions of g(θ) and thus can be obtained in an analogous way.
Model evaluation by means of a simulation study
This section presents the results of some exemplary simulations that evaluate the performance of the estimation methods described above with regard to estimation quality.
The steps for generating the data of each simulated example, with m being the number of sockets and n the sample size, are presented in Algorithm 2; the component lifetimes were generated from a Weibull distribution with mean 7 and variance 4. To obtain posterior quantities, we used an MCMC procedure to generate a sample from the posterior distribution of the parameters. We generated 20,000 samples from the posterior distribution of each parameter.
The first 10,000 of these samples were discarded as burn-in samples. A jump of size 10 was chosen to reduce correlation effects between the samples. As a result, the final sample size of the parameters generated from the posterior distribution was 1,000. The chains' convergence was monitored in all simulation scenarios for good convergence results to be obtained.
Algorithm 2 Data generation.
1: for each system unit i = 1, . . . , n do
2:   Draw τ_i from a Weibull distribution with mean m_c and variance 0.05.
3:   Draw Y_11i, Y_21i, . . . , Y_m1i from a Weibull distribution with mean 7 and variance 4, where Y_j1i is the first component failure time in the j-th socket, for j = 1, . . . , m.
4:   Set T_1i = min{Y_11i, . . . , Y_m1i}.
5:   if T_1i ≥ τ_i then
6:     stop the simulation process and set r_i = 0.
7:   else
8:     Draw Y_l2i from a Weibull distribution with mean 7 and variance 4 conditional to Y_l2i > t_1i, where Y_l2i is the second component failure time in the l-th socket, once the first failure occurred in the l-th socket.
9:     Repeat until the next failure time exceeds τ_i.
10: end for
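A runnable sketch of this data-generation scheme for a single system, implementing the renewal interpretation of the replacement step (a fresh lifetime drawn at each replacement, consistent with the WRP definition of Section 2); the numerical conversion of mean 7 and variance 4 into (β, η) and the fixed end-of-observation time are assumptions:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma

def weibull_params(mean, var):
    """Solve for (beta, eta) giving the requested mean and variance."""
    def cv2(b):  # squared coefficient of variation as a function of the shape
        return gamma(1 + 2 / b) / gamma(1 + 1 / b) ** 2 - 1
    beta = brentq(lambda b: cv2(b) - var / mean ** 2, 0.1, 50.0)
    eta = mean / gamma(1 + 1 / beta)
    return beta, eta

def simulate_system(m, tau, beta, eta, rng):
    """One superposed Weibull renewal process observed on [0, tau]."""
    times, sockets = [], []
    next_fail = eta * rng.weibull(beta, size=m)        # first failure time per socket
    while next_fail.min() < tau:
        j = int(next_fail.argmin())
        times.append(float(next_fail[j]))
        sockets.append(j)
        next_fail[j] += eta * rng.weibull(beta)        # renewal in socket j
    return np.array(times), np.array(sockets)

rng = np.random.default_rng(0)
beta, eta = weibull_params(mean=7.0, var=4.0)
t, d = simulate_system(m=16, tau=10.0, beta=beta, eta=eta, rng=rng)
```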
The mean absolute error (MAE) of each estimated reliability curve with respect to the true reliability is considered as

MAE = (1/n_g) Σ_{j=1}^{n_g} | R̂(y_j) − R(y_j) |,

computed over a grid of n_g time points y_1, . . . , y_{n_g}.
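In code, with the grid of evaluation points taken as an assumption:

```python
import numpy as np

def mae(r_hat, r_true, grid):
    """Mean absolute error between an estimated and the true reliability curve."""
    return float(np.mean([abs(r_hat(y) - r_true(y)) for y in grid]))
```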
Simulated examples
For the Bayesian approach, the Gelman-Rubin convergence diagnostic measures (Gelman & Rubin, 1992) for parameters β and η are 1.0011 and 1.0004, respectively, in Example 1; in Example 2 they are likewise close to 1 (1.0002).
Cylinder dataset analysis
A fleet of n = 120 diesel engines (systems) is observed. Each engine has 16 identical cylinders working in series, that is, the first cylinder to fail causes the engine failure. When a cylinder fails, it is replaced by an identical functioning one in the socket (cylinder position), but the information about which socket each replacement corresponds to is not observed. Table 4 presents the distribution of the number of failures across all 120 systems.
To obtain posterior quantities related to the posterior distribution of θ = (β, η) from (9) through MCMC simulations, we discarded the first 10,000 draws as burn-in samples and used a jump of size 10 to avoid correlation problems, obtaining a sample of size 1,000. The chains' convergence was monitored through graphical analysis, and good convergence results were obtained. The Gelman-Rubin convergence diagnostic measures for parameters β and η are 1.005 and 1.002, respectively; the measures are close to 1, which suggests that chain convergence has been reached.

[Figure: estimates based on Equation (2), obtained through the Bayesian approach, for the cylinder dataset.]

The Z-ML estimates were obtained through the package SRPML made available by the authors. The proposed methods are not affected by high numbers of failures and/or components and work well even in these situations. Moreover, in settings where Z-ML finds solutions, the proposed methods also find them and present similar performance. Thus, the great advantage of our proposed methods is that they estimate the components' failure time distribution regardless of the numbers of failures and components.
Conclusion
The practical applicability was assessed on the cylinder dataset, in which the components' failure time quantities were estimated convincingly.
In this work, the assumption of independent and identically distributed (i.i.d.) component failure times has been made and found to be suitable for the characteristics of the cylinder dataset. However, this assumption might not be applicable to other scenarios. Thus, in future work, our proposed method can be extended to situations in which the assumption of independent and identically distributed failure times is violated.

[Appendix fragment: with θ̂ = (η̂, β̂), the quantity I_θ(θ̂) can be estimated by I + II + III.] | 2018-06-20T03:29:04.000Z | 2018-06-20T00:00:00.000 | {
"year": 2018,
"sha1": "7afff9010379fcce94d4c3b14672c53f2c3d0071",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "7afff9010379fcce94d4c3b14672c53f2c3d0071",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
81983254 | pes2o/s2orc | v3-fos-license | Diffuse Idiopathic Skeletal Hyperostosis: A Case with Dysphonia, Dysphagia and Myelopathy
Patient: Male, 66
Final Diagnosis: Diffuse idiopathic skeletal hyperostosis
Symptoms: Dysphagia • dysphonia • myelopathy
Medication: —
Clinical Procedure: X-ray computed tomography
Specialty: Orthopedics and Traumatology
Objective: Rare disease
Background: Diffuse idiopathic skeletal hyperostosis (DISH) is characterized by the ossification of soft tissues, primarily the ligaments and entheses. Exuberant osteophyte formation of the anterior longitudinal ligament of the spine is usually found. Among the reported complications of cervical osteophytes, dysphagia is the most frequent symptom, and dysphonia is rare.
Case Report: A 66-year-old male was suffering from progressive dysphonia, dysphagia, and myelopathy. Anterior cervical osteophytes and ossification of the posterior longitudinal ligament (OPLL) were shown on x-ray and computed tomography (CT). He was diagnosed with DISH and the osteophytes were resected. The patient's symptoms gradually improved.
Conclusions: DISH may induce varying symptoms, and surgical intervention is a good way to relieve them. We rarely see the symptom of dysphonia, but we should consult with other professionals, such as otolaryngologists and dieticians, when treating DISH patients.
Background
Diffuse idiopathic skeletal hyperostosis (DISH) is a common disorder of unknown etiology that is characterized by osteophyte formation at ligament, tendon, or joint capsule insertions and can cause back pain and spinal stiffness.
Dysphagia is the most frequently reported complication of DISH and occurs in about 30% of patients with cervical osteophytes. Other symptoms, such as cough, sore throat, and sleep apnea, are also described in these patients, but dysphonia is a rare complication.
We herein report a rare Japanese case of DISH with progressive dysphonia, dysphagia, and myelopathy. Surgical treatment was performed via the anterior approach and showed good results.
Case Report
A 66-year-old male with a history of brain infarction, arrhythmia, hypertension, and sleeplessness presented with a 2-month history of progressive dysphonia, dysphagia, and gait disturbance. He could only communicate by means of writing. An examination revealed hyperreflexia of both extremities; however, there was no muscle weakness or sensory disturbance. He had slight gait disturbance and was afraid of falling, so a walking stick or cane was indispensable in his daily life. Neck x-ray and computed tomography (CT) showed a massive osteophyte of the anterior longitudinal ligament from C4 to T1, compressing the pharynx at the C3/C4 level (Figures 1, 2).
Ossification of the posterior longitudinal ligament (OPLL) was also noted in the spinal canal, and unfavorable instability was found at this level. Magnetic resonance imaging (MRI) showed slight compression of the cervical cord at the C3/C4 level by the OPLL (Figure 3A, 3B). A fiberscope examination by an otolaryngologist revealed a protruding submucosal mass in the posterior pharyngeal wall and vocal cord dysfunction (Figure 4). The patient had a high risk of aspiration regardless of meal form, so it was hard for him to take anti-inflammatory drugs.
An operation to remove the anterior osteophyte, with discectomy and fusion at the C3/C4 level, was performed to reduce the pharyngeal compression and diminish the unfavorable movement of the OPLL. The patient was placed in the supine position with the neck extended. A collar incision was made on the left side; by retracting the omohyoid muscle and pharynx, the C3/C4 osteophyte was easily palpable just posterior to the pharynx. The osteophyte from C3 to C4 was resected to make a smooth surface, and bleeding from the bone was stopped with bone wax (Figure 5). Anterior cervical discectomy and fusion using a stand-alone anchored spacer (Stryker®, 7 mm, 4°, fixed with 12 mm screws) with local bone grafting was performed at C3/C4. The operation time was 114 minutes, and the estimated blood loss was 10 g.
Dysphagia and dysphonia were improved by day 3 after surgery, although an x-ray showed massive swelling in the retropharyngeal space ( Figure 6). The patient's myelopathy, such as clumsy hand movements and walking difficulty, was also improved. The subjective symptoms of dysphagia and dysphonia were improved by day 20 after surgery, and food intake was started with a dysphagia diet after confirming that deglutition (swallowing) was not a problem through video-fluoroscopic examination, although x-ray and a fiberscope examination still showed swelling in the retropharyngeal space ( Figure 7A, 7B).
Discussion
This case report describes a patient who presented with varying symptoms of DISH. DISH tends to occur more often in elderly men with metabolic syndrome, diabetes mellitus, and obesity. Growth hormone and insulin-like growth factor are said to promote bone growth [1].
DISH is distinguished from ankylosing spondylitis and other degenerative diseases. DISH is characterized by 3 criteria identified by Resnick [2]: 1) calcification and ossification along the anterolateral aspect of at least 4 contiguous vertebral bodies; 2) relative preservation of the intervertebral disc height in the involved segments; and 3) absence of apophyseal joint ankylosis and of sacroiliac erosion or fusion.
The thoracic spine is most frequently affected, causing back pain and stiffness. The symptoms of DISH depend on its localization. In the cervical spine, the formation of a large osteophyte can result in dysphagia due to esophageal compression. Exuberant osteophyte formation of the anterior longitudinal ligament sometimes induces dysphagia in DISH patients. The mechanisms underlying dysphagia in DISH, as suggested in the literature, include restriction of epiglottis mobility, incomplete glottal closure, and restriction of the movement of the larynx [3]. Patients rarely have dysphonia [4]. Dysphonia may be attributed not only to mechanical obstruction of the larynx but also to a reduction in glottal mobility by retro-cricoid inflammation. Direct compression by the osteophyte can cause recurrent laryngeal nerve paralysis [5]. A few reports have described cases of bilateral vocal cord paralysis due to compression by a retro-cricoid lesion associated with exuberant osteophyte formation [6,7]. This mechanism can also cause dyspnea and sometimes requires immediate surgical intervention for emergency airway opening, but increased formation of osteophytes can also interfere with tracheal intubation.
Our patient presented with a 2-month history of progressive dysphonia, dysphagia, and myelopathy. CT showed a huge osteophyte compressing the pharynx and OPLL compressing the cervical cord at the C3/C4 level. A fiberscope examination prior to the operation showed severe oropharyngeal dysphagia and vocal cord dysfunction. Surgical treatment is considered a reasonable treatment option [8]. After removal of the osteophyte and anterior decompression and fusion, the patient's dysphonia and dysphagia gradually improved. The myelopathy also improved after stabilization of the OPLL. However, despite symptom resolution, the swelling in the retropharyngeal space persisted for about a month, which was a relatively long time in this case. This may have been due to pre-existing inflammation induced by the protruding osteophyte. Some cases have shown that, despite a successful surgery, the lack of swallow coordination may persist, leaving the patient unable to eat independently. Patients should be followed up for several years with respect to symptom recurrence, as the long-term outcomes of post-operative recurrence remain unclear [9].
Conclusions
We must remember that DISH may induce varying symptoms: dysphagia, gait disturbance, sleep apnea, and so on. However, dysphonia due to mechanical obstruction, as found in our case, is rare. Radiological examinations, such as x-ray and CT scan, are a simple and easy way to investigate the situation. However, when we come across these patients, cooperation with otolaryngologists and dieticians, with their respective expertise, is also important before a patient's symptoms worsen. | 2019-03-19T13:02:18.259Z | 2019-03-17T00:00:00.000 | {
"year": 2019,
"sha1": "14f33348673d7635d64e1de65f62195edaf14a1f",
"oa_license": "CCBYNCND",
"oa_url": "https://europepmc.org/articles/pmc6434612?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "14f33348673d7635d64e1de65f62195edaf14a1f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
165379646 | pes2o/s2orc | v3-fos-license | Melot, M. (2015). Uma breve história da imagem. Vila Nova de Famalicão: Ed. CECS & Húmus
Born on August 9, 1943 in Blois, Michel Melot, librarian and art historian, served as Director of the Department of Prints and Photographs of the National Library of France (1981-1983), where he organized several exhibitions, including the great exhibition dedicated to Impressionist prints, Director of the Public Information Library of the Centre Georges Pompidou (from 1983 to 1990) and President of the Superior Council of Libraries, at the French Ministry of Culture, where he managed the general inventory of architecture and heritage until his retirement (from 1997 to 2003). He is the author of a vast body of work in the fields of archaeology and art history, novels, and reflections about books, libraries and heritage. His works include the following: 1970s: (1975) L'Œil qui rit: Le pouvoir comique des images, Paris: Bibliothèque des arts, (1978) Fontevrault, Paris: CLT; 1980s: (1981) L'estampe, Genève: Skira, (1984) L'illustration, Histoire d'un art, Genève: Skira; 1990s: (1994) L'estampe impressionniste, Paris: Flammarion; 2000s: (2004) La sagesse du bibliothécaire, Paris: l'Œil Neuf, (2006) Livre, Paris: l'Œil Neuf, (2012) Essai sur l'inventaire général du patrimoine culturel, Paris: Gallimard. This vast bibliography highlights the fact that Uma Breve História da Imagem [A Brief History of the Image] is one of the works by Michel Melot dedicated to the image – a passion that was accompanied by his keen interest in writing – not as a linguistic phenomenon, but as a graphic matter that unfolds in space. In this slim volume, originally published in 2007 by the publisher L'Oeil Neuf, now translated by Aníbal Alves and published by the Communication and Society Research Centre, in partnership with the publisher Húmus, Melot proposes a brief history of the image. I have structured this overview into several notes, in which I have recorded some of the impressions and reflections raised by his work.
There is no question that the readers miss the images, and the author is probably aware of this. At the end of his book, when he presents an annotated bibliography, he writes: "in terms of images, it's not enough to read books (...) this book owes much to visiting museums" (104). That is why, in order to follow the book, alongside the journey that he proposes to us, we need many images: images from our memories and personal experience and images from collective memories. When will a multimedia edition of this Brief History of the Image be published?
The passion that images awake in us
Melot recounts this brief history of the image in a passionate, discursive manner that seduces us and simultaneously keeps us in a permanent position of tension and questioning. He seduces us, because he writes in a clear, straightforward and often deeply poetic fashion. One must also recognize in the creation of these effects, of course, the talent and art of the translator, Aníbal Alves. But his siren song doesn't intoxicate us, it doesn't surrender us to temptation. The text keeps us in a position of constant attention: because it provides a great deal of information -it is a brief but dense and well-documented history -because it forges a dialogue with us -bringing our beliefs into play, and the things that we take for granted in terms of images -and because it challenges us to think about the relations we have with images, as social actors, as human beings and as bodies.
The lifetime of images
The exercise that Melot develops by looking at the history of images, from front-to-back and from back-to-front, allows us to understand the coexistence of the distinct dimensions of the time of the image: the past exists in the present, not only as a "before" and "after", but also as a "during", that inhabits it in various ways. Melot gives us several examples:
• Image-based magical practices still exist today;
• We still oppose image to writing, forgetting that an image is always a form of writing, and that writing is primarily an image;
• Images and writing share our screens, on televisions, on cell phones, just as occurred on Palaeolithic walls;
• The pictograms of the figurative writings, which were used in America up until the nineteenth century, to recount the warlike exploits of Indian tribes, continue to populate our streets and our commercials, in the form of logos, signs and signboards (p. 25).
Melot teaches us that there is no transcendence or eradication in the history of the image, its technical conditions, the uses that we make of it, and the theories we produce about it. Instead there is remediation (Bolter & Grusin, 1999), dialectics and coexistence. The image, as a human artifice, jumps from its original period to our own or to others.
The author shows us that "the image, like writing, has several histories" (p. 37), and stresses that "the origin of the image does not have to be sought over the course of the centuries. It is always in us. A form becomes an image as long as it is observed, and associations of memories appear" (p. 23). Perhaps for this very reason Melot argues that the "history of the image can be summed up as an eternal struggle or permanent tension between analogy and code, or index and symbol, abstraction and figuration, realism and idealism" (p. 25).
How much the image has suffered and how much it has conquered
After pointing out a turning point or "pictorial turn" - an expression coined by Mitchell (1992), Professor of English and Art History at the University of Chicago - which many consider to be fundamental in the history of the image in Judeo-Christian culture, where the image passed from an object connected to worship and rituals to a worldly, desacralized use, Melot stresses the following:
• How, after invention of the small picture frame, the image changed hands, thereby moving from the spiritual power to the temporal power;
• How the mass reproduction of the image has raised the problem of originality, its link to the original, with the model: has it perhaps lost the aura (mentioned by Benjamin, 1936-1939/1992)?
• How the image suffered from invention of the printed book: it was "dragged along with the baggage of writing" (p. 51), adulterated in the form of illustrations and placed "outside the text", or "off-field", in a marginal position that persisted "for at least three centuries" (p. 53);
• How the image became a "mirror instrument", as a result of photography;
• How the image encompassed the gesture and the word (p. 83), combined with sound and was pulverized into pixels, thus being mathematically defined as a surface in which each point is determined by its coordinates (p. 95).
• How the image has short-circuited language -given that writing itself, which was originally invented to escape from the image, has now become an image in its own right (p. 98).
• And how it has transformed us into an image of flesh: "the tattoo transforms us into an image of flesh (p. 99), becoming a body.
History of an announced image
Are all these advances new, or completely new? Are there any real ruptures? Is it true, as some commentators argue, e.g. Moisés Martins in his text "O que podem as imagens?" [What can images do?] - published in the book Imagem e Pensamento [Image and Thought] in 2011 by Grácio Editor - that the proliferation of images on screens means that the nature of the image has changed: it no longer refers to the world, nor to the other, but, on the contrary, it is now things and ourselves that imitate images (p. 132)? Or is it the case, as Melot argues, that the nature of the image hasn't changed at all, digitization hasn't stripped the image of its analogue nature - it is only the reproduction technique that has been digitized - and the relationship between things, ourselves and images never had a single meaning (p. 94)?
This propensity of the image to integrate itself into reality, or that of reality to emancipate itself within the image, is nothing new, Melot notes (p. 99). He adds, "the myths of the image have never been as strong as at the moment when we thought we had mastered its techniques". "After so much progress, how did we get there, or rather, how do we 'remain there'?" (p. 94), asks the author. And he concludes: "If there is a crisis of representation, it is as old as the image itself" (p. 99).
Are we the ones who see the images or are they seeing us?
"The real danger lies in us not wanting to know that they are just images -in truth images don't fall from the sky" (p. 35), emphasizes Melot. It is important, therefore, to "remove the sorcerer power that we bestow upon them" (p. 36). And how do we remove this power?
By describing what we think we see in them? No, Melot replies, we remove this power from the image "by raising the current of meanings which have been attributed to the image, and deducting from the image the meanings that we give to it" (p. 19).
Does this mean deciphering images, as if they were word games? No, writes Melot, we must first understand what they conceal through that which they show (p. 35), since "the entire image is simultaneously a means of access to an absent reality, which symbolically evokes it, and yet is an obstacle to that reality" (p. 14).
Does this imply "that the meaning brought to an image remains perpetually open" (p. 25)? And does this mean that images aren't encoded? In that case, what causes us to be able to recognize an image as an image?
Images and ourselves, a boundless passion
Michel Melot invites us to see images not as things but as relations. In the 1960s, Guy Debord affirmed: "the spectacle is not a set of images, but a social relation between people, mediated by images" (1967, thesis 4).
To see images as relations therefore means that we frame them, that we look for their raison d'être in the diverse communities of producers and consumers of images and in the relations between them. It also means understanding that the lifetime of each image lies in the hands of the viewers: they are the true sorcerers, rather than the images themselves. As Aníbal Alves points out in the translator's note, to understand images as relations is to understand that they always pertain to something, of which they are an image (p. 6), and therefore should not be confused with reality, nor be seen as a mere illusion.
If we are disturbed by the images we see on TV or in newspapers, why, instead of being revolted by the social and human reality that they represent, do we want to ban them? Only those who believe in ghosts are afraid of their images, Melot reminds us.
It is true "that the image isn't learned like a language and escapes the teachers' instruction", that "the image is felt before being understood" (p. 67), but books such as | 2019-05-27T13:20:34.497Z | 2017-06-30T00:00:00.000 | {
"year": 2017,
"sha1": "f0a8749df249919bc8ca2f10c6551a6d77ecba02",
"oa_license": "CCBY",
"oa_url": "https://rlec.pt/index.php/rlec/article/download/1822/1871",
"oa_status": "GREEN",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "baded9d0ff2547882d9d0aea0705047ba7e6d96c",
"s2fieldsofstudy": [
"Art",
"History"
],
"extfieldsofstudy": [
"Art"
]
} |
87093394 | pes2o/s2orc | v3-fos-license | Fusarium verticillioides and its fumonisin production potential in maize meal
This study aimed to identify the presence of fungi of the genus Fusarium and to evaluate fumonisin contamination in maize meal destined for human consumption in the city of Teresina, Piauí, Brazil. Thirty samples of maize meal of six different brands sold in supermarkets were used. Mycological evaluation was carried out immediately; aliquots were then stored at 4 °C for later analysis of fumonisins. Thirty-four isolates of Fusarium verticillioides were obtained. These isolates had the ability to produce fumonisins B1, B2 and B3, with values ranging from 48.2 to 1190.1 μg g⁻¹ for B1, from 6.7 to 311.5 μg g⁻¹ for B2, and from 23 to 667 μg g⁻¹ for B3. The total fumonisin concentrations per isolate ranged from 84.3 to 2168.6 μg g⁻¹. All the samples presented fumonisins, with values ranging from 0.10 to 2.13 μg g⁻¹. Isolates of F. verticillioides were obtained from the maize meal samples, and the lots examined had different levels of fumonisins, which may represent risks to consumers.
Introduction
Among the leading crops in Brazil, corn (Zea mays) is the second most produced grain, behind only soybeans, according to research by Conab (2010). Corn stands out for being used as food for humans and animals; its derivatives have an effective share in the diet of the poorest sections of the population (Melo-Filho & Richetti, 1997). The tropical and subtropical climates of Brazil promote fungal growth, which under suitable conditions can lead to mycotoxin production. These toxins have been frequently reported by several researchers (Bittencourt et al., 2005; Caldas & Silva, 2007).
Among the fungi, the major pathogen of plants belongs to genus Fusarium. These fungi can produce mycotoxins before or immediately after harvest (Sartori et al., 2004). Associated with corn, there are Fusarium species causing rot of stem and ear, besides the presence of the endophytic colonization F. verticillioides, F. proliferatum and F. subglutinans, in which F. verticillioides is the predominant species, particularly in the tropics (Leslie & Summerell, 2006).
Fumonisins are important due to the occurrence of mycotoxicosis in domestic animals and humans. Some important events occurred because of the consumption of foods containing fumonisin. In South Africa, specifically in the Transkei area, it was found the presence of high levels of moldy beans. Consequently, the local people who ate this product had esophageal cancer (Sydenham et al., 1990). In this same country it was confirmed the presence of equines with leukoencephalomalacia. The analysis of moldy corn samples eaten by these equines found the presence of fumonisin (Wilson et al., 1990). Fumonisins (FBS) are a group of mycotoxins produced mainly by Fusarium verticillioides, most prevalent, and F. proliferatum, which are the species most frequently associated with contamination in maize meal (Leslie & Summerell, 2006). This contamination by both fungi and mycotoxins, have been previously reported in maize grain around the world (Rheeder et al., 2002).
Fumonisin B1 is classified as a possible carcinogen for human (IARC, 2002), the consumption of food contaminated with this toxin has been related to the incidence of liver cancer and esophageal (Quieroga & Pernambuco, 2006;Sun et al., 2007), and also neural tube defects (Torres-Sanchez & Lopez-Carrillo, 2010). Moreover, the FBS can cause a variety of diseases in animals, which may reduce the productivity of them. The fumonisin levels in products from corn are dependent on the degree of contamination and transformation processes used in the production of the final product (Saunders et al., 2001). The fumonisins are heat stable and survive under cooking and frying conditions (Humpf & Voss, 2004).
Earlier studies conducted in Brazil evaluated the exposure of the Brazilian population to fumonisins by means of maize flour consumption (Bittencourt et al., 2005;Caldas & Silva, 2007). It was recently demonstrated that F. verticillioides is the prevalent species in some of producing regions of corn in Brazil; isolated from this study showed different levels of fumonisin production (Lanza et al., 2014). There are several work related to the F. verticillioides and maize; however, there are few studies related to F. verticillioides and maize by-products. Furthermore, it is necessary to evaluate the level of fumonisins present in foods marketed nationwide. These studies are important to define secure levels of contamination by mycotoxins in the food chain and to define priorities for monitoring programs (Join FAO & WHO, 2002).
This study aimed to identify isolates of Fusarium, using the concept of morphological species and evaluate the fumonisins levels in maize by-products purchased in markets of Teresina, PI.
Material and Methods
Thirty samples (500 g each) of maize meal from six different brands sold in supermarkets in the city of Teresina, Piauí, Brazil were used. After collection, the samples were homogenized, mixed and fractionated to obtain 100 g subsamples, which were forwarded to the laboratory of the Center for Studies, Research and Food Processing of the Federal University of Piauí, Teresina, Piauí, for isolation of the genus. The identification of the Fusarium isolates and the production, detection and quantification of fumonisins by these isolates were carried out at the Mycology Laboratory of the National University of Río Cuarto, Córdoba, Argentina. The quantification of fumonisins in maize meal was made at the Center for Mycological and Mycotoxicological Research of the Rural University of Rio de Janeiro.
Twenty-five grams of each sample were added to 225.0 mL of 0.1% peptone water, forming the 10⁻¹ dilution; this mixture was homogenized and diluted to final concentrations of 10⁻² and 10⁻³. A volume of 0.1 mL of each dilution (in duplicate) was spread on the surface of Dichloran Rose Bengal Chloramphenicol (DRBC) agar. The plates were incubated for seven days at 25 °C. On the last day of incubation, the genera of the colonies were identified according to the criteria proposed by Pitt & Hocking (2009). All Fusarium strains were transferred to Spezieller Nährstoffarmer Agar (SNA) and incubated at 25 °C for seven days for species identification. After growing on SNA, the colonies of Fusarium spp. were submitted to monosporic culture and later cultivated on Carnation Leaf Agar (CLA) and Potato Dextrose Agar (PDA). The colonies were then incubated for 14 days at 24 °C under a cycle of 12 h of white light and 12 h of darkness. After this period, species identification was made using both macroscopic and microscopic features according to Leslie & Summerell (2006).
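For reference, colony counts from such a dilution series convert to colony-forming units per gram as count / (plated volume × dilution); a minimal sketch with a hypothetical plate count:

```python
def cfu_per_gram(colonies: int, dilution: float, plated_ml: float = 0.1) -> float:
    """CFU/g from a spread-plate count at a given decimal dilution."""
    return colonies / (plated_ml * dilution)

print(cfu_per_gram(colonies=42, dilution=1e-2))  # 42 colonies on a 10^-2 plate -> 42,000 CFU/g
```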
To determine and quantify fumonisin production by the F. verticillioides isolates, Erlenmeyer flasks containing 100 g of maize and 40 mL of distilled water were used, autoclaved twice for 30 minutes at 121 °C. After cooling, the maize was inoculated with 1.0 mL of an aqueous conidial suspension obtained from carnation leaf agar (CLA) cultures and incubated in the dark at 25 °C for 28 days. To avoid the formation of lumps, the cultures were shaken during the first days of incubation, and thereafter only when necessary. The maize cultures were dried at 50 °C, finely ground in a laboratory mill and stored at 4 °C for later analysis of fumonisins. To each 15 g of maize culture sample, 50 mL of acetonitrile:water (1:1) was added. The mixtures were then agitated in a shaker for 30 minutes and filtered through Whatman no. 4 filter paper. Aliquots of the extracts (1000 µL) were taken for high-performance liquid chromatography (HPLC).
An aliquot of 50 µL of this extract was derivatized with 200 µL of an o-phthaldialdehyde (OPA) solution. The OPA-fumonisin derivatives (20 µL injections) were analyzed by a reverse-phase HPLC/fluorescence detection system. The HPLC system consisted of a Hewlett Packard 1050 pump (Palo Alto, CA, USA) connected to a programmable Hewlett Packard 1046A fluorescence detector and a Hewlett Packard Kayak XA data module (HP ChemStation Rev. A.06.01). Chromatographic separations were performed on a C18 reversed-phase column (150 × 4.6 mm i.d., 5 µm particle size; Luna-Phenomenex, Torrance, CA, USA), connected to a security guard cartridge (4 × 3 mm i.d., 5 µm particle size; Phenomenex, Torrance, CA, USA). The mobile phase was methanol:0.1 M sodium dihydrogen phosphate (75:25, v/v) adjusted to pH 3.35 with phosphoric acid, at a flow rate of 1.5 mL min-1. Fluorescence of the OPA-fumonisin derivatives was recorded at excitation and emission wavelengths of 335 and 440 nm, respectively. Fumonisins were quantified by peak height against reference standard solutions of fumonisins B1, B2 and B3 (Sigma Chemical Co., St. Louis, MO, USA). The detection limit of the analytical method was 0.01 µg g-1.
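Quantification by peak height against standard solutions is an external-standard calibration. A minimal sketch of such a calibration is given below; the peak heights and concentrations are hypothetical and do not come from this study.

```python
# Hypothetical external-standard calibration for fumonisin B1 by HPLC
# peak height (all values invented for illustration).
import numpy as np

std_conc = np.array([0.05, 0.5, 5.0, 50.0])            # µg/mL of FB1 standard
std_height = np.array([12.0, 118.0, 1190.0, 11900.0])  # detector response

slope, intercept = np.polyfit(std_conc, std_height, 1)  # linear fit

def quantify(peak_height):
    """Convert a sample peak height to an FB1 concentration (µg/mL)."""
    return (peak_height - intercept) / slope

print(round(quantify(2400.0), 2))  # estimated concentration of a sample
```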
For detection and quantification of total fumonisins in the maize meal samples, AgraQuant® commercial kits produced by Romer Labs Singapore Pte Ltd were used. The kit is based on a direct competitive ELISA in microtiter plates. Mycotoxins are extracted by agitating the sample with 70% methanol; the extract is filtered and then tested by immunoassay, following the manufacturer's guidelines. The toxin was determined by comparison with standards of different concentrations provided with the kit, and the quantitative analysis was performed with software provided by the manufacturer.
Analysis of variance followed by the Student-Newman-Keuls (SNK) means-comparison test, at a significance level of p < 0.05, was performed on the fumonisin concentrations in maize meal using the SigmaStat statistical package (1994).
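For readers without SigmaStat, the same analysis can be approximated in open-source tools. The sketch below uses invented fumonisin values and substitutes Tukey's HSD for SNK, since SNK is not implemented in SciPy; it requires SciPy 1.8 or later.

```python
# One-way ANOVA with a post hoc means comparison (illustrative data).
from scipy import stats

# Hypothetical fumonisin concentrations (µg/g) for three brands.
brand_a = [0.61, 0.75, 0.80, 0.69, 0.72]
brand_b = [1.02, 0.95, 1.13, 1.08, 0.99]
brand_c = [0.78, 0.83, 0.91, 0.74, 0.88]

f_stat, p_value = stats.f_oneway(brand_a, brand_b, brand_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    # Tukey's HSD stands in for SNK here; both compare group means.
    print(stats.tukey_hsd(brand_a, brand_b, brand_c))
```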
Results and Discussion
Thirty-four isolates produced microconidia in long chains and on monophialides. The macroconidia had 3-5 septa, with barely apparent foot and apical cells, and no chlamydospore formation was observed in the tested isolates. On PDA, the color of the cultures ranged from orange to purple, and the color tones became more evident over the last days of evaluation. Based on the characteristics observed, the isolates were identified as F. verticillioides. Culture color pattern is not a good marker for characterizing Fusarium, particularly for species complexes such as Fusarium fujikuroi (Seifert, 1996). Fusarium thapsinum shares the same morphological markers with F. verticillioides; however, that species is predominantly associated with sorghum and millet (Leslie & Summerell, 2006). In Brazil and other countries, the species predominantly associated with maize cultivation and maize by-products is F. verticillioides, though other species of the Fusarium fujikuroi complex, such as F. proliferatum and F. subglutinans, are also present (Lanza et al., 2014; Leyva-Madrigal et al., 2015). Regarding fumonisin production by the F. verticillioides isolates from maize meal, all 34 isolates tested were positive for FB1 (fumonisin B1), FB2 (fumonisin B2) and FB3 (fumonisin B3); the concentrations produced ranged from 48.2 to 1190.1 µg g-1 for FB1, from 6.7 to 311.5 µg g-1 for FB2, and from 23 to 667 µg g-1 for FB3. Summed over the three fumonisins, the concentrations for each strain ranged from 84.3 to 2168.6 µg g-1. Industry sometimes processes maize grains that show no symptoms of rot; however, F. verticillioides is an endophytic species, meaning that grains without symptoms may still harbor the fungus internally. Of 49 F. verticillioides isolates from the Philippines, 70% were potential fumonisin producers (Magculia et al., 2011). Fumonisin production generally varies according to the origin of the isolate: isolates from northern Mexico produced high levels of toxins, while isolates from the central region of Brazil showed low fumonisin production (Stumpf et al., 2013; Lanza et al., 2014).
Brazilian law set no limits for fumonisins in food until 2010; however, in February 2011 a new resolution came into effect (No. 07, of February 18, 2011, issued by the agency ANVISA). This resolution establishes limits for various classes of food and several mycotoxins previously covered by other reference legislations, such as those of the US and several European countries; the maximum allowable limit for fumonisins B1 plus B2 in maize meal is 2.5 µg g-1 (Brasil, 2011). All samples studied in this work were within the standard recommended by the Brazilian legislation.
Fumonisins were detected in all samples of maize meal, but there was no significant difference (p > 0.05) in fumonisin concentration among the surveyed brands (Table 1). The highest concentration of fumonisins was detected in a sample of brand E, at 2.13 µg g-1; the highest brand mean was also found in samples of brand E, at 1.12 µg g-1. The mean of all samples analyzed in this work was 0.83 µg g-1.
Caldas & Silva (2007) detected fumonisins in maize meal samples in Brasília, DF; some of their samples reached 6.17 µg g-1, with a sample mean of 1.68 µg g-1, values higher than those found in our study. This may reflect the region where the maize originated or the time at which it was harvested (Stumpf et al., 2013; Lanza et al., 2014). The incidence of fumonisin contamination in maize grown in the South and Southeast of Brazil can reach 100% (Van Der Westhuizen et al., 2003). Fumonisins have been frequently detected in maize meal and its by-products in the United States, China, European countries, South American countries and Africa (Rhedder et al., 2002).
Table 1. Mean levels of fumonisins in maize meal of the different brands studied. P = 0.295; µg g-1 = micrograms per gram.
In Campinas, SP, Machinski & Soares (2000) evaluated various maize derivatives and showed that the majority of the samples were contaminated with fumonisins. Among these derivatives, only cornmeal had fumonisins in all investigated samples, with an average of 2290 µg kg-1 (2.29 µg g-1) of fumonisin B1. In research on maize meal in the city of Recife, PE, the fumonisin levels found ranged from undetectable to 150 µg/kg (0.15 µg g-1) (Kawashima & Soares, 2006); these results were lower than those reported here. In another study, in the city of São Paulo, fumonisins B1 and B2 were found in 60 samples of maize meal and cornmeal (Bittencourt et al., 2005). Other authors also found contamination by fumonisins B1, B2 and B3 in maize samples intended for human consumption in the state of Santa Catarina (Van Der Westhuizen et al., 2003).
Conclusion
In this study, the maize meal samples tested were found to be contaminated with fumonisins, and the likely source of this contamination is the presence of F. verticillioides. | 2019-03-31T13:46:25.836Z | 2015-12-30T00:00:00.000 | {
"year": 2015,
"sha1": "e68898e744e86ef45fcb3816a6ce0b5632144e45",
"oa_license": "CCBYNC",
"oa_url": "http://www.agraria.pro.br/ojs-2.4.6/index.php?journal=agraria&op=download&page=article&path[]=4852&path[]=agraria_v10i4a4023",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "bf653cccfbb49ca120c1fee99cd3dcc5a69004e5",
"s2fieldsofstudy": [
"Agricultural And Food Sciences",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
237295124 | pes2o/s2orc | v3-fos-license | Barriers to screening, diagnosis and management of hyperglycaemia in pregnancy in Africa: a systematic review
Abstract Gestational diabetes mellitus (GDM) complicates pregnancies in Africa. Addressing the burden is contingent on early detection and management practices. This review aimed to identify the barriers to diagnosing and managing GDM in Africa. We searched the PUBMED, Web of Science, WHOLIS, Google Scholar, CINAHL and PsycINFO databases in May 2020 for studies that reported barriers to the diagnosis and management of hyperglycaemia in pregnancy. We used a mixed-methods quality appraisal tool to assess the quality and risk of bias of the included studies. We adopted an integrated and narrative synthesis approach in the analysis and reporting. Of 548 articles identified, 14 met the eligibility criteria. Health system-related barriers to GDM management were the shortage of healthcare providers and relevant logistics, inadequate knowledge and skills, and limited opportunities for in-service training. Patient-related barriers were insufficient knowledge about GDM, limited support from families and health providers, and poor acceptability of the diagnostic tests. Societal-level barriers were concomitant consultation of traditional healers, customs and taboos on food, and body image perception. It was concluded that constraints to GDM detection and management are multidimensional. Targeted interventions must address these barriers from broader, systemic and social perspectives.
Introduction
Across the globe, diabetes contributes significantly to the burden of non-communicable diseases (NCDs). 1 According to the 2015 International Diabetes Federation report, among the 15.2% of pregnancies affected by hyperglycaemia in pregnancy (HIP) globally, gestational diabetes mellitus (GDM) constituted 85% of all cases. 2 Even though the current GDM epidemic affects both high- and low-income countries, it is estimated that nearly 90% of the global cases occur in low-income countries. 3 Pathologically, over 50% of pregnant women who develop GDM progress to develop type II diabetes within 2-10 y after the index diagnosis. 4,5 If left unchecked, GDM might compound the already high burden of NCDs in the African region, where about 80% of the cases of diabetes occur. 1,6,7 In accordance with the WHO guideline, GDM is defined as carbohydrate intolerance resulting in hyperglycaemia of variable severity, with onset or first detection in pregnancy. 8 Even though diabetes in pregnancy (known diabetes before pregnancy) is a matter of concern, the most common type of HIP is GDM 3 and, therefore, understanding the practices associated with screening, diagnosis and management within the African context is essential.
There is ample evidence that GDM exposes pregnant women to the risk of caesarean section, traumatic delivery, prolonged delivery, pregnancy-induced hypertension and pre-eclampsia, and can also lead to maternal and foetal death. 9,10 Evidence has also shown that exposure of the foetus to a hyperglycaemic intrauterine environment increases the risk of macrosomia, anencephaly, spina bifida, cerebral palsy and large-for-gestational-age infants. 9,[11][12][13][14] Apart from these short-term complications, offspring born to mothers with GDM are also at higher risk of developing diabetes and obesity in later life. 1,15,16 Glycaemic control through lifestyle modification and medical therapy during pregnancy is a promising strategy to reduce the risk of adverse foetal, perinatal and neonatal events, [17][18][19] but preventing complications and improving disease prognosis hinge on early detection and effective management. Some studies have established the merits of universal screening for GDM; 20,21 however, practical implementation of detection and management approaches may not be feasible due to multiple constraints. Identifying these constraints is therefore essential to inform policy decisions on addressing the burden of GDM.
In this systematic review, we performed a comprehensive literature search to summarise evidence regarding the barriers to screening, diagnosing and managing GDM in Africa. We sought to answer three research questions: (1) What impedes GDM screening and testing? (2) What are the barriers towards supporting and managing pregnant women diagnosed with GDM? (3) How can the experiences of pregnant women regarding GDM testing and management help to improve care?
Study design
We conducted this systematic review by searching the PUBMED, Web of Science, WHOLIS, Google Scholar, CINAHL and PsycINFO databases, following the 'Preferred Reporting Items for Systematic Reviews and Meta-analyses' (PRISMA) statement. 22 A protocol for the review was developed a priori and registered with the PROSPERO international register for systematic reviews (2020: CRD42020180335). 23 Using the PICo framework recommended by the Joanna Briggs Institute (JBI) and the Cochrane Collaboration as the preferred approach for developing review questions, search terms for the review were categorised into three components: P=Population, I=Phenomenon of Interest and Co=Context. Where appropriate, Medical Subject Heading (MeSH) terms were used. The search terms used and the search strategy for each database are included (Supplementary Data 1). The review started in February 2019, but the search was completed in May 2020. Eligible studies included in this review were published from 2012 to 2019 in English, which captured recent challenges to GDM care in the African region. Reference lists of included studies were also screened for eligible studies.
Inclusion and exclusion criteria
Studies considered for inclusion were peer-reviewed, published, quantitative, qualitative, mixed-methods and randomised research papers conducted in any African country or subregion related to barriers to screening, diagnosis or management of GDM and/or diabetes in pregnancy (DIP) that focused on the women and their families, as well as health practitioners, policymakers and stakeholders involved in the care process. Studies that focused exclusively on prevalence and risk estimation or involved postpartum women were excluded.
Study selection and eligibility
We included qualitative, quantitative and mixed-method studies. Deduplication, title and abstract screening, reviewing of the reference lists of potentially eligible studies for relevant literature, and full-text screening were performed by TH. FA also independently screened the full-text articles to assess their eligibility. In instances where a decision could not be reached, discussions were held with AJ. All retrieved studies were imported into an EndNote library, where deduplication was performed using Bramer's method. 23
Data handling and extraction
Data was extracted by TH using Microsoft Office Excel. In addition to the findings of the studies, other data extracted included the names of authors, aim of the study, year of publication, geographic zone where the study was conducted, study design, sampling methods, sample size, characteristics of participants, gestational age of participants at the time of screening and level of healthcare where the study was conducted. TH and FA independently reviewed the data extracted for each study using the data items listed in the review protocol to ensure that the data extracted were compliant with the review objectives.
Risk of bias and quality assessment
The quality of the studies included in this review was independently assessed by TH using a mixed-methods appraisal tool (MMAT). 24 FA reassessed a subset of the studies (one in each design category) to verify the appraisal outcomes. The studies were initially subjected to two mandatory screening questions according to the MMAT tool. A 'yes' answer was obtained for all of the studies, making it feasible to apply the subsequent questions based on each study's design. Overall, the scores obtained against the methodological and quality criteria ranged from 2 to 5 out of a total possible score of 5. A mark of 5 (represented by five asterisks [*****]) implied that the study met 100% of the quality criteria, whereas marks of four (****), three (***), two (**) and one (*) corresponded to 80%, 60%, 40% and 20% of the quality criteria, respectively (Table 1). Overall, the studies were of appreciable quality, with final quality ratings ranging from 60 to 100%. The qualitative studies incorporated in the review showed adequate interpretation of results supported by specific quotations from participants, and there was adequate coherence between the data collected and the interpretation of findings. Three of the qualitative studies [25][26][27] used data saturation as the sample size determination approach. One common shortcoming observed in the mixed-method studies was the inadequate integration of qualitative and quantitative data sources.
Data synthesis
Given the heterogeneity of the designs of the articles included in the review, we followed an integrated and narrative synthesis approach described by the JBI mixed-methods systematic review methodology. 28 The integrated synthesis approach allowed data extracted from quantitative and qualitative studies to be combined for further analysis into themes. The quantitative studies 29,30 were 'qualitised' using textual descriptions of the findings. TH and FA read and re-read the full text of the studies to understand how the barriers to GDM care were reported in the articles. Following the grouping of the articles according to the three specific objectives, we performed a thematic analysis of the results. We then discussed the themes generated until agreement was reached. The major themes generated were health system- and patient-related barriers under objectives one and two. An additional subtheme on sociocultural barriers was generated specifically under objective two. The themes and subthemes are summarised in Figure 1.
Table 1. Result of quality and risk assessment using the mixed-methods appraisal tool. 24 Key: yes (1 point), no (0 points), cannot tell (0 points). *Mixed-method studies. NB: questions 3.1 to 3.4 are omitted as none of the studies adopted a quantitative non-randomised approach. Studies are arranged in order of design: the first five are qualitative, the next one is a randomised controlled trial, the next two are quantitative and the last four are mixed-method. A quality rating of ***** means that 100% of quality criteria were met, **** 80%, *** 60%, ** 40% and * 20%.
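The star ratings in Table 1 are a direct linear mapping of the MMAT score to a percentage; the small helper below illustrates that mapping (our own sketch, not part of the MMAT tool).

```python
# Mapping of an MMAT score (0-5) to the percentage and star rating
# used in Table 1 (illustrative helper, not part of the MMAT itself).
def mmat_rating(score, max_score=5):
    pct = 100 * score / max_score
    return pct, "*" * score

print(mmat_rating(4))  # (80.0, '****')
```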
Search outcome
The search of the various electronic databases and other sources yielded 548 articles. Sixty-five duplicates were found and removed; 489 articles had their titles screened, of which 417 were excluded. The remaining 72 had their abstracts screened, and 30 studies conducted outside Africa were removed. Forty-two full-text articles were assessed for eligibility, of which 28 were excluded. Articles were eliminated for being non-primary research (n=3) or for not being relevant to barriers to GDM care or not reporting any experience of GDM screening, diagnosis or management (n=23). Two articles that presented findings from low- and middle-income countries (LMICs) were excluded because findings specific to Africa could not be extracted separately. The study selection process is represented in a PRISMA flow diagram in Figure 2.
Studies included in the review
A total of 14 primary studies were included in this review, comprising 7 solely qualitative articles, 2 solely quantitative articles, 4 mixed-methods articles and 1 randomised controlled trial. An overview of the included studies is presented in Table 2. Half of the studies employed qualitative designs, [25][26][27][31][32][33][34] with interviews and focus group discussions as the predominant data collection approaches. Two of the studies were quantitative, 29,30 one was a randomised controlled trial 35 and four employed mixed-method designs. [36][37][38][39] The sample sizes of the included papers ranged from 10 to 3080, and the total sample size across the 14 studies was 4006.
Demographic characteristics of participants diagnosed with GDM and DIP
Participants in the eligible studies were women diagnosed with either GDM 26,27,31,33 or DIP. 32,36 Other studies generally focused on pregnant women attending antenatal clinics where an adjunct objective was to explore barriers related to GDM screening, diagnosing or management. 29,30,35,39 The age of the women ranged from 15 to 49 y. Despite the limited evidence on how marital status could influence screening and diagnosis of GDM, one study reported that women who were not in any relationship had a higher chance of not returning for a confirmatory test such as an oral glucose tolerance test (OGTT) and fasting plasma glucose. 30
Healthcare context
In Table 2, we provide details of the contexts under which the studies were conducted, indicating the level of healthcare at which GDM services are often provided. A key factor that facilitated service provision for the detection and management of GDM was the availability of skilled healthcare providers at the various healthcare levels. However, the low socioeconomic status of some pregnant women, poor road networks and work schedules could discourage women from accessing healthcare at tertiary levels. 25,32,36,39 Meanwhile, Utz et al. suggested that decentralising screening, diagnosis and non-pharmacological management of GDM to the primary level of care would improve access and mitigate the risk of complications. 39 In six of the eight studies included, healthcare providers constituted the study participants. In these studies, obstetricians, nurses, midwives and nurse-midwives were the professionals most frequently involved in the screening and diagnosing of GDM, even at tertiary levels of healthcare. 29,31,32,34,36,39 The healthcare experience of some of these providers ranged from 1 to 42 y. 25
Regarding diagnostic approaches, the 2013 WHO diagnostic criteria were adopted by some facilities. However, pregnant women expressed concerns about the tolerability and acceptability of the test and the shortage of diagnostic resources. 30,34,37 In a study conducted by Nielsen et al. on the compliance with and acceptability of screening and diagnostic procedures, health professionals in Kenya raised concerns about the nauseating effect of the 75 g glucose load used for the OGTT; hence, they experimented with 300 ml of Sprite (a non-alcoholic drink), which by comparison had a less nauseating effect. 37 In terms of the gestational age for screening, some health facilities screened pregnant women at 24-28 wk, while others screened at 16-34 wk. 35 Three studies, from Morocco, Nigeria and South Africa, reported screening for GDM at the initiation of antenatal care and sometimes after the first trimester. 26,29,39 In assessing management practices, two studies reported insulin and metformin as the medications of choice for managing GDM and emphasised dietary and lifestyle modification as an alternative means of achieving glucose control. 26,39 Beyond medical intervention, healthcare providers in South Africa mentioned comprehensive non-pharmacological interventions, such as peer group teaching and group or individual counselling with a dietician or healthcare professional, as effective GDM management practices. 26
Themes generated from the review
We present the findings in line with the review objectives: (1) barriers to screening and diagnosis, (2) hindrances to implementing management interventions and (3) the experiences of women regarding GDM diagnosis and management. Through the thematic content synthesis, we generated three themes that contextualised women's experiences regarding the continuum of GDM care overlapping the three objectives of the review. These three themes comprised health system, patient-related and sociocultural barriers limiting GDM screening, diagnosis and management. Essentially, most of the experiences stemmed from lack of empathy and inadequate interaction with health providers coupled with inadequate social support from family and friends. 26,27,30,31,35,37,38
Barriers to GDM screening and diagnosis
In Supplementary Table 1, we summarise the health system- and patient-related barriers to initiating screening and diagnostic strategies for GDM.
Health system-related barriers to GDM screening and diagnosis
Overall, seven studies reported barriers to screening and diagnosis of GDM. 29,30,33,34,[37][38][39] Two studies reported barriers from the perspective of pregnant women and women previously diagnosed with GDM, 30,35 whereas the remaining five studies included the views of GDM programme implementers and health professionals, in addition to women diagnosed with GDM or DIP. 30,34,[37][38][39] A few of the studies reported a shortage of trained health professionals as a barrier to GDM screening and diagnosis, 33,34,39 which undermined healthcare professionals' ability to comprehensively provide health education and counselling support throughout pregnancy. 25,27,29,30,32,33,36 Beyond this, the few healthcare professionals in post often lack the requisite skills to provide GDM services. 31,33,34,38 Some studies attributed this lack of skills to the limited opportunities for in-service training on the GDM care process, 33,34,38,39 owing to the emerging nature of guidelines on its management.
On the other hand, Muhwava et al. concluded that healthcare professionals do not satisfactorily explain GDM screening and diagnostic procedures. 33 Often, healthcare providers are unable to follow up women after the first antenatal visit, even among those who test positive for glucosuria or are scheduled for subsequent testing. 30,34,39 Although this may be due to the high patient turnout that characterises many antenatal clinics, it may be exacerbated by staff shortages, insufficient space or inadequate logistics and consumables. 34,37 The absence of protocols and guidelines also hampers screening and diagnosis, especially among newly recruited health professionals who may not be acquainted with the GDM care regimen. 29,34,37 Nwose et al. found non-adherence to GDM protocols and guidelines despite their availability in some health facilities. 29 This can result in long waiting hours at antenatal clinics, which may deter pregnant women who travel long distances from returning for subsequent antenatal care services. Meanwhile, some pregnant women leave the antenatal clinic without undergoing the prescribed test, especially if they are required to fast overnight for 8 h beforehand. 29
Patient-related barriers to GDM screening and diagnosis
The intention to screen and diagnose GDM commences at the initiation of antenatal care. However, some women begin antenatal visits beyond the 24-28 wk period recommended for GDM testing. 29,34,37 Also, some pregnant women are unable to accurately recall their last menstrual date, 29 while others tend to under-report their diabetes risk, 37 which potentially affects the decision to test, especially in settings where a risk-based approach to GDM screening is practised. Njete et al. reported loss to follow-up among pregnant women as a significant barrier to GDM screening. 29,30 In terms of acceptability of the diagnostic test, Nielsen et al. and Nwose et al. found that the glucose solution used for the OGTT and the prerequisite 8-h fast are unbearable for some pregnant women. 29,35,37
Barriers to GDM management
Health system- and patient-related barriers adversely affecting the provision of management interventions for pregnant women diagnosed with GDM are summarised in Supplementary Table 2. In all instances, the gaps we identified were related to the management of GDM.
Health system-related barriers to the management of GDM
Seven studies sampled the experiences of health professionals, pregnant women and women previously diagnosed with GDM or DIP regarding barriers to the management of GDM. [25][26][27]32,34,36,39 Four studies highlighted an inadequate number of health professionals at various levels of care as a significant barrier to its management. 25,26,38 Here, too, insufficient knowledge about GDM and its management practices, as well as inadequate training on relevant GDM skills compared with HIV, malaria and TB in the subregion, were identified as significant barriers to GDM management. 25,26,33,34,38 In a study conducted by Nielsen et al., women diagnosed with GDM cited healthcare providers' inadequate knowledge of menu planning as the reason why dietary compliance remains a challenge. 38 Regarding health service delivery, poor coordination and communication lapses between health providers and pregnant women disrupt the continuity of GDM care and treatment adherence. 25,27,33,36,39 Within the healthcare system, Nielsen et al. identified a weak referral system and difficulty accessing specialist care as barriers to GDM care. 39 Besides human resources, shortages of medications, glucose strips, glucometers and reagents also pose a significant challenge to GDM care. 25,34,38 In two studies conducted by Mukona et al., healthcare providers mentioned the absence of diabetic medications as a hindrance to therapy compliance. 25,36
Patient-related barriers to the management of GDM
Three studies identified financial constraints and the absence of insurance systems as significant barriers to GDM management. For example, in healthcare settings where the cost of maternal health services is not covered by any form of insurance, women in the lower wealth index are constrained in purchasing medications and glucose strips and in following the prescribed nutritional guidelines for managing GDM. 27,32,36 Also, difficulty in comprehending the treatment regimen and painful insulin injections hinder adherence to antidiabetic therapy. 32,36 Besides medication, women with GDM cited the importance of family and societal support in GDM care; some studies have linked the absence of meaningful support from family, peers and other social networks with poor treatment compliance and adverse outcomes. 27,32,36 Patient support extends beyond the family to the healthcare providers with whom patients interact regularly. The need for treatment support from significant others emerged in two studies from Ghana, where women mentioned support from healthcare professionals and close relatives, particularly husbands, as a crucial component of GDM management. 27,31 Utz et al., who assessed GDM screening and management practices in Morocco, documented a lack of empathy and understanding by healthcare providers as a significant setback to its management. 39
Sociocultural barriers
Three studies explored how finances, culture, customs and traditions influence GDM/DIP management. The cheaper cost of herbal medicine encouraged some diagnosed women to consult traditional healers. Others mentioned pressure from family and friends, who believe that GDM is caused by spiritual or mystical forces, as the reason for consulting herbalists. 25,32,36 Furthermore, some cultures and religions forbid women from eating certain foods, even if they have positive implications for GDM treatment outcomes. 31 Concerning societal barriers, Nielsen et al. reported the perceptions of women regarding their body size, particularly during pregnancy, as a barrier to GDM management. 38 In typical rural settings, losing weight during pregnancy creates an impression of ill-health and poverty. In such settings, compliance with dietary guidelines that require optimum weight gain during pregnancy is often low. 38
Experiences of GDM women on detection and management
As summarised in Supplementary Table 3, women experience sadness, anxiety and mixed feelings while accessing GDM services, particularly before and after GDM diagnosis, largely because health professionals fail to explain test procedures. 31,33 The mixed feelings were attributed to inadequate interaction with health providers and a lack of reassurance concerning positive treatment outcomes. 36 Given the perceived spiritual connotation of GDM, some women may keep the condition secret for fear of stigmatisation. 31 A few authors have indicated the need to prioritise the psychological well-being of diagnosed women through counselling and health education. 27,31
Discussion
This review highlights barriers to screening, diagnosis and management of GDM and experiences of women with GDM in Africa. Perspectives obtained from healthcare providers and patients reveal barriers to the detection and management of GDM within the health system. Other key barriers included sociocultural and religious dimensions that affect health-seeking behaviour. Generally, the barriers are consistent across the studies included in this review, except for sociocultural barriers, which differed according to the country context.
Although the findings of this review cannot be universally generalised, it establishes systematic gaps and inadequate attention to GDM, which constitutes one of the most significant contributors to the burden of diabetes in Africa. 2 The foremost barrier to GDM detection and management is inadequate knowledge: awareness of the condition among both healthcare providers and pregnant women would help limit progression to type II diabetes. Other reviews have established that a shortage of healthcare providers hinders GDM detection and management; however, the problem extends to a lack of knowledge resulting from limited opportunities for skills-based training. 40 Because the majority of health services are concentrated at the primary level, this review proposes capacity building at lower levels of care, alongside provision of the essential equipment and consumables necessary to enhance GDM care.
Prioritising other diseases and programmes such as the prevention of mother-to-child transmission of HIV and intermittent preventive treatment of malaria at the expense of GDM is a concern. These programmes are prioritised and provide health professionals with an opportunity for training, but the same cannot be said of NCDs in pregnancy. While efforts have been made to improve NCD surveillance and response in LMICs, such interventions need to start from antenatal care clinics, where screening and diagnostic services start.
The multidimensional nature of the problems associated with GDM service provision requires a comprehensive systemic revision to improve detection and management practices. As seen in this review, even in settings where protocols and guidelines exist, the feasibility of implementing WHO-recommended universal screening is problematic due to the scarcity of resources that characterises many healthcare settings in Africa. There is therefore a need for context-applicable screening and diagnostic protocols informed by cost, tolerability, availability, accessibility and sustainability to increase the uptake of GDM services. 17,37 Otherwise, many pregnant women may go unscreened, to the detriment of mother and baby. The few who test positive and require further testing or treatment are often not followed up or are lost to follow-up. Poor coordination of referral pathways is a significant factor, given that most cases are managed at tertiary levels, which are grossly inaccessible to the majority of pregnant women. Inadequate contact and interaction with healthcare providers reduce the opportunities for health education and counselling, which limits the effectiveness of, and satisfaction with, the care received. Moreover, patient education and counselling 41,42 have been associated with positive treatment outcomes and facilitate postpartum care, which is key to the early detection of type II diabetes. 43 As antenatal care offers an opportunity for screening, diagnosis and management, healthcare providers should leverage it to promote behavioural change through evidence-based interventions such as peer counselling and dietician consultation.
Cultural diversity within the African context is pronounced, and the management of GDM is not spared its consequences. Although several interventional studies have established the importance of dietary adherence for GDM management, 44,45 lifestyle modification is hampered by negative perceptions about weight gain during pregnancy and by taboos and customs surrounding foods, thereby affecting adherence to the therapeutic regimen. Even in low-resource settings in developed countries, including the UK, it is recommended that health education on diabetes in pregnancy incorporate culturally appropriate messages, as potential cultural issues, such as language and myths or 'old wives' tales', affect GDM treatment compliance. 46
Strengths and limitations
This paper is the first review of barriers to GDM care in Africa and shows how little attention GDM has received in the region. Because of the growing global interest in GDM research after the Hyperglycemia and Adverse Pregnancy Outcomes study in 2008, which reported adverse pregnancy outcomes per unit increase in maternal glucose, 47 the eligible studies included in this review were all published recently (2012-2019). The eligible studies represent the five subregions and different ethnic groups of Africa. More importantly, the included articles cut across qualitative, quantitative and mixed-method designs, which together provide a contextual understanding of the challenges to GDM care. A considerable number of studies sampled diverse stakeholders concerned with GDM, allowing a comprehensive description of the barriers to GDM service provision. All these elements enhance the validity and generalisability of the findings across Africa.
Nevertheless, this review has some limitations. Generally, there was missing information on the gestational age of pregnant women at screening and on the healthcare context from which participants were recruited. The cultural and religious barriers identified may be unique to specific ethnic or religious groups and could be misleading if generalised, because they vary from country to country and from one ethnic group to another, even within the same country. Even so, they provide an insight that some barriers may be culturally inclined.
Conclusions
This review shows the multidimensional factors that interact at different health system and societal levels to hinder the detection and management of GDM in Africa. Insufficient clinical logistics, inadequate coordination of GDM care, limited human resource capacity and funding deficits grossly affect the testing and management of GDM in Africa. Diagnosed women experience anxiety, sadness, stigmatisation and uncertainty regarding treatment outcomes. Family support, customs and taboos are pertinent at the societal level. Broader consultation with key stakeholders to address these multifactorial challenges is essential to improving maternal and child health. The coexistence of infectious diseases continues to direct training needs, resource allocation and the prioritisation of interventions. Nonetheless, the pregnancy complications associated with GDM and its linkage with other NCDs are well established. Therefore, addressing these barriers is key to improving maternal and neonatal outcomes and promoting NCD-prevention strategies in Africa.
Supplementary data
Supplementary data are available at International Health online.
Authors' contributions: TH, FA and AJ conceived the study. The study protocol was designed by TH and reviewed by FA and AJ. The study design, initial literature search and study selection, quality assessment and data extraction were carried out by TH and independently reviewed by FA. Discussions were held with FA and AJ until consensus was reached at every stage. All the authors read and approved the final version of the manuscript. | 2021-08-26T06:17:44.759Z | 2021-08-25T00:00:00.000 | {
"year": 2021,
"sha1": "ccfddc267c5c2aa0d78728396b5d76e317d46d9c",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/inthealth/advance-article-pdf/doi/10.1093/inthealth/ihab054/39904462/ihab054.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "0753147920437d23bf87256185c183411eec2309",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
13139169 | pes2o/s2orc | v3-fos-license | Post term pregnancy in a non-communicating rudimentary horn of a unicornuate uterus
Background Pregnancy in a rudimentary horn of a unicornuate uterus is rare in obstetrics and, when it occurs, seldom progresses to term, as rupture frequently occurs before the third trimester. Case history A 29-year-old female presented at 42 weeks 5 days complaining of absent foetal movements, with results of a self-prescribed ultrasound scan showing an "abdominal pregnancy with foetal demise". She was haemodynamically stable and there were no foetal heart tones. At laparotomy, a non-communicating rudimentary horn pregnancy (RHP) was discovered. The right horn and tube were resected, with delivery of a post-term female stillbirth. There were no postoperative complications. Conclusion Rudimentary horn pregnancies are difficult to diagnose when advanced, especially in low-resource settings with suboptimal antenatal care. Maternal and foetal outcomes in RHPs are usually poor; RHPs should therefore be suspected in pregnancies with atypical ultrasonographic features and further investigations performed to confirm the diagnosis in order to reduce the associated morbidity and mortality.
Background
Rudimentary horn pregnancy (RHP) is a very rare obstetric entity with a reported incidence of 1/100,000 to 1/140,000 pregnancies [1]. Hitherto, it was unreported in Cameroon [2]. Early diagnosis and management are important, though challenging, especially in low-resource settings where diagnostic modalities such as ultrasonography, hysterosalpingography, hysteroscopy, laparoscopy and MRI [3] are widely unavailable or inadequately used.
The clinical course of this condition is usually characterized by a high rate of uterine rupture [2], with 85% of pregnancies in a non-communicating rudimentary horn typically rupturing before the third trimester. These pregnancies rarely reach term, and when they do, foetal outcome is often poor, with a reported 6% survival rate [1]. This situation is particularly compounded by suboptimal antenatal care (ANC). Current literature emphasizes the importance of state-of-the-art sonographic expertise in diagnosis and timely intervention to optimize outcome. We report the first case of a rare post-term stillbirth from a rudimentary horn pregnancy, initially missed on ultrasound scan (USS) at 21 weeks and then misdiagnosed twice in the third trimester as an abdominal pregnancy.
Case history
A 29-year-old female school teacher, G2P0010, blood group O rhesus positive, presented at our antenatal clinic for the first time at 42 weeks 5 days [dated from a reliable last normal menstrual period (LNMP)] with the result of her obstetric USS.
Antenatal care during the pregnancy was inadequate and irregular, with her first ANC consultation done at a peripheral health centre at 9 weeks 5 days. The evolution of the pregnancy was unremarkable and she had her first USS at 21 weeks, which reported "a single viable intrauterine pregnancy" with foetal parameters corresponding to a 20-week gestation. At 41 weeks 3 days, a self-prescribed USS (done at another commercial imaging centre) reported "a single viable abdominal pregnancy with a breech presentation and oligohydramnios". The patient was not advised to seek an obstetric consult. Ten days later, absence of foetal movements for over 24 h prompted the patient to request another USS, which revealed "an abdominal pregnancy with foetal demise". She subsequently consulted at our hospital for appropriate management. Upon presentation, she reported absence of foetal movements for over 36 h; there was no abdominal pain or fever, and she denied any loss of show, vaginal bleeding or gush of fluid. She was conscious and well oriented with pink conjunctivae, a blood pressure of 118/72 mm Hg, a pulse of 66 beats per minute and a temperature of 36.9 °C. Obstetric examination revealed a 'fundal height' of 35 cm with an indeterminate foetal lie. She had normal external genitalia, and the vagina and cervix appeared macroscopically normal. The cervix was posterior, soft and 1.5 cm long, and the cervical os was closed. Repeat USS reported "an empty uterus, with foetal demise to the right of the uterus". Her haemoglobin level was 12.4 g/dL and her serum creatinine was 1.0 mg/dL. She was admitted and prepared for laparotomy with an indication of evacuation of an abdominal pregnancy.
Laparotomy was done under general anaesthesia through a midline infra-umbilical incision
Intraoperatively, a right-sided rudimentary horn pregnancy was discovered, connected to the main uterine body by a thick 1.5 cm fibrous band. The gravid horn was attached directly to the fimbriated end of the right tube. There was no communication of the gravid horn with either the cervix or the normal horn (Fig. 1).
Excision of the gravid rudimentary horn with right salpingectomy was performed en bloc, and haemostasis was secured. Dissection of the gestational sac revealed a non-viable 2300 g female neonate (APGAR 0/10) with post-term features (abundant scalp hair, no lanugo, desquamating skin, diminished vernix caseosa, long fingernails and meconium staining) and no obvious external congenital malformations.
The postoperative stay was uneventful and the patient was discharged on day 5, with advice to seek early and qualified medical care in subsequent pregnancies. An abdominal scan and intravenous urography at 8 weeks postpartum revealed no associated urological anomalies.
Discussion
Developmental anomalies of the female urogenital tract are not uncommon. The exact prevalence of these anomalies is difficult to determine due to their clinical subtlety [4]. The incidence of defective fusion of the paramesonephric (Müllerian) ducts is estimated at 0.1-3% in unselected populations [5], but increases to 2-8% amongst infertile women and 5-30% among women with a history of miscarriages [6].
Uterus bicornis unicollis (or unicornuate uterus) with one rudimentary horn is an anomaly that results from unilateral hypoplasia of the uterine ducts. It is classified as type U4a according to the European Society of Human Reproduction and Embryology (ESHRE) and European Society for Gynaecological Endoscopy (ESGE) [ESHRE/ESGE] system [7]. The rudimentary horn may lack or contain a functional endometrium; horns with a functional endometrium are more prone to obstetric complications. It is postulated that pregnancy can occur in a non-communicating rudimentary horn via trans-peritoneal migration of spermatozoa or of a fertilized ovum from the contralateral side, followed by implantation in the horn [4].
Rudimentary horn pregnancies are rare, difficult to diagnose and seldom progress to term, as their course is usually marked by uterine rupture. However, there has been a reported case of a live term delivery from a RHP in rural Nigeria [8].
Only about 26% of RHPs are diagnosed preoperatively [9]. In 74% of cases the diagnosis is missed by ultrasonography (the most widely used investigation during pregnancy), and the sensitivity of sonography decreases as pregnancy advances beyond the first trimester. It can also be missed in inexperienced hands or with low-quality instruments. Tubal pregnancy, cornual pregnancy, intrauterine pregnancy and abdominal pregnancy are common sonographic misdiagnoses [10]. Magnetic resonance imaging may be needed for confirmation [11], but this investigation is widely unavailable in Cameroon and, where present, expensive. In our case, a RHP was misdiagnosed twice as an abdominal pregnancy on obstetric ultrasounds performed in the second and third trimesters, highlighting the potential importance of first-trimester USS [11]. These misdiagnoses are understandable, however, as abdominal pregnancies are about 10 times more common than RHPs [9]. This pregnancy was dated from a reliable last normal menstrual period, corroborated by a second-trimester ultrasound.
A RHP can also be suspected on early clinical pelvic examination, wherein a mass extending outside the uterine angle can sometimes be felt on bimanual examination (Baart de la Faille's sign), or displacement of the fundus to the contralateral side with rotation of the uterus and elevation of the affected horn may be noted (Ruge-Simon syndrome) [12]. Using ultrasonography, Tsafrir et al. [11] outlined the following criteria for early diagnosis of RHPs: (1) a pseudo-pattern of an asymmetrical bicornuate uterus; (2) absent visual continuity of tissue between the gestational sac and the uterine cervix; (3) presence of myometrial tissue surrounding the gestational sac; and (4) hypervascularization typical of placenta accreta [7]. However, these criteria may be useful only in the first trimester.
Congenital uterine anomalies may lead to infertility, recurrent first-trimester pregnancy loss and other obstetrical complications, such as an increased risk of preterm birth, preterm premature rupture of membranes, breech presentation, caesarean section, placenta praevia, placental abruption and intrauterine growth restriction (IUGR) [4]. Our patient had one spontaneous abortion in the past for which she was not investigated; hence the uterine anomaly remained undiagnosed. This emphasizes the need for patient-specific ANC consultations. The foetus also had intrauterine growth restriction, partly explained by the restricted space in the rudimentary horn and poor placentation.
Furthermore, in our case, the obstetrical outcome was compounded by suboptimal ANC, the lack of early referral, the phenomenon of self-prescribed investigations and the absence of qualified sonographers in the numerous commercial imaging centres in Cameroon.
This patient had resection of the rudimentary horn and an ipsilateral total salpingectomy, the recommended treatment option, to prevent future ipsilateral ectopic pregnancies. Her prognosis for future pregnancies has been reduced, but she has no increased risk of uterine scar rupture in subsequent pregnancies. There were also no associated urinary tract malformations, which commonly coincide with genital tract malformations.
Conclusion
Rudimentary horn pregnancy is a rare clinical entity whose diagnosis remains difficult, especially in resource-poor settings where ANC is suboptimal and access to thorough sonographic imaging is widely unavailable, with resulting poor maternal and foetal outcomes. To increase the probability of early diagnosis and improve prognosis in resource-limited settings, clinicians should suspect this entity in pregnancies with atypical ultrasonographic findings and employ more robust investigations to confirm the diagnosis.
Authors' contributions VFF consulted, managed and followed-up the patient and drafted the manuscript. CAD and BF performed the surgery, post-operative follow-up and reviewed the manuscript. TN edited and reviewed the manuscript. All authors read and approved the final manuscript. | 2016-05-12T22:15:10.714Z | 2016-04-11T00:00:00.000 | {
"year": 2016,
"sha1": "29c7296fbb2c0f37cc6fa6fa46f6a9c7fe9e49ca",
"oa_license": "CCBY",
"oa_url": "https://bmcresnotes.biomedcentral.com/track/pdf/10.1186/s13104-016-2013-7",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "37a664a89b680c9a1153c2cbf1820ca714bed97d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249381227 | pes2o/s2orc | v3-fos-license | Clinical efficacy and in vitro neutralization capacity of monoclonal antibodies for severe acute respiratory syndrome coronavirus 2 delta and omicron variants
Abstract We aimed to provide in vitro data on the neutralization capacity of different monoclonal antibody (mAb) preparations against the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) delta and omicron variants, respectively, and describe the in vivo RNA kinetics of coronavirus disease 2019 (COVID-19) patients treated with the respective mAbs. Virus neutralization assays were performed to assess the neutralizing effect of the mAb formulations casirivimab/imdevimab and sotrovimab on the SARS-CoV-2 delta and omicron variants. Additionally, respiratory tract SARS-CoV-2 RNA kinetics are provided for 25 COVID-19 patients infected with either the delta variant (n = 18) or the omicron variant (n = 7) and treated with the respective mAb formulations during their hospital stay. In the virus neutralization assay, sotrovimab exhibits neutralizing capacity at therapeutically achievable concentrations against the SARS-CoV-2 delta and omicron variants. In contrast, casirivimab/imdevimab had neutralizing capacity against the delta variant but failed neutralization against the omicron variant except at a very high concentration above the currently recommended therapeutic dosage. In patients with delta variant infections treated with casirivimab/imdevimab, we observed a rapid decrease of respiratory viral RNA at day 3 after mAb therapy. In contrast, no such prompt decline was observed in patients with delta variant or omicron variant infections receiving sotrovimab.
The omicron variant escaped neutralization by casirivimab/imdevimab but remained susceptible to in vitro inhibition by sotrovimab (Xevudy, GlaxoSmithKline). 3,4 Previously published studies assessed the neutralization capacity of mAbs at concentrations below those that can be theoretically achieved after recommended therapeutic dosages and do not provide clinical data on treatment or outcome. [3][4][5] To shed light on this aspect, we focused here on mAb concentrations in the range of the maximum theoretically achievable therapeutic concentration in a patient after mAb administration at recommended doses. These high concentrations were tested in a virus neutralization assay (VNT) against the delta and omicron variants. 6
| SARS-CoV-2 serology
Serum samples were collected as part of the clinical routine.
| Virus neutralization assays
For the in vitro neutralization assays with the delta and omicron SARS-CoV-2 variants, the mAb formulations were diluted to reflect potential plasma levels in treated patients. The calculation was based on current recommendations: for patients older than 12 years and weighing at least 40 kg, the recommended therapeutic dose is 600/600 mg for casirivimab/imdevimab 13 and 500 mg for sotrovimab. 14 Given an estimated blood volume of about 5 L in an average adult, maximum plasma levels would be 120/120 µg/ml for casirivimab/imdevimab and 100 µg/ml for sotrovimab; the mAb dilutions were prepared accordingly. The virus dose used was one at which a cytopathic effect (CPE) is certain to occur after infection with the respective isolate, which allows for comparability of the virus concentrations used. Briefly, neutralization tests were performed as described previously. 6 After incubation at 37 °C for one hour, the serum/virus mixtures were transferred to 96-well plates containing 5.0 × 10^6 cells/plate of Vero cells (ATCC CRL-1008) seeded the previous day.
Following incubation for 96 h at 37°C, supernatants were discarded. The plates were fixed in 4% formaldehyde and stained with crystal violet. The highest dilution protecting two of three wells from CPE was taken as the neutralizing antibody titer.
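The dose-to-concentration estimate above is simple arithmetic (dose divided by blood volume); the sketch below reproduces it. The helper function is our own illustration, not code from the study.

```python
# Theoretical maximum plasma concentration after intravenous mAb
# dosing, assuming complete distribution into ~5 L of blood.
def max_plasma_conc_ug_per_ml(dose_mg, blood_volume_l=5.0):
    return dose_mg / blood_volume_l  # mg/L is numerically µg/mL

print(max_plasma_conc_ug_per_ml(600))  # casirivimab (or imdevimab): 120.0
print(max_plasma_conc_ug_per_ml(500))  # sotrovimab: 100.0
```

Likewise, the titer read-out just described (the highest dilution protecting two of three wells from CPE) can be expressed as a small search over the dilution series; the well data below are invented for illustration.

```python
# Neutralizing titer as the highest dilution at which at least two of
# three replicate wells are protected from CPE (illustrative data).
def neutralizing_titer(wells_by_dilution):
    """wells_by_dilution maps a dilution factor (e.g. 1:16 -> 16) to a
    list of three booleans (True = well protected from CPE)."""
    titer = None
    for dilution in sorted(wells_by_dilution):
        if sum(wells_by_dilution[dilution]) >= 2:
            titer = dilution  # keep the highest qualifying dilution
    return titer

wells = {8: [True, True, True], 16: [True, True, False],
         32: [True, False, False]}
print(neutralizing_titer(wells))  # -> 16
```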
| Statistical analysis
Analysis of viral RNA kinetics was performed by comparing nasopharyngeal RNA concentrations on the day of mAb treatment, day 3 (±1 day) and day 7 (±1 day) among the three treatment groups (Kruskal-Wallis test), using GraphPad Prism software version 9.0.0 (GraphPad Software).
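The same comparison can be run outside GraphPad Prism; the sketch below applies a Kruskal-Wallis test to invented RNA-load values for the three treatment groups.

```python
# Kruskal-Wallis comparison of RNA loads across three treatment
# groups at a single time point (values invented for illustration).
from scipy import stats

# log10 RNA copies/mL at day 3 for the three hypothetical groups.
casi_imdev_delta = [4.1, 3.2, 2.8, 3.9, 3.0]
sotrovimab_delta = [6.0, 5.4, 5.9, 6.3, 5.1]
sotrovimab_omicron = [5.8, 6.1, 5.2, 5.6]

h_stat, p_value = stats.kruskal(casi_imdev_delta,
                                sotrovimab_delta,
                                sotrovimab_omicron)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
```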
| Patients and clinical data
All 25 COVID-19 patients were symptomatic for less than 7 days and were at high risk of progressing to severe disease due to underlying medical comorbidities. In addition, all patients either were shown to be seronegative for anti-SARS-CoV-2 spike protein antibodies at baseline, were unvaccinated, or were at high risk of poor antibody responses following SARS-CoV-2 vaccination (e.g., due to immunosuppression or immunocompromise). Of those 25 patients, 72% [...]. The patients showed rather similar baseline characteristics with respect to age, sex, previous COVID-19 vaccinations, and immunosuppression (Table 1).
| Clinical outcome in patients receiving mAb formulations
None of the patients in our study had disease progression to severe COVID-19.
ACKNOWLEDGMENTS
Open Access funding enabled and organized by Projekt DEAL.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
DATA AVAILABILITY STATEMENT
The data that support the findings of this study are available from the corresponding author upon reasonable request.
"year": 2022,
"sha1": "04ba9f023b19f960faae63b40ee1af56eb23fd3f",
"oa_license": "CCBYNCND",
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9347884",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "a5cc557dcb9db810485da9cb5b99e5bc396afc02",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
COVID-19 infection in pediatric subjects: study of 36 cases in Conakry
Abstract
The aim of this study was to evaluate the main clinical and evolutionary features of SARS-CoV-2 infection in children aged 0-18 years who were suspected and diagnosed for COVID-19 during routine consultations in the pediatric ward of the Ignace Deen National Hospital in Conakry. This retrospective study targeted all children admitted to the Pediatrics Department during the study period and focused on children whose clinical examination and/or history indicated a suspicion of SARS-CoV-2 infection. Only children with a positive reverse transcriptase-polymerase chain reaction (RT-PCR) test were included. Clinical and paraclinical data were rigorously analyzed. Anonymity and respect for ethical rules were the norm. Medical records were used as the data source and a questionnaire was developed for collection. The analysis was done using STATA/SE version 11.2 software. The mean age of the patients observed was 9.66±1.32 years, with a sex ratio of 1.25. The history of the patients found that 36.11% had already been in contact with a COVID-19-positive subject, of whom 8 (22%) had close relatives treated for COVID-19 and 5 (13.88%) had been in contact with classmates treated for COVID-19. Fever and physical asthenia, runny nose, and throat pain were found in 58.33%, 50%, and 30.55% of patients, respectively, with irritability in 25%. Asymptomatic children accounted for 30.55%. The diagnosis was confirmed by a positive RT-PCR test. Thoracic computed tomography (CT) scan was normal in 80.55% of the children. They were given mostly azithromycin 15 mg/kg, zinc, and chloroquine sulfate 5 mg/kg.
Introduction
Faced with an unknown virus, the concerns and preoccupations of health actors in all fields are growing. The International Committee on Taxonomy of Viruses has defined SARS-CoV-2 and its associated disease, Coronavirus Disease 2019 (COVID-19) [1]. Given its rapid spread worldwide, the World Health Organization (WHO) declared COVID-19 a pandemic on March 11, 2020 [2]. To date, there have been 37,575,650 positive cases and 1,077,849 deaths at all ages worldwide [3]. In Guinea, there are 11,188 positive cases with 70 deaths [4], in a current population of 14,515,653. To date in our country, we do not have collective epidemiological and clinical data specific to children. The disease can be found at any age given the widely reported airborne mode of contamination and spread. There is no evidence of intrauterine infection caused by vertical transmission [5,6]. Amniotic fluid, umbilical cord blood, and neonatal throat swab samples from COVID-19-infected mothers were negative for COVID-19 [7,8]. In addition, there is increasing evidence of neonatal pneumonia induced by SARS-CoV-2 infection [9]. Some authors, including Lu X et al. and Liu W et al., suggest on the basis of case series that children appear to be less affected than adults [10,11]. Although the authors believe that the clinical features, course, and outcome of the disease in children and young adults appear to be significantly less severe than in older people, we must point out that there is a serious lack of data regarding the epidemiological and clinical features of COVID-19 in pediatric subjects. Fever and a mild cough are the most commonly described symptoms at the onset of illness in children [12]. Other clinical features include sore throat, rhinorrhea, sneezing, myalgia, fatigue, diarrhea, and vomiting. Diagnosis is based on a combination of clinical and paraclinical arguments (laboratory abnormalities, chest imaging, and RT-PCR). Saliva may be an ideal specimen type for the diagnosis of COVID-19 infection in children, and its use improves the chances of diagnosis. Based on the results of the antibody test, confirmatory RT-PCR, and clinical evaluation, hospital treatment or home isolation measures are initiated, with contact tracing measures as per protocol. Based on observations reported elsewhere in the world, the course of the disease and its outcome in children and young adults appear to be significantly less severe than in older people. This study aims to evaluate the main clinical and evolutionary features of SARS-CoV-2 infection in children aged 0 to 18 years who were suspected and diagnosed for COVID-19 in the pediatric ward of Conakry.
Methods
This is a retrospective study carried out over a period of 5 months, from 1st April to 30th September 2020. The work targeted all children admitted to the Pediatrics Department during the study period and focused on children whose clinical examination (fever, physical asthenia, cough, runny nose) and/or history raised suspicion of a SARS-CoV-2 infection (contact with a parent or a COVID-19-positive person). Only children with a positive RT-PCR test were included. The clinical data analyzed were age, gender, COVID-19 contact, comorbidity, pneumonia symptoms, and clinical signs of complications (neurological deterioration, tachypnea, and hypoxia). CT features were also evaluated. Children with upper respiratory tract infection (i.e., pharyngeal congestion, sore throat, and fever) without radiographic involvement were included in the "mild" symptom category. Children with radiologic signs of pneumonia and no complications were classified as having "moderate" symptoms. On the other hand, children with a mild or moderate clinical picture and manifestations suggesting disease progression (i.e., deterioration of neurological status, tachypnea, hypoxia) were considered "severe"; a minimal rule-based encoding of these criteria is sketched below. Recovery was considered a favorable/satisfactory course. Anonymity, with no implication of any potential risk to patients, and respect for ethical rules were standard. There was no connection between patients and researchers. Patient records were used as the data source and the analysis was done using STATA/SE version 11.2 software.
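The severity categories above amount to a small rule set; the following Python sketch is a hypothetical encoding (the parameter names are ours, not from the study's data dictionary).

```python
# Hypothetical encoding of the severity definitions used in this study.
def classify_severity(radiographic_pneumonia, neuro_deterioration,
                      tachypnea, hypoxia):
    """Mild: upper respiratory symptoms without radiographic involvement.
    Moderate: radiologic signs of pneumonia without complications.
    Severe: any manifestation of progression (neurological deterioration,
    tachypnea, hypoxia), regardless of the baseline picture."""
    if neuro_deterioration or tachypnea or hypoxia:
        return "severe"
    if radiographic_pneumonia:
        return "moderate"
    return "mild"

print(classify_severity(False, False, False, False))  # mild
print(classify_severity(True, False, False, False))   # moderate
print(classify_severity(True, False, True, False))    # severe
```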
Results
The mean age of the patients observed was 9.66±1.32 years; males were the most represented, with a proportion of 55.55% versus 44.44% female and a sex ratio (male to female) equal to 1.25 (Table 1). The history of the patients shows that 13 children (36.11%) had already been in contact with a COVID-19-positive subject, of whom 8 (22%) had close relatives (father, mother, siblings) who were confirmed positive and were treated and cured or under treatment. Similarly, 5 children (13.88%) reported having been in contact with classmates who had been treated for COVID-19. In addition, 23 patients (63.88%) were unaware of whether they had been in contact with a carrier of the virus (Table 2). All of the children had had malaria in the past. Epilepsy was noted as a comorbidity in one child, and 4 other children (11.11%) had positive HIV serology. Fever and physical asthenia, cough and runny nose, and sore throat were found in 58.33%, 50%, and 30.55% of the patients, respectively. Irritability was noted in 25% of patients and 16.66% reported myalgia. However, 30.55% of the children studied in this series were asymptomatic for COVID-19 (Table 3). The rapid diagnostic test (RDT) was positive in both symptomatic and non-symptomatic children. RT-PCR allowed us to confirm the diagnosis of COVID-19 infection in all patients (Table 4), even though only 94.44% presented a non-specific inflammatory syndrome on biological examination. Similarly, lymphocyte and monocyte blood levels were significantly elevated in all children (Table 5). Chest CT scan revealed ground-glass opacities in 19.44% and Kerley lines in 11.11% of patients. Also noteworthy was the normality of the chest CT scan in 80.55% of patients (Table 5). All patients were treated with azithromycin and zinc; 34 (94.44%) received chloroquine sulfate, and in the remaining 5.55% of patients, who were HIV positive, we maintained antiviral treatment. The evolution was marked by the recovery of all the children studied in this series (Table 6).
Discussion
A recent work by Ludvigsson [13] suggests that SARS-CoV-2 infection occurs in children but appears to have a lower incidence, a milder course, and a better prognosis than in adults. Our observation of a mean age of 9.66 years and a sex ratio of 1.25 is consistent with the findings of Balasubramanian S et al. [14], which describe the occurrence of the disease in children and at the same time explain why children are less affected than the elderly. The effectiveness of the immune system in young children could be a determining factor in the rarity and mildness of the disease in these children. The gender of the patient does not seem to influence contamination with the disease. The male predominance observed in our study could be the consequence of carefree, risk-ignoring attitudes typical of adolescent boys.
The history of the patients, which revealed that thirteen (13) had already been in contact with a subject diagnosed with and cured of COVID-19, also revealed that 61.53% of these children were probably contaminated by their parents (treated and cured of COVID-19), although the children's initial RT-PCR test was negative. Furthermore, since a person cured of the disease is no longer a source of contamination unless re-infected, we consider that these children were probably contaminated at the same time as their parents, even though they had not initially presented clinical or paraclinical arguments in favor of COVID-19 infection. If this is the case, it must be accepted that there is sometimes a delay in the clinical and paraclinical expression of the disease in children. Also, 38.46% of the children who had been in contact with a carrier were probably contaminated in a school setting, given the notion of contact with a classmate who was being followed for COVID-19. Because children are young and immature, they cannot strictly comply with the prescribed barrier measures.
Supervisors should increase vigilance and insist on observation and isolation of all children in the event of a positive result from one of the classmates.
Several RT-PCR tests should be performed and each child's chart should be checked. Dong et al.'s study of 2143 children identified by laboratory tests using a combination of clinical symptoms and exposure status revealed that 34.1% had laboratory-confirmed disease, while the remainder had clinically suspected disease [15]. In contrast to that observation, all of our children were diagnosed positive on the basis of the RT-PCR laboratory test. However, our clinical observations were consistent with the typical symptoms of acute respiratory infections and included fever, physical asthenia, cough, sore throat, sneezing, and myalgia. The proportion of asymptomatic children found in the literature [14] and observed in our study supports the thesis of a milder evolution of the disease and a better prognosis in children. However, taking into account the results of some authors who report that even asymptomatic subjects can be contagious [16,17], we believe that this state of affairs constitutes a vicious circle. In our setting, people do not go to the hospital to be screened, but rather to be treated when they experience symptoms. In spite of the 2019 coronavirus health crisis we are currently experiencing, very few people deliberately go to the hospital to get tested. In fact, some people do not believe the disease exists; others believe that the disease would not have an impact on the health of people in black Africa. Such attitudes delay diagnosis and therefore increase the risk of contamination and spread of the disease in all sectors, even in schools.
The occurrence of infection and its clinical expression in the four (4) children with HIV, as well as in the asymptomatic children (30.55%) despite a positive RT-PCR test, indicates the involvement of immunity in the pathophysiology of the disease. None of our patients presented severe symptoms; 80.55% had mild symptoms and 19.44% presented moderate symptoms. Almost all children (94.44%) presented a non-specific inflammatory syndrome on biological examination. This could be considered biological evidence of the cytokine storm, a marker of systemic inflammation. The RT-PCR test aided the diagnosis. The chest CT scan carried out in all children provided some illustration of the evolution of the disease. Chloroquine sulfate, in use in our country, was administered to almost all children with an indication, at a dosage of 5 mg/kg. On the other hand, children living with HIV on antiretroviral drugs did not receive chloroquine; apart from azithromycin, the rest of their treatment was symptomatic, together with a strictly adhered-to regimen. Although several published studies report the ineffectiveness of the molecule in the treatment of COVID-19 infection, in our context it is credited with the cure of a large number of patients of all ages, hence its use with respect to the prescribed precautions, especially in children. The evolution was marked by the cure of all children. The lack of collective epidemiological and clinical data relating exclusively to children in our country was the main limitation of this study. A national study of SARS-CoV-2 infection in the pediatric setting would provide a better picture and would likely allow us to learn more about the epidemiology and collect new data. Similarly, mass screening in schools across the country could be an asset in our fight to eradicate the disease.
Conclusion
This study identified 30.55% of children who were asymptomatic for COVID-19, with the rest showing mild to moderate symptoms. The evolution was satisfactory in all patients observed, even in those with an underlying viral infection. The delayed onset, or even absence, of symptoms in children suggests an increased risk of contamination and spread, because diagnosis is sometimes late and management is therefore delayed. In cases where a close relative (father, mother, sibling) is positive for the disease, we propose immediate observation of the children close to him/her, even in the absence of clinical or paraclinical arguments in favor of the disease in the child. The identification of children with comorbidities and their strict and rigorous management should be a central concern for clinicians.
What is known about this topic
SARS-CoV-2 infection can be found at any age but remains more common in adults than in children.
Table 1: distribution of children based on symptoms and admission information. Table 2: patient distribution by history and diagnosis. Table 3: distribution of children according to the frequency of clinical and therapeutic data. Table 4: distribution of children by disease expression and diagnostic test. Table 5: distribution of children according to biological and imaging data. Table 6: post-therapeutic outcomes.
"year": 2020,
"sha1": "9f2fa430a50ad8dc3fc189a92a8cb3435b6662dc",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.11604/pamj.supp.2020.37.1.26573",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f727e5f1cb96f78dbb38e3016ea8d6dc15aba6cd",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Laparoscopic partial cystectomy with mucosal stripping of extraluminal duodenal duplication cysts.
Duodenal duplication cysts are rare congenital anomalies. Duodenal duplication should be considered in the differential diagnosis of patients who present with abdominal symptoms with cystic structures neighboring the duodenum. Here, we present an 8-year-old girl with a duodenal duplication cyst treated with partial cystectomy with mucosal stripping performed laparoscopically. Laparoscopic surgery can be considered as a treatment option for duodenal duplication cysts, especially in extraluminal locations.
INTRODUCTION
Duodenal duplication cysts are rare congenital anomalies, which can appear during the neonatal period or later in life depending on the degree of gastric outlet obstruction [1]. Classical treatment for duodenal duplication cysts is total resection, but in cases requiring pancreaticoduodenectomy, less-invasive approaches have been proposed [2-4]. Here, we describe the laparoscopic technique for partial resection of a duodenal duplication cyst in an 8-year-old girl.
CASE REPORT
An 8-year-old girl had suffered from intermittent abdominal pain, nausea and vomiting for 2 mo. The patient had no other underlying diseases. The abdomen was flat and no definite mass was palpable. The laboratory studies were normal. The patient did not have jaundice and had a normal serum bilirubin level. Tumor markers were not checked before surgery. Abdominal ultrasonography (US) showed a 6 cm × 5 cm lobulated, septated retroperitoneal cystic mass between the duodenum and the pancreas. Upper gastrointestinal series revealed luminal narrowing of the second portion of the duodenum (Figure 1A). Abdominal computed tomography (CT) and biliary pancreas magnetic resonance imaging (MRI) showed a multiseptated cystic mass suspected of originating from the uncinate process of the pancreas. Extrinsic compression from this lesion appeared to be causing narrowing of the duodenal lumen (Figure 1B and C). Retroperitoneal lymphangioma of the pancreas was primarily suspected, along with other differential diagnoses including solid and papillary epithelial neoplasms, pancreatoblastoma, unusual cystadenoma, and pancreatic pseudocyst.
Laparoscopic exploration was performed. The patient was placed in the supine position under general anesthesia and an optical umbilical port was placed under direct vision followed by three additional ports (Figure 2). A Snowden-Pencer Snake Retractor (CareFusion, San Diego, CA, United States) was inserted through port site 3 for liver retraction. After performing a laparoscopic Kocher maneuver, a multiloculated cystic mass was identified in the second portion of the duodenum. The cystic mass originated from the mesenteric border of the duodenum and adhered to the uncinate process of the pancreas (Figure 3A). After adhesiolysis between the cyst and the pancreas, clear demarcation of the cystic surface was identified (Figure 3B). An arterial branch supplying the mass originating from the gastroduodenal artery was ligated with a 5-mm hemoclip and divided (Figure 3C). The proximal border of the mass was easily dissected from the duodenum with an ENDOPATH Electrosurgery Probe Plus II System with a Hook electrode (Ethicon Endo-Surgery, Cincinnati, OH, United States), but the distal border was directly attached to the duodenal wall, forming a common wall. A harmonic scalpel (Ethicon Endo-Surgery) was used to resect the mass from the duodenum and the remnant mucosa was cauterized with the ENDOPATH system. The lesion formed a common wall with the duodenum, without communication or fistula. No intraoperative complications were encountered.
The patient was discharged on Postoperative Day 9 without any complications. Upon histopathological review, a compatible duodenal wall with partially denuded epithelium was consistent with duodenal duplication (Figure 4).
DISCUSSION
Patients with duodenal duplication cysts present with recurrent nausea, vomiting, abdominal mass, abdominal distension, pancreatitis, and gastrointestinal bleeding [4,5]. Duodenal duplication can be diagnosed with various imaging modalities. The "double-layered wall" of the duodenum seen with US, CT, and endoscopic ultrasound (EUS) is used to reach a diagnosis of duodenal duplication [6-8]. In the present case, the patient had complained of recurrent nausea and vomiting. However, on US, the double-layered wall was not evident, and the lesion seemed to originate from the pancreas on the CT and MR images. Moreover, the lesion showed multiseptations, which turned out upon surgical exploration to be the folding patterns of the cyst wall, making it difficult to distinguish from other pancreatic tumors, including pancreatic lymphangioma.
The ideal treatment for duodenal duplication cysts is complete surgical resection if their location allows it without endangering the biliopancreatic ducts [4]. However, in cases of duodenal duplication cysts involving important nearby structures, for example the pancreas or biliary ducts, major surgical procedures like pancreaticoduodenectomy may be required for total resection. This major procedure has a high complication rate, resulting in poor quality of life, especially in children; therefore, less-invasive approaches, for instance partial resection or internal marsupialization, have been proposed [2-4]. We performed partial cystectomy with mucosal stripping without duodenotomy using laparoscopic devices. However, in children, the small abdominal cavity and relatively small organs are limitations to laparoscopic approaches. Compared with conventional open surgery, laparoscopic surgery is less invasive and offers cosmetic advantages if it is performed by an experienced surgeon with the proper equipment.
Endoscopic therapy for duodenal duplication has been suggested recently as a minimally invasive treatment. However, the endoscopic approach has limitations for extraluminal cysts. Endoscopic internal derivation cannot remove the mucosal layer, where malignancy mainly occurs; therefore, we propose that laparoscopic surgery is a safer method, especially for cases with extraluminal locations [2,9,10].
In summary, although duodenal duplication cysts are rare, they should be considered in the differential diagnosis of patients who present with abdominal symptoms and cystic structures neighboring the duodenum. Laparoscopic partial cystectomy with mucosal stripping can be considered as a treatment option for duodenal duplication cysts, even in children.
Figure 3: Laparoscopic procedures and intraoperative findings. A: Laparoscopic intraoperative findings of the duodenal duplication cyst after Kocher's maneuver; duplication cyst (white arrowheads) and mesenteric side of the posterior wall of the duodenum (black arrowheads). B: Demarcation of the mass surface after adhesiolysis; an arterial branch from the gastroduodenal artery supplying the mass was also noted. C: Resection line with harmonic scalpel (white line).
"year": 2014,
"sha1": "0a13c8761ae145393ef17a15c948423e175c9edd",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v20.i4.1123",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "50cbe3151739948684a6dff6e1c99395bc8b8191",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
A Brief Educational Intervention Enhances Basic Cancer Literacy Among Kentucky Middle and High School Students
Kentucky experiences the highest overall cancer incidence and mortality rates in the USA with the greatest burden in the eastern, Appalachian region of the state. Cancer disparities in Kentucky are driven in part by poor health behaviors, poverty, lack of health care access, low education levels, and low health literacy. Individuals with inadequate health literacy are less likely to participate in preventive measures such as obtaining screenings and making healthy lifestyle choices, thus increasing their chances of developing and dying from cancer. By increasing cancer literacy among youth and adults, it may be possible to decrease cancer disparities across Kentucky. This study aimed to establish connections with middle and high schools in Kentucky that would facilitate pilot implementation of a brief cancer education intervention and assessment of cancer health literacy among these student populations. A baseline pretest cancer literacy survey consisting of 10 items was given to 349 participants, followed by the delivery of a cancer education presentation. Immediately following the presentation, participants were given a posttest with identical items to the pretest. Participants were primarily Caucasian (89.4%), female (68.7%), and in 10th through 12th grade (80.5%). Significant (p < 0.0001) increases in both average and median percent of correctly marked items were observed between the pretest and posttest (average, pretest = 56% versus posttest = 85%; median, pretest = 60% versus posttest = 90%). The scores for all individual items increased after the brief intervention. The results demonstrated a significant increase in cancer literacy levels immediately after the pilot educational intervention. We suggest that it may be possible to improve cancer literacy rates in Kentucky by integrating cancer education into middle and high school science and/or health education curricula. This could ultimately drive changes in behaviors that may help lower cancer incidence and mortality rates. Plans for future interventional studies measuring long-term cancer knowledge retention and resultant behavioral changes among middle and high school students as well as the feasibility of integrating cancer education into middle and high school curricula are also discussed. Electronic supplementary material The online version of this article (10.1007/s13187-020-01696-3) contains supplementary material, which is available to authorized users.
Introduction
Cancer is a leading public health problem in the United States (U.S.); there are over 1.7 million new cases each year and over 600,000 cancer deaths [1]. Although cancer is widespread and generally non-discriminatory, disparities in incidence and mortality exist across varying population groups, including residents of specific geographic regions. Notably, Kentucky ranks first in the nation in overall cancer incidence and mortality, experiencing over 26,000 new cancer cases and over 10,000 cancer-related deaths each year [2]. Rural eastern Kentucky residents face some of the highest cancer incidence and mortality rates in the country [3,4]. Residents of rural counties in Kentucky, specifically the Appalachian region, are 8% more likely to die from a preventable or screenable malignancy [5].
Cancer disparities in Kentucky are attributed to different factors, including elevated rates of inadequate exercise, poor diet, and smoking [4,6-8]. Additionally, when compared to the national average, a higher percentage of Kentucky residents are at or below the federal poverty line, which greatly limits their access to health care [6]. The mountainous terrain of rural, eastern Kentucky and the region's geographic isolation can make travel to the nearest preventive care facility, which may be several hours away, difficult [7]. Some residents may not have the time or the financial security to take a leave of absence from work to receive screenings or treatments in a facility far from home. Kentucky also struggles with low education levels, ranking 47th in the U.S. for educational attainment, which serves as a barrier to health literacy [9].
Health literacy, the ability to understand health care information in order to make appropriate health decisions, is essential to taking the necessary precautions to protect oneself from health issues, yet one in three U.S. adults has limited health literacy [10-12]. Health literacy includes a general knowledge of the mechanisms of disease, possible treatments, and preventive measures. Health literacy has three dimensions [13]. The first, functional literacy, is measured based on an individual's reading and writing skills that enable them to comprehend health information, such as basic facts on the biology of cancer. This is a surface-level understanding of health literacy, as it does not take patient behavior into account. The second, interactive health literacy, concerns how an individual is able to take an active role regarding their own health. Finally, critical health literacy is an individual's ability to accept health-related advice and make appropriate decisions [13]. When considering cancer literacy in particular, it is important to take each dimension into account, as the ability of a patient to engage in proper screenings and treatment extends past functional health literacy [13]. Patients with low health literacy may have lower participation in cancer prevention activities, which may result in lower levels of cancer treatment and increased risk [14]. The desired outcome of increased cancer-related health literacy is that morbidity and mortality rates would decrease as patients begin to participate in preventive cancer behaviors [13].
Youth represent both a vulnerable population that is at risk of beginning harmful activities that can increase cancer risk (e.g., smoking, tanning) and a population that may be more amenable to cancer prevention and control interventions. These interventions include those associated with improving cancer literacy, which have the potential to lower cancer incidence and mortality rates [15-17]. With this in mind, the purpose of this pilot study was to establish connections with middle and high schools in Kentucky that would allow for the assessment of aspects of basic, functional cancer literacy in students prior to and immediately after participation in a brief cancer education intervention. Increasing cancer literacy among Kentucky's youth could be an important long-term strategy for reducing cancer rates in the state.
Methods
This pilot cancer literacy intervention occurred in participants' schools during normal school hours, typically during a regularly scheduled science or health class. The target population was middle and high school students. Participants were recruited from four high schools and one middle school in Kentucky that chose to participate in the intervention; three of the high schools were located in the rural, Appalachian area of the state and the remaining schools were located in urban, central Kentucky (Fig. 1). Engagement with each school occurred through initial communication with individual science or health teachers or with school guidance counselors. The schools and participants were anonymized. General demographics, including gender, race, ethnicity, and grade level, were collected from each participant. All pilot study procedures were approved by the University of Kentucky Institutional Review Board (Protocol 44637). Parental consent was waived. Student assent was obtained through engagement with the questionnaire after participants were informed of the study aims and methods and were assured that their identities would be anonymized.
Participants completed a paper-based demographic questionnaire and a 10-item pretest survey, observed a 30- to 45-min PowerPoint presentation (given by NLV), and then completed a 10-item posttest survey, identical to the pretest, immediately following the intervention. Participants had access to both the pre- and posttests during the duration of the intervention, and both tests were collected together following completion of the posttest. Because the intervention was given within a classroom setting, all students in attendance participated in the assessments and educational presentation. Given that all students who were present participated, the overall pre- and posttest response rate was 100%, but not every participant answered each question, as they could skip questions.
The presentation topics included basic cancer biology principles, cancer risk factors, cancer statistics in the U.S. and Kentucky, and modifiable behaviors that can reduce the risk of cancer. The survey items were developed to test participants' understanding of these topics; three of the questions (3, 6, and 7) were adopted from a previous study [18]. The demographic questionnaire and the pre/posttest are provided as supplemental material (Appendix 1).
One-way frequencies for all respondents were calculated for the demographic variables. The overall sample average and median percent of correctly marked items for both the pretest and posttest were calculated. A paired t test was used to test the null hypothesis that the difference between the average of the percent of correctly marked pretest and posttest items was equal to 0. The percent of correctly marked items was calculated for the entire sample and for demographic subgroups with similar hypothesis testing, along with 95% confidence intervals. Statistical analyses were performed in SAS 9.4 (Cary, NC).
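The paired analysis can be illustrated outside SAS; the Python sketch below, with fabricated example scores, shows one way to obtain the paired t test and a 95% confidence interval for the mean gain (SciPy's `ttest_rel` stands in for the SAS procedure actually used).

```python
# Fabricated pre/post scores (% correct) for illustration; not study data.
import numpy as np
from scipy import stats

pre = np.array([60, 50, 70, 40, 60, 80, 50, 60])
post = np.array([90, 70, 100, 80, 90, 100, 70, 90])

# Paired t test of H0: mean paired difference equals 0.
t_stat, p_value = stats.ttest_rel(post, pre)

# 95% confidence interval for the mean pre-to-post gain.
diff = post - pre
ci_low, ci_high = stats.t.interval(0.95, df=len(diff) - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))
print(f"t = {t_stat:.2f}, p = {p_value:.4g}, "
      f"95% CI for mean gain: [{ci_low:.1f}, {ci_high:.1f}]")
```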
Results
Participants (N = 349) were predominantly Caucasian (89.4%) and not of Hispanic or Latino descent (91.3%); these demographics closely match the overall demographics of Kentucky [19]. Over two-thirds of the participants were female (68.7%) and the majority (80.5%) were in 10th, 11th, or 12th grade (Table 1). The average percent of correctly marked items increased from 56% (95% confidence interval [CI], 51%, 61%) on the pretest to 85% (95% CI, 81%, 89%) on the posttest; median scores increased from 60% on the pretest to 90% on the posttest (Fig. 2). We observed a significant increase in the average percent of correctly marked items and percent responsiveness for each item from pretest to posttest. Item one ("What is cancer?") had the lowest percent responsiveness (4.9%), indicating that the majority of participants were aware of this concept before the intervention. Item 9 ("How does Kentucky compare to other states in cancer rates?") had the highest percent responsiveness (75.7%), indicating that participants were not aware of this concept before the intervention (Table 2). Items 1 and 5 were answered correctly by greater than 80% of participants on both the pretest and posttest, while items 1, 4, 5, 6, 7, and 8 were answered correctly by greater than 70% of participants on both the pre- and posttest, suggesting ceiling effects for these items (Table 2). There was a statistically significant (p < 0.0001) increase in the overall pretest versus posttest average and percent responsiveness scores for each school, gender, and grade level (Table 3).
Fig. 2: Overall pretest versus posttest scores on a 10-item cancer literacy survey. Participants (N = 349) were given a 10-item pretest before attending a 30- to 45-min cancer education presentation; afterwards, participants completed a 10-item posttest identical to the pretest. The percent of items correctly answered was plotted.
This pilot study established connections with schools, which allowed for an examination of the effects of a brief cancerrelated educational intervention on cancer literacy levels among middle and high school students in Kentucky. The students were enrolled in schools that are geographically located in urban central and rural eastern Kentucky. There was a significant increase in the overall test scores following the pilot intervention. All items were responsive; there was a significant increase in individual test scores following the brief intervention. This indicates that participants' cancer literacy increased, although the responsiveness was greater for some items. These data suggest that a brief educational intervention about cancer can increase middle and high school participants' basic literacy of the disease.
Other studies have also demonstrated increases in cancer literacy knowledge levels as a result of educational interventions. A 2015 study of Mexican students' knowledge of cervical and breast cancer used an educational strategy to increase clinical-focused cancer literacy; the results demonstrated a 21.2% increase in correct responses from pretest to posttest [20]. A 2018 study measuring health literacy in the context of cervical cancer screening in Japanese women found that an educational intervention increased health knowledge of the adult participants [21]. These studies point to the possibility of self-care improvements, including behavior changes that can lower cancer risk and increase how often patients seek care, alongside improved knowledge of a particular disease [19,21]. The pilot intervention herein has similar potential.
This exploratory study should be interpreted cautiously and in context with its limitations. First, as a cross-sectional study of a convenience sample, the results may not be generalizable. It is difficult to know whether the results may be representative of all students in Kentucky or more broadly representative of students within the greater U.S., and, likewise, it is not clear whether these results could be generalized to adult populations. Second, participants had access to the pre-and posttest during the duration of the intervention and such access could have influenced their performance on the posttest. Third, the design of the study makes it difficult to determine the long-term educational effects of the intervention. Because the posttest was administered immediately after the intervention, it is impossible to discern from this pilot study whether the students retained the material or simply recalled it from their short-term memory. Lastly, several items were answered correctly by > 70% of the sample population on the pretest, suggesting a ceiling effect for these items, which limits the data range/variability. Although several items from our survey were validated in a previous study [18], the validity and reliability of our survey has not been confirmed. Despite the study's limitations, this pilot work provides preliminary evidence that cancer literacy among youth may significantly increase even with a brief educational intervention. Future studies will need to determine whether students retain the knowledge they obtain from any cancer education they receive. Based on the successful connections established with the five schools enrolled in this study, we have established connections with additional schools in Kentucky. Work is now underway to measure cancer knowledge retention several months after the brief intervention developed herein. We are also integrating additional measures that will determine whether participants change any behaviors over time as the result of the intervention. Lastly, we are also collecting data to understand the feasibility of incorporating cancer topics into science and/or health curricula at the newly participating schools.
Conclusion
Cancer rates in Kentucky are elevated compared to general rates in the U.S. The use of educational interventions, especially among youth, could help increase cancer literacy. Such interventions can help students understand the basics of cancer, which could aid decision-making around modifiable cancer risk factors and health-seeking behaviors. As such, we recommend that school systems integrate evidence-based cancer education modules into their science or health education curricula.
"year": 2020,
"sha1": "a60bcaaf4c54b1c811d7550ebe89f92836af23ef",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13187-020-01696-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "a8bee4bdfe4d746cb9dc46131247bb2aef8ba719",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Antimicrobial resistance and mechanisms of epigenetic regulation
The rampant use of antibiotics in animal husbandry, farming, and clinical disease treatment has led to a significant problem of pathogen resistance worldwide over the past decades. Classical studies of resistance typically investigate antimicrobial resistance resulting from natural resistance, mutation, gene transfer, and other processes. However, the emergence and development of bacterial resistance cannot be fully explained from a genetic and biochemical standpoint. Evolution necessitates phenotypic variation, selection, and inheritance. There are indications that epigenetic modifications also play a role in antimicrobial resistance. This review focuses on the effects of DNA modification, histone modification, rRNA methylation, and the regulation of non-coding RNA expression on antimicrobial resistance. In particular, we highlight critical work showing how DNA methyltransferases and non-coding RNAs act as transcriptional regulators that allow bacteria to rapidly adapt to environmental changes and control their gene expression to resist antibiotic stress. We also discuss how nucleoid-associated proteins in bacteria perform histone-like functions akin to those of eukaryotes. Epigenetics, a non-classical regulatory mechanism of bacterial resistance, may offer new avenues for antibiotic target selection and the development of novel antibiotics.
Introduction
The discovery and widespread use of antibiotics have greatly advanced modern medicine, significantly improving the treatment of bacterial infections. However, long-term exposure to antibiotics poses a serious risk of antimicrobial resistance (AMR), whereby pathogenic microorganisms become resistant to the drugs. The emergence of AMR is a growing concern, particularly with the increasing detection of clinically resistant bacteria. According to the 2019 U.S. Antibiotic Resistance Threats Report, antibiotic-resistant bacteria and fungi are responsible for over 2.8 million infections and 35,000 deaths annually in the USA alone (Centers for Disease Control and Prevention, 2019). Furthermore, predictive statistical models from the Institute for Health Metrics and Evaluation at the University of Washington estimate that there may have been 4.95 million deaths worldwide in 2019 due to AMR (Antimicrobial Resistance Collaborators, 2022). Clearly, AMR has become a critical threat to global public health security, compounded by the onset of the post-antibiotic age and the inappropriate use of antibiotics.
Despite more than 80 years of antibiotic use, bacteria have evolved AMR mechanisms over billions of years that allow them to escape the impact of antibiotics (Hall and Barlow, 2004). The classical AMR mechanisms include chromosomal resistance, changes in cell membrane permeability, enzyme production, target modification or mutation, changes in active efflux pump systems, and horizontal or vertical transfer of AMR genes (Figure 1) (Cox and Wright, 2013; Yelin and Kishony, 2018). These mechanisms primarily involve well-documented biochemical mechanisms and gene alterations, which are diverse, specific, and heritable. However, in addition to genome changes, environmental factors and genetic context also impact the development of AMR. Antibiotics can have multiple activities, including as an inducer of resistance, an inducer of the dissemination of resistance determinants, and an antibacterial agent (Depardieu et al., 2007). Studies demonstrate that antibiotics can induce epigenetic changes underlying bacterial resistance, indicating a role for epigenetics (Motta et al., 2015). While much research has focused on classical AMR mechanisms, these mechanisms fall short in explaining the emergence and spread of drug resistance in the face of factors such as bacterial adaptive evolution, heterogeneity, and late retention (Depardieu et al., 2007; Zhang, 2014; Becker et al., 2018; Nolivos et al., 2019; Lv et al., 2021; Yuan et al., 2022). Therefore, epigenetics may provide useful answers to these questions.
There has been growing interest in non-classical models of epigenetic-mediated bacterial AMR in recent years. In this review, we will explore the latest research on AMR in the field of epigenetics, with a focus on how epigenetic regulation influences the emergence of AMR, as well as how epigenetic regulators can reverse epigenetic phenomena and eliminate AMR. This is critical for understanding the mechanisms of AMR and for developing the potential of epigenetic regulators as direct or indirect targets for new drug therapies.
What is epigenetics?
Epigenetics refers to the study of heritable phenotypic changes in an organism that are caused by environmental factors and genetic context, without any alteration to the DNA sequence. Epigenetic research is broadly divided into two categories (Willbanks et al., 2016): (1) regulation of selective gene transcription, which includes DNA methylation, histone modification, chromatin remodeling, and DNA phosphorothioation; (2) post-transcriptional gene regulation, which includes regulation by non-coding RNAs (ncRNAs), RNA modification, and nucleosome positioning.
Figure 1: Mechanisms of antimicrobial resistance and its transmission. Transformation is the intra- and inter-species exchange of naked DNA released by cell lysis or of gene sequences actively effluxed by some bacteria. Conjugation is the direct transfer of DNA molecules (such as plasmids) from donor bacteria to recipient bacteria through the pipeline formed by the sex pilus. Transduction is the transfer of DNA from donor bacteria to recipient bacteria by bacteriophages (Depardieu et al., 2007; Zhang, 2014; Yuan et al., 2022). Membrane vesicle fusion means that secreted vesicles, which contain nucleic acids, enzymes, drug resistance genes, and other substances, can enter other bacteria or host cells through direct fusion with the host cell membrane or by endocytosis (Depardieu et al., 2007; Zhang, 2014; Yuan et al., 2022). The outer membrane porin mediates the entry and exit of antibiotics into and out of bacteria as a permeability barrier; when the porin is missing or reduced, some antibiotics show reduced influx and the host bacteria become resistant. The production of antibiotic hydrolases, inactivating enzymes, and modifying enzymes can lead to the inactivation of antibiotics. The mutation or modification of related targets makes it impossible for antibiotics to bind to the corresponding sites to exert a bactericidal or bacteriostatic effect (Depardieu et al., 2007; Zhang, 2014; Yuan et al., 2022).
Prokaryotes have a circular, double-stranded DNA chromosome without histones, which distinguishes them from eukaryotes and archaea. This lack of key elements that can modify DNA structure, such as histones and nucleosomes, makes the epigenetic regulation of prokaryotes relatively simple.
2.1 DNA modification
2.1.1 DNA methylation
In contrast to eukaryotes, bacteria lack a complete nucleus, which initially led to the theory that DNA methylation was the only type of bacterial epigenetic mechanism (Ghosh et al., 2020). Bacterial DNA methylation has been extensively studied over the past half century, revealing its involvement in chromosome replication, DNA degradation, mismatch repair, gene expression regulation, and other important physiological activities (Table 1) (Heusipp et al., 2007; Muhammad et al., 2022). Bacteria have three major forms of DNA methylation: 5-methylcytosine (m5C), N6-methyladenosine (m6A), and N4-methylcytosine (m4C). DNA methyltransferases (MTases) add methyl groups to specific DNA positions, such as the C5 or N4 position of cytosine and the N6 position of adenine (Figure 2) (Dunn and Smith, 1955; Holliday and Pugh, 1975). The most commonly known DNA MTases are associated with the restriction-modification (R-M) system, which is a widely known defense mechanism in bacteria. While m5C and m6A are found in most bacteria, m4C is specific to bacteria and archaea (Sańchez and Casadesuś, 2020).
DNA phosphorothioation
In addition to DNA methylation, DNA modifications also include DNA phosphorothioation (PT), a lesser-known defence system that works in a way similar to that of the R-M system (Table 1) (Wang et al., 2019; Muhammad et al., 2022). In PT modification, the nonbridging oxygen in the phosphate moiety of the DNA sugar-phosphate backbone is replaced by sulfur; such modifications were originally produced by chemical synthesis for decades (Tong et al., 2018). However, research has shown that PT modification can occur naturally in bacteria (Zou et al., 2018). It has been reported that the DNA PT system consists of two parts: a five-gene dndABCDE cluster functions as the M component to control DNA modification in a stereo- and sequence-selective manner, whereas products of the dndFGH cluster function as the R component to distinguish and restrict non-PT-protected foreign DNA (Tong et al., 2018). Among them, DndA possesses cysteine desulfurase activity and assembles DndC in bacteria. IscS (a DndA homolog) can perform the same function as DndA in collaborating with DndBCDE to generate DNA PT modification. DndB can bind to the promoter region of the dnd operon to regulate the transcription of dnd genes. dndCDE function as modification genes: DndC is an iron-sulfur cluster protein with ATP pyrophosphatase activity; DndD has ATPase activity and possibly provides energy for PT modification; and DndE is involved in binding nicked dsDNA (Hu et al., 2012; Wang et al., 2019). The defence mechanism of PT modification has been broadly outlined (Figure 3). First, DndB functions as a regulator responding to environmental or cellular cues and binds to the promoter region of the dnd operon. DndA/IscS, DndC, DndD, and DndE form a protein complex. Under the action of DndA/IscS, L-cysteine is used as a substrate to generate a persulphide group. The sulphur is then transferred to the DndACDE complex to complete the DNA PT modification (Wang et al., 2019). DNA PT modification has been reported in many bacteria. Besides functioning in a similar way to the R-M system, DNA PT modification also plays important roles in antioxidant defenses, maintenance of cellular redox homeostasis, environmental stress resistance, antibiotic resistance, and cross talk with DNA methylation (Xie et al., 2012; Gan et al., 2014; Wu et al., 2017; Wu et al., 2020; Xu et al., 2023).
Histone modification
Histone modification is another significant epigenetic mechanism that plays an important role in regulating gene expression. The German scientist Kossel discovered histones in the nucleus in 1884, but it was not until the 1960s that their biological significance began to be investigated in depth (Doenecke and Karlson, 1984; Verdin and Ott, 2015). Histones are structural proteins that make up eukaryotic nucleosomes, which are essential for maintaining chromosomal structure and for the negative regulation of gene expression (Muhammad et al., 2020). Histone modification can involve methylation, acetylation, phosphorylation, and ubiquitination, each of which performs different functions (Zhang et al., 2020). Notably, bacterial genomes are packed into nucleoids through nucleoid-associated proteins (NAPs) in distinct cytoplasmic regions, rather than having a membrane-bound nucleus as in eukaryotic cells (Muhammad et al., 2022).
Mounting evidence supports the idea that NAPs play crucial roles in DNA structuring and can perform functions similar to those of eukaryotic histones (Swinger and Rice, 2007; Stojkova et al., 2019; Amemiya et al., 2021). These structural proteins have important regulatory functions, including in bacterial virulence and pathogenesis (Table 2). NAPs form numerous aggregated structures with bacterial genomic DNA and participate in processes such as replication, separation, translation, and repair of prokaryotic genomic DNA. Among the primary NAPs studied are the histone-like protein (HU), the leucine-responsive regulatory protein (Lrp), the virulence factor transcriptional regulator MgaSpn, and the histone-like nucleoid-structuring protein (H-NS) (Casadesús and Low, 2006; Xiao et al., 2021; Ziegler and Freddolino, 2021; Ramamurthy et al., 2022; Stojkova and Spidlova, 2022).
Figure 2: Positions of DNA methylation. Adenine can be methylated at N6. Cytosine can be methylated at either the endocyclic (C5) or the exocyclic (N4) position (Kumar et al., 2018).
Figure 3: Simple diagram of DNA phosphorothioation modification. Building on the R-M system, DNA PT modification recognizes and restricts non-PT-protected foreign DNA, such as plasmids. Sulfur is transferred from L-cysteine to DndA, then to the cysteine residues in DndC, and, through the DndDE complex, is inserted into the DNA backbone (Tang et al., 2022). DndB functions as a negative regulator controlling the expression of dndCDE. DndFGH functions as a restriction module affecting the acquisition of exogenous DNA (Wang et al., 2019).
RNA modification
RNA modification is an emerging area of research that has gained significant attention in recent years and is conceptually analogous to the modification of DNA and proteins. Along with DNA methylation, RNA modification is widely found in both bacteria and eukaryotes, and over 100 types of RNA modifications have been identified, including m6A, N1-methyladenosine (m1A), m5C, and 2-methylthiocytidine (ms2C) (Lopez et al., 2020). These modifications have been shown to play a critical role in regulating RNA stability, localization, transport, splicing, and translation, ultimately affecting gene regulation and biological function (Shi et al., 2019). RNA modifications are distributed across various RNA molecules, including transfer RNA (tRNA), messenger RNA (mRNA), ribosomal RNA (rRNA), and other small RNA species such as ncRNAs. RNA modification is most commonly found in tRNA (Jackman and Alfonzo, 2013). Although not as common as in tRNA, rRNA contains numerous distinct types of post-transcriptional modifications, especially rRNA methylation. Research has shown that rRNA methylation can impact the development of antibiotic resistance, as many antibiotic targets are located on the ribosome, and ncRNAs frequently adopt central roles in regulatory networks (Laughlin et al., 2022; Wang et al., 2022; Papenfort and Melamed, 2023). Among these, RNA methylation and ncRNA-mediated regulation have been reported as the most frequent types of modification across a wide range of bacteria (Table 1). In this section, we discuss research on rRNA methylation and ncRNAs in bacterial resistance.
Ribosomal RNA methylation
rRNA, a conserved macromolecule, is the structural core of the most abundant cellular machine, the ribosome. In bacteria, ribosomes are composed of 16S, 23S, and 5S rRNA and proteins; in eukaryotic cells, they are composed of 28S, 5.8S, 5S, and 18S rRNA and proteins. Within the ribosome, rRNA is the main structural component and the core of both structure and function: (1) it joins amino acids into peptide chains under the guidance of mRNA; (2) it provides binding sites for a variety of protein factors; (3) it possesses peptidyl transferase activity; (4) it provides binding sites for tRNA; and (5) it serves as the target of some antibiotics (Korobeinikova et al., 2012; Tafforeau, 2015; Srinivas et al., 2023). These functions are under tight transcriptional control to meet cellular needs. Accordingly, rRNA from all organisms undergoes post-transcriptional modifications that increase the diversity of its composition and activity.
Methylation of rRNA is a ubiquitous feature and takes place during ribosomal biogenesis, carried out either by enzymes guided by antisense small nucleolar RNAs (snoRNAs) or by conventional protein enzymes (Lopez et al., 2020). Generally, rRNA methylation may promote the conformational rearrangement of rRNA and regulate ribosome biogenesis and post-transcriptional modification. Twenty-five rRNA modifications, including 13 methylations, have been found in the 23S rRNA of Escherichia coli (E. coli) (Sergeeva et al., 2015). Wang et al. found that the absence of a single methylation in 23S rRNA affected 50S assembly and impaired translation initiation and elongation. In addition, rRNA methylation has emerged as a significant mechanism of AMR in pathogenic bacterial infections, such as aminoglycoside and macrolide resistance (Bhujbalrao et al., 2022; Srinivas et al., 2023).
TABLE 2 Regulatory functions of NAPs in bacterial virulence and pathogenesis.
Bacterial species | NAPs | Functions | References
Francisella tularensis | HU | Regulates the adaptive growth of bacteria and resistance to oxidative stress | (Stojkova et al., 2018; Pavlik and Spidlova, 2022)
Streptococcus pneumoniae | Fis | Regulates the supercoiling response during growth in macrophages and virulence | (C et al., 2006)
Non-coding RNAs
Post-transcriptional gene regulation, which includes ncRNAs, is another important epigenetic modification. There are various types of ncRNAs, including housekeeping ncRNAs such as tRNA and rRNA, and regulatory ncRNAs such as microRNA (miRNA) and long non-coding RNA (lncRNA) (Gusic and Prokisch, 2020). These RNAs play significant roles in transcription and translation, and in eukaryotes they are involved in regulatory processes such as development, cell death, and chromosomal silencing. Three regulatory RNAs, E. coli 6S RNA, Spot 42, and the eukaryotic 7SK RNA, were first discovered by sequencing in the 1970s but remained uncharacterized until decades later (Griffin, 1971; Delihas, 2015). In the 1980s, the E. coli micF RNA became the first regulatory RNA to be both discovered and characterized. Recent research has shown that ncRNAs regulate various cellular processes in bacteria, including multidrug resistance, glucose metabolism, and biofilm formation (Hirakawa et al., 2003; Vanderpool and Gottesman, 2004; Zhao et al., 2022). As a result, the regulatory role of ncRNAs has become a major focus in the study of the bacterial regulatory network.
Bacterial epigenetics mediating antibiotic resistance
Bacteria have evolved to adapt to their environment over time, leading to increased antimicrobial resistance (AMR) or tolerance upon long-term exposure to antibiotics. Interestingly, bacteria can quickly regain susceptibility once antibiotic exposure returns to normal (Figure 4). It is evident that gene mutations alone cannot adequately explain this phenomenon.
Recent research has shown that bacteria can change AMR phenotypes transiently, through intrinsic epigenetic heterogeneity, without the need for gene mutations (Foster, 2007; Adam et al., 2008). To adapt to environmental stress and ensure survival, bacteria such as Helicobacter pylori (H. pylori), Haemophilus influenzae (H. influenzae), and Neisseria gonorrhoeae (N. gonorrhoeae) have evolved molecular mechanisms for generating variation (De et al., 2002; Srikhanta et al., 2009; Srikhanta et al., 2011). One such mechanism is phase variation, the random switching of expression of individual genes to generate a phenotypically diverse population able to adapt to challenges (Seib et al., 2020). Genes can phase-vary through a variety of genetic mechanisms. Some studies describe phase variation as high-frequency, reversible on/off switching of gene expression that allows bacteria to evade antibiotic effects (Srikhanta et al., 2011). One reported way in which bacteria modulate phase-variable genes is through DNA hypermethylation or hypomethylation. In addition, variation in the length of hypermutable simple sequence repeats (SSRs) is an important source of phase variation, facilitating adaptation to changing environments as well as immune and antibiotic escape by pathogens (Zhou et al., 2014; Pernitzsch et al., 2021). Recent studies have found that the ncRNA RepG (regulator of SSRs) acts through G-repeat length (rather than ON/OFF switching) to exert gradual control over lipopolysaccharide biosynthesis and thereby affect AMR in H. pylori (Pernitzsch et al., 2021). Phenotypic variation, selection, and inheritance are therefore necessary for bacterial evolution. In this chapter, we summarize studies discussing the role of epigenetics in regulating AMR.
DNA methylation
Bacterial DNA methylation plays a vital role in epigenetic regulation by controlling gene expression, genome modification, virulence, mismatch repair, transcriptional regulation, cell cycle control, and AMR (Marinus and Casadesus, 2009). The most well-known DNA MTases are associated with the bacterial defense mechanisms known as restriction-modification systems (R-M systems).
FIGURE 4 Epigenetic effects on adaptive resistance. When bacteria are continuously exposed to sub-inhibitory concentrations of antibiotics, they undergo adaptive evolution and gradually acquire resistance, which can be inherited. When antibiotics are withdrawn, bacteria with an adaptive resistance phenotype immediately return to sensitivity (Marinus and Casadesus, 2009; Ghosh et al., 2020). Persister bacteria are only a small, stunted or slow-growing part of the bacterial community, and they can survive antibiotic pressure without mutation (Marinus and Casadesus, 2009). These observations indicate that bacterial adaptive resistance is epigenetically regulated.
R-M systems prevent lethal cleavage of the cell's own DNA by recognizing self DNA and methylating the same sequences targeted by the restriction endonuclease (Ghosh et al., 2020). Foreign DNA, such as plasmids carrying AMR genes, transposons, and insertion sequences, is not methylated and will be recognized and degraded by the endonucleases of the R-M systems. This defense mechanism can be circumvented if the foreign DNA carries a homologous methylase with the same specificity, in which case the sequence is inserted into the genomic locus rather than degraded (Casadesús and Low, 2006; Ishikawa et al., 2010). This mechanism could explain why plasmids, phages, transposons, integrons, and gene islands can insert into bacterial genomes and contribute to the widespread dissemination of AMR genes.
R-M systems are classified into four types (I, II, III, and IV) based on the functional organization of the restriction endonuclease (REase), the activity of the MTase, and the requirement for specific subunits or cofactors (Roberts et al., 2003). R-M systems have been reported to function as a barrier to horizontal gene transfer in many bacteria (Figure 5) (Vasu and Nagaraja, 2013; Kumar et al., 2018). Li et al. identified a carbapenem-resistant hypervirulent Klebsiella pneumoniae (K. pneumoniae) strain from a patient carrying a blaKPC-harbouring conjugative plasmid and a pLVPK-like plasmid, and found that the type I R-M system on the plasmids protected them from cleavage. Bubendorfer et al. concluded that R-M systems inhibit the genomic integration of exogenous sequences while having no effect on homeologous recombination in H. pylori (Bubendorfer et al., 2016).
Type I and III R-M systems include genes encoding the DNA MTase Mod. Many studies have described that mod gene-mediated DNA methylation can regulate phase-variable expression in various resistant clinical strains (Table 3) (Phillips et al., 2019). For instance, the ability of N. gonorrhoeae to form biofilms is affected by ON/OFF switching of the modA13 allele (Srikhanta et al., 2009), and the susceptibility of Neisseria meningitidis to ceftazidime and ciprofloxacin results from ON/OFF switching of modA11 and modA12 (Jen et al., 2014). A typical H. influenzae strain expressing the ModA2 MTase produces more biofilm in an alkaline environment than modA2-deficient populations, and these biofilms have a larger biomass and a less defined structure (Brockman et al., 2018). Bacterial biofilms and AMR are closely connected. Biofilms are organized multicellular communities surrounded by extracellular polymeric substances; they decrease bacterial metabolism and growth rate and limit antibiotic penetration, all of which contribute to biofilm resistance (Davies, 2003). Similarly, in Streptococcus suis, Tram et al. found that biphasic switching of the phase-variable DNA MTase ModS2 results in the expression of distinct phasevarions: proteins involved in general metabolism showed increased expression in ModS2 ON strains, whereas a glyoxalase/bleomycin resistance/extradiol dioxygenase family protein, which has been described as involved in resistance to beta-lactam and glycopeptide antibiotics, was upregulated in strains not expressing ModS2 (Tram et al., 2021).
FIGURE 5 Overview of the functions of bacterial DNA methylation. R-M systems act as a barrier that recognizes the host genome and defends against foreign DNA such as phages and plasmids (Phillips et al., 2019). Unlike R-M systems, orphan methyltransferases are not associated with any restriction enzyme and typically function as regulators of DNA replication and gene transfer; notably, some orphan methyltransferases are not essential for most bacteria (Srikhanta et al., 2009). The FinOP system regulates the conjugal transfer (tra) operon of plasmids. Specifically, traJ activates transcription of the tra operon, which encodes the pilus components and products required for mating and DNA transfer. Synthesis of TraJ is controlled by FinP, a regulator that blocks traJ mRNA translation, and by FinO, a regulator that maintains the stability of the FinP RNA-traJ mRNA complex (Jen et al., 2014). Dam methylation acts as a conjugation repressor by activating FinP RNA synthesis. During bacterial cell division, the essential FtsZ protein polymerizes into a Z-ring-like structure at the future division site (Brockman et al., 2018). The MipZ protein, which coordinates the initiation of chromosome replication with cell division, is important for assembly of the Z-ring. MipZ interacts with the partitioning protein ParB, which binds the parS locus near the chromosomal origin (Davies, 2003). CcrM methylation activates the transcription of ftsZ and mipZ. In the absence of the CcrM enzyme, synthesis of the FtsZ and MipZ proteins is strongly downregulated, leading to a severe defect in cell division. In a Caulobacter crescentus ΔccrM strain, most cells are filamentous with highly variable cell length and frequent membrane defects (Davies, 2003).
In addition to the well-known R-M systems, there exists a group of bacterial DNA MTases called orphan MTases, which function independently of any R-M system (Ishikawa et al., 2010). Orphan MTases are unique in that they have no functional counterparts in the restriction endonuclease (REase) family. The common categories of orphan MTases include DNA adenine methyltransferase (Dam), cell cycle-regulated methyltransferase (CcrM), and DNA cytosine methyltransferase (Dcm). Bacteria exhibit complex stress responses when exposed to antibiotics, leading to the phenomenon of adaptive resistance. Recent research has revealed that these three orphan MTases play a crucial role in regulating adaptive resistance and the genetic pathways involved in drug sensitivity.
DNA adenine methyltransferase
Dam was the first orphan MTase identified in E. coli, where it modifies 5′-GATC-3′ sites (Marinus and Morris, 1973). Studies have shown that Dam-mediated DNA methylation is crucial for bacterial survival under antibiotic stress, and E. coli K12 Δdam strains exhibit increased sensitivity to beta-lactams and quinolones (Cohen et al., 2016). Epigenetic factors, such as Dam methylation or the regulation of efflux pump expression, have been suggested to contribute to adaptive AMR (Mazzariol et al., 2000; Casadesús and Low, 2006; Adam et al., 2008). Adam et al. treated E. coli XL1-Blue strains with nalidixic acid and found that dam expression increased bacterial survival approximately five-fold; this increased resistance was accompanied by a two-fold rise in the expression of efflux pumps (Adam et al., 2008). Recent research has confirmed that the non-essential dam gene can be a potential target for enhancing antibiotic susceptibility. Chen et al. demonstrated that a dam deletion strain of E. coli MG1655 exhibited lower half-maximal effective concentrations (EC50) than the wild-type strain when exposed to 20 antibiotics across five categories (Chen and Wang, 2021). This confirms that Dam plays a vital role in regulating drug sensitivity and can be exploited as a target for combating AMR. Dam in Salmonella enteritidis (S. enteritidis) has been found to repress the transcription of traJ, which encodes a transcriptional activator of the transfer (tra) operon of the virulence plasmid pSLT (Camacho and Casadesús, 2002). In addition, Dam activates the transcription of finP, which encodes an ncRNA that contributes to the repression of traJ expression (Gorrell and Kwok, 2017). There is also evidence that, in a strain with chromosomal mechanisms of quinolone resistance, a synergistic sensitization effect is observed when the Dam methylation system and the recA gene are suppressed (Diaz et al., 2023).
Cell cycle regulated methyltransferase
CcrM is a significant orphan MTase that modifies 5′-GANTC-3′ sites and was first discovered in Caulobacter crescentus (C. crescentus). Unlike the ubiquitous Dam enzyme, CcrM expression is limited to the last stage of chromosome replication (Albu et al., 2012). In C. crescentus, at least four genes are directly affected by the methylation status of GANTC, including ftsZ, which is necessary for cell division, and ctrA and dnaA, the primary regulators of the cell cycle (Reisenauer and Shapiro, 2002; Collier et al., 2007). FtsZ is an essential regulatory protein for cell division and proliferation, forming a Z-ring structure at the division site. In a C. crescentus ΔccrM strain, ftsZ expression is significantly downregulated, leading to a severe defect in cell division (Gonzalez and Collier, 2013). The vertical transmission of heritable transfer elements carrying AMR genes depends on cell division and proliferation; by regulating the expression of the cytoskeletal ftsZ gene, CcrM can affect bacterial division and proliferation and thereby impact the vertical transfer of AMR genes.
TABLE 3 (fragment): Escherichia coli | Orphan methyltransferase Dam | Alters the pap promoter to influence the affinity of the Lrp regulatory protein for DNA | (Hernday et al., 2002; Zamora et al., 2020)
DNA cytosine methyltransferase
Dcm is a typical DNA MTase in E. coli and has two targets: 5′-CCAGG-3′ and 5′-CCTGG-3′ sites. As a result, Dcm can protect these DNA sequences from the activity of the restriction enzyme EcoRII even if the R-M system is disturbed (Gómez and Ramírez, 1993). In bacteria, Dcm is typically associated with the transcription of active genes, whereas in higher eukaryotes the methylation of promoter DNA is frequently associated with gene silencing (Zemach et al., 2010). The role of Dcm in prokaryotes remains unclear, but Kahramanoglou et al. suggested that Dcm controls gene expression in the stationary phase in E. coli (Kahramanoglou et al., 2012). Militello et al. demonstrated that the AMR transporter SugE was overexpressed in an E. coli Δdcm strain, indicating that Dcm may affect tolerance to SugE-transported drugs by altering the level of sugE gene expression (Militello et al., 2014). Furthermore, Dcm promotes plasmid loss and protects against post-segregational killing by EcoRII (which cleaves DNA at the same site that Dcm methylates) (Takahashi et al., 2002; Ohno et al., 2008).
DNA phosphorothioation
DNA PT modification, a novel type of R-M system, has been discovered widely in bacteria and archaea. As a defense barrier, DNA PT modification is thought to play a part in bacterial AMR, although its precise role remains unclear. By analyzing the functions of DNA PT modification in AMR across a series of clinical pathogenic bacteria, Xu et al. demonstrated that DNA PT modification reduced the distribution of horizontal gene transfer (HGT)-derived AMR genes in the genome and could suppress HGT frequency (Xu et al., 2023). Investigating the mechanisms of antibiotic resistance genes (ARGs) in drinking water supply systems, Khan et al. found that the relative abundance of dndB and ARGs increased in the effluent, and concluded that DNA PT modification protected bacteria carrying mcr-1 and blaNDM-1 from chloramine disinfection during water treatment (Khan et al., 2021). DNA PT modification can recognize and cleave unmodified exogenous DNA, such as HGT-derived DNA, ARGs, and phage DNA. The modification is therefore important for bacteria to resist foreign invasion and maintain their own genetic stability. To date, there have been few systematic studies of AMR based on DNA PT modification, and its impact on AMR requires further study.
Nucleoid-associated protein modifications
NAPs can perform histone-like functions in bacteria and affect DNA structure and transcription, although they differ from eukaryotic histones. Gram-negative and Gram-positive bacteria have different NAPs, but most research has focused on Gram-negative bacteria. NAPs are essential global regulators that play a significant role in AMR (Table 4), as demonstrated in Salmonella. Yan's research suggests that the Fis protein, a global regulator in S. Typhi, can mediate persistence by controlling glutamate metabolism (Yan et al., 2021). Additionally, the H-NS DNA-binding protein can act as a transcriptional inhibitor to silence gene expression, control plasmid conjugative transfer, silence foreign genes, and inhibit conjugative transfer to reduce fitness costs (Dorman, 2007; Dorman, 2014). Cai et al. found that the IncX1 plasmid, which carries the tigecycline resistance gene tet(X4) and encodes an H-NS protein, imposes little to no fitness cost in E. coli and K. pneumoniae. It is also noteworthy that some plasmids can help host bacteria form biofilms and enhance virulence.
Compared with DNA methylation, histone-like protein modification has greater plasticity. The H-NS protein can regulate the expression of genes encoding efflux pumps in multidrug-resistant Acinetobacter baumannii (A. baumannii) and down-regulate the expression of AMR genes for beta-lactams, aminoglycosides, quinolones, chloramphenicol, trimethoprim, and sulfonamides (Rodgers et al., 2021). Similarly, deleting hns lowers the expression of biofilm-related genes in A. baumannii (Rodgers et al., 2021). A recent study found that H-NS affects the stability of the blaNDM-1-bearing IncX3 plasmid and inhibits its conjugative transfer in E. coli (Liu et al., 2020). These findings indicate the complexity and breadth of the regulatory network controlled by H-NS over genes involved in AMR and persistence.
Although antibiotic therapy is the first-line treatment for bacterial infections, biofilms play a major role in some chronic and recurrent infections and are associated with the failure of antibiotic therapy (Devaraj et al., 2018). The DNA-binding (DNABII) protein family includes two well-known NAPs, integration host factor (IHF) and HU. These proteins bind DNA with high affinity and bend it, thereby playing essential roles in the structure and function of the bacterial nucleoid (Browning et al., 2010). While IHF binds specific DNA sequences, HU does not. In addition to their structural functions, IHF and HU are also crucial for biofilm formation and the integrity of community structure (Devaraj et al., 2015). In uropathogenic E. coli, both subunits of IHF aid in biofilm formation, while HupB (HUβ), one of the subunits of HU, is required for biofilm formation (Devaraj et al., 2015). Because antimicrobial agents and the host immune system have difficulty attacking biofilms, IHF and HU could be potential therapeutic targets for biofilm therapy. Research has found that the HU protein subunit HupB is post-translationally modified by lysine acetylation and methylation, a finding with implications for treating multidrug-resistant Mycobacterium tuberculosis (M. tuberculosis) (Ghosh et al., 2016). Mutating a single post-translational modification site eliminates a drug-resistant cell subset of isoniazid-resistant M. tuberculosis (Sakatos et al., 2018). Additionally, anti-Porphyromonas gingivalis (P. gingivalis) HUβ antibodies have been reported to specifically target oral Streptococcus biofilms and prevent P. gingivalis organisms from entering preexisting biofilms formed by oral streptococcal species. Therefore, HU, and HupB in particular, could be a promising therapeutic target for antibacterial therapy. In recent work targeting HU, Zhang et al. used Gp46 (a phage-derived HU inhibitor) to inhibit HU in many resistant pathogens by occupying its DNA-binding site, thereby preventing chromosome segregation during cell division.
RNA modification
Ribosomal RNA methylation
RNA modifications, such as rRNA methylation, have emerged as important mechanisms associated with AMR. Ribosomes are a common target for antibiotics, and methylation of specific sites in rRNA can prevent antibiotics from binding their target sites, thereby conferring resistance. AMR via rRNA methylation is thus one of the most common strategies adopted by multidrug-resistant pathogens. One example is 16S rRNA methylation, a major mechanism of aminoglycoside resistance in clinical pathogens (Tada et al., 2013; Liu et al., 2015). Two different methylation sites in 16S rRNA lead to different aminoglycoside-resistant phenotypes: methylation of residue A1408 confers resistance to kanamycin and apramycin in E. coli but sensitivity to gentamicin, while methylation of residue G1405 confers resistance to kanamycin and gentamicin but sensitivity to apramycin (Liu et al., 2015). The multidrug resistance gene cfr, found in Staphylococcus, encodes an MTase that modifies the A2503 site in 23S rRNA, leading to resistance to antibiotics such as amphenicols, lincosamides, oxazolidinones, pleuromutilins, and streptogramin A (Long et al., 2006). In S. pneumoniae, U747 methylation mediated by RlmCD promotes efficient G748 methylation by the MTase RlmAII in 23S rRNA, affecting susceptibility to telithromycin (Shoji et al., 2015). Another study indicated that the erythromycin-resistance MTase methylates rRNA at the conserved A2058 position and imparts resistance to macrolides such as erythromycin (Bhujbalrao et al., 2022). The number of rRNA MTases linked to AMR mechanisms has grown, but the origins of these MTases and the exact mechanisms of AMR remain unclear.
Non-coding RNAs
Advances in high-throughput sequencing technology and bioinformatics have facilitated the discovery of various ncRNAs and their functions in bacteria. Recent studies have found that upon exposure to environmental stress, especially antibiotics, bacteria produce specific ncRNA profiles, which may regulate the expression of downstream genes. When bacteria sense antibacterial stress, a large number of ncRNA regulators are upregulated, and one of their roles is to improve bacterial adaptation to a dynamic environment (Morita and Aiba, 2007). Thus, ncRNAs play an essential role in the bacterial regulatory network, controlling the expression of bacterial genes by regulating proteins and target mRNAs. Compared with regulatory proteins, ncRNAs are considered an advantageous class of regulatory molecules for controlling gene expression (Toledo et al., 2007).
TABLE 4 NAPs involved in AMR, their regulated genes, and functions.
Species | NAPs | Genes regulated | Functions | References
Salmonella typhi | Fis | gltK, gltJ, gltL, gltS, gltH, gltP | Regulates glutamate metabolism to reduce persister formation | (Yan et al., 2021)
Salmonella typhi | H-NS, Hha, StpA | pathogenicity islands (SPIs), pef | Inhibits the expression of SPI2 to improve fitness | (Hurtado et al., 2019)
Escherichia coli | Fis | fimS, fimA, fimB, acs, acnB, fum | Acts as a negative regulator of fimS phase variation, enhances growth fitness under acetate metabolism, regulates biofilm formation | (Jindal et al., 2022; Saldaña et al., 2022)
Escherichia coli | H-NS | pilx1-11, taxB, taxC, actX, parB | Facilitates horizontal plasmid transfer, affects plasmid stability | (Liu et al., 2020)
Escherichia coli | HU, IHF | fim, pap | Promote biofilm formation; Gp46 functions as an HU inhibitor | (Justice et al., 2012; Devaraj et al., 2015)
Shigella | H-NS | virB | Silences the virB promoter and influences virulence plasmid transfer | (Colonna et al., 1995)
Acinetobacter baumannii | H-NS | aidA, abaI, kar, fadD, blaOXA-23, blaOXA-51-like, blaADC, blaGES-14, carO, pbp1, advA | Regulates the expression of genes encoding efflux pumps and biofilm formation; modulates the expression of resistance-related genes | (Rodgers et al., 2021)
Klebsiella pneumoniae | H-NS | tet(X4) | Modulates the fitness cost of plasmids, promotes virulence and biofilm formation |
Mycobacterium tuberculosis | HU (HupB) | eis, arsR, marR, tetR | Regulates sensitivity to aminoglycosides, alters gene expression and phenotypic state in a subpopulation | (Zaunbrecher et al., 2009; Ghosh et al., 2016; Sakatos et al., 2018; Rodgers et al., 2021)
Porphyromonas gingivalis | HU | ssP, fimA | Disperses oral Streptococcus biofilm and prevents P. gingivalis entry into oral Streptococcus biofilm |
ncRNAs play an essential role in the regulation of bacterial gene expression and can affect AMR mechanisms. Although ncRNAs are a major form of post-transcriptional gene control in bacteria, some research indicates that ncRNAs also influence transcription (Rodgers et al., 2023). For instance, Majdalani et al. found that the RprA ncRNA reduced type IV secretion-mediated transfer of pSLT (the Salmonella virulence plasmid) (Papenfort and Melamed, 2023). In particular, RprA controls the transcription and translation of ricI, which encodes a membrane protein that interacts with and suppresses TraV, an anchor protein of the type IV secretion apparatus (Majdalani et al., 2001). It has also been reported that antisense vicR (a type of ncRNA) is transcribed from the strand opposite vicR mRNA and regulates biofilm formation in Streptococcus mutans by affecting the production and function of the VicR protein (Lei et al., 2018). The incomplete complementary pairing of most ncRNAs with their target mRNA sequences can lead to two outcomes: (1) blocking of the ribosome binding site and suppression of translation; or (2) melting of secondary structure, exposing the ribosome binding site and translation start site and leading to translation activation (Vogel and Sharma, 2005; Fröhlich and Vogel, 2009). Moreover, because base pairing between ncRNAs and their target mRNAs is often unstable, the RNA chaperone protein Hfq, the FinO/ProQ family of binding proteins, the CsrA/RsmA family, and other regulators usually facilitate this imperfect base pairing, thereby regulating translation initiation frequency or the stability of target mRNAs (Liao and Smirnov, 2023; Wang et al., 2023; Yu and Zhao, 2023). In this chapter, we explore research on ncRNAs that regulate AMR mechanisms from two perspectives.
Translation suppression
ncRNAs can regulate the bacterial cell wall and membrane to alter antibiotic sensitivity. Bacteria control membrane permeability by regulating the expression of the outer membrane proteins OmpF, OmpA, and OmpC. Studies have shown that ncRNAs such as MicF, MicA, and MicC inhibit the expression of the corresponding mRNAs by partial complementary pairing, thereby interfering with antibiotic entry (Chen et al., 2004; Udekwu et al., 2005). ncRNAs therefore represent a promising target for the development of new strategies to combat AMR in bacteria.
ncRNAs have also been shown to affect AMR by targeting efflux pumps. For instance, overexpression of SdsR decreases the mRNA and protein levels of TolC, the outer membrane component of many multidrug resistance efflux pumps, resulting in increased sensitivity to fluoroquinolones in E. coli (Kim et al., 2015; Parker and Gottesman, 2016). In Shigella sonnei, by contrast, overexpression of SdsR leads to lower tolC mRNA levels and increased survival rates at sub-MIC norfloxacin (Gan and Tan, 2019). Pseudomonas aeruginosa (P. aeruginosa) is a common source of hospital infections and has a marked ability to adapt to various environmental exposures (Jurado et al., 2021). A recent study found that overexpression of the AS1974 ncRNA restores the sensitivity of MDR clinical strains by downregulating the expression of MexC-MexD-OprJ, a component of the multidrug efflux system (Law et al., 2019). On the other hand, overexpression of the PA0805.1 and PA2952.1 ncRNAs leads to upregulation of the drug efflux system mexGHI-opmD, resulting in increased resistance to aminoglycosides (Coleman et al., 2020; Coleman et al., 2021).
Bacterial biofilms, microcolonies formed by adhesion to solid surfaces or between bacteria, secrete an extracellular matrix that creates a natural barrier. This multicellular-like lifestyle confers resistance to environmental and cell-intrinsic stresses, such as antibiotic exposure. For example, based on RNA-seq analysis, Falcone et al. found that the ErsA ncRNA of P. aeruginosa pairs complementarily with amrZ mRNA to influence the expression of AmrZ, promoting biofilm development (Falcone et al., 2018). The RNA-binding protein ProQ has been shown to regulate mRNA expression levels through interactions with 5′ and 3′ UTRs (Holmqvist et al., 2018), and an early study found that ProQ was necessary for robust biofilm formation, a phenotype independent of ProP (Sheidy and Zielke, 2013). Infections caused by Staphylococcus aureus (S. aureus) are often associated with adverse therapeutic outcomes for various reasons, including the antibiotic penetration barrier formed by bacterial biofilms (Singh et al., 2016). By sensing and responding to diverse environmental exposures, bacteria carry out corresponding adaptive regulation. For instance, the teg58 ncRNA interacts specifically with argGH mRNA (encoding arginine biosynthesis genes) to repress arginine synthesis and biofilm formation in S. aureus (Manna et al., 2022). Raad et al. found that during the stationary phase of E. coli, the 3′ UTR-derived FimR2 ncRNA interacts with CsrA, antagonizing its post-transcriptional control of flagellar and fimbrial biosynthesis and tightening the control of bacterial motility and biofilm formation (Raad et al., 2022).
ncRNAs also affect AMR by regulating the functions of plasmids carrying resistance genes, including their fitness effects and conjugation. HGT refers to the transfer of genes between unrelated species, which increases genetic diversity and accelerates bacterial evolution (Gogarten and Townsend, 2005). Conjugative plasmids are typical vehicles of HGT and promote the spread of AMR among pathogens. Because of plasmid uptake, integration, replication, and gene expression, antibiotic-resistance plasmids impose fitness costs on host bacteria (San and Maclean, 2017). One might therefore expect plasmids to be gradually lost over time during bacterial evolution in the absence of corresponding antibiotic exposure. Contrary to this conjecture, antibiotic-resistance plasmids can stably persist in host bacteria for long periods without any antibiotics. Mechanisms may exist that regulate the fitness cost to the bacteria. Some studies have found that ProQ/FinO family proteins encoded by the IncI2 plasmid carrying mcr-1 balance mcr-1 expression and bacterial fitness by limiting plasmid copy number (Yang et al., 2021). In addition, the RNA-binding protein ProQ has been found to contain three distinct domains, one of which is a large, conserved N-terminal FinO-like domain (Gulliver et al., 2022). The FinO-like domain facilitates RNA binding and shares structural and functional characteristics with the FinO RNA chaperone of the IncF plasmid (Pandey et al., 2020). FinO was so named to reflect the fertility inhibition function observed in IncF plasmid conjugation (Finnegan and Willetts, 1972). These plasmids regulate conjugation through an RNA antisense mechanism, whereby the cis-encoded ncRNA FinP inhibits the synthesis of the conjugative transfer regulator TraJ (Timmis et al., 1978; Van Biesen and Frost, 1994; El et al., 2021).
In the absence of FinO, the inhibition of TraJ synthesis is relieved, leading to higher plasmid conjugation (El et al., 2021). El Mouali et al. also found that the FinO binding protein encoded on the Salmonella virulence plasmid regulated the replication of a cohabiting plasmid carrying an antibiotic resistance gene, suggesting cross-regulation of plasmids at the RNA level (El et al., 2021).
Translation activation
ncRNAs can also affect AMR by activating translation. Although ncRNAs commonly down-regulate gene expression, they can also activate genes through diverse mechanisms in bacteria. Several ncRNAs act as direct translational activators by preventing the formation of translation-inhibiting stem-loop structures through antisense pairing in the 5′ region of the mRNA (Fröhlich and Vogel, 2009). After being activated by the master quorum-sensing regulators LuxO/HapR, the Qrr ncRNAs (quorum regulatory RNAs) of Vibrio species bind the chaperone Hfq and regulate downstream gene expression (Hammer and Bassler, 2007). One pathway is HapR-independent: the interaction of the Qrr ncRNA with vca0939 mRNA prevents the formation of inhibitory stem-loop structures, allowing ribosome access and promoting translation (Hammer and Bassler, 2007). Following this translational activation, vca0939 encodes a GGDEF protein and induces virulence factors and biofilm formation (Camilli and Bassler, 2006).
Epigenetic drugs as treatment of antimicrobial resistance
Epigenetic drugs are small molecules designed or studied on the basis of epigenetic mechanisms, such as the selective transcriptional or post-transcriptional regulation of genes. Some epigenetic drugs alter gene expression by inhibiting specific enzymes. Given the current AMR situation, epigenetic drugs have important implications for treating infectious diseases caused by multidrug-resistant bacteria. For instance, low concentrations of SAM analogues such as SGC0946, JNJ-64619178, and SGC8158 were found to inhibit the activity of the C. difficile-specific DNA adenine MTase, selectively affecting biofilm and spore production and rapidly eradicating C. difficile infection (Zhou et al., 2022). Moreover, UVI5008, a derivative of the natural substance psammaplin A, was found to reduce the DNA gyrase activity of methicillin-resistant S. aureus and to reverse AMR by damaging the bacterial cell wall (Franci et al., 2018). Similarly, epigallocatechin-3-gallate (EGCG) can damage cell wall integrity and reverse resistance to imipenem, tetracycline, and amoxicillin in S. aureus (Sudano et al., 2004; Zeferino et al., 2022). As research has deepened, Serra et al. proposed that EGCG directly interferes with the assembly of curli fimbriae into amyloid fibrils and reduces the synthesis of CsgD (an activator of curli fimbriae and cellulose biosynthesis) by promoting the expression of the RybB ncRNA, ultimately inhibiting biofilm matrix formation and affecting biofilm-mediated antibiotic resistance and host defense (Serra et al., 2016). In addition, high-throughput screening and molecular dynamics simulation identified EGCG as a suitable natural drug targeting the LuxS/AI-2 system of H. pylori (Ashok et al., 2023). Zhang et al. found that EGCG prevented the formation of Shigella flexneri biofilm extracellular polysaccharide by reducing the expression of the mdoH gene (Zhang et al., 2023). These findings suggest that epigenetic drugs have the potential to be used to treat patients with multidrug-resistant bacterial infections.
Conclusions
AMR is an ancient and natural phenomenon that has evolved in bacteria over millions of years. While biochemical and genetic alterations are known to contribute to AMR, non-classical mechanisms such as epigenetics have recently gained attention. Bacterial epigenetics, which involves modifications to DNA and rRNA, ncRNAs, and nucleoid-associated proteins, has been shown to regulate the formation and enrichment of AMR. This regulatory layer controls gene expression switching, phase variation, bacterial tolerance, and persister formation. The epigenetic regulatory mechanisms of bacteria are complex and may have long-term implications. Although our current understanding of bacterial epigenetics is still limited, recent advances in sequencing technologies are enabling high-resolution mapping of epigenetic landscapes in prokaryotes, which is expected to shed light on the complex regulatory mechanisms of AMR. With the advent of the post-antibiotic era, uncovering epigenetic mechanisms in multidrug-resistant pathogens may also help identify antibiotic potentiators or provide new targets for the development of new drugs.
Author contributions
XW and DY researched data for the manuscript. LC provided conceptualization and was responsible for the first draft of the manuscript. XW provided conceptualization, review, comment and editing. All authors discussed the results and reviewed and commented on the manuscript. All authors contributed to the article and approved the submitted version.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
"year": 2023,
"sha1": "d02caaa71abbb52f0292b0b43b42bbfec8ecd15c",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "d02caaa71abbb52f0292b0b43b42bbfec8ecd15c",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Advances in Mesenchymal stem cells regulating macrophage polarization and treatment of sepsis-induced liver injury
Sepsis is a syndrome of dysregulated host response to infection that leads to life-threatening organ dysfunction, and it is a common cause of death in critically ill patients. Liver injury frequently occurs in septic patients, yet the development of targeted and effective treatment strategies remains a pressing challenge. Macrophages are essential components of the immune system. M1 macrophages drive inflammation, whereas M2 macrophages possess anti-inflammatory properties and contribute to tissue repair. Mesenchymal stem cells (MSCs), known for their remarkable attributes including homing capability, immunomodulation, anti-inflammatory effects, and tissue regeneration potential, hold promise for improving the prognosis of sepsis-induced liver injury by restoring the balance of M1/M2 macrophage polarization. This review discusses the mechanisms by which MSCs regulate macrophage polarization, along with the signaling pathways involved, suggesting innovative directions for the treatment of sepsis-induced liver injury.
Introduction
Sepsis-induced liver injury
In 2016, the Sepsis-3 Workgroup introduced revised definitions for sepsis and septic shock, aiming to enhance accuracy and clinical identification. Sepsis is now defined as a critical condition wherein organ dysfunction arises from a dysregulated host response to infection. Septic shock is identified by the clinical requirement for vasopressors to maintain a mean arterial pressure of 65 mmHg or above, accompanied by a serum lactate level exceeding 2 mmol/L, in the absence of hypovolemia (1). The liver is a commonly damaged organ in patients with sepsis. Liver injury can present as a primary dysfunction occurring within the first hour after the initial insult, usually associated with liver hypoperfusion, and can result in disseminated intravascular coagulation and multiple organ failure (2). Concurrently, sepsis can induce dysfunction in the intestinal microcirculation, facilitating the infiltration of intestinal toxins and bacteria into the liver through the portal vein and thereby initiating a cascade of hepatic inflammation. In sepsis, the liver becomes a hub of intensified oxidative stress, generating byproducts that activate neutrophils and exacerbate hepatic damage (3). There is evidence that liver injury and failure, particularly as a severe complication of sepsis, contribute directly to disease progression and patient death (4).
Sepsis is most common in people with weakened immune systems, such as the elderly, infants, and patients with certain underlying medical conditions, and it will become more common as the number of elderly patients grows (5). The average annual percentage increase in sepsis incidence is 13%-13.3% (6). Currently, the main way to improve outcomes is to recognize sepsis early and treat it appropriately within the initial hours. However, targeted and effective treatment strategies are still lacking, making it urgent to investigate treatments for sepsis-induced liver injury.
Macrophage
Kupffer cells, the specialized macrophages residing in the liver, are distinguished from monocyte-derived macrophages by their distinct localization and rapid accumulation within the damaged liver. As resident tissue macrophages, Kupffer cells exhibit a mature phenotype with remarkable plasticity, and their functional activity evolves dynamically in response to the specific metabolic and local immune environment (7). Macrophages are classified into two distinct phenotypes: M1 and M2. M1 macrophages, also known as classically activated macrophages, release elevated levels of pro-inflammatory cytokines and mediators, including TNF-α, IL-6, IL-12, and inducible nitric oxide synthase (iNOS). In contrast, M2 macrophages, or alternatively activated macrophages, display a different profile: they release lower levels of pro-inflammatory cytokines and higher levels of anti-inflammatory mediators, such as IL-10 and transforming growth factor beta (TGF-β) (8). Therefore, inhibiting excessive M1 polarization and promoting M2 polarization in patients with sepsis may help improve the condition (9). Emerging data have revealed significant heterogeneity even within the traditional M1 and M2 classifications, underscoring the oversimplification of the M1/M2 categorization. Indeed, the M2 phenotype has been further subdivided into distinct subtypes, namely M2a, M2b, M2c, and M2d, reflecting the diverse functional states and responses of these macrophages (Figure 1). These subtypes carry unique cell surface marker proteins, perform distinct functions, and are induced by various regulators (10). This refined categorization highlights the intricate and multifaceted nature of macrophage polarization, emphasizing the need for a more comprehensive understanding of the various subpopulations and their specific roles in immune regulation, inflammation, and tissue repair.
The role of macrophages in sepsis
Bacterial clearance
The liver is an essential component of the inflammatory response and is crucial for containing bacteria and eliminating toxins in sepsis (11). In animal models, more than 60% of bacteria can be cleared from the blood and sequestered in the liver within 10 minutes, and more than 80% within 6 hours (12). In bacterial infections, lipopolysaccharide (LPS) serves as a crucial inflammatory trigger, and the liver plays a pivotal role in eliminating LPS from the circulation (13). When the liver is damaged, its capacity for efficient bacterial clearance is compromised; systemic infection can then spread unchecked, and the risk of sepsis increases. Liver damage thus becomes a significant factor in heightened vulnerability to sepsis (14). Bacterial phagocytosis and clearance within the liver are mediated by a diverse range of cells, which operate as the first line of defense against bloodborne bacterial translocation; Kupffer cells, liver sinusoidal endothelial cells (LSECs), and stellate cells all participate actively in this process (15). Kupffer cells, the resident macrophages of the hepatic sinusoids, have an exceptional capacity for phagocytosis and play a crucial role in the liver's immune defense by efficiently removing bacteria and soluble bacterial products from the bloodstream (2, 16).
Platelets and neutrophils work alongside Kupffer cells to remove bacteria from the blood. Platelets release many antimicrobial molecules, play a direct role in defense against infection, and can also enhance the killing capacity of Kupffer cells (16). Attracted by chemokines secreted by Kupffer cells, neutrophils migrate to and gather in the hepatic sinusoids during sepsis; neutrophils and platelets then interact to jointly promote the release of neutrophil extracellular traps that trap and clear pathogens (17). The impaired bacterial clearance observed in the liver during sepsis is attributed to a combination of factors, including the direct impact of reduced platelet counts on immune responses, damage to the reticuloendothelial system responsible for bacterial phagocytosis and clearance, and compromised neutrophil function with reduced phagocytosis and intracellular killing capacity (18).
Liver-mediated pro-inflammatory response
In sepsis patients, the liver is a prominent site of inflammatory responses triggered by bacterial endotoxins, and the liver itself can produce and release inflammatory mediators. Meanwhile, a damaged liver can provoke significant and even deadly inflammatory reactions in other organs (19).
Kupffer cells are responsible for generating inflammatory cytokines and mediating liver injury in the early stages of sepsis. Upon encountering harmful bacteria or endotoxins, Kupffer cells augment the release of several early pro-inflammatory mediators, including IL-1, IL-6, IL-8, TNF-α, IFN-γ, and monocyte chemotactic protein 1 (11). Studies have shown that depleting Kupffer cells by administering gadolinium chloride before cecal ligation and puncture (CLP) can be beneficial in the early stages of sepsis in rats: it reduces the secretion of pro-inflammatory cytokines, improves hepatic microcirculation disorders, reduces hepatocyte apoptosis, and prevents the development of liver injury. However, hepatic bacterial clearance was impaired by the loss of Kupffer cells, and the survival of septic rats ultimately decreased markedly (20). Modulating Kupffer cell differentiation therefore represents a novel strategy with the potential to suppress inflammation and protect organs from injury.
Inflammation and chemokine production are also mediated by hepatocytes, hepatic stellate cells, and LSECs. Hepatic stellate cells and LSECs identify pathogen proteins through pattern recognition receptors such as the Toll-like receptors (TLRs), enabling them to act as liver antigen-presenting cells. In collaboration with Kupffer cells, these cells orchestrate a series of immunological events in sepsis: they activate hepatic natural killer T cells and classical T cells (CD4+ and CD8+ T lymphocytes), recruit neutrophils to the liver, and initiate both local and systemic inflammatory responses (2).
Liver-mediated immunosuppression
The liver has a special innate immune microenvironment and plays a crucial part in surveillance for immune homeostasis (21, 22). Because of its unique dual blood supply, the liver is constantly exposed to circulating antigens, pathogens, and pathogen-associated toxins, which reach it through multiple routes, including the gastrointestinal tract, the portal vein, and the systemic circulation via arterial blood (23). Liver cells therefore act as gatekeepers, initiating or suppressing immune responses as needed. The liver harbors a significant population of tissue macrophages, primarily Kupffer cells located within the hepatic sinusoids. Kupffer cells are the predominant phagocytic cells in the liver and constitute more than 80% of the macrophage population in a healthy human liver. The liver additionally contains lymphoid (natural killer cells, T cells, and B cells) and myeloid (neutrophils and macrophages) immune cells, which together compose both innate and acquired immune responses (22).
The complexity of sepsis ranges from an initial inflammatory response to later stages of immunosuppression (24). The initial phase manifests as systemic inflammatory response syndrome, which includes systemic inflammation, cytokine storm, and multi-organ damage (25). When the body's immune cells detect bacteria or endotoxins, a potent inflammatory response is triggered. If this process is not properly controlled, monocytes become unable to respond to further endotoxin challenge and begin to produce anti-inflammatory cytokines (TGF-β, IL-10), and sepsis enters a state of immunosuppression. Endotoxin tolerance, the diminished reactivity to endotoxin challenge following an initial exposure, is one of the main mechanisms of immunosuppression in sepsis (26, 27). Endotoxin tolerance can protect the body from lethal endotoxin attack and prevents infection and ischemia-reperfusion injury. At the same time, endotoxin tolerance significantly increases patients' vulnerability to reinfection, which in septic patients can be fatal (24). Uncontrolled inflammatory responses produce cytokine storms that lead to broad tissue damage and pathological states such as sepsis (27).
FIGURE 1 Macrophages exhibit remarkable plasticity and can undergo polarization into either M1 or M2 macrophages in response to specific microenvironmental cues. M1 macrophages are characterized by the secretion of various pro-inflammatory cytokines and inflammatory mediators, contributing to tissue damage and robust inflammatory responses. M2 macrophages have been subdivided into distinct subtypes, including M2a, M2b, M2c, and M2d. Under certain conditions, M2a can be transformed into M2d. Activated M2a macrophages can be involved in type II inflammation; M2b macrophages in the suppression of inflammation and immune responses; M2c macrophages in matrix deposition and tissue remodeling; and M2d macrophages in tumor angiogenesis and immunosuppression.
Mesenchymal stem cells
Mesenchymal stem cells (MSCs) possess an immense capacity for self-renewal and multilineage differentiation. These remarkable cells can be sourced from various tissues, such as bone marrow, adipose tissue, umbilical cord, and placental tissue (28, 29). MSCs exhibit a variety of advantageous characteristics in inflammatory diseases, including homing, reduction of inflammatory responses, regulation of immune homeostasis, mitigation of organ damage, and stimulation of tissue regeneration (30). Based on these characteristics, MSCs have widespread application in cell therapy and bioengineering (28, 29). As a result, MSCs appear promising as a treatment for sepsis (31).
How MSCs regulate macrophage polarization
Paracrine effects of MSCs
MSCs can reduce inflammatory responses and promote tissue repair and regeneration through paracrine soluble factors (32). In particular, MSCs can modulate macrophage polarization and regulate the secretion of inflammatory factors through paracrine signaling by prostaglandin E2 (PGE2). PGE2 exerts inhibitory effects on the nucleotide-binding and oligomerization domain-like receptor 3 (NLRP3) inflammasome, reducing the secretion of inflammatory cytokines, including IL-1β, IL-6, and IL-18. By attenuating NLRP3 inflammasome activation, MSCs can effectively mitigate acute liver inflammation, thereby contributing to the amelioration of liver injury and the restoration of immune homeostasis (33). In a mouse model of inflammatory liver injury, MSCs control the Hippo-YAP pathway of macrophages by secreting PGE2, which prevents the phosphorylation of mammalian Ste20-like kinase 1/2 and large tumor suppressor 1 and boosts the translocation of YAP from the cytoplasm to the nucleus. The Hippo pathway then regulates XBP1, which directly interacts with YAP and β-catenin to activate NLRP3; this axis reduces NLRP3/caspase-1 activity and IL-1β production, in turn facilitating the polarization of hepatic macrophages toward the M2 phenotype (34). MSC-derived PGE2 thus not only inhibits inflammatory cytokine secretion but also facilitates the polarization of hepatic macrophages toward the M2 phenotype (35, 36). This process is mediated by the activation and phosphorylation of the transcription factor signal transducer and activator of transcription 6 (STAT6): PGE2, through its interaction with specific receptors on macrophages, triggers the activation and phosphorylation of STAT6, a classical mechanism of M2 polarization. This shift toward the M2 phenotype promotes an anti-inflammatory environment and supports tissue repair in the liver, further contributing to the resolution of inflammation and the promotion of liver recovery (33, 35) (Figure 2).
When the expression of MSC-derived TSG-6 (TNF-α-stimulated gene 6) was inhibited, the macrophage population in the pancreatic and liver tissues of rats with severe acute pancreatitis changed notably: iNOS+ M1 macrophages significantly increased, while CD163+ M2 macrophages significantly decreased. Inhibition of TSG-6 expression thus disrupts this regulatory mechanism, leading to an imbalance in macrophage phenotypes and potentially exacerbating the inflammatory state in the pancreatic and liver tissues of rats with severe acute pancreatitis (37). In the inflammatory state of the host, bone marrow-derived MSCs generate and secrete IL-4, triggering the reprogramming of host liver macrophages; simultaneously, MSCs induce the upregulation of Wnt-3a expression, facilitating the shift from the M1 pro-inflammatory phenotype to the M2 anti-inflammatory phenotype (38). Co-encapsulation of hepatocytes with human umbilical cord MSCs (HNF4α-UMSCs) has demonstrated a series of beneficial effects: it diminishes liver damage induced by LPS/D-galactosamine, elevates the survival rate of mice with acute liver failure (ALF), and enhances the survival, proliferation, and metabolic function of hepatocytes. These positive outcomes are achieved by facilitating the shift of macrophages from the M1 to the M2 phenotype and by leveraging paracrine mechanisms (39). Augmented HNF4α expression elevates the transcriptional activity of IL-10, consequently promoting M2 macrophage polarization via activation of the IL-10/STAT3 pathway (40). The use of paracrine soluble factors, especially to control macrophage activity, can thus help reduce excessive inflammatory reactions (Figure 3).
Exosomes of MSCs
The International Society for Extracellular Vesicles (ISEV) endorses the term "extracellular vesicle" (EV) as the universal terminology for naturally released particles originating from cells. These vesicles are characterized by a lipid bilayer membrane, lack the ability to replicate, and do not possess a functional nucleus within their structure (41). EVs are commonly categorized into three subtypes based on their size and biogenesis: exosomes (Exos), microvesicles, and apoptotic bodies. Exos are generated through the fusion of multivesicular bodies with the plasma membrane, resulting in their release into the extracellular space (42). EVs can carry complex macromolecular substances such as proteins and nucleic acids, which are introduced into recipient cells to exert a variety of biological effects (8). Exos serve as both providers of biologically active molecules and essential carriers that protect the molecules and deliver them to the appropriate targets. Exos are preferentially endocytosed in injured tissue because Exos uptake is reliant on the acidity of the intracellular environment and microenvironment, and tissue injury is typically characterized by acidosis (43).

FIGURE 2
MSCs secrete PGE2 in a paracrine manner and affect macrophage polarization in a variety of ways. PGE2 inhibits the NLRP3 inflammasome to lessen the secretion of inflammatory cytokines. PGE2 controls the Hippo-YAP pathway of macrophages and increases the expression of p-AMPK and SIRT1, then regulates XBP1 to reduce NLRP3 inflammasome activity, which promotes macrophage polarization toward the M2 phenotype. In addition, PGE2 can stimulate the phosphorylation and activation of STAT6, inducing M2 polarization of macrophages.

FIGURE 3
MSCs attenuate sepsis-induced liver injury through paracrine soluble factors. Immune cells, such as neutrophils and Kupffer cells, accumulate in the hepatic sinuses and produce large numbers of cytokines, resulting in a cytokine storm. MSCs can play a pivotal role in attenuating sepsis-induced liver injury through the immunoregulatory effects of paracrine soluble factors.
The immunomodulatory effects displayed by MSC-Exos, akin to those exerted by MSCs, have been demonstrated in both in vitro and in vivo settings (44, 45). MSC-Exos were found to repair injured liver tissue in ALF model mice and to reduce the expression of the NLRP3 inflammasome, caspase-1, IL-1β, and IL-6 in acute liver failure (46), thereby promoting macrophage polarization toward the M2 phenotype (34). In a mouse model of partial hepatectomy, MSC-Exos accumulated in the liver 6 hours after injection and were primarily absorbed by liver macrophages; MSC-Exos exert their hepatoregenerative effects through the modulation of macrophage phenotypic transformations (45). The anti-inflammatory-related miRNA-299-3p was found to be up-regulated in Exos derived from TNF-α-pretreated umbilical cord MSCs. Its high expression may contribute to the reduction of blood levels of alanine aminotransferase (ALT), aspartate aminotransferase (AST), and the pro-inflammatory cytokines pro-IL-1 and pro-IL-18, the attenuation of liver injury, and the inhibition of NLRP3 inflammation-associated pathway proteins (47). The miRNA-17, abundant in MSC-Exo cargo, can also suppress NLRP3 inflammasome activation by targeting thioredoxin-interacting protein (TXNIP) in macrophages (48). The miRNA-182-5p was significantly enriched in MSC-Exos. By preventing the production of the forkhead box transcription factor 1 (FOXO1) in macrophages, the miRNA-182-5p of MSC-Exos reduced the expression of TLR4 and triggered an anti-inflammatory response (45).
Experiments conducted in vitro have demonstrated that MSC-Exos decrease inflammatory responses and may control macrophage polarization by preventing hypoxia-inducible factor 1 (HIF-1) from mediating glycolysis, significantly inhibiting M1 polarization and promoting M2 polarization (49).It was discovered by Zhang Y et al. that Kupffer cell M2 polarization is dependent on the presence of IL-10 within MSC-EVs, as opposed to free IL-10.The EVs carrying IL-10 were collected by Kupffer cells, subsequently inducing the expression of PTPN22.This, in turn, facilitated macrophage polarization towards the M2 phenotype, leading to a reduction in liver inflammation and damage (50).
In the study conducted by Siyuan et al., it was demonstrated that miRNA-148a, which is enriched within extracellular vesicles (Exos), exerts regulatory effects on Kruppel-like factor 6 (KLF6). Through this regulatory interaction, miRNA-148a exhibits the capability to suppress M1 macrophage polarization while simultaneously promoting M2 macrophage polarization. This modulation is achieved by inhibiting the STAT3 pathway (51). However, Hui et al. found that MSC-Exos induced macrophage polarization toward M2, with high arginase-1 expression, mainly through transporting activated STAT3 (52). The mechanism by which regulation of the STAT3 pathway affects macrophage polarization needs to be further studied.
Homing
Stem cell homing refers to the process by which autologous or exogenous stem cells migrate to target tissues and colonize them under the action of various factors (53). In a mouse model of sepsis-induced liver injury, the use of MSCs loaded with superparamagnetic iron oxide nanoparticles (SPION-MSCs) was found to facilitate the polarization of macrophages towards the M2 phenotype. The introduction of SPIONs did not compromise the fundamental characteristics of MSCs. Instead, it stimulated the expression of haem oxygenase 1 within MSCs, allowing for the regulation of their activity within an inflammatory environment (9). Following infusion, SPION-MSCs exhibited rapid homing to the lungs and subsequently became trapped in the liver for a period exceeding 10 days. In contrast, their residence in other organs was infrequently observed. Importantly, the promotion of M2 macrophage polarization was attributed to the phagocytosis of SPION-MSCs by these macrophages. This phenomenon suggests that the interaction between SPION-MSCs and macrophages plays a significant role in facilitating the polarization of macrophages towards the M2 phenotype (54). Additionally, the expression of TNF receptor-associated factor 1 by SPION-MSCs was found to be crucial for the promotion of macrophage polarization and the subsequent reduction of sepsis in mice (9).
In conclusion, the regulation of macrophage polarization by MSCs can occur through various mechanisms, including the secretion of paracrine soluble factors, the release of Exos, and the process of homing (Table 1). Consequently, this regulatory capacity holds great promise as a therapeutic approach for addressing sepsis-induced liver injury.
3 MSCs-regulated signaling pathways of macrophage polarization

3.1 NF-κB signaling pathway

NF-κB is a universal transcription factor and a critical regulator of gene expression during severe infections, including sepsis (57). It is one of the important transcription factors associated with the activation of M1 macrophages (58). Studies have demonstrated that inhibition of NF-κB activation by MSCs can remarkably reduce sepsis-induced liver injury (59). Therefore, inhibition of the NF-κB pathway by MSCs may be a significant molecular mechanism in the treatment of sepsis-induced liver injury. The p50 NF-κB protein can inhibit the NF-κB signaling pathway and activate M2 polarization (60). The miRNA-27b supplied by MSC-derived Exos could decrease the inflammatory response and prevent sepsis by downregulating p65 NF-κB, which activates the NF-κB signaling pathway (61). Jie et al. found that small EVs from MSCs can limit the phosphorylation of the NF-κB pathway (62). Thus, EVs acting on the NF-κB pathway may be one of the effective ways to treat sepsis.
JAK/STAT signaling pathway
The Janus family of kinases (JAK) encompasses four major members, namely JAK1, JAK2, JAK3, and Tyk2. These proteins, belonging to the tyrosine kinase family, exhibit high homology and share similar structural characteristics (63). Many cellular functions are reliant upon the pivotal role played by the STAT (Signal Transducer and Activator of Transcription) family, consisting of essential members such as STAT1, STAT2, STAT3, STAT4, STAT5A, STAT5B, and STAT6 (64). To regulate the expression of associated genes, the JAK enzymes are capable of phosphorylating STAT proteins, giving rise to what is commonly referred to as the JAK-STAT signaling pathway. This intricate pathway exerts significant control over immunological responses, cell growth, proliferation, and differentiation processes (63).

(Figure caption) MSC-Exos, when stimulated with the combination of TNF-α and IFN-α, enhance macrophage polarization to the M2 phenotype through the upregulation of exosomal CD73 and CD5L.
One investigation examined in vivo the potential functions of the JAK/STAT pathway in regulating the systemic inflammatory response elicited by septic challenge. The researchers observed that JAK2 exhibited rapid activation in septic rats, with maximal activation occurring in hepatic tissues after 6 hours. Notably, in septic rats induced by CLP, they demonstrated that the JAK/STAT pathway could potentially exert control over the development of organ damage in various organs. These findings shed light on the role of the JAK/STAT pathway in the pathogenesis of sepsis and suggest its involvement in orchestrating the inflammatory response and subsequent organ injury during septic conditions (65). A study found that inhibiting the JAK2/STAT3 signaling pathway might diminish the production of pro-inflammatory cytokines including TNF-α and IL-6, as well as mitigate multiple organ failure in severe sepsis (66). Lentsch et al. made an intriguing discovery regarding the dysregulated activation of the transcription factor NF-κB in STAT6-deficient mice. This dysregulation led to an augmented production of pro-inflammatory cytokines and chemokines in the liver, including MIP-1, MIP-2, IP-10, and MCP-1. Additionally, upon endotoxin stimulation, STAT6-deficient animals exhibited a higher accumulation of neutrophils and leukocytes within the liver. This enhanced accumulation of immune cells may potentially contribute to organ damage (67).
Mesenchymal stem cells stimulate the phosphorylation and activation of STAT6 by paracrine PGE2, which in turn induces macrophages to M2 polarization.Increasing M2 macrophages by MSC treatment can activate the IL-4/STAT6 signaling pathway to control the acute-phase response in the liver (35).
AMPK/SIRT1 signaling pathway
AMP-activated protein kinase (AMPK) is an important regulator of energy metabolism at the cellular level. Sirtuin (SIRT) is a family of NAD+-dependent protein deacetylases; SIRT1 is one of its most studied members and plays a key role in the regulation of inflammation, immune response, metabolism, and apoptosis/aging. In maintaining energy homeostasis, AMPK and SIRT1 often show a synergistic effect and also interact to regulate each other's expression. AMPK/SIRT1 is a classical upstream signaling pathway of oxidative stress and is crucial for maintaining metabolic homeostasis (68). Jagged1 treatment significantly raised the amount of PGE2 that MSCs secreted. PGE2 then increased the expression of p-AMPK and SIRT1, which in turn caused XBP1s to be deacetylated and the NLRP3 inflammasome to be inhibited in macrophages (69), thereby promoting macrophage polarization toward the M2 phenotype (34).
Notch signaling pathway
Recent studies have highlighted the participation of the Notch pathway in critical processes such as liver regeneration and repair, liver fibrosis, and metabolism. These findings suggest that the Notch signaling pathway plays a significant role in maintaining liver homeostasis and responding to physiological and pathological changes within the liver (70). The Notch signaling pathway is crucial in macrophage polarization (71). It can up-regulate miRNA-148a-3p expression in macrophages, and miRNA-148a-3p can accelerate M1 polarization of macrophages (72). Through activation of NF-κB, activated Notch1 and the expression of Notch target genes remarkably regulate the production of TNF-α, IL-6, and IL-10 (71). MSC transplantation remarkably reduced the Notch1 receptor in liver failure rats, suppressing the M1 polarization of macrophages. The impact of MSCs on hepatocyte regeneration may be influenced by the down-regulation of Notch signaling (73). Further investigations into the intricate mechanisms underlying Notch pathway regulation hold promise for developing novel therapeutic strategies targeting liver-related disorders.
MSC treatment of sepsis-induced liver injury
As the liver serves as the primary defense against infections and also plays a crucial role in drug metabolism, it is susceptible to injuries induced by both infections and drugs. In one study, mice were intravenously administered MSCs one hour before being subjected to a CLP challenge. Following the CLP challenge, there was a significant increase in the levels of AST and ALT. However, the administration of MSCs effectively mitigated the elevated levels of AST and ALT, alleviated pathological injury of the liver, and enhanced the survival rate of mice in the sepsis model (74).
When MSCs were administered to a mouse model with CLP-induced sepsis, there was a notable attenuation in the expression of TNF-α and IL-6, with a concurrent upsurge in the expression of IL-4 and IL-10. This not only mitigated the pronounced hepatic swelling and necrosis observed in the liver but also led to a decline in the elevated levels of AST and ALT. Additionally, there was a discernible reduction in the presence of Bax- and caspase-3-positive apoptotic cells, coupled with enhanced glycogen deposition within the liver, ultimately contributing to an improved survival rate (59, 75). It is noteworthy that SPION-MSCs exhibited a more pronounced ameliorative effect on these pathological symptoms in both CLP and LPS sepsis mouse models compared to MSCs used in isolation (9).
Conclusion
In summary, the modulation of macrophage polarization by MSCs offers a promising therapeutic approach for sepsis-induced liver injury.The paracrine secretion of soluble factors, exosomes, and the ability of MSCs to home to the liver contribute to their beneficial effects in reducing liver injury and promoting tissue repair.Further understanding of the signaling pathways involved and optimization of MSC-based therapies will pave the way for their clinical application in treating sepsis-induced liver injury, offering new hope for patients facing this challenging condition.
TABLE 1
The ways in which MSCs regulate macrophage polarization. | 2023-10-27T15:16:40.249Z | 2023-10-25T00:00:00.000 | {
"year": 2023,
"sha1": "6e6f7fed6f158d631b6dfbb75db351bf3816e940",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "73522bd47755c1d16b71d5983800dbe4314c934a",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259370766 | pes2o/s2orc | v3-fos-license | The Turing Quest: Can Transformers Make Good NPCs?
We explored the generation of NPC dialogue using a zero-shot prompting method as well as the ability of LMs to self-evaluate and score dialogue with few-shot learning.
Introduction
Over the past decade, there has been a growing interest in applying deep learning models to Natural Language Generation (NLG) for open-domain dialogue systems and conversational agents. In parallel, the gaming industry has been striving to create more immersive experiences for players by enhancing their interactions with non-playable characters (NPCs). However, the potential of utilizing state-of-the-art deep learning models, such as Transformer-based models, to create NPC scripts remains largely unexplored.
Pre-trained Transformer-based language models (PLMs) like OpenAI's GPT-3 (Brown et al., 2020) and ChatGPT (Schulman et al., 2022) have demonstrated impressive conversational abilities (Milne-Ives et al., 2020). In certain contexts, the text generated by these models can be nearly indistinguishable from human-written text (M Alshater, 2022) without the aid of external tools or watermarks (Gambini et al., 2022). The use of these models in real-world applications has been expanding in areas such as customer service automation (Zou et al., 2021), educational conversational agents (Molnár and Szüts, 2018), and mental health dialogue systems (Abd-Alrazaq et al., 2019).
Despite their growing prevalence, the effectiveness and generalization capabilities of PLMs in various contexts remain uncertain. One such uncharted domain is the creation of "non-playable characters" or NPCs in video games.
When comparing chatbots to NPCs, the latter can be considered as a narrative-driven variant of goal-oriented chatbots. However, NPCs and chatbots serve different purposes and operate in distinct environments. Generating NPC scripts presents unique challenges, as the dialogue must be consistent with the game's plot, genre, and the NPC's character to maintain player immersion and suspension of disbelief (Kerr and Szafron, 2009). According to Lee and Heeter (2015), NPC believability hinges on "the size and nature of the cognitive gap between the [NPC that] players experience and the [NPC] they expect". Players anticipate NPCs with individualized and possibly dynamic traits, which should be reflected in their dialogue. While incorporating personality into dialogue systems is well-studied (Qian et al., 2017) (Smestad and Volden, 2019) (de Haan et al., 2018), the challenge of generating goal-oriented, believable NPC scripts that align with a game's narrative and thematic elements, while preserving player immersion, remains substantial.
The ability to automatically generate contextually appropriate dialogue for a specified character could have an effect on the design paradigms of future video games. While manually scripted narratives and plot points will continue to hold their value, developers could augment player immersion by allowing an array of NPCs to dynamically respond to a player's in-game progression.
Traditionally, game design involves scripted dialogues only for NPCs that contribute directly to a quest or story line, thereby limiting the extent of player interaction. It is not often possible for a player to initiate a conversation with a companion about an ongoing quest or solicit their views, creating an impression that, from an NPC's perspective, the player's existence is confined to the quests they undertake.
Simply implementing an interactive companion system necessitates writing dialogues for every quest for all possible companions, a labor-intensive task. Expanding this system to encompass a majority of a game's NPCs would further compound these challenges, increasing the amount of labor to an unreasonable degree. The vast amount of dialogue required for each narrative stage would significantly exceed the typical time and resource constraints of most developers. Despite the potential enrichment of the player experience, the practicality of creating such an immersive, dialogue-rich environment using solely human-authored dialogue in game development remains questionable.
In this study, we investigate the application of Transformer-based models like GPT-3 to the task of creating NPCs and generating believable scripts. To this end, we develop an NPC construction pipeline capable of generating dialogue based on the NPC's attributes alone. Our pipeline comprises three key modules: a) a Feature Characterization Schema that classifies NPCs based on personality traits and world descriptions, b) an Automatic Prompt Creation process that employs the schema to generate tailored prompts for conditioning language models, and c) a Dialogue Generation phase that uses the customized prompts to generate scripts with Transformer-based PLMs. Figure 1 provides an example of dialogue generated through this pipeline. We also devise and automate an evaluation metric for NPC dialogue quality, drawing inspiration from related literature (Brown et al., 2020). Lastly, we propose the Turing Quest: a test using human judges to assess the believability and quality of generated NPC scripts.
Related Work
In recent years, there has been a growing interest in dialogue systems and conversational agents. However, the exploration of dialogue generation for NPCs in video games, despite their similarities to chatbots, remains limited. Although most video games in the past decade include NPC dialogue, research on automating its creation using Artificial Intelligence (AI) is still in its infancy.
NPC Dialogue generation. In the early 2000s, efforts in NLP to create better NPC dialogue relied on hand-crafted algorithms and manually authored grammars (Schlünder and Klabunde, 2013) (Ryan et al., 2016). Schlünder and Klabunde (2013) succeeded in generating greetings that players perceived as more polite and appropriate than in-game greetings. However, their rule-based method relied on labor-intensive, discrete human-defined steps that were difficult to scale into full branching conversations. With recent advancements in goal-oriented chatbots utilizing machine learning techniques such as reinforcement learning (Liu et al., 2020) and dialogue generation through deep reinforcement learning (Li et al., 2016) (Li, 2020), automating NPC dialogue generation becomes increasingly feasible.
The introduction of AI into games has led to the application of various AI techniques and algorithms to enhance gameplay experiences through improved bots (Nareyek, 2004) and adaptive experiences (Raifer et al., 2022). There has been significant research into using machine learning to create bots that provide challenging and entertaining opponents for players (Håkansson and Fröberg, 2021). However, this trend of applying machine learning to different game design tasks does not extend to dialogue generation for NPCs.
Although pre-trained language models such as GPT-3 continue to expand their applicability, generalization remains an unsolved problem. While PLMs like GPT-3 have shown natural language generation capabilities (Topal et al., 2021), research into NLG with Transformer-based models trained on NPC dialogue has revealed that the generated dialogue "compared rather poorly to human-written [dialogue]" in terms of purpose and coherence (Kalbiyev, 2022). Nevertheless, generalization difficulty for LMs is not unique to NPC dialogue (Ye et al., 2021). We hypothesize that NPC dialogue is not merely another generalization problem but a distinct task. This hypothesis is supported by the inadequacy of chatbot evaluation metrics (Peras, 2018) when applied to NPC dialogue.
NPC Dialogue Metrics. Metrics proposed for chatbots do not directly translate to suitable metrics for NPC dialogue. While chatbot success is often determined by how "human" they sound and their ability to maintain a conversation with a human (Turing, 1950), NPC dialogue is always directed and goal-oriented. Generating dialogue for NPCs presents unique challenges compared to text generation in fictional settings. The generated dialogue must be consistent with the game world and the NPC's specific traits and personality, and it should ensure coherence and contextual relevance in relation to the player's input. No test equivalent to the Turing test or its alternatives, such as the Winograd schema challenge (WSC) (Winograd, 1972; Levesque et al., 2011), exists specifically for NPC dialogue. To our knowledge, there is no standard metric to evaluate the quality of generated NPC dialogue. One suggested metric for NPC dialogue is "coherence, relevance, human-likeness, and fittingness" (Kalbiyev, 2022). While coherence, relevance, and human-likeness can be applied to chatbots, fittingness, defined by Kalbiyev (2022) as how well the response fits the game world, is unique to NPCs.
NPC Construction Pipeline
The objective of the NPC construction pipeline is to automatically generate coherent, contextually appropriate, and engaging utterances for an NPC, given the dialogue history between the NPC and a player, as well as the contextual information about the NPC and the game. The pipeline consists of three modules, which serve to a) characterize the NPC according to a generalized representation schema that captures crucial information about the NPC's role, personality, and game context, b) generate short prompts based on the characterization, providing contextually relevant pretexts for the language model (LM), and c) generate utterances based on these prompts using an LM optimized for NPC dialogue generation.
Module 1: Feature Characterization Schema
The first module in the pipeline involves developing a schema that characterizes a given NPC according to a number of game- and NPC-relevant features. Identifying the most concise set of features needed to define any NPC is a challenging task, as NPCs not only exhibit vastly different personalities but can also serve different purposes for the player and the game world. For example, in the action role-playing game "The Elder Scrolls V: Skyrim" (Bethesda Game Studios, 2011), the NPC Balgruuf the Greater is a Jarl, i.e., a king or ruler who assigns quests to the player to maintain peace. In contrast, a character like KL-E-0 from "Fallout 4" (Bethesda Game Studios, 2015), a robot arms dealer in a post-nuclear apocalyptic world, has little concern for peace. Based on (Warpefelt, 2016), NPCs should possess both a ludic function and a narrative framing for their actions to be coherent and believable. That is, an NPC should fulfill a gameplay or mechanical purpose, i.e., a ludic function, while advancing the narrative through their actions.
To develop a characterization of NPCs that captures their differences across various games and genres, we should consider several important features, such as their relationship and role with respect to the player (e.g., buying and selling, providing quests, etc.) and their individual personality and values. Taking into account narrative purpose, ludic purposes, and the personality and characteristic differences of NPCs, we propose five game-specific features to characterize and distinguish NPCs: World Description, NPC Role, NPC Personality, Game State, and NPC Objective. Each of these five features either fulfills a ludic function or contributes to the game's narrative, and in some cases, a feature serves both purposes. This schema enables us to classify NPCs based on their in-game mechanics (Hunicke et al., 2004) while also capturing their role in the game's story. By incorporating these features into the NPC construction pipeline, we can create NPCs that not only adhere to the context and constraints of the game world but also exhibit distinct and engaging personalities, which can significantly enhance players' immersion and overall gaming experience.
World Description. A world description provides a summary of the story thus far, including information about the game world and its unique characteristics. Without this information, actions, thoughts, and utterances may be incoherent or unfitting, as they lack awareness of the setting and genre. This may result in dialogue or actions that conflict with the player's expectations. For instance, if Balgruuf from the previous example, originating from a fantasy adventure game, were placed in a sci-fi horror set in space, his actions, appearance, and dialogue would clash with the rest of the game. NPCs become "essentially incomprehensible if they are not framed according to the narrative" (Warpefelt, 2016). Ignoring information related to the setting, genre, and themes present in the NPC's world may affect the believability and fittingness of the NPC. More importantly, the narrative dissonance generated could shatter the willful suspension of disbelief, a notion coined by Samuel Taylor Coleridge (1971), and break the player's immersion in the game's world and story.
Role. Each unique NPC is created to fulfill a purpose. Continuing from the previous example, Balgruuf primarily functions as a questgiver, facilitating the player's progression through the main quest line and occasionally offering side quests to enrich the narrative experience. Omitting his role would fail to represent a critical function of his character. Defining the role of an NPC, whether as a vendor, quest giver, or storyteller, etc., is thus crucial. We selected these roles based on the typology of NPCs and the NPC model proposed in (Warpefelt, 2016). We adapted the types of NPCs from (Warpefelt, 2016) and simplified the set of NPC types to those that would feasibly have a conversation with the player, while also merging entries that were similar in their roles. This resulted in eight types of NPCs: six neutral or friendly roles and two non-friendly roles. The role an NPC occupies influences their expected dialogue. Although these roles are not mutually exclusive within a single NPC (e.g., some NPCs can be vendors at times while providing a quest at another time), at any given point during a dialogue with a player, the NPC occupies only one of these roles.
Personality. To describe any given NPC, it is necessary to elaborate on their personality and unique characteristics that distinguish them from other characters. These characteristics include physical attributes and appearances, psychological and personality traits such as the strength of the OCEAN personality traits proposed in (Digman, 1990), likes and dislikes, etc. This feature focuses on the details of the NPC's character, such as their occupation, beliefs, and other related details. NPCs are characters at their core, making it essential to incorporate these details into their depiction.
Game State. This describes the progression of the game and changes to the NPC's location. The NPC's dialogue may change based on the objectives completed by the player and the current state of the in-game world. The addition of this feature allows us to focus on the NPC during any single time frame during the course of the game. This enables better classification of dynamic NPCs that change over the course of the game and react to the player's actions. This feature also allows specifying details such as the current location of the NPCs and the scope of information the NPC possesses. Game state serves both a narrative and ludic purpose; for example, a shopkeeper may offer more goods depending on the player's actions, and the NPC's location also aids in framing their actions and dialogue, as a vendor may only offer certain goods in specific towns.
Objective. The NPC Objective is the purpose of the NPC apart from the player. According to Daniel Dennett (1981), personhood consists of six different themes: Rationality, Intentionality, Stance, Reciprocity, Communication, and Consciousness. Providing an NPC with a role satisfies intentionality, as each action should be motivated by what the NPC was designed to achieve. However, giving them goals and aspirations allows the NPC to have a stance and perhaps even consciousness (Kalbiyev, 2022). If a blacksmith's objective is to raise enough money for their family, they should act and speak accordingly. Their actions and dialogue should not solely reflect their personality but also their objective. This feature allows the schema to capture complex and dynamic NPCs with intricate values and goals not fully represented by their role or personality. The addition of this feature enables the NPC to have a greater purpose than merely serving as an outlet for exposition or facilitating a game function.
With these features, we propose that each unique NPC can be encapsulated and represented wholly, as shown in Figure 2. Each one of these features is independent of the others, allowing for modularity when designing NPCs. However, clashing combinations may still exist regardless of the modularity.

FIGURE 2 (example feature characterization)
World: A fantasy world of Dragons and magic; Skyrim
Role: Questgiver
Personality: Nord, Jarl of Whiterun, Loyal, Noble, Blonde, reasonable
State: Sitting on throne in Dragonsreach. Contemplating the war and recent reports of dragons
Goal: The safety and prosperity of the people of Whiterun and a solution to the looming dragon threat.
Module 2: Prompt Creation
Prompt creation was designed with the feature representation schema in mind. Providing the LM with sufficient information about an NPC is crucial to ensure that the generated dialogue remains consistent with the character's identity. These requirements are akin to the challenges faced by the feature representation schema. Consequently, the prompt creation module integrates the various features present in the schema and uses them as a prompt. The first line of each prompt begins with the sentence "You are an NPC in a game", followed by optional details such as a name, some details about the world that the NPC inhabits, the role of the NPC, basic personal characteristics, their current state (e.g., sitting outside thinking about their daughter), and finally their goal(s). Most of these categories are optional, except for the NPC type (i.e., their role), which must always be present. By incorporating these features, the prompt creation module empowers users to guide the LM in generating diverse NPCs with individualized personalities, allowing for greater customization without the need for prior fine-tuning or training.
NPC Header. Utilizing this prompt creation method, we created the NPC header; a representative example is depicted in Figure 3. This header plays a pivotal role in dialogue generation by providing essential information about the character. For our needs, we also created a player header using the same information as the NPC header, guiding the LM to mimic a player's behavior and facilitate automated dialogue generation. The generated player dialogue is less creative and more prone to repetition compared to human-written dialogue. This issue is beyond the scope of this paper, as our focus lies on NPC dialogue generation.
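To make the schema-to-header mapping concrete, the following is a minimal sketch of how the prompt-creation module might assemble an NPC header. The dataclass fields mirror the five-feature schema, while the field labels and helper function are our own illustration, not the authors' code.

```python
# Hypothetical sketch of the prompt-creation module (Module 2).
# Only the role is mandatory, matching the paper's description.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NPCSchema:
    role: str                      # mandatory NPC type, e.g. "Questgiver"
    name: Optional[str] = None
    world: Optional[str] = None    # world description
    personality: Optional[str] = None
    state: Optional[str] = None    # current game state / location
    objective: Optional[str] = None

def build_npc_header(npc: NPCSchema) -> str:
    """Assemble the NPC header used to condition the language model."""
    lines = ["You are an NPC in a game."]
    if npc.name:
        lines.append(f"Name: {npc.name}")
    if npc.world:
        lines.append(f"World: {npc.world}")
    lines.append(f"Role: {npc.role}")  # role is always present
    if npc.personality:
        lines.append(f"Personality: {npc.personality}")
    if npc.state:
        lines.append(f"State: {npc.state}")
    if npc.objective:
        lines.append(f"Goal: {npc.objective}")
    return "\n".join(lines)

balgruuf = NPCSchema(
    role="Questgiver",
    name="Balgruuf the Greater",
    world="A fantasy world of Dragons and magic; Skyrim",
    personality="Nord, Jarl of Whiterun, Loyal, Noble, reasonable",
    state="Sitting on throne in Dragonsreach",
    objective="The safety and prosperity of the people of Whiterun",
)
print(build_npc_header(balgruuf))
```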
Module 3: Dialogue Generation
Dialogue generation was executed automatically and iteratively. The prompt was structured as a combination of the header and the current dialogue history. The header section is continually swapped depending on which agent's dialogue (NPC or player) is currently being generated. By placing the header at the top of the prompt and swapping it for the active agent, PLMs can generate dialogue that is coherent with the current speaker and their traits.
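The alternating header swap can be pictured as in the sketch below. It assumes a generic `complete(prompt)` wrapper around a text-completion API; the turn order, speaker labels, and stopping criterion are our illustration rather than the paper's implementation.

```python
# Minimal sketch of Module 3's alternating generation loop.
# `complete` stands in for any text-completion API call and must be
# supplied by the user.
def complete(prompt: str, **params) -> str:
    raise NotImplementedError("plug in an LM client here")

def generate_dialogue(npc_header: str, player_header: str,
                      first_sentence: str, n_turns: int = 10) -> list[str]:
    history = [f"Player: {first_sentence}"]  # hand-written opener
    for turn in range(n_turns):
        npc_is_speaking = (turn % 2 == 0)
        header = npc_header if npc_is_speaking else player_header
        speaker = "NPC" if npc_is_speaking else "Player"
        # The active agent's header sits on top of the shared history.
        prompt = header + "\n\n" + "\n".join(history) + f"\n{speaker}:"
        history.append(f"{speaker}: {complete(prompt).strip()}")
    return history
```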
First Sentences. In early development-stage results, GPT-3 demonstrated difficulty in generating effective first sentences. Combined with the inherent challenge of generating human-like responses, this led to a significant drop in the overall quality of dialogue, often resulting in both NPC and player generating blank lines or constantly repeating the same responses. A workaround was developed by employing a small set of hand-written first sentences based on the genre and NPC type. This workaround allowed the conversation to avoid immediate repetition while minimizing interference with dialogue generation.
Repetition. In our preliminary testing, we found that PLMs struggle to avoid repetition when the player dialogue is similar to a past query or sentence. This often caused the NPC's response to be similar or even identical to its previous response.
To circumvent this issue, we implemented a dynamic frequency penalty. The dynamic frequency penalty incrementally increases when the NPC or player generates a response that already exists in the conversation. After detecting a repetition and incrementing the frequency penalty, the LM attempts to regenerate with the same prompt, excluding the repeated sentence. This process occurs up to three times or until a new sentence is generated, after which the frequency penalty is reset to its original, pre-increment value. This technique significantly reduced overall repetitions and drastically decreased the occurrence of loops appearing early in the conversation.
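A sketch of this retry logic is shown below; the penalty increment and starting value are illustrative assumptions, as the paper does not report its exact settings.

```python
# Sketch of the dynamic frequency penalty. `complete_fn` is any
# completion callable that accepts a frequency_penalty argument.
def generate_with_penalty(complete_fn, prompt: str, history: list[str],
                          base_penalty: float = 0.0,
                          increment: float = 0.3,
                          max_retries: int = 3) -> str:
    penalty = base_penalty
    utterance = complete_fn(prompt, frequency_penalty=penalty).strip()
    retries = 0
    # Regenerate with a stiffer penalty while the output repeats a
    # sentence already present in the conversation.
    while utterance in history and retries < max_retries:
        penalty += increment
        utterance = complete_fn(prompt, frequency_penalty=penalty).strip()
        retries += 1
    return utterance  # the penalty itself resets before the next turn
```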
Evaluation
To assess the performance of the NPC construction pipeline and the resulting generated dialogue, we designed a comprehensive evaluation metric that examines dialogue quality based on coherency, believability, degree of repetition, alignment of the NPC's dialogue with their role, and fittingness of the NPC's dialogue within their world. These categories draw from and adapt Kalbiyev (2022)'s metric for evaluating video game dialogue. Each metric is assigned a score between one and five, with the sum of these scores indicating the overall quality of the dialogue. Self-diagnosis harnesses the capacity of Transformer-based language models to detect patterns within text and their few-shot learning performance to enable rapid, automated evaluation of dialogue without prior fine-tuning. We conducted a human evaluation of 66 different NPC scripts to assess the accuracy and reliability of our self-diagnosis approach. After each conversation was evaluated and scored, we found a correlation between parameters and their average score. By including our full NPC header, we were able to generate dialogue of higher quality. We then conducted a single-blind test where human judges were asked to determine whether an NPC script was generated by AI or written manually by a human.
Self-Diagnosis
We investigated the ability of pretrained language models, such as GPT-3, to understand, evaluate, and diagnose dialogue when given a specific nontrivial query (e.g., "whether an NPC behaved coherently"). Schick et al. (2021) demonstrate that PLMs can identify socially undesirable attributes in text, such as racism and violence. We propose that this self-diagnosis capability is not only applicable to socially undesirable attributes but also enables PLMs to self-diagnose a broader and more general set of attributes, themes, and behaviors without further fine-tuning. For simple questions, such as whether a genre is clearly distinguishable in a text, PLMs perform accurately in a zero-shot environment without examples and further guidance. This behavior is supported by Sanh et al. (2022). However, this performance does not hold when dealing with more complicated and potentially subjective questions. Our self-diagnosis approach consists of providing examples of dialogue with different scores for each metric that needed further clarification. By "scoring dialogue", we mean, for example, giving the LM a prompt like "What a perfect score looks like" or "What a 3 should look like". In preliminary tests, we found that simply inputting a script and posing a question led to relatively reliable results; however, the output occasionally did not align with human responses or logic. By formulating the question more precisely and asking for a numeric response rather than a free-form sentence response, we were able to obtain a numeric answer more accurately. To account for potential variability in the responses, we set the temperature to 0 for each test, yielding a deterministic model devoid of stochastic behavior. We leveraged the PLM's few-shot learning abilities by adding three examples of differently scored sample dialogue before the prompt. This approach aligns scores obtained through self-diagnosis more closely with human scores on queries that a PLM would otherwise have difficulties with.
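The few-shot scoring query might be organized as in the following sketch. The metric name, prompt wording, and answer parsing are our assumptions; the three scored examples and the temperature of 0 follow the paper.

```python
# Sketch of the few-shot self-diagnosis query. `complete_fn` is any
# completion callable that accepts a temperature argument.
def self_diagnose(complete_fn, script: str,
                  examples: list[tuple[str, int]],
                  metric: str = "coherency") -> int:
    shots = "\n\n".join(
        f"Script:\n{ex}\nScore for {metric} (1-5): {score}"
        for ex, score in examples
    )
    prompt = f"{shots}\n\nScript:\n{script}\nScore for {metric} (1-5):"
    # temperature=0 removes sampling noise, making scores deterministic.
    answer = complete_fn(prompt, temperature=0)
    digits = [c for c in answer if c.isdigit()]
    return int(digits[0]) if digits else 0  # expect a single digit 1-5
```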
The Turing Quest
To evaluate the performance of our NPC construction pipeline and the degree to which the resulting generated dialogue appears human-written, we propose a test tailored to NPC dialogue: the Turing Quest. Inspired by the Turing test (Turing, 1950), the goal of this test is to determine whether a generated NPC script can be distinguished from human-written dialogue by human judges. A script passes the Turing Quest if the judge deems it human-written, and fails if perceived as AI-generated. Conducting this test on multiple NPC script samples helps assess the proficiency of state-of-the-art PLMs in generating convincing NPC dialogue. The Turing Quest is a self-administered questionnaire. For each script, it asks the judge to determine if the NPC's dialogue is written by a human or an AI. Since the scope of this test is to determine the believability of an NPC's dialogue, the player's dialogue can be manually written by a human.
For our test, six NPC scripts were evaluated by 12 individual judges. Four of the six scripts were generated by GPT-3, one was manually written, and the final script was sampled from the game Skyrim. Our test group comprised twelve people familiar with video games and NPCs. From the responses of our judges, we determined the average passing rate was 64.58% for all AI-generated scripts. The best performing generated script had a pass rate of 75%. Interestingly, 75% of judges believed that the dialogue sampled from Skyrim was AI-generated and 50% thought the same for the manually written script. This could highlight the expectations of players regarding the current state and abilities of LMs and conversational agents. These findings provide strong empirical evidence that our pipeline, when applied to PLMs, is capable of producing NPC scripts that resemble and perhaps even surpass human-written NPC dialogue.
Parameter Search and Model Selection
We conducted a comprehensive random grid parameter search to identify the optimal model and parameters for generating high-quality NPC dialogue. Three key parameters influenced the quality and score of the generated dialogue: the language model, temperature setting, and the integration of our NPC construction pipeline prompt.
Utilizing different versions of GPT-3 (OpenAI's text-davinci-002, text-curie-001, and text-babbage-001 models) and a range of temperatures (0 to 1, incremented by 0.1), we compared the quality of dialogue generated with our full prompt and a minimal version without the world description, NPC Personality, game state, and NPC objective sections. We repeated the experiment with another NPC role to ensure generalizability.¹ Our analysis revealed a significant decline in quality from the text-davinci-002 to text-curie-001 models, and an even more pronounced decrease between text-curie-001 and text-babbage-001. This is consistent with recent research which has shown that larger and more complex models, such as GPT-3's text-davinci-002 model, have the ability to learn and generalize more complex patterns from larger and more diverse datasets, resulting in better performance across a wide range of natural language processing tasks (Brown et al., 2020).
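Such a search can be organized as in the sketch below. The model list and temperature grid follow the paper; `score_config` stands in for the full generation-and-scoring pipeline and is only stubbed here.

```python
# Sketch of the random grid parameter search (models x temperatures
# x prompt variants); sampling a random subset bounds API costs.
import itertools
import random

MODELS = ["text-davinci-002", "text-curie-001", "text-babbage-001"]
TEMPERATURES = [round(0.1 * t, 1) for t in range(11)]  # 0.0 .. 1.0
PROMPT_VARIANTS = ["full", "minimal"]

def score_config(model: str, temperature: float, variant: str) -> float:
    """Generate a script with this configuration and score it via
    self-diagnosis (stubbed here)."""
    return 0.0  # placeholder: call the generation + scoring pipeline

grid = list(itertools.product(MODELS, TEMPERATURES, PROMPT_VARIANTS))
results = {cfg: score_config(*cfg) for cfg in random.sample(grid, k=20)}
best = max(results, key=results.get)
print("best configuration:", best)
```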
Furthermore, the recently proposed InstructGPT framework by Ouyang et al. (2022) allows for targeted fine-tuning of pre-trained language models to better suit the task at hand. This approach involves providing additional instructions during fine-tuning, such as task-specific prompts or data augmentation techniques, which results in improved performance on downstream tasks. With the success of InstructGPT, it is becoming increasingly clear that language models can be further optimized for specific use-cases by adjusting their architecture or fine-tuning process. Thus, it is reasonable to assume that newer and more advanced models, such as text-davinci-003, should generally perform better than their predecessors. Finally, our analysis shows that full-prompt models outperformed minimal-prompt ones, with an average 4.06 point higher score, demonstrating the effectiveness of our prompting method.
A Pearson correlation test (excluding the atypical data point with a temperature of 0) showed a positive correlation between temperature and score, r(8) = .7055, p = .022646. Higher temperature values yielded better results, with the highest average scores at temperatures of 0.9 and 0.8.
Based on these findings, we recommend using advanced Transformer-based LMs like OpenAI's GPT-3 "text-davinci-002" at a temperature around 0.9, along with our NPC construction pipeline, for optimal NPC script generation.
Results
Self-Diagnosis: To assess the reliability of the self-diagnosis module, we manually evaluated 66 NPC scripts using the same metrics applied in self-diagnosis. A Pearson correlation test showed a strong positive correlation between self-diagnosed and human-evaluated scores, r(64) = .8092, p < .00001. This demonstrates the module's consistency and correlation with human evaluation scores.
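For reference, a correlation check of this kind can be computed with SciPy as sketched below; the score arrays are placeholders, not the paper's 66 paired ratings.

```python
# Pearson correlation between self-diagnosed and human scores.
from scipy import stats

self_diagnosed = [18, 22, 15, 24, 20, 23]  # placeholder total scores
human_scores   = [17, 23, 14, 25, 19, 22]  # placeholder total scores

r, p = stats.pearsonr(self_diagnosed, human_scores)
print(f"r = {r:.4f}, p = {p:.5f}")
```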
Turing Quest Results: Our NPC construction pipeline, when using the recommended parameters, generates dialogue that not only passes as human-written but also scores highly on the evaluation metric. On average, our generated dialogue was thought to be hand-written 64.58% of the time, with the best-performing script passing as human-written 75% of the time. The generated NPC scripts exhibit goal-oriented behavior and adherence to the in-game world and genre, maintaining player immersion. The Turing Quest results further confirm the high quality of the generated dialogue.
Conclusion
We developed a novel pipeline capable of automatically generating NPC scripts of comparable or superior quality to human-written NPC dialogue using Transformer-based PLMs. We then created a self-diagnosis module which provides a method to evaluate and compare the quality of NPC dialogue quantitatively. Finally, our proposal of the Turing Quest allows us to determine the capabilities of a language model when applied to the task of NPC dialogue generation and whether a script passes as human-written. While the NPC construction pipeline allows for modularity even in between responses, that aspect was not explored in depth in this paper. We will explore dialogue generation for dynamic NPCs with evolving roles or attributes in future research.
Limitations
The dialogue generated for the player exhibits a higher degree of repetition and has a tendency towards looping. This limitation exists because we did not focus on generating player dialogue, which is a distinct problem in its own right. To account for this limitation, both the self-diagnosis and the Turing Quest only evaluate the NPC's dialogue.
Currently, the maximum context window for the dialogue history portion is limited by the maximum tokens of a given model minus the tokens required for the NPC header. Although it is a rare occurrence, the dialogue history may become so long that the model cannot generate any responses because there is no remaining space. We did not experience this problem; however, a workaround would be to discard the oldest dialogue history entry as needed. This approach, however, may cause the NPC to lose information that it could otherwise leverage in dialogue.
Ethics Statement
The presence of bias within NPC models/systems poses a significant risk, particularly as the demographic of young individuals, still at a developmental age, who enjoy playing video games continues to expand. In 2006, 92% of children aged 2-17 had played video games (Dogan, 2006), and 97% of players under the age of 18 play more than an hour of games daily (Granic et al., 2014). According to recent statistics, the global demographic of active video game players is projected to increase over 5% year-over-year (Dogan, 2006), reaching over 3 billion active players worldwide in 2023.² This means that, in the future, video games will reach more young children and adolescents. If the presence of bias is not addressed, it could subconsciously normalize in children the problematic behaviours seen in games, as humans are a product of both nature and nurture (Plomin and Asbury, 2005). This in turn may lead to more biases being overlooked or ignored by the next generation of researchers, creating a vicious cycle. | 2023-07-09T13:15:26.119Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "1852cb7abf74cde720f15f57d70e2b261b46fec2",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "ACL",
"pdf_hash": "1852cb7abf74cde720f15f57d70e2b261b46fec2",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
245547427 | pes2o/s2orc | v3-fos-license | Hydrodynamic Behavior Analysis of Agitated-Pulsed Column by CFD-PBM
ABSTRACT Recently, a newly designed type of energy-input extraction column, the agitated-pulsed column (APC), has achieved high mass-transfer efficiency due to its small droplet sizes and high dispersed-phase holdup. In view of the lack of feasible population-balance-model (PBM) kernel functions for the APC, parameter optimization is conducted by a simplified PBM method. The optimized PBM kernel functions are then implemented in the computational fluid-dynamics (CFD) code to investigate local two-phase flow behaviors in a 25 mm APC. The results show that the CFD-PBM successfully predicts the drop-size distribution measured in the experiments. CFD-PBM also gives good predictions of the Sauter mean diameter and the dispersed-phase holdup. The local flow behaviors are illustrated to understand the effects of operating conditions on the hydrodynamic performance. This work demonstrates the possibility of predicting drop-size distributions by the combination of a simplified PBM and CFD simulation.
Introduction
Solvent extraction is an important separation technique for a range of industries, in which one liquid phase is dispersed as droplets in another phase by gravity and energy input. [1] Various types of solvent extraction columns have been widely used in the nuclear, hydrometallurgy, and chemical sectors. [2] Because of the complexity of the two-phase interaction, the mass-transfer performance inside is significantly affected by the energy input, such as stirring, [3] rotation, [4] oscillation, [5] and pulsation. [6,7] Recently, a newly designed type of energy-input extraction column, the agitated-pulsed column (APC), [8] has been successfully used in the separation of chiral molecules [9,10] and 5-hydroxymethylfurfural, [11] and exhibits excellent mass-transfer characteristics. The efficiency is measured to be 17 and 25 theoretical stages per meter for solute transfer in the dispersed-to-continuous and continuous-to-dispersed directions, respectively. The high mass-transfer efficiency is attributed to the fact that the reduction of droplet diameter leads to a high dispersed-phase holdup and thus increases the interfacial area of contact between the phases.
Knowledge of the hydrodynamic parameters, including drop size and holdup, is crucial for the design of liquid-liquid extraction columns, as they are related to the interfacial area for mass transfer and the allowable throughputs. [12,13] Recently, computational fluid dynamics (CFD) has been considered a powerful mathematical tool for performance modeling and for helping in the scale-up of extractors. [14][15][16][17][18][19][20][21][22] Moreover, the coupling of CFD and the population-balance model (PBM) provides an opportunity to reproduce the two-phase flow characteristics more realistically. [23] The main challenge in the application of the PBM is to determine its kernel functions, including the drop-breakage frequency, the daughter droplet-size distribution, and the coalescence rates.
In the past decades, different kernel functions have been derived theoretically to express the behavior of droplet breakup and coalescence. [24][25][26][27] Although several kernel functions have been applied to the CFD-PBM simulation of two-phase flow in extraction columns, [28,29] validation was done only for the Sauter mean diameter, which may be insufficient to identify which one is feasible in CFD-PBM simulation. [30] Considering the lack of feasible PBM kernel functions in the PDDC, Zhou et al. [31] developed a direct measurement method for the breakage of droplets by directly counting the breakup frequency in the real contactor. Empirical kernel functions were established for the PBM. However, different parameters of the kernel functions may be required for different internal structures and materials, where expensive experiments are inevitable. [32] Thus, the development of an inverse population-balance model that can optimize the parameters of the existing models for extraction columns is necessary. [33,34] In order to solve the inverse population-balance problem, a simplified PBM method was developed in previous studies. [35] Thus, the main objectives of this study are to combine this method with CFD-PBM simulation and to gain a deeper understanding of the flow characteristics in the APC, which can be helpful for the optimization of operation as well as the design of this type of extractor.
In this study, CFD-PBM simulations are conducted to investigate the local hydrodynamics and drop dynamics in the APC. The PBM kernel functions obtained from the literature are improved by a simplified PBM method. The drag law is modified to account for the effect of turbulence. Simulation results for the drop-size distribution are presented and compared to the experimental measurements of our previous work. The simulated dispersed-phase holdup, x_d, and Sauter mean drop diameter, d_32, are also compared with experimental data for verification. The local variations of the liquid-liquid flow characteristics are discussed with the simulation results as well as profiles of the experimental dispersed-phase holdup.
Euler-Euler model
In the present work, the Eulerian-Eulerian model is used to describe the two-phase flow; it treats both phases as interpenetrating continua by introducing the concept of a local volume fraction, α. The conservation equations for the ith phase are given as:

Continuity equation:

$$\frac{\partial}{\partial t}(\alpha_i \rho_i) + \nabla\cdot(\alpha_i \rho_i \vec{u}_i) = 0$$

Momentum conservation equation:

$$\frac{\partial}{\partial t}(\alpha_i \rho_i \vec{u}_i) + \nabla\cdot(\alpha_i \rho_i \vec{u}_i \vec{u}_i) = -\alpha_i \nabla p + \nabla\cdot\bar{\bar{\tau}}_i + \alpha_i \rho_i \vec{g} + \vec{F}_i + \vec{F}_{i,\mathrm{lift}} + \vec{F}_{i,\mathrm{vm}} + \vec{R}_{ij}$$

where $\vec{u}_i$ is the phase velocity, ρ is the density, $\bar{\bar{\tau}}$ represents the stress-strain tensor, p represents the pressure shared by all phases, $\vec{g}$ is the gravitational acceleration, $\vec{F}_i$ is the external body force, $\vec{F}_{i,\mathrm{lift}}$ is the lift force, and $\vec{F}_{i,\mathrm{vm}}$ is the virtual mass force acting on the ith phase. $\vec{R}_{ij}$ is the interaction force between the ith and jth phases.
Because a two-phase condition is used in this study (the aqueous phase is set as the dispersed phase, d, and the organic phase as the continuous phase, c), the volume fractions of the two phases satisfy

$$\alpha_c + \alpha_d = 1$$

Among the interaction forces $\vec{R}_{ij}$, the drag force is the dominant force for liquid-liquid interactions, while the virtual mass force and the lift force can be neglected. [20,36,37] The drag force term between the continuous and dispersed phases is defined in terms of the interphase exchange coefficient $F_{c,d}$ as

$$\vec{R}_{c,d} = F_{c,d}\,(\vec{u}_d - \vec{u}_c)$$

in which the interphase exchange coefficient is calculated as

$$F_{c,d} = \frac{3}{4}\,\frac{\alpha_c \alpha_d \rho_c C_D}{d_{32}}\,\lvert \vec{u}_d - \vec{u}_c \rvert$$

where $d_{32}$ is the Sauter mean diameter of the dispersed phase,

$$d_{32} = \frac{\sum_i n_i d_i^3}{\sum_i n_i d_i^2}$$

and $n_i$ is the number density of drops of size $d_i$. $C_D$ is the drag coefficient, related to the relative Reynolds number

$$\mathrm{Re} = \frac{\rho_c \lvert \vec{u}_d - \vec{u}_c \rvert\, d_{32}}{\mu_c}$$

For the drag-coefficient calculation, the standard correlation of Schiller and Naumann is used: [38]

$$C_D = \begin{cases} \dfrac{24}{\mathrm{Re}}\left(1 + 0.15\,\mathrm{Re}^{0.687}\right), & \mathrm{Re} \le 1000 \\[4pt] 0.44, & \mathrm{Re} > 1000 \end{cases}$$

This basic drag correlation predicts droplets moving in a still liquid well but may not be suitable for droplets moving in a turbulent liquid. [39] Thus, a modified drag law is used to take into account the effect of turbulence, [40] in which the Reynolds number is evaluated with an effective viscosity:

$$\mathrm{Re} = \frac{\rho_c \lvert \vec{u}_d - \vec{u}_c \rvert\, d_{32}}{\mu_c + C\,\mu_{t,m}}$$

where C is an empirical constant introduced to represent the reduction of the relative velocity due to turbulent flow. [14,39,40] The suggested value C = 0.5 is adopted. [19,41] The variable $\mu_{t,m}$ is the turbulent viscosity, which can be calculated from Equation (15).
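As a reading aid, the drag evaluation above can be sketched in a few lines. This is a minimal sketch assuming SI units; the variable names are ours, and the turbulent viscosity would be supplied externally by the turbulence model (Equation (15) in the paper).

```python
# Sketch of the turbulence-modified Schiller-Naumann drag coefficient.
def drag_coefficient(rho_c, mu_c, mu_t_m, d32, slip_velocity, C=0.5):
    """Return C_D using the turbulence-modified Reynolds number."""
    re = rho_c * abs(slip_velocity) * d32 / (mu_c + C * mu_t_m)
    if re <= 1000.0:
        return 24.0 / re * (1.0 + 0.15 * re**0.687)
    return 0.44

# Example: a 2 mm drop slipping at 0.05 m/s through the organic phase.
cd = drag_coefficient(rho_c=800.0, mu_c=2.0e-3, mu_t_m=1.0e-3,
                      d32=2.0e-3, slip_velocity=0.05)
print(f"C_D = {cd:.3f}")
```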
Turbulence model
The multiphase turbulence in this study is described by the realizable k-ε mixture turbulence model, which solves separate transport equations for the turbulence kinetic energy k and its dissipation rate ε:

$$\frac{\partial}{\partial t}(\rho_m k) + \nabla\cdot(\rho_m \vec{u}_m k) = \nabla\cdot\left[\left(\mu_m + \frac{\mu_{t,m}}{\sigma_k}\right)\nabla k\right] + G_k + G_b - \rho_m \varepsilon$$

$$\frac{\partial}{\partial t}(\rho_m \varepsilon) + \nabla\cdot(\rho_m \vec{u}_m \varepsilon) = \nabla\cdot\left[\left(\mu_m + \frac{\mu_{t,m}}{\sigma_\varepsilon}\right)\nabla\varepsilon\right] + \rho_m C_1 S \varepsilon - \rho_m C_2 \frac{\varepsilon^2}{k + \sqrt{\nu_m \varepsilon}}$$

where S represents the strain rate tensor, G_k accounts for the effect of mean velocity gradients in the generation of turbulence kinetic energy, and G_b represents the effect of buoyancy in the generation of turbulence kinetic energy. The subscript m represents the mixture of the two phases. The mixture-averaged values of the density and velocity are calculated from:

$$\rho_m = \alpha_c \rho_c + \alpha_d \rho_d, \qquad \vec{u}_m = \frac{\alpha_c \rho_c \vec{u}_c + \alpha_d \rho_d \vec{u}_d}{\rho_m}$$
Population-balance model
The PBM describes the evolution of the number density of drops of different sizes, with source terms accounting for droplet breakage and coalescence, and is given as:

$$\frac{\partial n(d,t)}{\partial t} + \nabla\cdot\left(\vec{u}\,n(d,t)\right) - \nabla\cdot\left(\Gamma_t \nabla n(d,t)\right) = S(d,t)$$

where $\vec{u}$ and $\Gamma_t$ are the mean droplet velocity and the diffusion coefficient of the number density, respectively. S(d,t) is the source term accounting for the coalescence and breakup of the droplets and is generally expressed as:

$$S(d,t) = B_B(d,t) - D_B(d,t) + B_C(d,t) - D_C(d,t)$$

where $D_B(d,t)$ and $B_B(d,t)$ are, respectively, the death and birth rates due to breakage, and $D_C(d,t)$ and $B_C(d,t)$ are, respectively, the death and birth rates due to coalescence. The general equations of the birth and death rates are given as:

$$B_B(d,t) = \int_d^{\infty} g(d')\,\beta(d,d')\,n(d',t)\,\mathrm{d}d', \qquad D_B(d,t) = g(d)\,n(d,t)$$

$$B_C(d,t) = \frac{1}{2}\int_0^d h(d',d'')\,\lambda(d',d'')\,n(d',t)\,n(d'',t)\,\frac{d^2}{(d'')^2}\,\mathrm{d}d', \qquad d'' = \left(d^3 - d'^3\right)^{1/3}$$

$$D_C(d,t) = n(d,t)\int_0^{\infty} h(d,d')\,\lambda(d,d')\,n(d',t)\,\mathrm{d}d'$$

where g(d) is the breakup rate and β(d,d') is the distribution of daughter droplets. λ(d,d') and h(d,d') are the coalescence efficiency and collision frequency, respectively, between drops of diameter d and d'. The product of the two terms is defined as the coalescence rate. In the literature, several kernels have been derived theoretically. One of the fundamental and widely accepted models for droplet breakup and coalescence is due to Coulaloglou and Tavlarides [25]:

$$g(d) = C_1 \frac{\varepsilon^{1/3}}{d^{2/3}(1+\phi)} \exp\!\left(-C_2 \frac{\sigma (1+\phi)^2}{\rho_d\, \varepsilon^{2/3} d^{5/3}}\right)$$

$$h(d,d') = C_3 \frac{\varepsilon^{1/3}}{1+\phi}\,(d+d')^2 \left(d^{2/3} + d'^{2/3}\right)^{1/2}$$

$$\lambda(d,d') = \exp\!\left(-C_4 \frac{\mu_c \rho_c \varepsilon}{\sigma^2 (1+\phi)^3} \left(\frac{d\,d'}{d+d'}\right)^4\right)$$
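The Coulaloglou-Tavlarides kernels translate directly into code. The sketch below assumes the forms reconstructed above, with C1-C4 as the adjustable parameters that the simplified PBM method refits; any numerical values passed in are placeholders.

```python
# Sketch of the Coulaloglou-Tavlarides breakage and coalescence kernels.
import math

def breakage_rate(d, eps, phi, sigma, rho_d, C1, C2):
    """Breakup frequency g(d) for a drop of diameter d."""
    return (C1 * eps**(1/3) / (d**(2/3) * (1 + phi))
            * math.exp(-C2 * sigma * (1 + phi)**2
                       / (rho_d * eps**(2/3) * d**(5/3))))

def coalescence_rate(d1, d2, eps, phi, sigma, rho_c, mu_c, C3, C4):
    """Collision frequency h times coalescence efficiency lambda."""
    h = (C3 * eps**(1/3) / (1 + phi)
         * (d1 + d2)**2 * (d1**(2/3) + d2**(2/3))**0.5)
    lam = math.exp(-C4 * mu_c * rho_c * eps / (sigma**2 * (1 + phi)**3)
                   * (d1 * d2 / (d1 + d2))**4)
    return h * lam
```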
Numerical procedure
The schematic representation of the experimental apparatus is shown in Figure 1(a), and the key dimensions of the pilot-plant column are listed in Table 1. The organic phase (continuous phase) used in this study is 30% tributyl phosphate (supplied by Myer, Shanghai) in Shellsoll 2046. The aqueous phase (dispersed phase) is tap water. Table 2 shows the physical properties of the experimental system used in this study.
A simulation domain of the laboratory-scale APC is designed as shown in Figure 1(b) for the CFD-PBM simulations. In this work, four agitated cells are used to keep the computational time within reasonable limits. A 3D periodic grid is adopted because of the rotational symmetry of the plates and stirrer: a 60-degree sector is modelled, as shown in Figure 1(b), with periodic boundary conditions imposed on its two side faces. The mesh is constructed with quadrilateral cells, with a cell size of 0.5 mm in the effective domain of the column and 1 mm at the inlet and outlet of the domain. A mesh-independence test was conducted (Supplemental Materials, Figure S1), and a mesh of 251,645 cells was found to be optimum and used in the final simulations. All internal walls are set with a no-slip condition, and the near-wall region is modelled with the standard wall function. The boundary conditions are set as shown in Figure 1(b). An annular domain is created around the impeller to simulate its agitation using the moving reference frame (MRF) model. A moving-wall boundary condition is applied to the wall of the shaft. A pressure-outlet condition is defined at the top boundary for the organic outlet. The aqueous inlet, aqueous outlet, and organic inlet boundaries are defined as velocity inlets. A sinusoidal normal velocity is superposed on the inlet of the organic phase to represent the pulsation input:
$$v_{\mathrm{in}}(t) = V_c + V_p, \qquad V_p = \pi A f \sin(2\pi f t),$$
where $V_c$ represents the flow rate of the continuous phase and $V_p$ represents the pulsation velocity calculated from its amplitude A and frequency f. Although a universal size range has no remarkable influence on steady simulation results, an appropriate droplet-size range for each condition gives a better simulation result. [30] The advisable drop-size range for each operating condition is estimated from experimental results and is listed in Table 3. Ten bins are used to represent the drop-size range, and the initial drop-size distribution at the dispersed-phase inlet is set to 100% of the maximum diameter. The exponent, q, is used in the discretization of the diameter coordinate, successive bins being related by a fixed volume ratio of $2^q$. Simulations are performed in double-precision mode using the commercial CFD software ANSYS STUDENT FLUENT 2019R2. The pressure-based solver with implicit formulation is used to solve the model equations numerically, and the coupled SIMPLE scheme is chosen for pressure-velocity coupling. The time step size is 0.002 s, and all governing equations are discretized with a second-order upwind method. The simulation results are considered converged only when all the residuals of the equations are less than $10^{-4}$.
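The pulsation boundary condition is simple enough to sketch directly. The short Python snippet below evaluates the assumed sinusoidal inlet profile $v_{\mathrm{in}}(t) = V_c + \pi A f \sin(2\pi f t)$ over one period; the functional form $\pi A f$ for the pulsation amplitude and all numbers are assumptions for illustration, not values from the paper.

```python
import numpy as np

def inlet_velocity(t, v_c, amplitude, frequency):
    """Organic-phase inlet velocity: steady superficial velocity V_c plus the
    sinusoidal pulsation component, assumed as V_p = pi*A*f*sin(2*pi*f*t)."""
    return v_c + np.pi * amplitude * frequency * np.sin(2.0 * np.pi * frequency * t)

t = np.linspace(0.0, 1.0, 9)   # one pulsation period at f = 1 Hz
print(inlet_velocity(t, v_c=0.01, amplitude=0.008, frequency=1.0))
```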
Modification of kernel functions
Considering the lack of feasible droplet-breakage and coalescence-frequency functions for the APC, a simplified PBM method is developed to optimize the parameters of the PBM kernel functions. [35] In the regression of parameters, the empirical correlation of the energy dissipation rate ε is important, as it directly affects the droplet breakage and coalescence frequencies. Thus, a precise correlation for ε must be determined before the optimized PBM kernel functions can be implemented in the CFD simulation. The average energy dissipation rate under different operating conditions obtained by CFD simulations is presented in Figure 2. Because of the inherent transient nature of the flow, simulation results are obtained by time-averaging over a pulsation cycle. To predict the energy dissipation rate in this type of column, a correlation is proposed based on Kumar and Hartland [42] and Coulaloglou and Tavlarides [25] (Equation (29)). The average absolute relative deviation (AARD) of the values calculated by applying Equation (29) to the simulated results is used to evaluate the accuracy of the prediction. The parameters suggested by Kumar and Hartland [42] and Coulaloglou and Tavlarides [25] result in an AARD of 59.1%, which may be attributed to differences in internal structure and materials between this work and the previous work. To obtain a better prediction, the parameters in Equation (29) are refitted, reducing the AARD to about 1.87%. The refitted correlation is given in Equation (31), and the comparison of the simulated energy dissipation rate with the predicted results is shown in Figure 2.
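The AARD used throughout this comparison is a one-line statistic; the following minimal Python sketch (with toy numbers, not the paper's data) shows the computation used to score a correlation against the simulated energy dissipation rates.

```python
import numpy as np

def aard(reference, predicted):
    """Average absolute relative deviation, in percent."""
    reference = np.asarray(reference, float)
    predicted = np.asarray(predicted, float)
    return 100.0 * np.mean(np.abs(predicted - reference) / np.abs(reference))

# toy check: a correlation that is everywhere ~2% off gives AARD ~ 2%
eps_sim = np.array([0.05, 0.12, 0.28, 0.45])
eps_fit = eps_sim * 1.02
print(aard(eps_sim, eps_fit))   # -> 2.0
```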
Based on Equation (31), the parameters of Coulaloglou and Tavlarides [25] are refitted with the simplified PBM proposed in our previous studies. [35] The sensitivity of the Coulaloglou and Tavlarides model to the adjusted parameters is presented in Figure S2 (Supplemental Materials). A sketch of the regression with the simplified PBM model is shown in Figure 3, and the optimized parameters are listed in Table 4. The parameters vary with the type of column and the physical properties (Supplemental Materials, Figure S3); thus, one needs to choose the most appropriate parameters for a given system.
Validation and comparison
For validation, the simulated cumulative volume droplet-size distributions (DSD) are compared with our previous experimental results measured at the middle part of the column. [35] Figure 4 shows an acceptable agreement in the DSD between the CFD-PBM simulation and the experimental data, which indicates the reliability of the regressed parameters for calculating the drop-size distribution in the APC. The DSD predicted by the simplified PBM is observed to be more precise than that of the CFD-PBM simulation. The deviations between the CFD-PBM simulation and the experimental results might be attributed to the assumption, made in the parameter regression, of a homogeneous distribution of the turbulent energy-dissipation rate, which differs from the real conditions. Figure 4(d) also shows that the proportion of small-diameter droplets simulated by CFD-PBM is overestimated compared with the experimental data. The reason is probably that the non-homogeneous distribution of the turbulent energy-dissipation rate shown in Figure 5 leads to energy focusing and enhances the breakup of smaller droplets. The dispersed-phase holdup and Sauter mean drop diameter, d32, are also obtained from the droplet-size distribution and compared with the experimental results from our previous work. [35,44] Figure 6 indicates that the dispersed holdup and Sauter mean diameter simulated with CFD-PBM follow the same trends as the experimental results. Figure 7 depicts a more systematic comparison between experimental and simulated results, in which the deviation is always within ±15% for dispersed holdup and ±20% for Sauter mean drop diameter. The average absolute relative error over all data points is about 10.84% for dispersed-phase holdup and 9.40% for Sauter mean drop diameter, which suggests that the established computational approach is reliable for simulating the two-phase flow behavior in the APC.
Local behaviors of two-phase flow
The local behaviors of the two-phase flow provide useful insights into the flow characteristics in the APC and are important for understanding the effect of operating conditions on hydrodynamic performance. Figure 8 depicts the velocity field of the dispersed phase during a pulsation period at agitation speeds of 300, 600 and 900 rpm. As the agitation speed increases, the velocity gradients become steeper and recirculation appears in the dispersed phase (Figure 9). The maximum velocity is clearly observed at the impeller tip.
At high agitation speed (900 rpm), two large vortices, one above and one beneath the impeller, are observed; these extend the residence time of the dispersed-phase droplets, leading to an increase of the dispersed-phase holdup as well as of the axial dispersion. These observations agree with the PIV measurements presented by Hlawitschka and Bart [45] for a Kühni column. At the negative peak of the pulsation cycle (t = 0.25 s), the vortex beneath the impeller is stronger than the vortex above it, which is the reverse of the situation at the positive peak (t = 0.75 s). It is noted that as the agitation speed increases, the vortex scale changes insignificantly, while the vortex intensity is significantly enhanced.
For the APC, droplet coalescence and breakage rates are closely linked to the energy-dissipation rate ε. Figure 10 depicts the contours of the energy-dissipation rate during one pulsation period at agitation speeds of 300, 600, and 900 rpm. With increasing agitation speed, the turbulent energy-dissipation rate rapidly increases. The turbulence dissipation inside the APC is not homogeneously distributed: relatively higher energy-dissipation rates are observed at the edges of the impellers, as well as in the apertures between the impellers and the column walls, at higher agitation speeds. This agrees well with the velocity field of the dispersed phase shown in Figure 8, and is primarily due to the pronounced changes in velocity magnitude and direction, which produce high turbulence kinetic energy and hence increase the energy-dissipation rate. Figure 10(a) shows the distribution of the Sauter mean diameter at agitation speeds of 300, 600, and 900 rpm. The droplet size decreases with increasing agitation speed because of the increase of the turbulence energy-dissipation rate shown in Figure 10. Furthermore, the droplet distribution becomes more uniform at higher agitation speeds, in agreement with the experimental results shown in Figure 10(b).
At the negative peak of the pulsation cycle (t = 0.25 s), the mean droplet diameter in the top domain is always larger than that in the lower domain, owing to the continuous introduction of maximum-size droplets from the dispersed-phase inlet. [30] At the positive peak of the pulsation cycle (t = 0.75 s), the droplet swarm is entrained into the vortex, which increases the residence time of the dispersed phase and enhances droplet breakage. Hence, the local mean diameter during the positive half of the pulsation period is smaller than that during the negative half. The distributions of the dispersed-phase volume fraction at agitation speeds of 300, 600, and 900 rpm are shown in Figure 11. The dispersed-phase holdup increases with agitation speed, in agreement with the experimental results shown in Figure 10(b), and its distribution becomes more uniform at higher agitation speeds. This is attributed to the increase of the drag force acting on the fine drops.
At the negative peak of the pulsation cycle (t = 0.25 s), the dispersed phase in the vicinity of the plates is carried with the continuous phase into the cell below, which prevents flooding of single cells and of the entire column. At the positive peak (t = 0.75 s), the upward flow of the continuous phase through the plate holes prevents droplets from moving to the next plate, causing an accumulation of the dispersed phase on the plate surfaces. Moreover, at high agitation speed (N = 900 rpm), the dispersed phase is entrained into the vortex and mostly circulates within a single cell of the column, leading to a longer residence time of the dispersed phase and hence a higher holdup.
The excellent mass-transfer performance of the APC is attributed to the enhancement of droplet breakage and coalescence at high agitation speed. Figure 12 shows the droplet breakage and coalescence frequency distributions at high agitation speed (N = 900 rpm) during one pulsation period. The reference droplet size is set to 0.74 mm, the experimental Sauter mean droplet diameter. Figure 12 shows that the droplet breakage frequency is most intense at the impeller tip, which coincides with the distribution of the turbulent energy-dissipation rate shown in Figure 9. The maximum droplet coalescence frequency is located in the zones between the impellers and the plates.
Conclusion
In this study, the optimized PBM kernel functions obtained by a simplified PBM method are implemented in CFD to investigate the two-phase flow behaviors in the APC. The following conclusions can be drawn: (a) The improved breakage and coalescence kernel parameters obtained with the simplified PBM are used in the CFD-PBM simulation of the APC. The CFD-PBM results successfully predict the influence of varying pulsation intensity on the droplet-size distribution with acceptable deviation, which validates the parameters regressed with the simplified PBM method.
(b) Favorable agreement between the calculated results (with the simplified PBM) and the experimental data indicates the ability of the simplified PBM method to predict the drop-size distribution in the APC, a result highly interesting for design purposes.
(c) CFD-PBM was found to give good predictions of the dispersed-phase holdup and the Sauter mean diameter in the APC, with AARDs of 10.8% and 9.4%, respectively.
(d) The comparison between the experimental and simulated local flow behaviors has shown the feasibility of the established Euler-Euler model. The local variations of the hydrodynamics and the velocity field provide valuable insights into the effect of operating conditions on hydrodynamic performance.
In future work, the effect of physical properties on the simulation should be considered, and a mass-transfer model needs to be implemented in the CFD-PBM framework to investigate the mass-transfer performance of the APC.
"year": 2022,
"sha1": "e9f1dfb3e060a2e6583b907d8b91fbde3c395bb4",
"oa_license": "CCBY",
"oa_url": "https://figshare.com/articles/journal_contribution/Hydrodynamic_Behavior_Analysis_of_Agitated-Pulsed_Column_by_CFD-PBM/17696828/1/files/32393657.pdf",
"oa_status": "GREEN",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "2f21275d7ece8ea791a7c0fe1883b470fe0739a2",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
An Improved Bayesian Semiparametric Model for Palaeoclimate Reconstruction: Cross-validation Based Model Assessment
Fossil-based palaeoclimate reconstruction is an important area of ecological science that has gained momentum in the backdrop of the global climate change debate. The hierarchical Bayesian paradigm provides an interesting platform for studying such important scientific issue. However, our cross-validation based assessment of the existing Bayesian hierarchical models with respect to two modern proxy data sets based on chironomid and pollen, respectively, revealed that the models are inadequate for the data sets. In this paper, we model the species assemblages (compositional data) by the zero-inflated multinomial distribution, while modelling the species response functions using Dirichlet process based Gaussian mixtures. This modelling strategy yielded significantly improved performances, and a formal Bayesian test of model adequacy, developed recently, showed that our new model is adequate for both the modern data sets. Furthermore, combining together the zero-inflated assumption, Importance Resampling Markov Chain Monte Carlo (IRMCMC) and the recently developed Transformation-based Markov Chain Monte Carlo (TMCMC), we develop a powerful and efficient computational methodology.
INTRODUCTION
The science of palaeoclimate reconstruction involves predicting prehistoric climate changes by studying fossil records of species abundances (assemblages) preserved in lake sediments and a 'modern, training data set' consisting of known records of species abundances and climate values at different sites in the 'modern time', where modern time is conventionally defined as the time period from the year 1950 till present. Broadly, methods of palaeoclimate reconstruction consist of two steps. The first step is to calibrate a relationship between the observed species abundances and the observed climates using the modern, training data. It is generally assumed that the species abundances depend upon climate, not the other way. In this sense, the calibration step is a 'forward' problem. Then, assuming that the calibrated relationship holds good even in the past ages where fossil records of the species are available but not the prehistoric climates, the calibrated relationship is 'inverted' to obtain reconstructions of the past climates. Thus, the problem of climate reconstruction is an inverse problem.
In the current climate change discussion, the problem of palaeoclimate reconstruction has gained much importance. In this context, the Bayesian model-based attempt at Irish climate reconstruction using pollen assemblages by Haslett, Whiley, Bhattacharya, Salter-Townshend, Wilson, Allen, Huntley & Mitchell (2006) (henceforth, HWB) is a particularly welcome contribution. The model builds upon the palaeoclimate model of Vasko, Toivonen & Korhola (2000) (henceforth, VTK), who considered the multinomial Dirichlet model for the compositional data of chironomid assemblages (non-biting midges, well-known for providing accurate information regarding past climates; see Battarbee (2000)), and used the unimodal Gaussian function to describe the responses of the different species to climate. By unimodal Gaussian response function we mean that the expectation of the number of any particular species is a bell-shaped function of climate; there is an optimum climate value at which the species is expected to thrive the most, and deviation from the optimum climate leads to an exponential decrease in the expected number of the species.
The main modeling contribution of HWB is to propose a nonparametric approach to modelling the species response function. The reason for considering a new approach to modeling the response surfaces is that the unimodal Gaussian response function is too simplistic and may not be adequate for most of the species since the species are expected to respond differently to environmental changes, indicating that the response functions may vary from species to species, apart from being complex in nature. For a detailed discussion regarding these issues, see Ohlwein & Wahl (2012).
But in spite of the commendable attempt and the sensible results related to Irish climate reconstruction, some issues related to the model of HWB should not be overlooked. Firstly, their nonparametric model for the response surface, which is based on lattice Gaussian Markov Random Field (GMRF) (see, for example, Rue & Held (2005)), introduces a lot of parameters (around 10,000) which makes computation burdensome. Secondly, for higher dimensional climate variables the climate grid may not be feasible to construct; moreover, this would involve too many parameters, rendering computation infeasible as well. Thirdly, the unknown past climate variables are assumed to take values in the region formed by the modern climate values, which need not be an appropriate assumption for general palaeoclimate problems.
In an effort to rectify these problems, Bhattacharya (2006) (henceforth, SB) modeled the response functions as a mixture of unknown number of Gaussian functions, while using the multinomial Dirichlet distribution to model the compositional data. He applied this model to the modern training data set consisting of (modern) chironomid counts obtained from 62 lakes of Finland along with the corresponding modern temperatures, also analysed by VTK. The results of leave-one-out cross-validation showed that in 83% cases the true temperature values are included in the 95% credible intervals associated with the posteriors of SB. This was a significant improvement over the model of VTK, which had just 43% coverage of the true temperature values.
However, before applying any potential palaeoclimate model to climate reconstruction, it is desirable to validate it as rigorously as possible. Indeed, with respect to the chironomid data, neither the model of VTK nor that of SB satisfies the model adequacy test developed in Bhattacharya (2013) (see also Bhattacharya (2004)). It is shown in Bhattacharya (2004) that the model of HWB, involving the pollen data, also fails the model adequacy test, even though coverage of the observed climate values GDD5 (growing degree days above 5 • C) and MTCO (mean temperature of the coldest month) has been quite satisfactory. As demonstrated in Bhattacharya (2004) (Chapter 7), the model of HWB overfits the pollen data. In fact, although the predicted climates (modes of the posterior distributions) and the observed climates agree well with each other, the posterior distributions have large credible regions, indicating high uncertainty. Such large credible regions are responsible for the poor fit (overfit). Presumably, many of the parameters related to the response surfaces were not adequately informed by the data. Indeed, as can be seen from Figure 5 of HWB, many of the small lattice squares of the climate grid hardly contain any data point. Due to the Markov property of the GMRF assumption, the parameters associated with such lattice squares do not depend upon distant lattice squares containing enough data; hence, these parameters do not have information from the data to reduce their posterior variabilities. Hence, the credible regions turned out to be too large, resulting in overfit.
In this paper, we shall concern ourselves with assessment of model adequacy via cross-validation of the training data. We shall not attempt actual climate reconstruction in this paper. In particular, we present a hierarchical zero-inflated multinomial model for the compositional fossil data and, following SB, propose a mixture of an unknown number of Gaussian functions to model the response function of each species. The only difference between this model and that of SB is the zero-inflated multinomial model in place of the ordinary multinomial model. But importantly, this apparently simple modification resulted in quite significant improvement of the results previously obtained by SB. Indeed, with our zero-inflated multinomial model and mixtures of an unknown number of Gaussian functions, in the case of the chironomid data of VTK we have been able to include approximately 97% of the observed temperature values in our respective 95% highest posterior density (HPD) credible regions, with only 3 cases marginally missing the HPD regions. More encouragingly, our model satisfies the model adequacy test proposed in Bhattacharya (2013). Generalising our ideas to the pollen data case of HWB, we show that our model satisfies the test of adequacy even for the pollen data -- the cross-validation exercise associated with the pollen data showed inclusion of approximately 95% of the observed climate values in the respective 95% HPD regions. Indeed, in the aforementioned previous works on palaeoclimate reconstruction, the count data, characterized by a large number of zeroes (about 59% zeroes in the chironomid case and about 37% zeroes in the case of pollen), rendered the ordinary multinomial distribution inappropriate.
Apart from the very much improved results, our model and methods facilitate very fast and efficient computation, which is crucial for palaeoclimate reconstruction where the data sets tend to be (at least moderately) large. For the cross-validation purpose we combine the Importance Resampling Markov Chain Monte Carlo (IRMCMC) methodology of Bhattacharya & Haslett (2007) with the recently developed Transformation based Markov Chain Monte Carlo (TMCMC) (Dutta & Bhattacharya (2013)) to further improve computational efficiency. A brief overview of TMCMC is provided in Section 3.1; here we just note that TMCMC allows updating high-dimensional parameter vectors using simple deterministic transformations of one-dimensional random variables having arbitrary distributions on some relevant support.
It is worth mentioning that recently Salter-Townshend & Haslett (2012) have developed a nested Dirichlet-Multinomial model for multivariate pollen counts data. Their work is motivated by Ohlwein & Wahl (2012); however, their need to use the integrated nested Laplace approximation (INLA) (Rue, Martino & Chopin (2008)) for the purpose of fast computation also played a very significant role in their model-building procedure. In particular, Salter-Townshend & Haslett (2012) specify a model which exploits the nested structure within the pollen species based on botanic similarities; within each level of the nested structure the species proportions are assumed to be Beta/Dirichlet, and conditionally independent of the other levels consisting of the other species, given their GMRF prior on the two-dimensional climate grid (same as that of HWB, so this model also precludes extrapolation and is difficult to generalize to high-dimensional climate variables) and other hyperparameters. At each level, the count data is then assumed to be zero-inflated Binomial/Multinomial, given the proportions at that level of the nested structure. The conditional independencies, although undesirable, are necessary for INLA implementation. Thus, although INLA has greatly sped up their computation, the method did demand a sacrifice of model flexibility. Also, although INLA has been appropriate for the cross-validation summary statistics that Salter-Townshend & Haslett (2012) consider, it is perhaps the case that INLA, being a deterministic approach, cannot approximate the posterior distributions of arbitrary discrepancy measures, for example, those that we consider in this paper; see also Banerjee (2008) for a brief discussion.
The rest of our paper is structured as follows. In Section 2 we propose our new model for the chironomid data. Fitting our model using MCMC is discussed in detail in Section 3, and our method of leave-one-out cross-validation using IRMCMC is provided in Section 4. Crossvalidation of the chironomid data and detailed analysis of the results of the cross-validation are presented in Section 5. The formal model adequacy test, along with its application to the chironomid data using posterior samples from the cross-validation exercise, are discussed in Section 6. In Section 7 we generalize our model and methods to the pollen data of HWB, while cross-validation of the pollen data and subsequently the model adequacy test are discussed in Sections 8 and 9, respectively. We finally conclude with some discussion on future work in Section 10. Additional details are provided in the supplement Mukhopadhyay & Bhattacharya (2013c), whose sections and figures have the prefix "S-" when referred to in this paper.
AN IMPROVED MODEL FOR THE CHIRONOMID DATA
Before proceeding we briefly review the data set, the full description of which can be found in Olander, Birks, Korhola & Blom (1999); see also VTK.
Brief description of the data set
As already mentioned in the introduction, chironomids are non-biting midges, and considered very suitable for past climate reconstruction. The modern, training data set analysed by VTK consists of counts of chironomid head capsules present in the top 1 cm surface-sediment from 62 lakes located mainly in northwestern Finnish Lapland. Recorded also are site-specific mean July air temperatures, estimated for each lake using 1961-1990 Climate Normals data from 11 nearby climate stations (2 in Norway, 5 in Finland, and 4 in Sweden) and applying consistent regional lapse rates and linear interpolation (see Olander et al. (1999) for details). After excluding rare species, 52 taxa of chironomid were finally selected.
Thus, the chironomid data of VTK consists of modern time assemblages for m = 52 species of chironomid, along with the mean July temperature values at each of n = 62 lakes (sites) in Finland. This modern, training data set has been used by Korhola, Vasko, Toivonen & Olander (2002) for reconstructing past climates of Finland using VTK's model.
In the following subsections of this section we provide details of semiparametrically modelling these data. The same model will be generalised to the case of the pollen data of HWB in Section 7. In what follows, we begin with the zero-inflated Poisson model for the count data, finally deriving from it the zero-inflated multinomial model.
Hierarchical model specification starting with zero-inflated Poisson model
For i = 1, . . . , n and k = 1, . . . , m, let $y_{ik}$ denote the count of the k-th chironomid species available at the i-th site, and let Y denote the complete count data set. Also, let $x_i$ denote the temperature at site i, and let $X = \{x_1, \ldots, x_n\}$ denote the complete set of temperature values. With these we consider the following mixture model for $y_{ik}$:
$$[y_{ik}] \sim \pi_{ik}\,\delta_{\{0\}} + (1 - \pi_{ik})\,\mathcal{P}(\lambda_{ik}), \quad (1)$$
where $\lambda_{ik} > 0$, $0 \le \pi_{ik} \le 1$, $\delta_{\{0\}}$ denotes point mass at zero, and $\mathcal{P}(\lambda_{ik})$ denotes the Poisson distribution with parameter $\lambda_{ik}$. Further,
$$\lambda_{ik} \sim Gamma(\xi_{ik}, 1/\psi). \quad (2)$$
In (2), $Gamma(\xi_{ik}, 1/\psi)$ denotes the Gamma distribution with mean $\psi\xi_{ik}$ and variance $\psi^2\xi_{ik}$, where $\psi > 0$ is a fixed constant. Here $\xi_{ik}$ and $\psi$ are shape and scale parameters, respectively. We model $\xi_{ik}$ as
$$\xi_{ik} = \sum_{j=1}^{M_k} \exp\left\{-\frac{(x_i - \beta_{kj})^2}{2\gamma_{kj}}\right\}. \quad (3)$$
In (3), $\beta_{kj}$ and $\gamma_{kj}$ stand for the j-th optimum temperature (j-th optimum of the k-th species) and the j-th tolerance level (a measure of the deviation from the optimum temperature that the species can withstand); $M_k$ is the maximum number of optima and tolerance levels of the k-th species. These will be further elucidated in Section 2.4.
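A minimal simulation sketch of the hierarchy (1)-(3) is given below in Python. It is not part of the paper: the sign convention taking $\pi_{ik}$ as the weight of the point mass at zero, and all numerical values, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def xi(x, beta, gamma):
    """Response (3): sum_j exp(-(x - beta_j)^2 / (2 * gamma_j))."""
    return np.sum(np.exp(-(x - beta)**2 / (2.0 * gamma)))

def simulate_count(x, beta, gamma, pi_ik, psi=1.0):
    """One draw of y_ik from the zero-inflated Poisson hierarchy (1)-(3);
    pi_ik is taken here as the weight of the point mass at zero (assumption)."""
    if rng.random() < pi_ik:                  # structural zero
        return 0
    lam = rng.gamma(shape=xi(x, beta, gamma), scale=psi)   # (2)
    return rng.poisson(lam)                                # (1)

beta = np.array([10.0, 13.5])    # two optima (toy values)
gamma = np.array([2.0, 1.0])     # tolerances (toy values)
print([simulate_count(11.2, beta, gamma, pi_ik=0.5) for _ in range(10)])
```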
Viewing species optima and tolerance levels as samples from Dirichlet processes
Writing $\theta_{kj} = (\beta_{kj}, \gamma_{kj})$, we assume that for each k, $\Theta_k = \{\theta_{k1}, \ldots, \theta_{kM_k}\}$ is a sample from a Dirichlet process (see, for example, Ferguson (1973)):
$$\theta_{k1}, \ldots, \theta_{kM_k} \stackrel{iid}{\sim} G, \qquad G \sim DP(\alpha G_0). \quad (5)$$
In (5), $DP(\alpha G_0)$ denotes the Dirichlet process with $\alpha > 0$ representing the strength of the belief in the central distribution $G_0$. Here we assume that under $G_0$ the joint distribution of $\theta_{kj}$ is normal-inverse-gamma, given by
$$\gamma_{kj} \sim IG(a, b), \qquad \beta_{kj} \mid \gamma_{kj} \sim N(\mu_\beta, \gamma_{kj}). \quad (6)$$
The values of the parameters a, b, and $\mu_\beta$ will be specified in the context of the application.
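The discreteness of the Dirichlet process, which drives the mixture structure described next, can be illustrated with a Polya urn draw. The Python sketch below is hypothetical: it assumes the normal-inverse-gamma scaling $\beta \mid \gamma \sim N(\mu_\beta, \gamma)$ written above, and uses the hyperparameter values given later for the chironomid application.

```python
import numpy as np

rng = np.random.default_rng(1)

def draw_from_g0(mu_beta=11.19, a=11.0, b=30.0):
    """One draw (beta, gamma) from the base measure G0: gamma ~ IG(a, b) and,
    given gamma, beta ~ N(mu_beta, gamma)."""
    gamma = 1.0 / rng.gamma(shape=a, scale=1.0 / b)   # inverse-gamma draw
    beta = rng.normal(mu_beta, np.sqrt(gamma))
    return (beta, gamma)

def polya_urn_sample(m_k=10, alpha=10.0):
    """theta_k1, ..., theta_kM from DP(alpha * G0) via the Polya urn scheme;
    the ties among the draws are what make the response function a mixture
    with an unknown number of components."""
    theta = [draw_from_g0()]
    for j in range(1, m_k):
        if rng.random() < alpha / (alpha + j):
            theta.append(draw_from_g0())              # fresh value from G0
        else:
            theta.append(theta[rng.integers(j)])      # repeat an old value
    return theta

print(polya_urn_sample())
```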
Response function
Introducing the allocation variables z ik (these can also be thought of as auxiliary or latent variables) helps ascertain whether the corresponding count y ik is zero or arose randomly from P(λ ik ).
Formally, $z_{ik} = 1$ with probability $\pi_{ik}$ (indicating a structural zero) and 0 with probability $1 - \pi_{ik}$. Observe that
$$E(y_{ik}) = (1 - \pi_{ik})\,E(\lambda_{ik}) = (1 - \pi_{ik})\,\psi\,\xi_{ik} = (1 - \pi_{ik})\,\psi \sum_{j=1}^{M_k} \exp\left\{-\frac{(x_i - \beta_{kj})^2}{2\gamma_{kj}}\right\}, \quad (7)$$
showing that the response function of the k-th species at the i-th site is given by (7). Now, since the Dirichlet process is discrete with probability one, it follows that, with positive probability, some of the parameters $\{\theta_{kj};\ j = 1, \ldots, M_k\}$ are equal. A consequence of this is the reduction of (7) to the following:
$$E(y_{ik}) = (1 - \pi_{ik})\,\psi \sum_{j=1}^{M_k^*} N_{kj} \exp\left\{-\frac{(x_i - \beta^*_{kj})^2}{2\gamma^*_{kj}}\right\}, \quad (8)$$
where, with $\theta^*_{kj} = (\beta^*_{kj}, \gamma^*_{kj})$, the set $\{\theta^*_{kj};\ j = 1, \ldots, M^*_k\}$ is the set of distinct values among $\{\theta_{kj};\ j = 1, \ldots, M_k\}$, and $N_{kj}$ is the frequency of occurrence of $\theta^*_{kj}$. Of course, $\sum_{j=1}^{M^*_k} N_{kj} = M_k$. Since the number of, and the frequencies of, coincidences among the parameters are random, it is clear that (8) is a mixture of Gaussian functions with an unknown number of components. Moreover, all the m species have different response functions, with different numbers of mixture components. This is important, since different taxa may require different numbers of components to adequately model the response surface.
An alternative to our mixture representation of the response surfaces is a spline-based model. In this modeling style, for different species, the orders of the splines (the orders of the polynomial parts) and the numbers and locations of the knots must be treated as unknown and different. Although the part of the spline associated with the knots can be modeled using a Dirichlet process, the same is not appropriate for modeling the polynomial part of the spline: the Dirichlet process can only force the polynomial coefficients to be equal with positive probability, and coincidences among the polynomial coefficients cannot decrease the order of the polynomial.
As such, the polynomial part must be handled using complicated variable-dimensional MCMC methods, for example, reversible jump MCMC (RJMCMC). Since complicated RJMCMC has to be carried out for all the species, this would very significantly increase the computational burden.
But such computational difficulties can be overcome by a new, general MCMC methodology for variable-dimensional models, which is being developed by Das, Dey & Bhattacharya (2013). The methodology, which we refer to as Transdimensional TMCMC (TTMCMC), is an extension of TMCMC to variable-dimensional cases, and can update all the (random number of) parameters in a single block, using deterministic transformations of some arbitrary one-dimensional random variable. This would greatly assist in computation associated with spline-based response functions, which we hope to pursue in the future.
From zero-inflated Poisson to zero-inflated multinomial
Letting $y_{i\cdot} = \sum_{k=1}^m y_{ik}$, it follows that the joint distribution of $y_i = (y_{i1}, \ldots, y_{im})$, conditional on $y_{i\cdot}$, is zero-inflated multinomial, given by:
$$\left[\{y_{ik} : z_{ik} = 0\} \mid y_{i\cdot}, z_i\right] \sim \mathrm{Multinomial}\big(y_{i\cdot};\ \{p_{ik} : z_{ik} = 0\}\big), \qquad y_{ik} = 0 \ \text{whenever}\ z_{ik} = 1. \quad (9)$$
Now note that
$$p_{ik} = \frac{\lambda_{ik}}{\sum_{\ell\,:\,z_{i\ell} = 0} \lambda_{i\ell}}$$
denotes the unknown proportion of the k-th species at the i-th site whenever $z_{ik} = 0$, that is, whenever $y_{ik}$ arises from the Poisson component. These proportions are clearly dependent, since all of them are scaled by the same sum $\sum_{\ell: z_{i\ell}=0} \lambda_{i\ell}$. In fact, since a priori $\lambda_{ik} \sim Gamma(\xi_{ik}, 1/\psi)$, it follows that $[\{p_{ik} : z_{ik} = 0\}] \sim Dirichlet(\{\xi_{ik} : z_{ik} = 0\})$. In other words, even though the species parameters $\Theta_k$ are considered independent at the Poisson level, the species proportions $\{p_{ik};\ k = 1, \ldots, m\}$ are dependent at the multinomial level for each $i = 1, \ldots, n$. Thus we have a Multinomial-Dirichlet structure for each $i = 1, \ldots, n$. Although it is possible to express our Bayesian model in terms of the Dirichlet parameters $p_{ik}$ and then analytically integrate out the latter, so that $\lambda_{ik}$ no longer needs to be simulated by MCMC methods, there are two reasons to retain $\lambda_{ik}$. Firstly, the $\lambda_{ik}$ are the Poisson parameters associated with the first stage of our modeling, which does not condition on $y_{i\cdot}$; hence it may be of interest to learn about $\lambda_{ik}$. Note that the model in terms of $p_{ik}$ (even if the $p_{ik}$ are retained) is not identifiable with respect to $\lambda_{ik}$, since multiplying $\{\lambda_{ik};\ k = 1, \ldots, m\}$ by a constant yields the same $p_{ik}$.
Hence, if λ ik are of interest, the model must be expressed in terms of λ ik , not p ik .
Secondly, and more importantly, retaining these parameters expands the parameter space, which may allow free movement of the MCMC sampler, thereby facilitating improved mixing. One such instance is reported in Bhattacharya & Haslett (2007), where the MCMC sampler associated with the marginalized model failed to discover a minor mode of a bimodal cross-validation posterior associated with VTK's model, but the expanded model of VTK with the Dirichlet parameters allowed the MCMC sampler to explore the mode adequately. Since multimodality plays a very important role in both of our examples, we resort to modeling in terms of $\lambda_{ik}$. Since we update the $\lambda_{ik}$ parameters in a single step using TMCMC, retaining these parameters does not cause a computational burden.
We have pointed out that although the species parameters are independent at the Poisson level, dependence is induced at the multinomial stage via conditioning on $y_{i\cdot}$. However, it is possible to induce dependence between the species parameters $\Theta_k$ even at the Poisson level, by considering the hierarchical Dirichlet process (Teh, Jordan, Beal & Blei (2006)). In other words, we could assume that, for k = 1, . . . , m,
$$\theta_{k1}, \ldots, \theta_{kM_k} \stackrel{iid}{\sim} G_k, \qquad G_k \sim DP(\alpha G_0), \qquad G_0 \sim DP(\gamma H),$$
where $\gamma > 0$ and H is a specified distribution. The implication of such a hierarchical structure is that the parameters $\theta_{kj}$ associated with the species response functions will be shared with positive probability by the various species, inducing dependence. However, in our set-up this would create severe computational difficulties. Again, such computational difficulties can perhaps be overcome by the TTMCMC of Das et al. (2013). We intend to explore these new modelling ideas and computational methods in the future.
A few remarks regarding this prior choice are in order.
It is natural to choose a subjective prior on the zero-inflation probabilities Π which depends upon climate. However, the zero-inflation probabilities directly affect the number of zeroes in the data, and so any subjective prior depending upon climate must be chosen with great care, because mis-specification can easily give rise to a conflict between the data and the prior. An instance of mis-specification may be that at several locations several taxa are completely outside their range boundaries, giving rise to excess zeroes, even though the climate on which the prior of $\pi_{ik}$ for such locations and species depends may be optimal for those taxa. In this case the prior would not indicate excess zeroes, even though the observed data may contain excess zeroes, suggesting a conflict between the prior and the data. The objective prior Uniform(0, 1) cuts down such risk, as is evident from Figures 3 and 9, which indicate that the observed values are fitted well by our model and the associated priors. Moreover, the Uniform(0, 1) prior also serves to simplify the computations to a large extent, since the associated Gibbs step involves a simple simulation exercise from the relevant Beta distributions. It is worth mentioning that Π could easily be integrated out analytically from the joint posterior (10) to simplify the model, but since we are interested in the posterior of Π, and since retaining these parameters may induce better mixing of our MCMC sampler, we did not marginalize the joint posterior with respect to Π.
MODEL FITTING USING MARKOV CHAIN MONTE CARLO (MCMC)
For MCMC purposes the full conditionals of the unknowns $z_{ik}$ and $\pi_{ik}$ are available in standard forms for sampling using simple Gibbs steps. It will also be observed that the full conditionals do not involve the complete likelihood, thanks to the zero-inflated multinomial distribution, involving only those terms which are associated with strictly positive count data points. Since a large number of counts are zero, this provides the very important advantage of very fast and efficient computation. Updating $\theta_{kj}$ using the Polya urn distribution as the proposal for Metropolis-Hastings steps, as in SB, turned out to be quite effective here. Finally, we update Λ in a single block using TMCMC to further enhance computational efficiency. Before proceeding further we first provide a brief overview of TMCMC.
Overview of TMCMC
TMCMC enables updating an entire block of parameters using deterministic bijective transformations of some arbitrary low-dimensional random variable. Thus very high-dimensional parameter spaces can be explored using simple transformations of very low-dimensional random variables.
In fact, transformations of some one-dimensional random variable always suffice, which we shall adopt in our examples. Quite clearly, the underlying idea also greatly improves computational speed and acceptance rate compared to block Metropolis-Hastings methods. Interestingly, the TMCMC acceptance ratio is independent of the proposal distribution chosen for the arbitrary low-dimensional random variable. For implementation in our cases, we shall consider the additive transformation, since it is shown in Dutta & Bhattacharya (2013) that many fewer "move types" are required by this transformation compared to non-additive transformations.
To elaborate the additive TMCMC mechanism, assume that a block of parameters ζ = (ζ 1 , . . . , ζ r ) is to be updated simultaneously using additive TMCMC, where r (≥ 2) is some positive integer.
In our examples we simulate η from the N(0, 1) density truncated to $(0, \infty)$. We then propose, for j = 1, . . . , r, $\zeta_j \pm a_j\eta$ with equal probability (although equal probability is a convenience, not a necessity), where $(a_1, \ldots, a_r)$ are appropriate scaling constants. Thus, using additive transformations of a single, one-dimensional η, we update the entire block ζ at once. In our examples, we select the tuning parameters $(a_1, \ldots, a_r)$ using information from several pilot runs of our TMCMC algorithm. In other words, we run our TMCMC algorithm several times for 20,000 iterations, each time with a set of possible trial values of $(a_1, \ldots, a_r)$; in fact, we begin with all the trial values set equal to 0.5, and then, observing the mixing properties of the associated pilot run, we modify the trial values accordingly. We continue this for several pilot runs until the mixing is reasonable. We ascertain mixing informally using trace and autocorrelation plots of the sample path of the TMCMC.
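The additive TMCMC mechanism just described is compact enough to sketch directly. The following Python snippet, which is an illustration on a toy target rather than the paper's implementation, performs one block update exactly as described: a single truncated-normal η, coordinate-wise ± moves with equal probability, and a Metropolis-type accept/reject (additive moves have unit Jacobian, so the ratio does not involve the density of η).

```python
import numpy as np

rng = np.random.default_rng(2)

def additive_tmcmc_step(zeta, log_post, scales):
    """One additive TMCMC update of the whole block zeta."""
    eta = abs(rng.normal())                      # one eta ~ N(0,1) truncated to (0, inf)
    signs = rng.choice((-1.0, 1.0), size=zeta.size)  # +/- move for each coordinate
    prop = zeta + signs * scales * eta
    if np.log(rng.random()) < log_post(prop) - log_post(zeta):
        return prop
    return zeta

# toy target: 50-dimensional standard normal
log_post = lambda z: -0.5 * float(z @ z)
zeta = np.zeros(50)
for _ in range(2000):
    zeta = additive_tmcmc_step(zeta, log_post, scales=np.full(50, 0.25))
print(zeta[:5])
```

In line with the tuning discussion above, the `scales` array plays the role of $(a_1, \ldots, a_r)$ and would be adjusted over pilot runs until mixing is reasonable.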
The aforementioned procedure for selecting the tuning parameters, although it yielded reasonable mixing, is evidently somewhat ad hoc. A more rigorous method for choosing the tuning parameters in additive TMCMC can be based on the recently developed optimal scaling theory for additive TMCMC. Since the optimal acceptance rate for additive TMCMC under various set-ups is shown there to be 0.439, one can tune the scaling constants to achieve about 44% acceptance rate. Note that for random walk Metropolis, the corresponding optimal acceptance rate is 0.234, much lower than that of additive TMCMC. Comparisons between additive TMCMC and random walk Metropolis in terms of optimal scaling are thoroughly explored in that work.
In Section S-1 of the supplement we describe an MCMC algorithm, which is a combination of Gibbs steps, Metropolis-Hastings steps and TMCMC steps, for updating the unknowns. The updating procedure will be used to cross-validate our model, which we discuss below.
LEAVE-ONE-OUT CROSS-VALIDATION
In order to assess the validity of our model we successively leave out data point i (that is, we leave out both x i and the assemblage y i ) from the training data set, and using the remaining data set along with y i , the latter regarded as the test data, attempt to predict x i . So, we must now include a new parameter, which we denote by x, corresponding to the left out climate value x i . Now, this new parameter x requires a prior. We set a prior N (µ x , σ 2 x ) for this new parameter.
As a referee suggests, one could also look upon x as the true measurement of the climate value at the i-th site, where x i is the observed value of the climate subject to a measurement error at site i.
From this perspective, the prior on x can be interpreted as the prior on the true measurement of the climate variable at site i. We write $x_i = x + \zeta_i$, where $\zeta_i \sim N(0, \sigma^2_\zeta)$ denotes the measurement error. The modified likelihood associated with this perspective is the original likelihood conditional on the observed climate values, multiplied by the normal likelihood contributed by the measurement error at the i-th site. The prior for x must then be duly multiplied with the joint likelihood and the priors for the other parameters to arrive at the form of the joint posterior. The observed climate $x_i$ coincides with the true climate x if and only if $\sigma^2_\zeta = 0$, that is, when there is no measurement error.
In that case, the posterior of x coincides with our cross-validation posterior when x i is held out.
Indeed, we are not aware of any evidence to suggest that there is significant climate measurement error in either the chironomid data or the pollen data. Hence, for both the applications we shall assume that the observed climate values are the true climate values, and the prior on the new parameter corresponding to the held out climate value makes sense from this perspective.
Full conditional of x
The full conditional of x given the rest is proportional to the product of the $N(\mu_x, \sigma^2_x)$ prior density at x and those factors of the likelihood that involve x, with $x_i$ replaced by x in $\xi_{ik}$; we refer to this full conditional as (14). For updating the one-dimensional variable x, random walk Metropolis with approximately optimized scaling constant will be used. In fact, Dutta & Bhattacharya (2013) show that a TMCMC step for updating a one-dimensional parameter coincides with a Metropolis-Hastings step; in this case, the additive TMCMC step is equivalent to a random walk Metropolis step. All the other variables will be updated in the way described in Section S-1.
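A minimal sketch of the random walk Metropolis update for x is given below in Python. The full conditional (14) is problem-specific, so a placeholder log-density stands in for it here; only the prior part shown is taken from the text, and the scale is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def rwm_update_x(x, log_full_cond, scale=0.5):
    """Random walk Metropolis update of the held-out climate value x;
    log_full_cond must evaluate the log of (14) up to an additive constant."""
    prop = x + scale * rng.normal()
    if np.log(rng.random()) < log_full_cond(prop) - log_full_cond(x):
        return prop
    return x

# placeholder full conditional: the N(mu_x = 11.19, sigma_x^2 = 10) prior alone,
# standing in for the prior times the x-dependent likelihood factors
log_fc = lambda x: -0.5 * (x - 11.19)**2 / 10.0
x = 11.19
for _ in range(1000):
    x = rwm_update_x(x, log_fc)
print(x)
```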
Now observe that, since we need to perform an MCMC run for each left-out data point, n computationally burdensome MCMC implementations are necessary, thus calling for innovative computational shortcuts. The usual importance sampling based ideas (see, for example, Gelfand, Dey & Chang (1992), Gelfand (1996)) do not work in inverse problem set-ups such as ours.
In an inverse problem the response variable (say, y) is modeled conditional on some covariates (say, x), but prediction of some future x n+1 given y n+1 and the training data set is of interest. This is a much more complicated problem compared to the usual forward situation, where prediction of y n+1 is of interest, given the training data set and x n+1 . Details are provided in Bhattacharya & Haslett (2007). To meet the challenges of cross-validation in inverse problems, Bhattacharya & Haslett (2007) (see also Bhattacharya (2004)) proposed a very fast and efficient methodology by judiciously combining importance re-sampling (IR) and MCMC. Here we adopt their methodology, which has been termed IRMCMC by the above authors. Details, for our current problem, are provided in Section S-2 of the supplement.
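The importance-resampling stage at the heart of IRMCMC can be sketched compactly. The Python snippet below is a toy illustration, not the paper's implementation: the stored draws and log importance weights are stand-ins, and the weight computation itself (target = cross-validation posterior for site i, proposal = posterior for the representative site i*) is detailed in Bhattacharya & Haslett (2007) and Section S-2.

```python
import numpy as np

rng = np.random.default_rng(4)

def ir_resample(stored_draws, log_weights, k1=200):
    """Importance-resampling stage of IRMCMC: from the L stored draws of the
    i*-th posterior, resample K1 draws without replacement, with probabilities
    proportional to the importance weights."""
    w = np.exp(log_weights - np.max(log_weights))   # stabilized weights
    idx = rng.choice(len(stored_draws), size=k1, replace=False, p=w / w.sum())
    return stored_draws[idx]

stored = rng.normal(size=(10000, 3))    # toy stand-in for stored MCMC draws
log_w = rng.normal(size=10000)          # toy stand-in for log importance weights
print(ir_resample(stored, log_w).shape) # -> (200, 3)
```

Each resampled realization then seeds a short MCMC run targeting the i-th cross-validation posterior, which is what keeps the per-case cost low.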
CROSS-VALIDATION OF CHIRONOMID DATA
For our application we fixed α = 10, ψ = 1, $\mu_\beta$ = 11.19, a = 11, b = 30, $\mu_x$ = 11.19, and $M_k$ = 10 for all k = 1, . . . , 52. These choices are motivated by VTK and SB, who attempted to incorporate ecological knowledge into their priors; in particular, the choice α = 10, $M_k$ = 10 implies that a priori the probability of a multimodal response function for the k-th species is 0.53, which is slightly higher than the probability of a unimodal response function. It is also worth mentioning that using a fixed value of α in the context of the Dirichlet process is commonplace; see, for example, Escobar & West (1995), Neal (2000), Kurihara, Welling & Teh (2007).
Some remarks regarding the choice of $M_k$ and α in general palaeoclimate problems are in order.
In the paradigm of regular mixtures, that is, when the data arise from some mixture model with an unknown number of components, Bhattacharya (2008) shows that a choice of $M_n$ (M allowed to increase with the sample size n) satisfying $M_n/\sqrt{n} \to 0$ as $n \to \infty$ is adequate. Thus, for fixed sample size n, one may choose $M_n$ to be less than $\sqrt{n}$. Although our current set-up is very different from regular mixture problems, as a rule of thumb we can select $M_k$ to be less than $\sqrt{n}$, the number of sites. Asymptotic choices of $\alpha_n$ (α allowed to depend upon n), again increasing with n but at a rate slower than that of $M_n$, are shown by Mukhopadhyay & Bhattacharya (2013b) to be adequate.
For details regarding the other prior choices, see VTK and SB. We choose σ 2 x = 10 to allow a reasonably wide range of possible values of x to be considered. As we report in Section 5.1, our cross-validation results are remarkably robust with respect to other choices of α and σ 2 x .
For the purpose of IRMCMC we first selected i* as i* = {i : $x_i$ = median(X)}. Since n = 62 is even, there are two choices of the median; following Bhattacharya & Haslett (2007) we chose i* = 38. For this importance sampling density, we simulated a sample of size L = 10,000 after discarding a burn-in period of length 20,000. From these stored 10,000 MCMC realizations we re-sampled, without replacement, $K_1$ = 200 realizations for each of the 62 cases; for each case, given each of the 200 re-sampled realizations, we then simulated, using MCMC, $K_2$ = 50 further realizations. Robustness of the posteriors of $\pi_{ik}$ with respect to different choices of α and $\sigma^2_x$ is exhibited by the plots. Importantly, it is clearly seen that the posteriors of $\pi_{ik}$ have modes closer to 1 than to 0, indicating that it is indeed important to model the count data with the zero-inflated multinomial distribution to account for such a large proportion of zeros.
Goodness of fit of the response functions
Apart from the cross-validation results, it is also of interest to ascertain how well our Dirichlet process based response functions perform. Since this is directly related to the question of predicting the species abundances, here we consider predicting the observed species abundances using the posterior expectations of $\tilde y_{ik}$ conditional on $y_{i\cdot}$, where $\tilde y_{ik}$ is the random variable associated with (or, a replicate of) the observed data point $y_{ik}$.
We construct the predicted version of the count data for the k-th species using the posterior distributions of $\{\tilde y_{ik};\ i = 1, \ldots, n\}$. Figure 3 shows the respective 95% credible intervals, where black corresponds to (α = 25, $\sigma^2_x$ = 10), blue to (α = 25, $\sigma^2_x$ = 5), red to (α = 10, $\sigma^2_x$ = 10), and green to (α = 10, $\sigma^2_x$ = 5). The results of cross-validation and the fit of the response functions to the observed data may seem satisfactory, but a test of overall model adequacy is necessary to formally certify our new model. In the next section we address the issue of the model adequacy test.
A TEST FOR OVERALL MODEL ADEQUACY
To quote Gelman, Meng & Stern (1996), assessing the plausibility of a posited model (or of assumptions in general) is always fundamental, especially in Bayesian data analysis. Gelman et al. (1996) seem to be the first to attempt an extension of the essence of the classical approach of model assessment to the Bayesian framework. Their approach is based on computing the posterior distribution of the parameters given the data and then computing a P-value, involving a discrepancy measure which is a function of the data as well as the parameters. Their approach differs from the available classical approaches mainly in introducing a discrepancy measure that depends on the parameters as well. Bayarri & Berger (1999) introduced two alternative P-values and demonstrated that they are advantageous compared to the P-value of Gelman et al. (1996).
Motivated by the palaeoclimate reconstruction problem involving "modern data" on fossil pollen assemblages, Bhattacharya (2013) proposed a novel approach to model assessment based on "inverse reference distributions" (IRD). He has shown that his approach is suitable for assessing Bayesian model fit in inverse problems, may be extended to quite general Bayesian frameworks, and has some distinct advantages compared to the other approaches. Here we will use the idea of Bhattacharya (2013) for assessing the plausibility of our model. The key idea can be mathematically formulated in the following way. Suppose $Y = \{y_i, i = 1, \ldots, n\}$ represents the data and $X = \{x_i, i = 1, \ldots, n\}$ represents the non-random covariates. Let $\tilde X$ stand for the random vector associated with X; the former may also be thought of as a replicate of X, but must be predicted conditionally on Y in an inverse sense. If the posterior distribution of $\tilde X$ is consistent with the observed X, then the model is said to have fit the data adequately; otherwise the model is considered inadequate for the data. The fully Bayesian approach to this prediction requires computation of an inverse reference distribution based on the posterior
$$\pi(\tilde X \mid Y) \propto \int L(\tilde X, \theta)\,\pi(\tilde X)\,\pi(\theta)\,d\theta,$$
where L denotes the likelihood of the unknowns $(\tilde X, \theta)$, θ being the set of model parameters.
The reference distribution of a discrepancy measure T(·), as proposed by Bhattacharya (2013), is to be constructed using the simulated covariates $\tilde X$; then, if T(X), the observed discrepancy measure corresponding to the observed covariates X, falls within the appropriate credible region of $T(\tilde X)$, the model is to be accepted; otherwise it should be rejected. The decision-theoretic justification of the procedure is provided in Bhattacharya (2013).
Before applying the model adequacy test of Bhattacharya (2013) we need to choose an appropriate discrepancy measure T(·). Figure 1 shows that the posterior distributions of some of the $x_i$ are skewed, while some are strongly indicative of multimodality. Taking the global mode $\tilde x^*_i$ of the posterior distribution of $\tilde x_i$ as a convenient measure of central tendency, we use the following observed discrepancy measure:
$$T_1(X) = \sum_{i=1}^n \left|x_i - \tilde x^*_i\right|. \quad (15)$$
Replacing X with $\tilde X$ in (15) yields the inverse reference distribution corresponding to $T_1(X)$.
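The acceptance rule based on (15) reduces to a simple comparison, sketched below in Python. This is a toy illustration: the reference draws are simulated from an arbitrary distribution rather than from the actual IRMCMC output, and the shortest-interval HPD routine assumes a unimodal reference distribution.

```python
import numpy as np

def t1(x_obs, x_modes):
    """Observed discrepancy T1(X) = sum_i |x_i - mode_i| of (15)."""
    return float(np.sum(np.abs(np.asarray(x_obs) - np.asarray(x_modes))))

def hpd_interval(draws, level=0.95):
    """Shortest interval containing `level` of the reference draws of T1(X~)
    (adequate when the reference distribution is unimodal)."""
    d = np.sort(np.asarray(draws, float))
    m = int(np.ceil(level * d.size))
    widths = d[m - 1:] - d[: d.size - m + 1]
    i = int(np.argmin(widths))
    return d[i], d[i + m - 1]

rng = np.random.default_rng(5)
ref = rng.gamma(50.0, 1.0, size=5000)   # toy reference draws of T1(X~)
lo, hi = hpd_interval(ref)
obs = 52.3                               # toy observed T1(X)
print(lo, hi, lo <= obs <= hi)           # accept the model if True
```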
The thick black horizontal lines represent the 95% HPD regions of the posteriors of $T_1(\tilde X)$, and the vertical lines represent the observed discrepancy measures $T_1(X)$. In all cases $T_1(X)$ falls comfortably within the 95% HPD region of the corresponding inverse reference distribution, clearly leading to acceptance of our model. We also considered several variants of the discrepancy measure (15), replacing the mode $\tilde x^*_i$ with the median, taking sums of squares instead of sums of absolute deviations, etc.; all these variants led to acceptance of our model.
Since the cross-validation posterior distributions of $\tilde x_i$ are multimodal, it is possible to question our choice of a discrepancy measure that makes use of absolute deviations. A plausible discrepancy measure in this case is one associated with the logarithms of the cross-validation posteriors; in other words, we may choose a discrepancy measure of the form
$$T_2(X) = \sum_{i=1}^n \log \pi_i(x_i),$$
where $\pi_i(\cdot)$ denotes the i-th cross-validation posterior density. Figures 5(a), 5(b), 5(c) and 5(d) display the IRMCMC-based inverse reference distributions associated with $T_2$ corresponding to our model with (α = 25, $\sigma^2_x$ = 10), (α = 25, $\sigma^2_x$ = 5), (α = 10, $\sigma^2_x$ = 10) and (α = 10, $\sigma^2_x$ = 5), respectively. As with $T_1$, the observed discrepancy measure $T_2(X)$ falls comfortably within the 95% HPD region for all four choices of (α, $\sigma^2_x$).
In Section S-3 of the supplement we investigate the relationship of the discrepancy measure T 1 with other discrepancy measures that are variants of T 2 above.
DATA
The training data set of HWB consists of modern pollen counts on m = 14 species from n = 7815 different sites of the world, which we denote as before by $y_i = (y_{i1}, \ldots, y_{im})$, for i = 1, . . . , n. It is important to mention that, unlike in the case of the chironomid data, here most of the total counts $y_{i\cdot}$ are missing. It is, however, known that the total counts in this case are typically 400; following HWB we also treat the total counts as 400, that is, we take $y_{i\cdot}$ = 400, for i = 1, . . . , 7815. The data also include modern bivariate climate variables, namely MTCO and GDD5, at those sites, which we denote by $x_i = (x_{i1}, x_{i2})$. Here we standardize $x_{i1}$ and $x_{i2}$ so that their sample means and variances are 0 and 1, respectively. As in the case of the chironomid data we model the pollen counts $y_i$ as zero-inflated multinomial of the same form as (9). Also, as in (3), $\lambda_{ik}$ is assumed to follow $Gamma(\xi_{ik}, \frac{1}{\psi})$, where $\xi_{ik}$ is now modelled as
$$\xi_{ik} = \sum_{j=1}^{M_k} N_2\left(x_i, \beta_{kj}, \Sigma_k\right), \quad (17)$$
where $N_2(x_i, \beta_{kj}, \Sigma_k)$ represents the bivariate normal density at $x_i$ with mean $\beta_{kj}$ and covariance matrix $\Sigma_k$. The (s, t)-th element of $\Sigma_k$ is denoted by $\sigma_{k,st}$, s, t = 1, 2. We assume that, under $G_0$, $\beta_{kj}$ follows a bivariate normal distribution with mean vector $\mu_\beta = (\mu_{\beta 0}, \mu_{\beta 1})$ and covariance matrix $\Sigma_k$, where $\mu_\beta$ is a known vector. For our application we choose $\mu_{\beta 0} = \mu_{\beta 1} = 0$, matching the sample means of the standardized climate variables GDD5 and MTCO. The reason we select these prior parameters in this way is that the species optima $\{\beta_{kj};\ j = 1, \ldots, M_k\}$, which are exchangeable, and the climate variables at which the species data are collected, are expected to be similar, and hence uncertainties about them are not expected to be very different. In fact, VTK and SB also assume the same prior mean for the optimum temperature and the temperature variable.
For the prior on $\Sigma_k$ we assume that, for i = 1, 2, $\sigma_{k,ii} \sim IG(a_{0i}, b_{0i})$, the inverse-gamma prior with mean $b_{0i}/(a_{0i} + 1)$ and variance $b^2_{0i}/\{(a_{0i} - 1)^2(a_{0i} - 2)\}$, for $a_{0i} > 2$. Here we choose $a_{0i}$ = 4.1 and $b_{0i}$ = 5.1 for i = 1, 2, so that both prior means are 1, matching the sample (standardized) variances of $x_{i1}$ and $x_{i2}$, while the prior variance is 1.3. Again, the rationale for matching the sample variances is that the species optima and the climate variables at which the species data are obtained are expected to have similar distributions. The prior variances of $\sigma_{k,11}$ and $\sigma_{k,22}$ are made slightly larger than the sample climate variances since the former are unobserved, unlike the latter, thus incurring relatively more uncertainty. Denoting $\sigma_{k,12}/\sqrt{\sigma_{k,11}\sigma_{k,22}}$ by $\rho_{k,12}$, we put a Uniform(−1, 1) prior on $\rho_{k,12}$.
For this pollen data example, we choose $M_k$ = 10 and α = 1. Unlike in the chironomid example, here larger values of α led to overfitting the pollen data by increasing the number of mixture components in the response function (17). This suggests that the response surfaces in the pollen data example are expected to have fewer modes than in the chironomid data case. It is useful to remark that the choice α = 1 is quite common (see, for example, Escobar & West (1995), Neal (2000)). For the cross-validation purpose we need to select a prior for $x = (x_1, x_2)$, where x corresponds to the left-out observed climate variable $x_i = (x_{i1}, x_{i2})$. Based on the observed sample, we set a bivariate normal prior for x with means $\mu_{x_1} = \mu_{x_2} = 0$ and variances $\sigma^2_{x_1} = \sigma^2_{x_2} = 10$ for the coordinates of x. Somewhat larger variances are chosen to account for the extra uncertainty in x, which is now treated as unobserved. Based on the observed sample, the covariance is taken as 0.8.
The joint posterior distribution and the forms of the full conditional distributions of the parameters can be easily calculated as in Section 2.6 and Section S-1.
Implementation issues
Application of IRMCMC in the pollen data problem is carried out by first selecting i* = 5353 according to the criterion presented in Section 4.2 of Bhattacharya & Haslett (2007). We used additive TMCMC to update Λ, x, and {σ_k,11, σ_k,22, ρ_k,12} in blocks. In fact, we apply TMCMC to reparameterized versions of the elements of Σ_k; that is, using additive TMCMC we update jointly {log σ_k,11, log σ_k,22, tan(πρ_k,12/2)}. The reparameterized versions, being supported on the entire real line, ensure free movement of our additive TMCMC sampler, resulting in good mixing properties.
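The reparameterization can be made explicit with the following sketch; the function names are ours, and the maps are simply the transforms named above together with their inverses.

```python
import math

def to_unconstrained(s11: float, s22: float, rho: float):
    # (sigma_k,11 > 0, sigma_k,22 > 0, -1 < rho_k,12 < 1)  ->  R^3
    return math.log(s11), math.log(s22), math.tan(math.pi * rho / 2.0)

def to_constrained(u1: float, u2: float, u3: float):
    # inverse map: R^3  ->  the original constrained parameter space
    return math.exp(u1), math.exp(u2), 2.0 * math.atan(u3) / math.pi
```

Note that when such a change of variables is used, the acceptance ratio of the sampler must also account for the Jacobian of the transformation.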
It is important to mention that updating β_kj using the Polya urn distribution as the proposal distribution failed to yield satisfactory mixing. We overcame the problem by adding a TMCMC step to update the distinct components of β_kj, j = 1, ..., M_k, in a single block, after Metropolis-Hastings with the Polya urn proposal had been applied sequentially to β_kj, j = 1, ..., M_k. A further TMCMC step, consisting of only two move types, either adding a single ε ∼ N(0, 0.5)I_{ε>0} to all the variables or subtracting it from all of them, each with probability 1/2, using the TMCMC-based acceptance ratio to decide on the final acceptance, very significantly improved the mixing properties of our algorithm.
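As an illustration of the two-move-type additive step just described, the following sketch assumes a user-supplied log-posterior over the block; it is our minimal rendering of the move, not the authors' implementation.

```python
import math
import random

def tmcmc_block_move(theta, log_post, scale=0.5):
    """One additive TMCMC step: a single positive epsilon is added to, or
    subtracted from, every variable in the block with equal probability."""
    eps = abs(random.gauss(0.0, scale))            # epsilon ~ N(0, scale) restricted to > 0
    sign = 1.0 if random.random() < 0.5 else -1.0  # choose the move type
    proposal = [t + sign * eps for t in theta]
    # For this additive move the forward and backward transition densities are
    # equal (same epsilon density, opposite sign chosen with probability 1/2),
    # so the acceptance ratio reduces to the posterior ratio.
    if math.log(random.random()) < log_post(proposal) - log_post(theta):
        return proposal
    return theta
```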
With the above proposal mechanisms we generated 30,000 MCMC samples from the posterior corresponding to i* = 5353. We discarded the initial 10,000 samples as burn-in and stored the remaining samples for importance re-sampling. We implemented IRMCMC fixing K_1 = 200 and K_2 = 100, thus obtaining 20,000 IRMCMC samples for each of the 7815 cross-validation posteriors. The entire exercise took around 9 hours.
Results of cross-validation
In about 94.60% of cases x_1, the co-ordinate associated with GDD5, fell within the 95% HPD regions of the corresponding cross-validation posteriors, and in about 94.19% of cases x_2, associated with MTCO, fell within the respective 95% HPD regions. Figures 6 and 7 show some cross-validation posteriors associated with GDD5 and MTCO respectively, with the vertical lines and the thick horizontal lines denoting the true (observed) climate values and the 95% HPD intervals. The cross-validation posteriors are highly multimodal; the degrees of multimodality appear higher than in the chironomid example. Indeed, in this pollen case, several species are combined to form a single category; see Appendix A of HWB for a discussion justifying amalgamation of species. Also, some species, such as Juniperus, consist of several sub-species with contrasting climate preferences. These issues substantially contribute to the multimodality of the cross-validation posteriors. A detailed discussion of multimodality can also be found in Bhattacharya (2004).

Figure 8 shows the posteriors of π_ik associated with the pollen data, with respect to different choices of α and σ²_x. The posterior modes are significantly greater than zero, again vindicating the importance of the zero-inflated multinomial. As in the case of the chironomid data, here also the posteriors of π_ik appear to be quite robust with respect to the choices of α, σ²_x1 and σ²_x2 (we assume σ²_x1 = σ²_x2 for each choice). The fact that the posteriors of π_ik remain almost unchanged even with the relatively large value of α (= 5), which caused our model to overfit the data, confirms that the overfit with α = 5 was caused solely by the increase in the number of mixture components in our Dirichlet process based response function, with the modelling associated with π_ik playing no role in it.
Response surfaces for the pollen data
As in the chironomid case, here also we assess the fit of our model-based version of species abundances to the observed abundances. Figure 9 displays three such instances, focussing attention on the pollen species Alnus, Ericales and Other, where the last represents a combination of the counts of many species (see Appendix A of HWB for the details). Fitting Other is expected to be challenging because the various species amalgamated into the single category may respond differently to climate changes. The first row of Figure 9, which represents our fitted response surfaces for the above three species, has been constructed as follows. As in Figure 5 of HWB we construct a support lattice which covers the entire set of observed two-dimensional climate points with lattice squares: within each lattice square, we then take averages of the posterior medians of all ỹ_ik that fall within the square. The second row of Figure 9 represents the observed response surfaces and is constructed in the same way as the first row, but with the posterior medians replaced by the observed abundances. The last row shows the absolute difference in each lattice square between the averaged posterior medians and the averaged observed abundances. The spectrum of colours ranging from dark blue to dark red indicates progressively larger abundances, ranging from 0 to 400.
MODEL ADEQUACY TEST FOR THE POLLEN DATA
Since in this pollen data example the climate variable is bivariate, we consider the following discrepancy measure and its variants:

T_1(X) = Σ_{i=1}^{n} (x_i − x̃*_i)' S_i^{−1} (x_i − x̃*_i),

where x̃*_i is the mode of the i-th cross-validation posterior, and S_i is the covariance matrix of x̃ computed from the IRMCMC samples of the i-th cross-validation posterior. Obviously, the above measure can be straightforwardly extended to functions of any number of variables. Variants of the above measure, such as the square root of the quadratic form, or replacing the mode of x̃ with the median of x̃, can easily be considered.
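For concreteness, the following sketch computes the quadratic-form discrepancy from stored IRMCMC draws; the array layout and the use of the co-ordinate-wise median as the posterior summary (one of the variants mentioned above) are our assumptions.

```python
import numpy as np

def discrepancy_T1(samples, x_obs):
    """samples[i]: (K x 2) array of IRMCMC draws from the i-th cross-validation
    posterior; x_obs[i]: the observed (standardized) bivariate climate value."""
    total = 0.0
    for draws, xi in zip(samples, x_obs):
        center = np.median(draws, axis=0)      # co-ordinate-wise median (or a mode estimate)
        S = np.cov(draws, rowvar=False)        # covariance of x-tilde from the IRMCMC draws
        diff = xi - center
        total += float(diff @ np.linalg.solve(S, diff))  # quadratic form for case i
    return total
```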
Shown in Figures 10 and 11 are the inverse reference distributions associated with T_1, along with the corresponding observed discrepancy measures T_1(X), when x̃*_i = (x̃*_i1, x̃*_i2) are the co-ordinate-wise modes and medians, respectively, of the i-th cross-validation posterior. Both figures clearly indicate that our model very satisfactorily passes the model adequacy test of Bhattacharya (2013).
As in the case of the chironomid data, here also we consider the discrepancy measure based on the sum of the logarithms of the cross-validation posterior densities:

T_2(X) = Σ_{i=1}^{n} log π(x_i | X_{−i}, Y).

Figure 12 shows that the observed discrepancy measure T_2(X) falls comfortably within the 95% HPD region of the inverse reference distribution associated with T_2, indicating that our model passes the model adequacy test with respect to T_2 as well.
CONCLUSIONS AND FUTURE WORK
Our work can be considered a necessary stepping stone towards full-fledged palaeoclimate reconstruction. Indeed, the fact that the same modelling idea is able to fit both the chironomid and the pollen data vindicates the generality of our model; it is only natural to expect that the model and methodologies developed in this paper will be able to reconstruct past Holocene temperature (Korhola et al. (2002)) as well as past Irish climate (HWB). In fact, we see no reason why our model and methods would not be appropriate for predicting and analysing past climates of any other places of interest.
A very important advantage of our model is that it is relatively simple and is quite cheap computationally, with TMCMC playing an important role in this regard. For massive palaeoclimate datasets meant for climate reconstruction, this will certainly turn out to be of great value.
In the current work on cross-validation of modern, training data sets, we have ignored the spatial aspects of the data sets. However, since in the training data sets the climate values are recorded, the observed climate values are expected to have a much stronger bearing on inference than spatial effects. It seems that spatial (in fact, spatio-temporal) effects will play important roles when reconstructing past climates at multiple locations, since in such cases the past climates are unknown (see also Section 6 of HWB). Our model can be further generalized by incorporating desirable spatio-temporal effects; we will report this work elsewhere.
ACKNOWLEDGMENT
We are sincerely grateful to the reviewers for providing detailed, constructive comments, which have greatly improved the quality of our paper.

S-1. FULL CONDITIONAL DISTRIBUTIONS

S-1.1 Full conditionals of z_ik

If y_ik ≠ 0, the full conditional distribution of z_ik gives full mass to 0, that is, z_ik = 0 with probability one. On the other hand, if y_ik = 0, the full conditional probabilities of z_ik = 0 and z_ik = 1 are given by (2) and (3) respectively, where the normalising constant C is such that (2) + (3) = 1.
S-1.2 Full conditionals of π_ik
The full conditional of π_ik is proportional to π_ik^{z_ik} (1 − π_ik)^{1 − z_ik}; in other words, π_ik ∼ Beta(z_ik + 1, 2 − z_ik).
S-1.3 Full conditionals of λ_ik
The full conditional distribution of λ_ik is given by (5). Note that if z_ik = 1, implying y_ik = 0, then the above full conditional boils down to just the prior of λ_ik, given by the second factor of (5). So, even though (5) is not amenable to straightforward sampling when z_ik = 0, for z_ik = 1 one can simply sample from the Gamma(ξ_ik, 1/ψ) prior of λ_ik. We use the additive TMCMC methodology, with approximately optimised scaling constants, to update the entire set of λ_ik corresponding to z_ik = 0 in a single block.
S-1.4 Full conditionals of θ_kj
The full conditional distribution of θ_kj is given by (6), where Θ_{−kj} = Θ_k \ {θ_kj}, and [θ_kj | Θ_{−kj}], which follows from the Polya urn scheme, is given by (7). It is clear that it is not straightforward to simulate from (6). Notice also that continuous proposal distributions, for example a normal random walk, would not be appropriate in this case, since θ_kj has a discrete, not a continuous, distribution; for similar reasons TMCMC is not valid either. As a result, following Bhattacharya (2006), we employ (7) as the proposal distribution for updating θ_kj in a Metropolis-Hastings step. A key advantage of using this proposal is that the factor [θ_kj | Θ_{−kj}] does not appear in the Metropolis-Hastings ratio, which simplifies matters to a large extent.
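The cancellation described above can be made concrete with the following sketch; here `likelihood` and `base_draw` are hypothetical placeholders for the model-specific likelihood term and a draw from the base measure G_0, and the code is our illustration, not the authors' implementation.

```python
import random

def update_theta_kj(j, Theta, alpha, likelihood, base_draw):
    """One Metropolis-Hastings update of Theta[j] with the Polya urn proposal."""
    others = Theta[:j] + Theta[j + 1:]             # Theta_{-kj}
    M = len(others)
    # Polya urn: with probability alpha/(alpha + M) draw fresh from G0,
    # otherwise copy one of the existing components uniformly at random.
    if random.random() < alpha / (alpha + M):
        proposal = base_draw()
    else:
        proposal = random.choice(others)
    # Because the prior term [theta_kj | Theta_{-kj}] equals the proposal
    # density, it cancels in the MH ratio, leaving only the likelihood ratio.
    if random.random() < min(1.0, likelihood(proposal) / likelihood(Theta[j])):
        Theta[j] = proposal
    return Theta
```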
S-2. IRMCMC
Our proposed procedure can be stated in the following manner.
1. Choose an initial case i*. Use [x, Θ, Π, Λ, Z | X_{−i*}, Y] as the importance sampling density, where X_{−i*} = {x_1, ..., x_{i*−1}, x_{i*+1}, ..., x_n}. Bhattacharya & Haslett (2007) demonstrate that an appropriate i* may be obtained by minimizing a certain distance function. However, as shown in Bhattacharya & Haslett (2007), in cases where the importance weights do not depend upon the count data Y, this distance function leads to that i* for which x_{i*} is the median of X. As shown below, in our case also the importance weights are independent of Y, implying that i* = {i : x_i = median(X)}.
c. Store the K_1 × K_2 draws of x, denoted x̃, as the cross-validation posterior of x_i.

S-3. RELATIONSHIPS BETWEEN THE DISCREPANCY MEASURES

In what follows, x̃_i denotes a draw from the i-th cross-validation posterior, and x̃*_i can be either the median or the mode of that posterior. We consider two cases: in the first case we investigate the relationship between the discrepancy measure D_1, given by (11) (and its variant), and T_1, given by (15) of our main manuscript, letting x̃*_i be the median; in the second case, we investigate such relationships taking x̃*_i to be the posterior mode.

Case 1: x̃*_i is the median of the cross-validation posterior
A first-order Taylor series expansion about x̃*_i yields

g(x̃_i) = g(x̃*_i) + (x̃_i − x̃*_i) g′(u_i) and g(x_i) = g(x̃*_i) + (x_i − x̃*_i) g′(v_i),

where u_i lies between x̃_i and x̃*_i, and v_i lies between x_i and x̃*_i. We now assume that g′(·) is continuous and that, for i = 1, ..., n, u_i and v_i are contained in a small interval, so that g′(·) is approximately constant on that interval thanks to continuity. Such an assumption can be expected to hold in practice if the observed climate data x_i, after suitable scaling if required, have small empirical variance, so that they lie close together. The posterior medians are then also expected to be close to each other, that is, to lie in a small interval. The assumption that g′(·) is continuous on small intervals is expected to hold very generally.
It then holds that, for i = 1, ..., n, |g′(u_i)| ≈ |g′(v_i)| ≈ c (> 0). Also, Var(D_1(X) | Y) ≈ c² Var(T_2(X) | Y). Hence, (13) becomes (14), where T_2 is the unscaled analogue of T_1. The difference between this T_2 and T_1, given by (15) of our main manuscript, is that the latter involves scaling each term of the summation by the posterior standard deviation of x̃_i. If we scale each term of the summation in D_1 by Var{g(x̃_i)} and denote the modified discrepancy measure by D*_1, then, again invoking the Taylor series expansion g(x̃_i) = g(x̃*_i) + (x̃_i − x̃*_i) g′(u_i), we obtain Var{g(x̃_i)} ≈ c² Var(x̃_i), so that (after cancelling c in the ratios) the approximate probability equality (16) holds, showing that the discrepancy measures are approximately equivalent for the purpose of the goodness-of-fit test of Bhattacharya (2013).
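The first-order (delta-method) approximation Var{g(x̃_i)} ≈ c² Var(x̃_i) used above is easy to verify numerically; in the following sketch the choice of g and of the small posterior spread are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
g = np.tanh                                        # a smooth g with continuous derivative
draws = 0.3 + 0.05 * rng.standard_normal(100_000)  # posterior draws in a small interval
c = 1.0 - np.tanh(np.median(draws)) ** 2           # g'(x) = 1 - tanh(x)^2, at the median

# The two quantities agree closely, illustrating Var{g(x)} ~ c^2 Var(x).
print(np.var(g(draws)), c ** 2 * np.var(draws))
```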
Case 2: x̃*_i is the mode of the cross-validation posterior

When x̃*_i is the mode of the i-th cross-validation posterior, we can consider the discrepancy measure D_2, defined as D_1 but with the posterior mode in place of the median. A Taylor series expansion around the mode yields

g(x̃_i) = g(x̃*_i) + (x̃_i − x̃*_i) g′(u*_i) and g(x_i) = g(x̃*_i) + (x_i − x̃*_i) g′(v*_i),

where u*_i lies between x̃_i and x̃*_i, and v*_i lies between x_i and x̃*_i. Now, assuming that g′(·) is continuous on a small interval containing u*_i and v*_i for i = 1, ..., n implies |g′(u*_i)| ≈ |g′(v*_i)| ≈ c* (> 0), for i = 1, ..., n. As in the previous case, here also we use the approximation Var{g(x̃_i)} ≈ c² Var(x̃_i), obtained from a first-order Taylor series expansion around the posterior median rather than the posterior mode. This yields an approximate probability equality of the form (16), with D*_1 replaced by D_2.
Hence, when the x̃*_i are posterior modes, the discrepancy measures D_2 and T_1 are approximately equivalent for the goodness-of-fit test of Bhattacharya (2013).
It is also clear that the scaled discrepancy measure D*_2, defined analogously to D*_1, can be considered when x̃*_i is the mode.
"year": 2013,
"sha1": "f46780536becc17bd5d1a8d9054bee2e48711558",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f46780536becc17bd5d1a8d9054bee2e48711558",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Aspirin reprogrammes colorectal cancer cell metabolism and sensitises to glutaminase inhibition
Background: To support proliferation and survival within a challenging microenvironment, cancer cells must reprogramme their metabolism. As such, targeting cancer cell metabolism is a promising therapeutic avenue. However, identifying tractable nodes of metabolic vulnerability in cancer cells is challenging due to their metabolic plasticity. Identification of effective treatment combinations to counter this is an active area of research. Aspirin has a well-established role in cancer prevention, particularly in colorectal cancer (CRC), although the mechanisms are not fully understood.

Methods: We generated a model to investigate the impact of long-term (52 weeks) aspirin exposure on CRC cells, which has allowed us to comprehensively characterise the metabolic impact of long-term aspirin exposure (2-4 mM for 52 weeks) using proteomics, Seahorse Extracellular Flux Analysis and Stable Isotope Labelling (SIL). Using this information, we were able to identify nodes of metabolic vulnerability for further targeting, investigating the impact of combining aspirin with metabolic inhibitors in vitro and in vivo.

Results: We show that aspirin regulates several enzymes and transporters of central carbon metabolism and results in a reduction in glutaminolysis and a concomitant increase in glucose metabolism, demonstrating reprogramming of nutrient utilisation. We show that aspirin causes likely compensatory changes that render the cells sensitive to the glutaminase 1 (GLS1) inhibitor, CB-839. Of note, given the clinical interest, treatment with CB-839 alone had little effect on CRC cell growth or survival. However, in combination with aspirin, CB-839 inhibited CRC cell proliferation and induced apoptosis in vitro and, importantly, reduced crypt proliferation in Apc fl/fl mice in vivo.

Conclusions: Together, these results show that aspirin leads to significant metabolic reprogramming in colorectal cancer cells and raise the possibility that aspirin could significantly increase the efficacy of metabolic cancer therapies in CRC.

Supplementary Information: The online version contains supplementary material available at 10.1186/s40170-023-00318-y.
Background
Metabolic reprogramming is a defining feature of cancer cells [1] and is essential to meet both the energetic and biosynthetic requirements of chronic proliferation. Although the specific nature of cancer cell metabolism depends on several factors, including tissue of origin and mutational status, common features often include increased aerobic glycolysis (known as the Warburg effect) and glutamine utilisation [2,3].

Colorectal cancer (CRC) is the second most common cause of cancer-related death in the UK, and incidence is increasing, particularly in younger patient populations [4,5], highlighting the need for improved therapies. Metabolic reprogramming supports CRC initiation and progression, and it is well established that it is a driver rather than a passive outcome of tumourigenesis. Indeed, many key oncogenic pathways in CRC have been shown to directly control metabolism, including Wnt, PI3K and p53 signalling [6,7]. As such, cancer metabolism is an attractive target for novel therapies. However, many challenges remain in developing metabolic anti-cancer therapies, such as the large overlap between the metabolic programme favoured by cancer cells and normal proliferating cells, resulting in a small therapeutic window and an increased likelihood of toxicity [8]. There is also the challenge of overcoming metabolic plasticity; cancer cells are well suited to adapting their metabolism to meet environmental constraints, such as hypoxia and hypoglycaemia, and the contrasting conditions of the bloodstream and metastatic sites [9,10]. As a result, when targeted with singular metabolic interventions, cancer cells often rewire their metabolism to enable continued proliferation.

Aspirin is a widely used non-steroidal anti-inflammatory drug (NSAID) prescribed for the prevention of cardiovascular events in high-risk patients. A growing body of epidemiological evidence suggests that aspirin reduces cancer incidence, in particular CRC, as well as potentially slowing disease progression [11,12] and increasing patient survival [13]. The US Preventative Services Task Force recommends daily aspirin for CRC prevention in 50-69-year-olds with an increased risk of cardiovascular disease and no increased risk of bleeding [14]. Furthermore, the National Institute for Health and Care Excellence (NICE) guidelines now recommend daily aspirin for the prevention of CRC in patients with Lynch syndrome [15].

Aspirin is a pleiotropic drug; its actions at the cellular level are not fully understood, particularly with regard to its role in cancer prevention. Increased knowledge of aspirin's cellular mechanisms could enhance its efficacy, including identification of optimal timing and dose, and of those individuals most likely to benefit from taking regular aspirin [16,17].

Epidemiological data suggest that the effect of aspirin on CRC incidence and progression is affected by the length of time for which aspirin is taken [11,18]. While the effects of aspirin have been extensively studied in vitro and in vivo, long-term exposure has not been modelled before in cell lines. Therefore, in this study, we investigated the impact of long-term (52 weeks) aspirin exposure on CRC cells, with the aim of identifying novel mechanisms of action. Detailed proteomic and metabolomic analysis revealed altered metabolism and nutrient utilisation with aspirin exposure.

Several key enzymes involved in central carbon metabolism were identified as being regulated by aspirin, including pyruvate carboxylase (PC), pyruvate dehydrogenase kinase 1 (PDK1) and glutaminase 1 (GLS1). Although aspirin alone did not impact the ability of the cells to produce ATP, it does inhibit net glutaminolysis, despite inducing a (likely compensatory) increase in GLS1 expression. Importantly, although the GLS1 inhibitor CB-839 alone had little effect on CRC cell survival, aspirin renders colorectal cells sensitive to the drug both in vitro and in vivo. In addition, reduced glutaminolysis upon aspirin exposure leads to a concomitant and likely compensatory increase in glucose utilisation in the tricarboxylic acid (TCA) cycle, leaving cells sensitive to the mitochondrial pyruvate carrier 1 (MPC1) inhibitor, UK-5099. In summary, we demonstrate that aspirin causes metabolic rewiring in colorectal cancer cells, providing therapeutic opportunities to sensitise colorectal cancer to existing metabolic cancer therapies currently under clinical investigation.
Cell lines and culture
The human colorectal carcinoma-derived cell lines SW620 and LS174T were obtained from the American Type Culture Collection (ATCC, Maryland, USA), and HCA7 was a kind gift from Dr. Susan Kirkland, Imperial College, London. All cell lines were routinely tested for mycoplasma contamination using the MycoAlert PLUS mycoplasma detection kit (Lonza, MD, USA) and molecularly characterised using an "in house" panel of cellular and molecular markers to check that cell lines had not been cross-contaminated (every 3-6 months). Stocks were securely catalogued and stored, and passage number limits were strictly adhered to, to prevent phenotypic drift. All cell lines were cultured in Dulbecco's modified Eagle medium (DMEM) (Sigma-Aldrich, Merck KGaA) with added 10% foetal bovine serum (FBS) (Sigma-Aldrich, Merck KGaA), 2 mM glutamine (Gibco, ThermoFisher Scientific Inc.), 100 units/ml penicillin and 100 units/ml streptomycin (Gibco, ThermoFisher Scientific Inc.). For stock purposes, cells were maintained in 25 cm² tissue culture (T25) flasks (Corning, NY, USA) and incubated at 37℃ in dry incubators maintained at 5% CO2. Cell media were changed every 3-4 days. Experiments were performed in triplicate independently, with distinct passages of cells, unless otherwise stated.
Long-term aspirin
For the long-term aspirin-treated cells, a 20-mM stock solution of aspirin (Sigma, Merck KGaA, Darmstadt, Germany) was created by adding 3.6 mg/ml aspirin to 10% DMEM, and fresh aspirin was made up immediately prior to use. The aspirin concentration in the growth media was maintained continuously for ~52 weeks. Passage frequency and ratio were adjusted to maintain confluency in aspirin-treated cells.
Crystal violet staining
To measure the proliferation of cells treated with aspirin in combination with either CB-839 or UK-5099, cells were seeded into 96-well plates (Corning, NY, USA) (20,000 cells per well in all conditions except HCA7 cells treated with 4 mM aspirin, where 40,000 cells per well were seeded) in normal growth medium and incubated for 24 h, with 3-4 technical replicate wells per treatment condition. Cells were then treated with media containing drug treatments (or vehicle control) and incubated for a further 72 h. Plates were then fixed with 4% PFA for 15 min and stained with 0.5% crystal violet solution (Sigma-Aldrich, Merck KGaA) before solubilisation in 2% SDS, and OD595 measurements were subsequently obtained using an iMark microplate reader (Bio-Rad Laboratories, Inc.). The number of adherent cells was calculated from the confluence at each concentration of CB-839/UK-5099 at 72 h, relative to the same concentration of aspirin under inhibitor-free control conditions, in order to compare the effect of the drugs between different aspirin treatments.
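A minimal sketch of this normalisation is given below; the array layout (aspirin arms by drug concentrations, with the vehicle wells in column 0) is our assumption.

```python
import numpy as np

def relative_adherent_cells(od: np.ndarray) -> np.ndarray:
    """od: (n_aspirin_arms x n_drug_concentrations) array of mean OD595 readings.
    Returns each reading as a fraction of the matched inhibitor-free control."""
    vehicle = od[:, [0]]     # column 0 holds the 0-uM inhibitor wells per aspirin arm
    return od / vehicle      # relative adherent cells within each aspirin arm
```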
For experiments using Human Plasma-Like Medium (HPLM; Gibco, ThermoFisher Scientific, Inc.), HPLM was supplemented with 10% dialysed FBS (dFBS) and experiments were performed as above. Cells were incubated in 10% dFBS HPLM for at least 48 h prior to the start of treatment, to allow for metabolic adaptation. During the experiment, media were changed every 24 h (unlike experiments in DMEM, where the same media were left for the full 72 h of treatment), in order to avoid depletion of the low levels of nutrients.
IncuCyte
To simultaneously measure cell proliferation and apoptosis upon treatment with aspirin in combination with CB-839, an IncuCyte ZOOM live-cell imaging system was used. Cells were seeded in 96-well plates (20,000 cells per well) and incubated for 24 h, with 3-4 wells per treatment condition (technical replicates). Cells were then treated with treatment-containing media (or vehicle control) and placed in the IncuCyte system. The percentage confluence was measured every 4 h for the total time indicated on the graphs. The IncuCyte system took four different image fields per well. At the time of treatment, the cells were also treated with 2 µM CellEvent caspase-3/7 green detection reagent (C10423; Invitrogen, ThermoFisher Scientific, Inc.), which was used to measure apoptosis. Green fluorescent cells, indicating active caspase-3/7 and apoptosis, were measured by the IncuCyte system as green object count (1/mm²). For each individual well, the green object count was normalised to the confluence at each timepoint, and results were expressed as relative apoptosis. The same method was performed using SW620 cells treated with 2 µM ABT-737, compared to a vehicle control, as a positive control for apoptosis, in order to validate this assay and confirm the detection of apoptotic cells.
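The per-well apoptosis readout reduces to the following sketch; the input names are ours.

```python
def relative_apoptosis(green_count, confluence):
    """Normalise the green object count (1/mm^2) to percent confluence at each
    timepoint for one well, as described above."""
    return [g / c if c > 0 else 0.0 for g, c in zip(green_count, confluence)]
```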
Proteomic analysis

TMT labelling and high pH reversed-phase chromatography
Following 52 weeks of aspirin treatment to develop long-term treated cell lines, cells were seeded in T25 flasks and, following maintenance in aspirin for a further 72 h, whole-cell protein lysates were collected. Lysates were collected as described previously [19]. Protein concentrations were ascertained, and samples were adjusted to 2 mg/mL. One hundred micrograms of each sample was digested with trypsin overnight at 37℃, labelled with tandem mass tag (TMT) ten-plex reagents according to the manufacturer's protocol (ThermoFisher Scientific, Inc.), and the labelled samples were pooled.
An aliquot of the pooled sample was evaporated to dryness, resuspended in 5% formic acid and then desalted using a SepPak cartridge according to the manufacturer's instructions (Waters, Milford, Massachusetts, USA). Eluate from the SepPak cartridge was again evaporated to dryness and resuspended in buffer A (20-mM ammonium hydroxide, pH 10) prior to fractionation by high pH reversed-phase chromatography using an Ultimate 3000 liquid chromatography system (ThermoFisher Scientific, Inc.). In brief, the sample was loaded onto an XBridge BEH C18 column (130 Å, 3.5 µm, 2.1 mm × 150 mm; Waters, Milford, Massachusetts, USA) in buffer A, and peptides were eluted with an increasing gradient of buffer B (20-mM ammonium hydroxide in acetonitrile, pH 10) from 0 to 95% over 60 min. The resulting fractions (15 in total) were evaporated to dryness and resuspended in 1% formic acid prior to analysis by nano-LC MSMS using an Orbitrap Fusion Tribrid mass spectrometer (ThermoFisher Scientific, Inc.).
All spectra were acquired using an Orbitrap Fusion Tribrid mass spectrometer controlled by Xcalibur 2.1 software (Thermo Scientific) and operated in data-dependent acquisition mode using an SPS-MS3 workflow. FTMS1 spectra were collected at a resolution of 120,000, with an automatic gain control (AGC) target of 200,000 and a maximum injection time of 50 ms. Precursors were filtered with an intensity threshold of 5000, according to charge state (to include charge states 2-7) and with monoisotopic peak determination set to peptide. Previously interrogated precursors were excluded using a dynamic window (60 s ± 10 ppm). MS2 precursors were isolated with a quadrupole isolation window of 1.2 m/z. ITMS2 spectra were collected with an AGC target of 10,000, a maximum injection time of 70 ms and a CID collision energy of 35%.
For FTMS3 analysis, the Orbitrap was operated at 50,000 resolution with an AGC target of 50,000 and a maximum injection time of 105 ms. Precursors were fragmented by high-energy collision dissociation (HCD) at a normalised collision energy of 60% to ensure maximal TMT reporter ion yield. Synchronous precursor selection (SPS) was enabled to include up to 5 MS2 fragment ions in the FTMS3 scan.
Data analysis
The raw data files (supplied in Supplementary Data File S2) were processed and quantified using Proteome Discoverer software v2.1 (ThermoFisher Scientific, Inc.) and searched against the UniProt Human database (downloaded September 2017; 140,000 sequences) using the SEQUEST HT algorithm. Peptide precursor mass tolerance was set at 10 ppm, and MS/MS tolerance was set at 0.6 Da. Search criteria included oxidation of methionine (+15.995 Da), acetylation of the protein N terminus (+42.011 Da) and methionine loss plus acetylation of the protein N terminus (−89.03 Da) as variable modifications, and carbamidomethylation of cysteine (+57.021 Da) and the addition of the TMT mass tag (+229.163 Da) to peptide N termini and lysine as fixed modifications. Searches were performed with full tryptic digestion, and a maximum of two missed cleavages was allowed. The reverse database search option was enabled, and all data were filtered to satisfy a false discovery rate (FDR) of 5%.
Protein abundance processing
Protein groupings were determined by PD2.1; however, the master protein selection was improved with an in-house script. This enables us to infer biological trends more effectively in the dataset without any loss in the quality of identification or quantification. The MS data were searched against the human UniProt database retrieved on October 2, 2019, and updated with additional annotation information on April 21, 2020.
The protein abundances were normalised within each sample to the total peptide amount and then log2-transformed to bring them closer to a normal distribution.
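A minimal sketch of this preprocessing is shown below; the rescaling to the mean total and the small offset before taking log2 are our choices, not details from the original pipeline.

```python
import numpy as np

def normalise_log2(abund: np.ndarray) -> np.ndarray:
    """abund: (proteins x samples) array of non-negative abundances."""
    totals = abund.sum(axis=0, keepdims=True)   # total peptide amount per sample
    scaled = abund / totals * totals.mean()     # put all samples on a common scale
    return np.log2(scaled + 1e-6)               # small offset guards against log2(0)
```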
Statistics
Statistical significance was then determined using Welch's t-tests between the conditions of interest. The p values were FDR-corrected using the Benjamini-Hochberg method.
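This testing scheme can be sketched as follows, assuming log2 abundances arranged as proteins by replicates; the function name is ours.

```python
import numpy as np
from scipy.stats import ttest_ind
from statsmodels.stats.multitest import multipletests

def welch_bh(ctrl: np.ndarray, treat: np.ndarray, alpha: float = 0.05):
    """ctrl, treat: (proteins x replicates) arrays of log2 abundances."""
    pvals = np.array([ttest_ind(c, t, equal_var=False).pvalue  # Welch's t-test per protein
                      for c, t in zip(ctrl, treat)])
    reject, qvals, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
    return pvals, qvals, reject
```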
QIAGEN Ingenuity Pathway Analysis (QIAGEN IPA)
Data were analysed using Ingenuity Pathway Analysis. Proteins from the dataset that met the cutoff of p < 0.05 were considered for the analysis and compared to a reference set consisting of the full list of proteins identified in the experiment. A right-tailed Fisher's exact test was used to calculate a p value determining the probability that the association between the genes in the dataset and the pathways/upstream regulators/functions was by chance alone, and the predicted and observed regulation patterns of the proteins were used to predict an activation z-score.
Overrepresentation analysis
Overrepresentation analysis was performed using WebGestalt (www.webgestalt.org), using the Gene Ontology functional database and KEGG (Kyoto Encyclopedia of Genes and Genomes) pathways. Gene symbols were entered for proteins that showed significant regulation (p < 0.05, fold change > 1.4 or < 0.71) in both 2-mM and 4-mM long-term aspirin compared to control.
Extracellular flux analysis
Extracellular flux analysis was carried out using the XFp Seahorse Extracellular Flux Analyzer (Agilent), according to the manufacturer's protocol. Long-term aspirin-treated SW620 cells (60,000 cells) were seeded onto a Cell-Tak (354240; Corning, NY, USA)-coated microplate (2-3 technical replicate wells per condition) and centrifuged at 200 g for 1 min (no brake), allowing immediate adhesion. Cells were seeded in Seahorse XF assay media (Agilent, CA, USA) supplemented with 10 mM glucose, 2 mM glutamine and 1 mM pyruvate (Agilent, CA, USA). Corresponding OCR/ECAR (oxygen consumption rate/extracellular acidification rate) changes were monitored for the duration of the experiment. Wells had subsequent injections of oligomycin (2 µM), FCCP (2 µM), antimycin A (1 µM) with rotenone (1 µM), and monensin (20 µM) (in order to determine the maximal glycolytic rate, as shown by Mookerjee et al. [21,22]), all from Sigma-Aldrich. Data were acquired using the Seahorse Wave software v2.6 (Agilent, CA, USA). The experiment was performed independently in triplicate.
Stable isotope tracer analysis
For stable isotope labelling (SIL) experiments, cells were cultured with U-[13C]-Glc or U-[13C]-Q (Cambridge Isotope Laboratories, Inc.) for the indicated time points. 13C-labelled nutrients were added to glucose- and glutamine-free DMEM, supplemented with 10% dFBS, 100 units/ml penicillin, 100 units/ml streptomycin, 10 mM glucose and 2 mM glutamine (13C-labelled or unlabelled, as appropriate). Cellular metabolites were extracted and analysed by gas chromatography-mass spectrometry (GC-MS) using protocols described previously [23-25]. Metabolite extracts were derivatised using N-(tert-butyldimethylsilyl)-N-methyltrifluoroacetamide (MTBSTFA) as described previously [26]. D-myristic acid (750 ng/sample) was added as an internal standard to metabolite extracts, and metabolite abundance was expressed relative to the internal standard and normalised to cell number. Mass isotopomer distributions were determined using a custom algorithm developed at McGill University [25]. The experiment was performed with three different flasks of cells from the same passage number per condition. Raw data are supplied in Supplementary Data File S1.
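As an illustration of how a mass isotopomer distribution can be summarised, the following sketch computes the average fractional 13C enrichment of a metabolite; it assumes natural-abundance correction has already been applied and is not the custom McGill algorithm referenced above.

```python
import numpy as np

def fractional_enrichment(mid) -> float:
    """mid: fractions of the m+0 ... m+n isotopomers (summing to 1) for a
    metabolite with n carbon positions."""
    mid = np.asarray(mid, dtype=float)
    n = len(mid) - 1                                 # number of carbon positions
    return float(np.sum(np.arange(n + 1) * mid) / n) # weighted mean label per carbon

# e.g. a 5-carbon metabolite (such as glutamate) that is 80% m+5 labelled: 0.8
print(fractional_enrichment([0.2, 0.0, 0.0, 0.0, 0.0, 0.8]))
```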
In vivo experiments
All in vivo experiments were carried out in accordance with UK Home Office regulations (under project licences 70/8646 and PP3908577), adhering to the ARRIVE guidelines, with approval from the Animal Welfare and Ethical Review Board of the University of Glasgow. Mice were housed under a 12-h light-dark cycle, at constant temperature (19-23℃) and humidity (55 ± 10%). Standard diet and water were available ad libitum. The majority of the work was performed on the C57BL/6J background. The following alleles were used in this study: VillinCreER [27] and Apc fl [28]. Full intestinal recombination was obtained by two intraperitoneal injections of 2 mg tamoxifen, and tissues were harvested 4 days post induction. For drug studies in vivo, VillinCreER Apc fl/fl mice were treated with CB-839 (200 mg/kg in 25% (w/v) hydroxypropyl-β-cyclodextrin in 10 mM citrate at pH 2.0) or vehicle from day 1 post i.p. tamoxifen administration. For aspirin and combination treatments, mice received aspirin (2.6 mg/ml in drinking water) from 2 days prior to tamoxifen administration and remained on aspirin until the end of the study. Animals were injected with BrdU (i.p.) 2 h prior to tissue sampling.
Immunohistochemistry (IHC)
Mouse intestines were flushed with water, cut open longitudinally, pinned out onto silicone plates and fixed in 10% neutral buffered formalin overnight at 4℃. Fixed tissue was rolled from the proximal to the distal end into Swiss rolls and processed for paraffin embedding. Tissue blocks were cut into 5-μm sections and stained with haematoxylin and eosin (H&E). IHC was performed on formalin-fixed intestinal sections according to standard staining protocols. The primary antibody used was against BrdU (1:150; BD Biosciences, #347580), and representative images are shown.
Statistical analysis
Data were presented and statistical analysis was performed using GraphPad Prism 9. Statistical tests were performed as stated, and significance is expressed as *p < 0.05, **p < 0.01, ***p < 0.001 and ****p < 0.0001. Results are expressed as mean values with standard error of the mean (SEM) where independent experiments are compared, and with standard deviation (SD) where technical replicates are compared. Here, technical replicates refer to separate wells or flasks of cells from the same original passage, seeded and treated at the same time. Independent experiments refer to separate passages of cells seeded at different times.
Long-term aspirin exposure regulates expression of metabolic pathway genes in CRC cells
To explore the consequences of long-term aspirin exposure on SW620 colorectal cancer cells, we performed proteomic analysis to compare protein expression in cells treated for 52 weeks in continuous culture with either 2 mM or 4 mM aspirin to untreated controls (experimental design shown in Fig. 1a). Two hundred sixty-five proteins were significantly differentially regulated in cells treated with both 2 mM and 4 mM aspirin compared to untreated controls (p < 0.05, fold change > 1.4 or < 0.71) [29] (Fig. 1b). Analysis of these proteins using WebGestalt highlighted "metabolic process" as having the highest number of genes among the gene ontology (GO) biological processes (Fig. 1c). Overrepresentation analysis using the KEGG pathway database highlighted a high enrichment ratio for "metabolic pathways" and "central carbon metabolism in cancer", as well as some specific metabolic pathways including "pyruvate metabolism", "cholesterol metabolism" and "fatty acid biosynthesis" (Fig. 1d). These results suggest that long-term aspirin exposure might rewire cellular metabolic pathways in CRC cells.
Aspirin exposure reprogrammes nutrient utilisation in CRC cells
To determine whether aspirin exposure leads to a functional change in energy metabolism, as predicted by the proteomic analysis, we next investigated the effect of long-term aspirin exposure on ATP (adenosine triphosphate) production from glycolysis and oxidative phosphorylation (oxphos) using a Seahorse Extracellular Flux Analyzer. Surprisingly, no changes were observed in either basal or maximal oxygen consumption rate (OCR) or extracellular acidification rate (ECAR), proxy measures of oxidative and glycolytic activity, respectively, with long-term aspirin treatment (Fig. 2a). This suggests that aspirin exposure has no net impact on ATP production in CRC cells and is therefore unlikely to explain the known effect of aspirin on cellular proliferation [30].

We next conducted stable isotope labelling (SIL) experiments to investigate whether aspirin altered the metabolic fate of glucose and glutamine, the most important carbon sources for proliferating cancer cells in culture. Long-term (52 weeks) aspirin-treated SW620 cells were incubated with either uniformly labelled 13C-glucose (U-[13C]-Glc) or glutamine (U-[13C]-Q) for up to 8 h in order to capture isotopic steady state [31]. The conventional metabolism of U-[13C]-Glc and U-[13C]-Q in tumour cells is illustrated in Fig. 2b, c. The majority of metabolites reach isotopic steady state within 8 h of incubation (shown for citrate, glutamate and malate), with the steady-state proportion of labelled metabolite (indicating labelled nutrient contribution) being altered by aspirin exposure (Fig. 2d, e). Incorporation of U-[13C]-Glc and U-[13C]-Q across TCA cycle metabolites at 8 h was increased and decreased, respectively, upon aspirin exposure (Fig. 2f, g). Similar results were observed in LS174T and HCA7 cells exposed to long-term aspirin (Supplementary Fig. 1).

Mass isotopomer distribution (MID) analysis of citrate, glutamate and malate shows a decrease in the unlabelled metabolites (m+0) and an increase in the proportion of the glucose-labelled mass isotopomers with aspirin exposure (Fig. 2h). By contrast, an increase in unlabelled metabolites (m+0) and a decrease in the proportion of the glutamine-labelled isotopomers were observed (Fig. 2i). These data suggest that glutaminolysis is inhibited by aspirin exposure, confirmed by analysis of the glutamate to glutamine m+5 ratio (Fig. 2j).

These data demonstrate that, despite there being no overall impact of long-term aspirin exposure on ATP production, it does cause metabolic reprogramming in three different CRC cell lines, reducing glutaminolysis and increasing glucose utilisation. Glucose and glutamine cooperate in fuelling the TCA cycle; a decrease in the entry of one nutrient can lead to a compensatory increase in the other [32,33]. This suggests that the increased glucose utilisation may be a compensation mechanism for the reduction in glutaminolysis in the presence of aspirin, in order to maintain carbon entry into the TCA cycle. As there was no overall effect on oxphos, the cells can evidently maintain TCA cycle function in the presence of aspirin by increasing glucose utilisation. Taken together, these data highlight the metabolic plasticity of the cells, allowing them to minimise the impact on ATP production in the presence of aspirin.
Aspirin regulates levels of proteins involved in central carbon metabolism
Having determined that aspirin impacts cellular metabolism in CRC cells, we sought to identify changes in protein expression consistent with the metabolic rewiring we observe. For this, we performed further analysis on the proteomic data in Fig. 1 and found that Ingenuity Pathway Analysis (IPA) predicted inhibition of activating transcription factor 4 (ATF4, an important regulator of cellular metabolism) signalling with aspirin (p = 0.0116, z-score = −2.894). Regulation of all ATF4 target genes captured by IPA is shown in Fig. 3a.

ATF4 is involved in the cellular response to amino acid deprivation and has also been shown to regulate glutamine metabolism [34,35]. Figure 3b shows an overview of proteins involved in glutamine metabolism and transport, several of which are ATF4 target genes. For example, expression of glutamic-pyruvic transaminase 2 (GPT2), which catalyses the reversible transamination between alanine and α-ketoglutarate (αKG) to generate pyruvate and glutamate, was downregulated upon aspirin treatment (4 mM aspirin vs control; log2 fold change = −0.95, p value = 0.03); GPT2 has previously been shown to be regulated by ATF4 [35]. There was also downregulation of the ATF4 targets asparagine synthetase (ASNS; log2 fold change = −0.75, p value = 0.004) and phosphoserine aminotransferase 1 (PSAT1; log2 fold change = −0.58, p value = 0.0008), as well as glutamate dehydrogenase 1 (GLUD1; log2 fold change = −0.27, p value = 0.013) and the aminotransferase glutamic-oxaloacetic transaminase 2 (GOT2; log2 fold change = −0.62, p value = 0.004). Furthermore, two amino acid transporters that impact glutamine metabolism, the cystine/glutamate antiporter (xCT, encoded by the SLC7A11 gene; log2 fold change = −2.09, p value = 0.0009) and the large neutral amino acid transporter LAT1 (encoded by the SLC7A5 gene; log2 fold change = −1.89, p value = 0.012), also highlighted by the IPA in Fig. 3a, were both downregulated with aspirin. These data are consistent with the reduced levels of glutaminolysis we observed in Fig. 2. Also consistent are an increase in the intracellular abundance of aspartate and a decrease in the intracellular abundance of alanine upon long-term aspirin exposure, potentially illustrating the functional consequences of reduced expression of ASNS and GPT2, respectively (Fig. 3c).

Consistent with the proteomic data, mRNA levels of GPT2, SLC7A11 and SLC7A5 were found to be downregulated upon long-term aspirin exposure, suggesting strong transcriptional regulation (Supplementary Fig. 2a). Contrary to expectation, neither protein nor mRNA levels of ATF4 itself were significantly regulated with long-term aspirin exposure (Supplementary Fig. 2b), suggesting that aspirin might regulate ATF4 through post-translational modification (post-translational regulation of ATF4 activity has been observed previously [36]). Validation of the proteomic data was performed for GPT2 and LAT1 by immunoblotting (Fig. 3d and Supplementary Fig. 2c). The key glutamine transporter ASCT2 (alanine-serine-cysteine transporter 2) was also investigated by immunoblotting and was downregulated with long-term 4 mM aspirin (Fig. 3d), although this was not statistically significant (Supplementary Fig. 2c).

Further analysis of the proteomic data also revealed expression changes consistent with the concomitant increase in glucose metabolism we observed in Fig. 2f. Two proteins involved in regulating the entry of pyruvate into the TCA cycle showed significant regulation: PC was upregulated (log2 fold change = +1.28, p value = 0.00037) and PDK1 was downregulated (log2 fold change = −1.19, p value = 0.002) (illustrated in Fig. 3e). These results were validated by immunoblotting (Fig. 3d and Supplementary Fig. 2d). This is consistent with the increased glucose carbon entry into the TCA cycle that we previously observed (Fig. 2f and h). In addition, the glycolysis enzyme hexokinase 1 (HK1; log2 fold change = +0.79, p value = 0.012) and glucose transporter 1 (GLUT1, encoded by the SLC2A1 gene; log2 fold change = +1.13, p value = 0.00026) were both upregulated in the proteomic data, also consistent with increased glucose utilisation upon aspirin exposure (Fig. 3e). These results were validated by immunoblotting (Supplementary Fig. 2e).

Unexpectedly, GLS1, which catalyses the first step of glutaminolysis (Fig. 3b), showed significant upregulation (Fig. 3d, e and Supplementary Fig. 2d). This is inconsistent with the reduced levels of glutaminolysis we observed in Fig. 2. Both known splice variants of GLS1 (GLS1 KGA and GLS1 GAC) were identified (Fig. 3d), with GLS1 GAC being the dominantly expressed isoform in our cells. qPCR analysis of aspirin-treated SW620 cells did not show any significant transcriptional regulation of GLS1, PC or PDK1 (Supplementary Fig. 2f), suggesting post-transcriptional regulation of these proteins.

To demonstrate the changes in other CRC cell lines, expression of GLS1, PC and PDK1 was also investigated by immunoblotting in LS174T and HCA7 cells after long-term aspirin exposure (Supplementary Fig. 3a), showing upregulation of GLS1 GAC in LS174T, though this was not statistically significant, and significant downregulation of PC and PDK1 in HCA7 cells.

Expression changes were also investigated by immunoblotting following short-term aspirin treatment (72 h) (Supplementary Fig. 3a-b). This showed significant upregulation of GLS1 GAC in SW620 and LS174T cells, as well as upregulation of PC in SW620. In addition, downregulation of mRNA expression of the ATF4 targets GPT2 and SLC7A5 was also observed in SW620 cells (Supplementary Fig. 3c). Interestingly, these findings suggest that short-term treatment is sufficient for the regulatory effect of aspirin on these metabolic enzymes, although it should be noted that long-term aspirin exposure has a stronger effect.
Metabolic reprogramming in response to aspirin exposes metabolic vulnerabilities in CRC cells
While the metabolic impact of aspirin may be insufficient to explain the known detrimental effect on cellular proliferation [30], it could render cells more susceptible to further metabolic perturbation. Despite an overall reduction in glutaminolysis (Fig. 2g), aspirin causes a strong upregulation of GLS1 levels (Fig. 3d), which may be a compensatory mechanism to maximise utilisation of glutamine when levels of other glutaminolysis enzymes are reduced (such as GPT2). Increased expression of GPT2 has previously been shown to compensate for inhibition of GLS1, suggesting that the reverse relationship may also occur [37]. We therefore hypothesised that aspirin-treated cells may be more sensitive to further blockade of glutaminolysis by targeting GLS1. To investigate this, long-term aspirin-exposed cells were incubated with increasing concentrations of CB-839 (a selective GLS1 inhibitor currently in clinical trials [38], also known as Telaglenastat; illustrated in Fig. 4a). SW620 cells showed no sensitivity to CB-839 alone (up to 10 µM), consistent with previous findings [39]; however, cells exposed to long-term aspirin showed significantly increased sensitivity in a dose-dependent manner (Fig. 4b). Similar results were obtained using another GLS1 inhibitor, inhibitor-968 (Supplementary Fig. 4a), demonstrating the specificity of the aspirin effect on glutaminolysis. Similar results were obtained in long-term aspirin-treated HCA7 cells and, to a lesser extent, LS174T cells. Although both of these cell lines showed minimal sensitivity to CB-839 without aspirin, aspirin significantly increased their response to the drug (Supplementary Fig. 4b-c). This effect was also investigated with short-term aspirin treatment in SW620 cells (Supplementary Fig. 4d). This also showed sensitisation, but to a lesser extent than in long-term aspirin-treated cells.

Fig. 3 Aspirin regulates proteins involved in central carbon metabolism. a ATF4 target genes highlighted in proteomic data by IPA analysis. Fold change of proteins in both long-term (52 weeks) 2-mM and 4-mM aspirin conditions, relative to control (n = 3 biological replicates). IPA analysis shows a predicted overall inhibition of ATF4 signalling with long-term 4-mM aspirin (p = 0.0116, z-score = −2.894). b Overview of key enzymes involved in glutaminolysis and their average fold changes with long-term 4-mM aspirin treatment in the proteomic data. BCAAs, branched-chain amino acids; OAA, oxaloacetate; PHP, phosphohydroxypyruvate. Created with BioRender.com. c Metabolite abundance relative to cell number of alanine and aspartate in long-term 4-mM aspirin-treated cells compared to control. Error bars represent SD (n = 6 technical replicates). Asterisks refer to p values obtained using t-tests: ***p < 0.001, ****p < 0.0001. Created with BioRender.com. d Immunoblotting for a selection of metabolic enzymes highlighted in proteomic data, with long-term aspirin treatment (ns, non-specific). Representative of at least 3 independent experiments. α-tubulin is used as a loading control. e Average fold changes in the proteomic data with long-term 4-mM aspirin compared to control, including central carbon metabolism genes that showed significant regulation (p < 0.05, fold change > 1.4) in both long-term 2-mM and 4-mM aspirin. Created with BioRender.com.
To further investigate the effect of CB-839 on long-term aspirin-exposed cells, proliferation assays were performed alongside detection of caspase-3/7 activation to quantify levels of apoptosis, using an IncuCyte Live-Cell Analysis System (Fig. 4c, d). These results confirm the inhibitory effect on proliferation of combined CB-839 and long-term aspirin, as shown by a decrease in confluency (Fig. 4c). These results also show significant induction of apoptosis with the combination of CB-839 and long-term aspirin treatment compared to vehicle control and to either drug alone (Fig. 4d). A positive control for this assay was performed using ABT-737 to induce apoptosis (Supplementary Fig. 4e-f).
Proliferation experiments were also performed using the human plasma-like medium (HPLM), developed by Cantor et al. to be representative of the metabolite composition of human plasma [40]. Similar results were obtained in HPLM to those in DMEM (Fig. 4e, f). Both 2 mM and 4 mM aspirin inhibited proliferation compared to controls. The addition of CB-839 in the absence of aspirin had no effect on proliferation, whereas it significantly reduced cell number with both 2 mM and 4 mM aspirin. This demonstrates that the sensitising effect of aspirin towards CB-839 is present under physiologically relevant metabolic conditions.
Upon long-term aspirin exposure, we have shown that glutaminolysis is reduced (Fig. 2j), leading to a potentially compensatory increase in (Fig. 3b and d), and dependence on, GLS1 (Fig. 4b-f). We hypothesised that the increase in glucose utilisation we observed in Fig. 2f is another compensatory response to impaired glutaminolysis, serving to maintain TCA cycle activity. We reasoned this could leave cells vulnerable to inhibition of glucose utilisation, and specifically of pyruvate import into the mitochondria. We investigated this by treating cells exposed to long-term aspirin with increasing concentrations of UK-5099, an inhibitor of the mitochondrial pyruvate carrier 1 (MPC1). UK-5099 inhibits the entry of glucose-derived pyruvate into the TCA cycle (illustrated in Fig. 4a). SW620 cells showed little or no sensitivity to UK-5099 alone, but sensitivity was significantly increased in cells exposed to both 2 mM and 4 mM aspirin (Fig. 4g). HCA7 cells showed a similar effect to SW620 cells; however, LS174T cells did not show significantly increased sensitivity to UK-5099 with long-term aspirin (Supplementary Fig. 4b-c), suggesting some cell-line specificity in this response. This effect was also investigated upon short-term aspirin treatment in SW620 cells (Supplementary Fig. 4d), which also showed increased sensitivity to UK-5099.
These findings support the hypothesis that, when treated with aspirin, cells reprogramme their metabolism in order to maintain proliferation (summarised in Fig. 5a), leaving them vulnerable to further metabolic manipulation. While they have sufficient metabolic plasticity to prevent an impact on ATP production and complete inhibition of proliferation in the presence of aspirin alone, the cells become more reliant on particular metabolic pathways and are left vulnerable to their targeting, leading to further impaired proliferation and cell death.
Aspirin and CB-839 in combination reduce colon crypt proliferation in vivo
We next sought to investigate the efficacy of combining aspirin and CB-839 in vivo. To achieve this, we used the well-characterised VillinCreER Apc fl/fl mouse model. The mice were induced with tamoxifen (2 mg on two consecutive days) to conditionally delete Apc throughout the intestinal epithelium. The mice were treated with vehicle, aspirin (2.6 mg/ml) or CB-839 (200 mg/kg) alone, or the combination (aspirin + CB-839), and the effect on intestinal epithelial cell proliferation was investigated (treatments summarised in Fig. 5b). Apc loss leads to characteristic crypt hyperproliferation in this model, as assessed by BrdU incorporation and quantification of the number of stained BrdU+ cells in each crypt (Fig. 5c, d). Strikingly, the combination of aspirin (2.6 mg/ml) and CB-839 (200 mg/kg) led to a significant suppression of crypt hyperproliferation in the small intestine, as indicated by BrdU-stained cells, while neither aspirin nor CB-839 alone impacted proliferation (Fig. 5c, d). Villi length was unchanged across conditions (Supplementary Fig. 5). These results support our in vitro findings showing that aspirin induces sensitivity to CB-839, and support the potential clinical utility of this approach.
Discussion
Further understanding of the anti-cancer cellular mechanisms of aspirin will be beneficial to inform patient stratification and maximise its benefits. Although best known as a cyclooxygenase (COX) inhibitor, aspirin is a highly pleiotropic drug with many cellular targets. Despite the importance of COX/PGE2 (prostaglandin E2) signalling in cancer [41], this mechanism is not sufficient to fully explain the anti-cancer effects of aspirin [42]. Studies have shown that aspirin treatment impacts many other pathways, including Wnt [43], NF-κB [44,45], AMPK (adenosine monophosphate-activated protein kinase) and mTORC1 (mammalian target of rapamycin complex 1) [46,47], as well as causing epigenetic alterations such as histone methylation [48,49]; however, relatively little work has focused on the impact of aspirin treatment on cancer cell metabolism.
Here, we used proteomics with the aim of identifying novel cellular mechanisms of aspirin that may contribute to its anti-cancer effect. This analysis highlighted a potential effect on cellular metabolism, which was subsequently confirmed; using a combination of extracellular flux analysis and SIL, we comprehensively characterised the effect of aspirin on CRC cell metabolic pathway activity. Our results show that, although there is no overall impact of aspirin on ATP production, long-term aspirin exposure leads to significant reprogramming of nutrient utilisation, with a reduction in glutaminolysis and increased glucose entry into the TCA cycle in three CRC cell lines (summarised in Fig. 5a).
We also show regulation of proteins involved in central carbon metabolism that is consistent with the effects on pathway activity, including increased expression of PC, decreased expression of PDK1 and regulation of several glutaminolysis enzymes. Surprisingly, there was a strong increase in the expression of GLS1, which is contradictory to the observed decrease in net glutaminolysis. We suggest that this is a compensatory response to glutaminolysis being otherwise impaired. Increased glucose utilisation is also a likely compensatory mechanism for reduced glutaminolysis, as both nutrients provide important carbon sources for the TCA cycle. This information was used to infer potential metabolic vulnerabilities induced by aspirin; we show that aspirin-treated cells are more sensitive to the GLS1 inhibitor CB-839 and the MPC1 inhibitor UK-5099. Importantly, the combination of aspirin and CB-839 was found to be effective in reducing cell proliferation both in vitro and in vivo; the treatment inhibited proliferation and induced apoptosis in CRC cell lines and inhibited proliferation of colonic epithelial cells in VillinCreER Apc fl/fl mice.
A small number of recent studies have linked aspirin's anti-cancer effects to metabolism; aspirin has been found to inhibit glucose metabolism through the regulation of PDK1 in breast cancer cells [50] and through the regulation of GLUT1 in hepatoma cells [51]. In support of our study, Boku et al. show that combining GLS1 inhibition with aspirin is effective at reducing the colony-forming efficiency of CRC cells in vitro [52]. The same study suggested that aspirin treatment mimics the effects of glutamine deficiency in CRC cells and showed increased expression of glutaminolysis genes with aspirin treatment [52]. The authors concluded that glutaminolysis is therefore likely upregulated upon aspirin treatment; however, this was not directly measured. Interestingly, our functional studies of glutaminolysis using SIL show a reduction in glutaminolysis with aspirin. We conclude that the upregulation of GLS1 is a likely compensatory mechanism for reduced glutaminolysis. Despite these apparent differences, both studies highlight the possibility that aspirin has potential to be a simple and cost-effective drug to increase the efficacy of CB-839 in CRC patients. This is important because, despite the attractiveness of targeting glutamine metabolism for cancer therapy [53], CB-839 has achieved varying success in previous studies, particularly when studied in vivo [54,55]. Indeed, in our study, CB-839 alone was almost completely ineffective at inhibiting cell proliferation and had no effect on apoptosis in vitro or crypt proliferation in vivo. Therefore, our findings have exciting implications for clinical translation, as both drugs are already used clinically and have known safety profiles in humans.
An increasing number of studies highlight the value of combinatory approaches when applying metabolic interventions. One comprehensive study highlights several combinations of metabolic inhibition that overcome the ability of cancer cells to adapt their metabolism in response to singular perturbations, known as metabolic flexibility or plasticity [32]. A recent study showed that combining CB-839 with an inhibitor of ASCT2 was effective in liver cancer cells [56]. Several studies have also highlighted success when combining metabolic interventions with chemotherapies [57][58][59] and immunotherapies [60]. There is also increasing interest in the impact of diet on tumour metabolism and how this may interact with metabolic therapies [61]. Our findings support the value of combining multiple complementary interventions when targeting tumour metabolism. By highlighting metabolic vulnerabilities caused by the exposure of CRC cells to aspirin alone, we were able to exploit these to maximise the inhibition of CRC cell proliferation with metabolic inhibitors. The use of aspirin to increase the efficacy of metabolic inhibitors is particularly valuable given the longstanding, ubiquitous use of aspirin in clinical practice. The efficacy of aspirin and CB-839 in vivo shown here suggests that this combination warrants further clinical studies and could potentially provide a novel therapeutic option for CRC patients alongside traditional chemotherapies. An important next step should involve further in vivo experiments assessing the efficacy of the combination of aspirin and CB-839 on overall tumour growth, as the short-term proliferation defect we observe may not necessarily translate to reduced tumour growth in mouse models of colorectal cancer.
Conclusions
Aspirin leads to significant reprogramming of glucose and glutamine metabolism in CRC cells. While this has no effect on overall ATP production, it renders cells vulnerable to further metabolic perturbation. Aspirin-exposed cells show increased sensitivity to inhibition of GLS1 by CB-839 and of glucose utilisation by UK-5099. The combination of aspirin and CB-839 was also effective at reducing cell proliferation in vivo, and therefore has exciting implications for clinical translation: CB-839 is currently under investigation in the clinic to treat various tumour types, and this study suggests that aspirin may significantly increase the efficacy of this drug in CRC. Further investigation into this combination is warranted to determine whether it could provide an effective and safe treatment option for CRC.
Fig. 1
Fig. 1 Long-term aspirin treatment regulates cellular metabolism in CRC cells. a Experimental design for proteomic analysis of long-term aspirin-treated SW620 cells (each flask represents one independent experiment) (created with BioRender.com). b Volcano plots showing protein expression changes with long-term 2-mM or 4-mM aspirin treatment compared to control. Each point represents one protein. Thresholds for proteins of interest (p < 0.05, fold change > 1.4 or < 0.71) were applied in both 2-mM and 4-mM conditions (downregulated proteins in blue and upregulated in red). c Number of genes in GO biological processes categories for all proteins of interest in both 2-mM and 4-mM aspirin. d Overrepresentation analysis of proteins of interest in both 2-mM and 4-mM aspirin, using the KEGG pathways database
Fig. 2 Long-term aspirin treatment reprogrammes nutrient utilisation in CRC cells. a Extracellular flux analysis of long-term (52 weeks) 2-mM and 4-mM aspirin-treated SW620 cells compared to controls. Error bars represent SEM (n = 3 independent experiments). ns = not significant (p > 0.05 following t tests at indicated time points). b, c Schematics of U-[13C]-Glc (b) and U-[13C]-Q (c) incorporation into TCA cycle metabolites and amino acids. Created with BioRender.com. d-j SIL data for long-term (52 weeks) 4-mM aspirin-treated SW620 cells compared to control cells. Error bars represent SD (n = 3 technical replicates). Asterisks refer to p values obtained from t tests at the 8-h time point (*p < 0.05, **p < 0.01, ***p < 0.001, ****p < 0.0001). d, e Proportion of 13C labelling in citrate, glutamate and malate from U-[13C]-Glc and U-[13C]-Q over time. f, g Proportion of 13C labelling in metabolite pools at 8 h from U-[13C]-Glc (f) and U-[13C]-Q (g). Asterisks indicate the adjusted p value obtained using multiple t tests. h, i Mass isotopomer distribution (MID) analysis at 8 h for U-[13C]-Glc (h) and U-[13C]-Q (i) labelling in citrate, glutamate and malate. Asterisks indicate the adjusted p value obtained using multiple t tests. j Ratio of m+5 glutamate to m+5 glutamine in long-term 4-mM aspirin-treated cells in comparison to control at 8 h from U-[13C]-Q
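As an aside for readers unfamiliar with the MID analysis referenced in this legend, a mass isotopomer distribution and an average fractional 13C labelling can be computed from isotopologue intensities as in the minimal sketch below; the values are hypothetical illustrations, not data from this study.

    import numpy as np

    def mid_and_fractional_labelling(intensities):
        """Return the mass isotopomer distribution (MID) and average
        fractional 13C labelling from m+0..m+n isotopologue signals
        (assumed already corrected for natural isotope abundance)."""
        intensities = np.asarray(intensities, dtype=float)
        mid = intensities / intensities.sum()  # fraction per isotopologue
        n = len(intensities) - 1               # number of carbon atoms
        fractional = np.sum(np.arange(n + 1) * mid) / n
        return mid, fractional

    # Hypothetical citrate isotopologue intensities (m+0 ... m+6):
    mid, frac = mid_and_fractional_labelling(
        [40000, 5000, 25000, 3000, 6000, 1000, 500])
    print(mid.round(3), f"fractional labelling = {frac:.3f}")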
Fig. 4 Aspirin treatment sensitises CRC cells to metabolic inhibitors. a Schematic showing the mechanism of action of the metabolic inhibitors CB-839 and UK-5099. Created with BioRender.com. b Cell proliferation assay of long-term (52 weeks) aspirin-treated SW620 cells with increasing concentration of CB-839. The graph shows the relative cell number in each aspirin condition, measured by crystal violet staining at 72 h compared to vehicle control. Error bars show SEM (n = 3 independent experiments). Asterisks refer to p values obtained using one-way ANOVAs with Dunnett's multiple comparisons tests at each CB-839 concentration (*p < 0.05, **p < 0.01, ***p < 0.001). Images show representative wells in each condition at 72 h. c, d Confluency and relative apoptotic cells in long-term 4-mM aspirin-treated SW620 cells in combination with 5 µM CB-839, in comparison to controls and to each drug alone. Error bars show SEM (n = 3 distinct passages of cells analysed on the same experimental plate). Relative apoptotic cells were measured by green fluorescent nuclei (indicating cells with activated caspase-3/7) relative to cell confluency. Line graphs show values over time, and the bar graph shows the values at the experiment endpoint (75 h after treatment). Error bars show SEM (n = 3 distinct passages of cells analysed on the same experimental plate). Asterisks refer to p values obtained using a one-way ANOVA with Tukey's multiple comparisons test (****p < 0.0001). e, f Proliferation assay of SW620 cells treated with aspirin and/or CB-839 compared to vehicle control for 72 h, performed in Human Plasma-Like Medium (HPLM). e Cell number over time relative to the 0 h time point. f Relative cell number at 72 h; the 5 µM CB-839 treatment condition is shown relative to vehicle control in the same aspirin treatment condition. Error bars show SEM (n = 3 independent experiments). Asterisks refer to p values obtained using one sample t tests, comparing to a hypothetical mean of 1 (**p < 0.01). g Cell proliferation assay of long-term aspirin-treated SW620 cells with increasing concentration of UK-5099. The graph shows relative cell number in each aspirin condition measured by crystal violet staining at 72 h. Error bars show SEM (n = 3 independent experiments). Asterisks refer to p values obtained using one-way ANOVAs with Dunnett's multiple comparisons tests at each UK-5099 concentration (*p < 0.05, **p < 0.01, ***p < 0.001). Images show representative wells in each condition at 72 h
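The one-way ANOVA with Dunnett's multiple comparisons used in panels b and g compares each aspirin condition against a shared control. A minimal sketch of such a comparison with SciPy (scipy.stats.dunnett requires SciPy >= 1.11); the replicate values are hypothetical, not the study's data.

    from scipy import stats

    # Hypothetical relative cell numbers (n = 3) at one CB-839 concentration.
    control = [1.00, 0.97, 1.03]      # no aspirin, vehicle
    aspirin_2mM = [0.81, 0.78, 0.84]
    aspirin_4mM = [0.55, 0.60, 0.52]

    # Omnibus one-way ANOVA across all groups.
    f_stat, p_anova = stats.f_oneway(control, aspirin_2mM, aspirin_4mM)

    # Dunnett's test: each treatment group vs. the shared control.
    res = stats.dunnett(aspirin_2mM, aspirin_4mM, control=control)
    print(f"ANOVA p = {p_anova:.4f}; Dunnett p-values = {res.pvalue}")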
Fig. 5
Fig. 5 Aspirin and CB-839 in combination reduce colon crypt proliferation in vivo. a Schematic summarising the effects of aspirin treatment on metabolic reprogramming of CRC cells. Created with BioRender.com. b Schematic showing the timeline of treatment, induction with tamoxifen (2 mg) and sampling for in vivo aspirin (2.6 mg/ml in drinking water) and CB-839 (200 mg/kg) combination experiments using the VillinCreER Apc fl/fl mouse model. c, d Quantification of BrdU and representative images of BrdU staining in the small intestine of VillinCreER Apc fl/fl mice treated with vehicle, CB-839 (200 mg/kg) and/or aspirin (2.6 mg/ml in drinking water). Scale bar 100 μm. Error bars show SEM (n = 4 mice per experimental arm). Each dot represents the average number of BrdU-positive cells per half crypt for each mouse. Asterisks refer to p values obtained from one-tailed Mann-Whitney tests (*p < 0.05; **p < 0.01) | 2022-08-30T13:39:50.374Z | 2022-08-26T00:00:00.000 | {
"year": 2023,
"sha1": "0f6f104bac0c3587f43399f95ed40423f7f1f1dd",
"oa_license": "CCBY",
"oa_url": "https://cancerandmetabolism.biomedcentral.com/counter/pdf/10.1186/s40170-023-00318-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60fc3d1afc87513386d9dae3011a960f60f0be74",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
31962643 | pes2o/s2orc | v3-fos-license | Induction of Cell Death by Betulinic Acid through Induction of Apoptosis and Inhibition of Autophagic Flux in Microglia BV-2 Cells
Betulinic acid (BA), a natural pentacyclic triterpene found in many medicinal plants, is known to have various biological activities including tumor suppression and anti-inflammatory effects. In this study, the cell-death induction effect of BA was investigated in BV-2 microglia cells. BA was cytotoxic to BV-2 cells with an IC50 of approximately 2.0 μM. Treatment with BA resulted in dose-dependent chromosomal DNA degradation, suggesting that these cells underwent apoptosis. Flow cytometric analysis further confirmed that BA-treated BV-2 cells showed hypodiploid DNA content. BA treatment triggered apoptosis by decreasing Bcl-2 levels, activation of caspase-3 protease and cleavage of PARP. In addition, BA treatment induced the accumulation of p62 and an increase in the conversion of LC3-I to LC3-II, which are important autophagic flux monitoring markers. The increase in LC3-II indicates that BA treatment induced autophagosome formation; however, the accumulation of p62 indicates that the downstream autophagy pathway is blocked. It is demonstrated that BA induced cell death of BV-2 cells by inducing apoptosis and inhibiting autophagic flux. These data may provide important new information towards understanding the mechanisms by which BA induces cell death in microglia BV-2 cells.
INTRODUCTION
Microglia have multiple functions in regulating homeostasis of the central nervous system (CNS). Microglia are resident immunological cells in the CNS and participate in both innate and adaptive immune responses. Microglia have been implicated as active contributors to neuron damage in neurodegenerative disorders such as Alzheimer's disease, Parkinson's disease and multiple sclerosis (Block et al., 2007; Saijo and Glass, 2011). Recently, several studies have reported the involvement of apoptosis and autophagy in microglia-induced neurotoxicity (Su et al., 2016).
Apoptosis, also known as type I programmed cell death, is a selective physiological process that plays an important role in the balance between cell replication and cell death. A wide range of stimuli can be integrated to trigger the irreversible decision to die. Most cytotoxic and neurotoxic agents cause cell death by apoptosis (Foo et al., 2015;Liu et al., 2015b).
The advantage of apoptosis-inducing agents is the elimination of potentially harmful cells without causing inflammation. Autophagy is essential for cell survival and the maintenance of homeostasis. There have been reports showing that autophagy is involved in the degradation of unnecessary or defective cellular components in the lysosome (Levine and Kroemer, 2008; Mizushima and Komatsu, 2011; Mochida et al., 2015). It is also reported that autophagy plays a critical role in the progression of certain human disorders, including neurodegenerative disease and cancer (Levine and Kroemer, 2008). Recent studies indicate that autophagy also functions in cell death, and it is called type II programmed cell death (Baehrecke, 2005). Growing evidence suggests an inter-relationship between apoptosis and autophagy in controlling cell survival and cell death (Mukhtar et al., 2012; Mukhopadhyay et al., 2014).
It has been reported that microglia are involved in the signaling cascade associated with neuronal cell death in various neurological diseases such as Alzheimer's disease, Parkinson's disease and traumatic brain injury (Block et al., 2007; Gao and Hong, 2008; Loane et al., 2009; Lee and Jeong, 2014; Stoica et al., 2014). It was reported that microglia are involved in apoptosis of other cells such as pheochromocytoma cells (Hornik et al., 2016). Efforts have been made to find small molecules or extracts that modulate microglia and thereby protect microglia or other neuronal cells (Lee and Jeong, 2014; Wang et al., 2014; Liu et al., 2015a). In addition, mechanisms of microglial cell death induced by small molecules have been reported (Liu et al., 2015b; Yu et al., 2015). Hence, studies of the modulation of microglial cell survival and death could be useful in formulating a strategy for the treatment of neurological diseases. In this study, we evaluated the cell death-inducing effect of BA in BV-2 microglia cells. BV-2 cells are common microglial cells that have been widely used in studies of neuroprotective effects and neurotoxicity (Kwon et al., 2012; Hao et al., 2013; Li et al., 2014; Liu et al., 2015b). We report that BA inhibited cell proliferation and caused cell death by inducing apoptosis and inhibiting autophagic flux in BV-2 cells.
Rabbit polyclonal anti-Bcl-2, anti-caspase-3, anti-Bax, anti-PARP, anti-LC3 and anti-p62 antibodies were purchased from Cell Signaling Technology (Danvers, USA). Mouse polyclonal anti-α-tubulin and anti-vinculin antibodies were obtained from Santa Cruz Biotechnology (Oregon, USA) and Sigma-Aldrich, respectively. Anti-mouse and anti-rabbit horseradish peroxidase (HRP)-conjugated secondary IgG antibodies were from Bethyl (Montgomery, USA). West Pico chemiluminescent substrate solution was purchased from Thermo Scientific (Waltham, USA).
Cell culture and BA treatment
Mouse microglia BV-2 cells were maintained in the logarithmic phase of growth in Dulbecco's modified Eagle's medium (DMEM) (Welgene) supplemented with 5% fetal bovine serum (FBS, Gibco BRL), 2 mM L-glutamine, and antibiotics. Cultures were maintained at 37°C in a humidified atmosphere of 95% air and 5% CO2. Logarithmically growing BV-2 cells were used for all experiments. Betulinic acid (BA) was dissolved in DMSO at a concentration of 20 mM and diluted in tissue culture medium before use. For the glucose starvation experiment, BV-2 cells were cultured in glucose-free DMEM (Welgene) supplemented with 10% FBS (Gibco BRL), 2 mM L-glutamine, and antibiotics for 16 hrs.
Cytotoxicity analysis
Cell viability was estimated by the MTT assay. Exponentially growing cells were seeded at 3×10 4 cells/well in a 96-well plate and treated with various concentrations of BA. After the cells were incubated for 20 hrs, 20 µl of MTT (5 mg/ml) was added and the cells were incubated for another 4 hrs at 37°C. The supernatant was discarded and 150 µl of dissolving solvent (4 mM HCl and 0.1% NP-40 in isopropanol) was added. The plate was gently agitated until the blue formazan crystals were fully dissolved. The absorbance was measured at 550 nm using a microplate reader (Wallac Victor 3-V, Perkinelmer, USA). The data were expressed as a mean percentage of viable cells as compared to the respective control cultures. All experiments were performed at least in triplicate.
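To make the readout concrete, percent viability relative to vehicle control and an IC50 estimate can be obtained from the absorbance data with a four-parameter logistic fit, as in the minimal sketch below; the concentrations and absorbances are hypothetical placeholders, not measurements from this study.

    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(conc, bottom, top, ic50, hill):
        """Four-parameter logistic dose-response curve."""
        return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

    # Hypothetical BA concentrations (uM) and mean A550 absorbances.
    conc = np.array([0.25, 0.5, 1.0, 2.0, 4.0, 8.0])
    a550 = np.array([0.92, 0.88, 0.70, 0.48, 0.25, 0.12])
    control_a550 = 0.95                        # vehicle-treated wells

    viability = 100.0 * a550 / control_a550    # percent of control

    # p0 provides rough starting guesses for the optimiser.
    params, _ = curve_fit(four_pl, conc, viability,
                          p0=[0.0, 100.0, 2.0, 1.0], maxfev=10000)
    print(f"estimated IC50 = {params[2]:.2f} uM")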
Morphology observation
Cells used in this study were constantly observed under an inverted phase-contrast microscope (Primo Vert, Zeiss, Oberkochen, Germany). Photographs were taken after BV-2 cells were incubated with various concentrations of BA for 24 hrs as described in the text and figure legend.
DNA fragmentation analysis
Cells were grown at a density of 8×10 5 cells/ml and exposed to BA at different concentrations as described in the text and figure legends. Cells were rinsed with ice-cold phosphate buffered saline (PBS), centrifuged and resuspended in 0.01 vol. of TE buffer (10 mM Tris-Cl, 1 mM EDTA, pH 8.0). DNA was purified as previously described (Hyun et al., 1997). The resulting purified DNA fragments were subjected to electrophoresis on 1.5% agarose gel. DNA bands were visualized by fluorescence after ethidium bromide staining, and quantified with a densitometer (Ultra-Lum Imaging System, San Diego, USA). Results shown are an example from 3 different experiments.
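As an illustration of the densitometric quantification step, the percentage of degraded chromosomal DNA can be estimated from a background-subtracted gel-lane intensity profile by comparing the signal migrating below the intact high-molecular-weight band with the total lane signal, as sketched below; the profile and the band boundary are hypothetical, not values from this study.

    import numpy as np

    def percent_degraded(lane_profile, intact_band_end):
        """Fraction of lane signal below the intact genomic band.
        lane_profile: intensities from top to bottom of the lane;
        intact_band_end: index where the intact band region ends."""
        profile = np.asarray(lane_profile, dtype=float)
        fragmented = profile[intact_band_end:].sum()
        return 100.0 * fragmented / profile.sum()

    # Hypothetical lane: a large intact band followed by a faint smear.
    profile = np.concatenate([np.full(20, 500.0), np.linspace(80, 5, 80)])
    print(f"{percent_degraded(profile, intact_band_end=20):.1f}% degraded")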
Flow cytometry
The effects of BA on cell proliferation were evaluated by measuring the distribution of the cells in the different phases of the cell cycle by flow cytometry. Cells were treated with BA at various concentrations and harvested by centrifugation at 750×g for 5 min. Cell pellets were rinsed with PBS and re-suspended in a staining solution containing DAPI (NIM-DAPI, 10 µg/ml, Beckman Coulter). The cell suspensions were incubated at room temperature for 10 min in the dark and analyzed on a fluorescence-activated cell sorter flow cytometer (Quanta SC, Beckman Coulter). All experiments were performed at least in triplicate.
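The apoptotic (sub-G1, hypodiploid) fraction reported in the Results can be estimated from the DNA-content histogram by counting events below the G1 peak. The sketch below uses simulated events; the threshold and populations are illustrative assumptions, not gating values from this study.

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated DNA-content signal: G1 peak at 1.0, G2/M at 2.0,
    # plus a hypodiploid (sub-G1) population from apoptotic cells.
    events = np.concatenate([
        rng.normal(1.0, 0.05, 7000),   # G1
        rng.normal(2.0, 0.08, 2000),   # G2/M
        rng.uniform(0.2, 0.8, 1000),   # sub-G1 (apoptotic)
    ])

    sub_g1_threshold = 0.85            # just below the G1 peak
    sub_g1_fraction = np.mean(events < sub_g1_threshold)
    print(f"sub-G1 events: {100 * sub_g1_fraction:.1f}%")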
Western blot analysis
BV-2 cells were treated with BA and subjected to western blot analysis. BV-2 cells were exposed to various concentrations of BA for 24 hrs or 16 hrs to analyze apoptosis-related or autophagy-related proteins, respectively. Cells were lysed in a lysis buffer (20 mM Tris, 100 mM NaCl, 0.1% NP40, 50 mM NaF, 2 mM EDTA, 1 mM Na3VO4 and protease inhibitor, pH 7.5) and protein concentrations were determined by Bradford assay. Lysis buffer L (1% Triton X-100, 50 mM NaF in phosphate buffered saline) was used for the analysis of autophagy-related proteins. The total protein (5 or 10 µg) in each lysate was separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and electro-transferred onto PVDF membranes. The membranes were blocked with 5% non-fat milk for 1 hour at room temperature and then probed with specific primary antibody for 16 hours at 4°C. The specific protein bands were visualized by peroxidase-conjugated secondary antibody and chemiluminescent substrate solution.
BA inhibited proliferation of BV-2 cells
The chemical structure of the test compound, BA, is shown in Fig. 1. The effect of BA on cellular proliferation was evaluated using the MTT assay. A 24 hr exposure to BA dramatically decreased the proliferation of BV-2 cells in a dose-dependent manner (Fig. 2A). The concentration required to inhibit growth by 50% (IC50) was approximately 2.0 µM. Relative cell survival was also assessed at various times after exposure to 2.0 µM BA. Prolonged exposure to BA markedly decreased the viability of these cells (data not shown). Control cells treated with vehicle alone showed no changes in cell proliferation or viability.
Morphological changes were observed using a phase contrast microscope. Treatment of BV-2 cells with various concentrations of BA for 24 hrs resulted in a reduction in live cell numbers and changes in morphology (Fig. 2B). The morphological changes included rounding, detachment and cell shrinkage, which are distinct characteristics associated with apoptotic cells.
Induction of apoptosis by BA treatment
To determine whether BA-mediated inhibition of growth and proliferation was associated with apoptosis, BA-induced chromosomal DNA degradation and the appearance of DNA fragmentation in BV-2 cells were examined. As shown in Fig. 3, cells treated with BA showed significant degradation of chromosomal DNA and appearance of DNA fragmentation. When we measured the chromosomal DNA content after 24 hr of treatment with various concentrations of BA, approximately 8% and 26% of chromosomal DNA was degraded at 2 µM and 6 µM BA, respectively (Fig. 3B).
The induction of apoptotic bodies in BA-treated BV-2 cells was further analyzed by flow-cytometric determination of DNA content (Fig. 4). Histograms of DNA content obtained from DAPI-stained BV-2 cells showed that the percentage of cells with reduced DNA content progressively increased as the treatment dose increased. Apoptosis was negligible up to 4 µM of BA treatment; however, the percentage of apoptotic cells increased markedly at higher concentrations.
Effect of BA treatment on Bcl-2 level and caspase-3 activation
In order to investigate the mechanism by which BA induces apoptosis, we examined the expression levels of various apoptosis-related proteins. BV-2 cells were cultured in media containing 0-6 µM BA for 24 hrs. Total proteins were isolated, and Bcl-2, Bax, caspase-3, and PARP [poly(ADP-ribose) polymerase] immunoreactivity levels were measured by western blotting. As shown in Fig. 5, western blot analysis revealed that BA treatment decreased the levels of Bcl-2 protein, an important regulator of apoptotic signaling pathways (Reed, 1998). No significant change in the level of the pro-apoptotic protein Bax was observed. We also found that BA induced the proteolytic processing of caspase-3 in a dose-dependent manner. Activation of caspase-3 led to the cleavage of a number of proteins, one of which is poly(ADP-ribose) polymerase (PARP). Although PARP is not essential for cell death, the cleavage of PARP is another hallmark of apoptosis. BA treatment also induced a dose-dependent proteolytic cleavage of PARP, with concomitant accumulation of the 89 kDa form and disappearance of the full-size 116 kDa molecule (Fig. 5). Taken together, these findings suggest that BA induced apoptosis through the down-regulation of Bcl-2 and the activation of caspase-3.
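Band-intensity comparisons such as the Bcl-2 decrease described here are typically made after normalizing each band to a loading control; the Bcl-2/Bax ratio then follows directly, as in the minimal sketch below. The intensity numbers are hypothetical, not densitometry values from this study.

    # Hypothetical background-subtracted band intensities for one blot.
    bands = {
        "Bcl-2":   {"0uM": 1200.0, "6uM": 400.0},
        "Bax":     {"0uM": 900.0,  "6uM": 880.0},
        "tubulin": {"0uM": 1500.0, "6uM": 1450.0},  # loading control
    }

    def normalized(protein, condition):
        """Band intensity relative to the loading control in the same lane."""
        return bands[protein][condition] / bands["tubulin"][condition]

    for cond in ("0uM", "6uM"):
        ratio = normalized("Bcl-2", cond) / normalized("Bax", cond)
        print(f"{cond}: Bcl-2/Bax = {ratio:.2f}")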
Effect of BA treatment on autophagic flux
To further confirm the cell death mechanism mediated by BA in BV-2 cells, changes in the expression levels of proteins in the autophagy induction pathway were investigated. In order to examine the autophagic flux, the conversion of microtubule-associated protein light chain 3 (LC3)-I to its phosphatidylethanolamine (PE) conjugate form (LC3-II), and p62 expression, were observed.
BA treatment increased the conversion of LC3-I to LC3-II in a dose-dependent manner in BV-2 cells (Fig. 6). On the other hand, the expression level of p62 was increased, suggesting that p62 was not degraded but accumulated (Fig. 6). As a positive control, cells were starved of glucose and the changes in the level of p62 and the conversion of LC3-I to LC3-II were monitored (Fig. 6B). Under the glucose starvation condition, the level of p62 decreased and the level of LC3-II increased, which are well-known characteristics of autophagy induction (Kim et al., 2013). These results indicate that BA treatment induced the accumulation of LC3-II, which represents an increase in the number of autophagosomes. However, no concomitant degradation of p62 occurred; rather, p62 accumulated. These data suggest that BA treatment inhibited autophagic flux in BV-2 cells.
DISCUSSION
The importance of natural products in drug discovery has been emphasized (Rosén et al., 2009; Hong, 2011). Natural products have been a good source for new drug development and discovery in various diseases (Butler, 2008). Natural products and their molecular frameworks have also been used in medicinal chemistry for the design of new drugs (Rodrigues et al., 2016). Recently, many attempts have been made to find new therapeutics for neurological diseases (Butler, 2008; Choi et al., 2011; Gu et al., 2014). BA is a natural product that can be found in many medicinal plants and has many favorable biological activities (Periasamy et al., 2014).
In this study, we evaluated the cell death-inducing effect of BA in microglial BV-2 cells. BA inhibited cell proliferation with an IC50 of 2 µM. The alterations in cell morphology, the fragmentation of chromosomal DNA, and the appearance of sub-G1 hypodiploid cells in flow cytometry analysis all indicate that BA induced apoptosis in BV-2 cells. To further analyze the molecular mechanism by which BA causes cell death, we evaluated the levels of proteins in the apoptosis pathway.
Apoptosis is morphologically characterized by cellular shrinkage, chromatin condensation, and nuclear fragmentation. During apoptosis, double-strand cleavage occurs at the linker regions between nucleosomes to produce DNA fragments, which develop a characteristic DNA ladder pattern on agarose gels (Wyllie, 1980; Arends et al., 1990). The appearance of the chromosomal DNA fragmentation pattern differs among cell types. In our data, the pattern of chromosomal DNA fragmentation in BV-2 cells consisted of smeared degradation bands rather than discrete DNA ladder bands. Similar observations were made with dehydroepiandrosterone-treated BV-2 cells and primary neuronal cells (Vogel et al., 1997; Yang et al., 2000).
In our study, it was observed that caspase-3 was activated and the caspase substrate PARP was proteolytically cleaved to low molecular weight fragments in BA-treated BV-2 cells. In addition to caspase-3 activation, the level of Bcl-2, an anti-apoptotic protein, was decreased while the level of Bax, a pro-apoptotic protein, remained constant upon treatment with BA, resulting in a decrease in the Bcl-2/Bax ratio, one of the major events that regulate apoptosis (Oltvai and Korsmeyer, 1994). Similar observations were reported in BA- or BA derivative-treated human cancer cells, in which Bcl-2 levels decreased and PARP was cleaved following caspase activation (Li et al., 2010; Khan et al., 2016). It was evident that BA caused cell death through the induction of apoptosis. However, the extent of cell death caused by apoptosis alone could not explain the cell death data observed with MTT. Therefore, we sought other cell death-inducing mechanisms involved in BA-induced BV-2 cell death.
Autophagy is the major lysosomal intracellular degradation system that involves the delivery of cytoplasmic cargo to the lysosome. Autophagy is essential in survival, differentiation, development and homeostasis (Levine and Kroemer, 2008; Mizushima and Komatsu, 2011). Autophagy can be activated in response to various cellular and environmental stress conditions to promote cell survival or cell death (Baehrecke, 2005). One of the best characterized proteins in the autophagy pathway is p62 (also known as sequestosome 1/SQSTM1), which is ubiquitously expressed. p62 interacts with LC3 and is subsequently incorporated into the autophagosome and degraded in the autophagy pathway (Mizushima and Komatsu, 2011). During the autophagy process, a cytosolic form of LC3 (LC3-I) is conjugated to PE to form the LC3-PE conjugate (LC3-II). The conversion of LC3-I to LC3-II is used as an autophagosome marker for autophagy monitoring (Mizushima and Yoshimori, 2007; Tanida et al., 2008; McLeland et al., 2011). It has been suggested that comparison of the amount of LC3-II between samples is more informative than comparison of the LC3-I/LC3-II ratio (Mizushima and Yoshimori, 2007). Autophagosome accumulation may indicate the induction of autophagy; at the same time, it may represent increased generation of autophagosomes with a block in autophagosome maturation and completion of the autophagy pathway (Mizushima et al., 2010). If autophagy is induced, an increase in the conversion of LC3-I to LC3-II and a decrease in p62 are expected, since p62 is selectively incorporated into the autophagosome through binding to LC3 and degraded by autophagy (Mizushima and Komatsu, 2011). It has also been reported that accumulation of the LC3-II form does not always indicate the induction of autophagy; LC3-II may accumulate because downstream steps are blocked (Mizushima et al., 2010). Our data showed the accumulation of p62 and the increase in LC3-II upon BA treatment. However, under the glucose starvation condition, which is known to induce autophagy (Kim et al., 2013), the level of p62 decreased and the level of LC3-II increased. These data indicate autophagosome accumulation without fusion with the lysosome and subsequent degradation of p62, hence suggesting inhibition of autophagic flux. It appears that the inhibition of autophagic flux contributed to cell death in BA-treated BV-2 cells.
In conclusion, we demonstrated that BA caused cell death in microglia BV-2 cells by inducing apoptosis and inhibiting autophagic flux. BA treatment inhibited cell proliferation and induced chromosomal DNA fragmentation and the appearance of sub-G1 hypodiploid cells. Induction of apoptosis occurred through the decrease in anti-apoptotic Bcl-2 and the activation of caspase-3. Autophagic flux inhibition was shown through the accumulation of p62 and the increase in the conversion of LC3-I to LC3-II. | 2018-04-03T05:16:47.336Z | 2017-03-10T00:00:00.000 | {
"year": 2017,
"sha1": "ce2c4389863a714241817a6f29f843228c2ff8f5",
"oa_license": "CCBYNC",
"oa_url": "https://europepmc.org/articles/pmc5685431?pdf=render",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce2c4389863a714241817a6f29f843228c2ff8f5",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
232111839 | pes2o/s2orc | v3-fos-license | Effects of Schlemm’s Canal Expansion: Biomechanics and MIGS Implications
Objective: To evaluate the change in biomechanical properties of the trabecular meshwork (TM) and the configuration of collector channels (CC) induced by Schlemm's canal (SC) dilation, using high-resolution optical coherence tomography (HR-OCT). Methods: The anterior segments of two human eyes were divided into four quadrants. One end of a specially designed cannula was placed in SC and the other end connected to a perfusion reservoir. HR-OCT provided three-dimensional (3D) volumetric and two-dimensional (2D) cross-sectional imaging, permitting assessment of the biomechanical properties of the TM. A large fluid bolus was introduced into SC. Same-sample SC and CC lumen areas were analyzed before and after bolus-induced deformation and disruption. Results: Morphologic 3D reconstructions documented pressure-dependent changes in lumen dimensions of SC, CC, and circumferential intrascleral channels. 2D imaging established volumetric stress-strain curves (elastance curves) of the TM in quadrants. The TM elastance curves shift to the right with an increase in pressure-dependent steady-state SC area. After a bolus disruption, the SC area increased, while the CC area decreased. Conclusion: Our experimental setup permits the study of the biomechanical properties of the TM by examining elastance, which differs segmentally and is altered by mechanical expansion of SC by a fluid bolus. The study may shed light on mechanisms of intraocular pressure control of some glaucoma surgery.
Introduction
Glaucoma is a leading cause of irreversible blindness, and primary open-angle glaucoma (POAG) is the most common type. The cause of glaucoma remains an enigma, but stiffening and loss of pulse-dependent bulk motion of the trabecular meshwork (TM) tissues are implicated as an important factor. For example, TM motion that permits Schlemm's canal (SC) to fill with blood with pressure gradient reversal becomes abnormal and eventually stops as glaucoma progresses [1][2][3]. In addition, pulsatile aqueous flow from SC to the episcleral veins slows and eventually stops in glaucoma but is restored by drugs that reduce intraocular pressure (IOP) [4].
SC functions as a compressible chamber. Its morphology and behavior suggest that it is a component of a lymphatic-like pump that regulates IOP [5]. Many studies and multiple lines of evidence document that flow to the aqueous veins is pulsatile and that the origin of the aqueous inducing pulsatile flow is SC and the distal outflow channels [6,7]. Especially compelling evidence is that pulsatile aqueous flow in the aqueous veins increases when access to the episcleral veins is blocked in normal eyes [8,9]. The pulsatile flow requires the presence of TM motion, since the TM is the compressible tissue capable of changing SC dimensions to induce pulsations. Optical coherence tomography (OCT) imaging is shedding new light on TM motion, providing evidence that TM motion becomes abnormal in glaucoma. Imaging in humans demonstrates that distention of the TM into SC is dependent on IOP [10]. OCT studies demonstrate both in the laboratory and in patients that the TM undergoes cyclic motion induced by the ocular pulse [11,12]. Reduced TM motion occurs experimentally in response to a decrease in the ocular pulse that is associated with the closure of SC [11]. These OCT studies indicate that biomechanical properties define pressure-dependent TM distention into SC and determine oscillatory responses to the ocular pulse in normal subjects.
Recent evidence suggests that OCT can characterize abnormal responses in glaucoma [13]. Recent high-resolution OCT (HR-OCT) studies demonstrate synchronous real-time changes in the lumen dimensions of SC and collector channel (CC) in response to pressure variation [14][15][16]. These OCT studies, as well as scanning electron microscopy and micro-CT, identify comparable structures in the TM pathways [17][18][19]. Our current study extends information related to the biomechanical behavior of the outflow system by further exploring tissue elastance. Elastance determines the ability of the TM tissues of SC inner wall to appropriately distend and recoil. Optimization of elastance is essential to control normal IOP, a property that becomes abnormal in glaucoma.
Medications and laser procedures are often ineffective in controlling IOP, so many patients have required interventional surgery. Trabeculectomy has for many years been the most common interventional surgery, but the risk-benefit ratio makes its use problematic, except in those needing very low pressures. Micro-invasive glaucoma surgery (MIGS) procedures aimed at restoring flow through the aqueous outflow pathway are attractive because they generally attain pressures in the mid-teens and have a good safety profile. Our pilot effort in this report explores the changes in biomechanical responses resulting from MIGS-like dilation of SC. Canal expansion is done with procedures that use viscoelastics to dilate SC, such as external or ab interno canaloplasty (AbiC) or instrumentation of the canal with sutures or cannulas.
Previous experimental studies have cannulated SC and dilated it with viscoelastic or with the passage of sutures with dimensions comparable to cannulas [20]. Assessment of the effects has been done with light and scanning electron microscopy. Cannulation by sutures the size of cannulas disrupts the SC inner wall, compresses the external wall, and tears away endothelial tubes connecting the TM to hinged collagen flaps at CC entrances [21]. Viscoelastic also overdilates SC distal to the site of infusion cannula insertion. The biomechanical effects of disruption of SC tissues have not previously been amenable to study. Of special interest are pressure-dependent biomechanical responses after disruption of connections between the TM and hinged flaps at SC ostia.
Our goal in this pilot study is to explore the biomechanical properties of the outflow system resulting from changes in SC pressure. Our first aim is to explore changes in structural features and relationships of SC, CC, and distal pathways using three dimensional (3D) volumetric imaging. A second aim is to assess the mechanical properties of the TM through the quantitation of stress-strain relationships. A final aim is to explore the changes in biomechanical behavior of the outflow system after disruption of canal structures by mechanisms that simulate canal-based MIGS procedures.
Tissue Preparation
Normal human eyes used in this study were provided by the SightLife™ eye bank within 24 h after the donor's death. The donors' ages were 53 (Eyes #1 and #2) and 67 (Eye #3). Both were Caucasian with no history of eye disease. The 12 o'clock limbal position was marked, and the eye hemisected, followed by removal of the lens and iris. We then divided the anterior segment into four quadrants (superior temporal, ST; inferior temporal, IT; superior nasal, SN; and inferior nasal, IN). The wedge-shaped limbal quadrant, with the TM facing upward toward the OCT beam, was immersed in Hanks' balanced salt solution in a Petri dish. The cornea and sclera of the quadrant were affixed to the dish using pins that pierced an underlying layer of silicon preplaced within the dish [15].
Experimental Protocol
The perfusion system consisted of two perfusion reservoirs, a perfusion pump, and a three-way switch. One end of a laboratory-fashioned 150 µm steeply tapered cannula tip was inserted into the cut end of SC and connected to PE 60 tubing leading to a three-way switch. Quantitative measurement of pressure-dependent SC dimensions was achieved as follows. In each quadrant, the SC cannula led to a single reservoir to provide controlled steady-state hydrostatic pressures of 0, 5, 10, 20, 30, and 50 mmHg. At each steady-state condition, both two dimensional (2D) cross-sectional images and 3D volumetric images were acquired with the OCT system.
Using the same quadrants as in the steady-state experiment, we next simulated expansion of SC as is done with viscoelastics. We used the three-way switch to connect the SC cannula to a perfusion pump. Perfusion pump parameters were set as follows: speed 10 mL/min, volume 125 µL; speed 20 mL/min, volume 125 µL; speed 30 mL/min, volume 125 µL; speed 30 mL/min, volume 250 µL. After the introduction of the BSS bolus, steady-state measurements were again made to determine whether alterations in the biomechanical properties of the tissues had occurred.
Imaging System and Scanning Procedures
The HR-OCT system consists of a light source with a central wavelength of 1340 nm and a half-bandwidth of 110 nm, a 1024-pixel built-in high-speed spectrometer, and a 92 kHz A-line InGaAs linear scan camera. The output power of the OCT system is 2.5 mW with ~105 dB sensitivity at the spot focus. The scan rate is 200 frames per second with an axial resolution of 7.2 µm in tissue (5 µm in air), transverse resolution of 5 µm, imaging depth of 2.2 mm, and a focus range of 0.5 mm. During imaging, a glass sheet covered the fluid surface to reduce the impact of surface fluctuations on imaging [15].
The range of 3D imaging was 2 × 2 × 3 mm³, composed of 512 B-scan frames with each frame composed of 360 equally spaced A-lines. With the 3D image acquisition mode, the 3D structure of SC, CC, and ISCC could be clearly identified. The 2D acquisition mode was used to show the variation of the profile of the same-sample SC cross-section under conditions of changing perfusion pressure; 2D images were acquired over 7 s.
Data Analysis
Amira software was used to reconstruct the 3D volumetric images of SC. The optimal angle was selected to observe the structural relationships of SC, CC, and intra-scleral collector channel (ISCC) adjacent to SC. For analysis of the 2D images, the boundary between SC and CC was semi-automatically delineated. ImageJ software then provided delineation of the SC and CC area [6]. The total area of SC within a 2 mm length was automatically calculated from the 3D images. Stress-strain curves were developed to assess the biomechanical characteristics of the TM that determined its response to changing SC pressure. The analysis compared curves before and after a fluid bolus that disrupts TM-CC connections.
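As a concrete illustration of the area measurement described above, a lumen area can be computed from a binary segmentation mask by counting foreground pixels and scaling by the pixel size, and per-frame areas can be summed into an approximate canal volume, as in the sketch below; the mask, pixel spacing, and frame spacing are hypothetical, not the calibration of the HR-OCT system used here.

    import numpy as np

    PIXEL_SIZE_UM = 5.0      # hypothetical in-plane pixel spacing (um)
    FRAME_SPACING_UM = 3.9   # hypothetical spacing between B-scan frames (um)

    def lumen_area_um2(mask):
        """Segmented lumen area: foreground pixel count x pixel area."""
        return np.count_nonzero(mask) * PIXEL_SIZE_UM ** 2

    def canal_volume_um3(masks):
        """Approximate SC volume: sum of per-frame areas x frame spacing."""
        return sum(lumen_area_um2(m) for m in masks) * FRAME_SPACING_UM

    # Toy example: 10 frames, each with a small elliptical lumen mask.
    yy, xx = np.mgrid[0:100, 0:100]
    frame = ((xx - 50) / 30) ** 2 + ((yy - 50) / 8) ** 2 <= 1.0
    print(lumen_area_um2(frame), canal_volume_um3([frame] * 10))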
3D Volumetric Imaging
HR-OCT volumetric images provide a 3D view demonstrating relationships between SC, CC, and ISCC (Figure 1, Eye #1 superior temporal quadrant, while maintaining a steady-state pressure of 30 mm Hg). The site of cannula insertion is visible as a funnel-shaped area at the right edge of the SC image. SC dimensions gradually decrease as distances distal to the site of the cannula insertion increase. The XZ plane in Figure 1 demonstrates the CC departing the canal in a relatively narrow plane, rather than arising from multiple surfaces of the SC lumen. The XY plane captures an orthogonal view of CC entrances, further emphasizing the uniform plane of departure from SC. In the XY plane, circumferential ISCC have an orientation parallel to SC, have a distended configuration, and encompass an area similar in size to that of SC. Figures 1 and 2 demonstrate progressive enlargement of SC, CC, and ISCC in response to systematic steady-state increases in SC pressure. At a hydrostatic pressure gradient of 5 mm Hg, only segmental areas of SC lumen were visible, and CC were not visible. With progressive increases of SC pressure, segmental areas of SC lumen expanded and coalesced until, by 30 mm Hg, the entire length of the canal was distended except the distal portion far from the cannula. Of great interest, the canal lumen dimensions exhibited highly segmental behavior in response to increases in SC pressure of 5 to 10 mm Hg, with changes not only along the canal circumference but also in the plane orthogonal to its length. The CC entrances and their attachments to the adjacent ISCC expanded, and new entrances became apparent as SC pressure increased from 10 to 20 mm Hg. As SC pressure further increased to 30 mm Hg, the circumferential channels in ISCC merged to create a fairly uniformly dilated lumen area parallel to SC. The CC entrance openings and ISCC arose in a consistent plane, so a section cutting through their plane of exit creates what appears to be a uniformly thick structure. However, tilting slightly, for example into the YZ plane of Figure 1, reveals a more complex arrangement.
Morphology with 2D Cross-Sectional Imaging
The 2D-OCT images show the morphological changes of the same cross-section of SC and its surrounding tissues under different perfusion pressures in Figure 3 (Eye #1 superior temporal quadrant). With the increase of perfusion pressure, SC and its distal CC continued to expand. Valvular structures were identified in the lumen of SC and around the CC entrance. After an aqueous bolus of 30 mL/min and 250 µL, the area of SC increased, while the area of CC decreased compared with their areas before fluid bolus perfusion; areas are compared while maintaining identical pressures in SC (Figure 4, Eyes #1 and #2). After the bolus perfusion, the transluminal structure was disrupted and the relative position of the valvular structure around CC shifted.
The 2D-OCT images show the morphological changes of the same cross-section of SC and its surrounding tissues under different perfusion pressures in Figure 3 (Eye #1 superior temporal quadrant). With the increase of perfusion pressure, SC and its distal CC continued to expand. The valvular structures were identified in the lumen of SC and around the CC entrance. After an aqueous bolus of 30 mL/min and 250 µL, the area of SC increased, while the area of CC decreased compared with their area before fluid bolus perfusion; areas are compared while maintaining identical pressures in SC (Figure 4, Eyes #1 and #2). After the bolus perfusion, the transluminar structure was disrupted and the relative position of the valvular structure around CC shifted. Figure 5A displays a volumetric stress-strain curve (elastance curve) representing the relationship between tissue deformation and pressure changes. At each incremental volume increase, the incremental tissue deformation decreases, indicating distention dependent increases in tissue stiffening. The volumetric stress-strain curves of the TM were determined by the relationship between the instantaneous volume of SC and the perfusion pressure in the lumen of SC. The curve characterizes the elastance of the TM by plotting the relationship between tissue deformation and pressure change. Elastance, also the term for tissue stiffness, is the tangent at each location on the curve. As the fluid volume increases, the tissue stiffens, resulting in a nonlinear relationship where a greater incremental pressure rise occurs with each incremental volume increase. In Figure 5B, baseline steady-state elastance curve responses are compared at the infusion rates of 10, 20, and 30 mL/min while holding infusion volume (125 µL), and infusion rates of 30 mL/min with a total volume of 250 µL. The elastance curves move gradually towards the lower right of the plot with incremental infusion bolus increases. It indicates the TM stiffness is changed by the fluid bolus expansion of SC, which mimics the viscoelastic dilation of SC during some MIGS procedures. The degree of TM stiffness change is relevant to the volume and speed of the infusion fluid. Figure 5A displays a volumetric stress-strain curve (elastance curve) representing the relationship between tissue deformation and pressure changes. At each incremental volume increase, the incremental tissue deformation decreases, indicating distention dependent increases in tissue stiffening. The volumetric stress-strain curves of the TM were determined by the relationship between the instantaneous volume of SC and the perfusion pressure in the lumen of SC. The curve characterizes the elastance of the TM by plotting the relationship between tissue deformation and pressure change. Elastance, also the term for tissue stiffness, is the tangent at each location on the curve. As the fluid volume increases, the tissue stiffens, resulting in a nonlinear relationship where a greater incremental pressure rise occurs with each incremental volume increase. In Figure 5B, baseline steady-state elastance curve responses are compared at the infusion rates of 10, 20, and 30 mL/min while holding infusion volume (125 µL), and infusion rates of 30 mL/min with a total volume of 250 µL. The elastance curves move gradually towards the lower right of the plot with incremental infusion bolus increases. It indicates the TM stiffness is changed by the fluid bolus expansion of SC, which mimics the viscoelastic dilation of SC during some MIGS procedures. 
The degree of TM stiffness change is relevant to the volume and speed of the infusion fluid.
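To make the tangent definition of elastance concrete, the local slope dP/dV of a pressure-volume curve can be estimated with a finite-difference gradient, as in the sketch below; the pressure-volume samples are hypothetical illustrations, not measurements from Figure 5.

    import numpy as np

    # Hypothetical SC pressure-volume samples along one elastance curve.
    volume_uL = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 2.5])
    pressure_mmHg = np.array([0.0, 4.0, 9.5, 17.0, 27.0, 41.0])

    # Elastance = dP/dV, the tangent at each point of the curve; a
    # rising slope indicates progressive tissue stiffening.
    elastance = np.gradient(pressure_mmHg, volume_uL)
    for v, e in zip(volume_uL, elastance):
        print(f"V = {v:.1f} uL -> elastance ~ {e:.1f} mmHg/uL")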
Discussion
In this pilot study, our approach circumvents the problem posed by viscoelastic SC dilation, which does not permit the study of dynamic motion. Instead, we cannulate and dilate SC using an aqueous fluid that permits varying control of same sample SC pressures. While systematically controlling pressure gradients, we simultaneously monitor the configuration of TM, SC, and ISCC with HR-OCT. Our technique permits the same-sample comparative assessment of TM biomechanical properties, which avoids the limitation of variation between tissue samples. The TM configuration determines SC dimensions. As the ventricular volume change during the cardiac cycle represents the function of myocardium, we use SC dimension changes as a means of assessing TM behavior. In this study, we quantitate stress-strain relationships (elastance) determined by TM responses to changes in pressure.
Our HR-OCT study permitted us to explore morphology relationships of SC, CC, and ISCC using 3D volumetric imaging. We were able to examine both the stress-strain relationships of the TM and CC and the synchrony of changing dimensions of the tissues. In addition, our experimental protocol permitted us to study changes in biomechanical properties of the tissues following simulation of SC dilation with MIGS-like approaches.
The 3D volumetric information provided by HR-OCT permits us to visually examine the relationship between the TM, CC, and ISCC through dynamic rotation of the tissues along the XY, XZ, and YZ axis. The dimensions of SC, CC, and ISCC decrease, moving from the proximal areas adjacent to the cannula to the more distal circumference of SC. The decrease in SC dimensions with distance from the site of infusion is consistent with findings from histologic studies following viscoelastic injections into SC [20].
The views perpendicular to CC exit sites confirm that CCs exit from SC in a uniform, rather narrow plane and join circumferentially oriented ISCC. Through-cornea viewing of microvascular casts permits viewing CC perpendicular to their sites of exit from SC and is consistent with the findings of this study. The findings are also consistent with those from SEM and micro-CT studies [17][18][19].
HR-OCT 3D morphology reconstructions demonstrate the ability of the TM and tissues surrounding CC to change shape ( Figure 1). Without pressure in SC, the lumen is small and connections between regions of the lumen are patchy [16,22,23]. As SC pressure increases, the disconnected regions of the lumen of SC coalesce, creating a continuous structure. With the increasing SC pressure, CC lumen dimensions increase. With further pressure increases, the ISCC fill and segmental areas also coalesce. The findings indicate that in the absence of pressure, elastance properties of the tissues cause them to recoil, reducing the dimensions of SC, CC, and ISCC. Elastance properties of the tissues permit the TM, as well as the tissues surrounding CC and ISCC to expand, increasing available areas for aqueous flow [24].
The evidence from steady state 2D configuration imaging confirms findings with 3D imaging, showing that the TM and tissue surrounding the CC and ISCC are not distended at 0 mmHg. The lumen dimensions of CC and ISCC become progressively more visible as SC pressure increases, demonstrating the elastance properties of the tissue surrounding their volumes. Connections between the TM and CC entrances are also visible and are especially obvious when interrogating the 2D volumes by increasing SC pressure gradients during experiments. The changes in CC dimensions provide increasing space through which aqueous can flow from SC into ISCC.
A reverse finite element model (FEM) studied the bulk motion of normal and glaucomatous TM lamellae under tensile loading conditions. The FEM model used experimental control of transtrabecular hydrostatic pressure gradients such as those described in the current study. The elastic modulus of normal human TM estimated by inverse FEM was 70 ± 20 kPa (mean ± SD), whereas glaucomatous human TM was slightly stiffer (98 ± 19 kPa), reaching borderline significance with a p-value of 0.051. Outflow facility was significantly correlated with TM stiffness estimated by FEM in six human eyes (p = 0.018) [25].
As in the FEM study, our HR-OCT system measures the bulk motion of the TM tissues represented by the motion of the trabecular lamellae. The SC inner wall endothelium dimensions and motion responses are too small to be resolved with our current technology. By examining the relationship between an increase in SC area and the resulting relationship between pressure and tissue deformation we can characterize the TM tissue elastance. Elastance is the ability of a hollow organ to recoil toward its original dimensions on the removal of a distending force and represents the ability to store and release elastic energy [26]. The stored energy associated with an increasing volume distributes between the elastic energy of tissue deformation and pressure energy. The TM maintains its position against the force of IOP through its elastance properties that determine how far the TM distends into SC when IOP rises. Elastance properties also determine the amplitude of TM excursions in response to pulse oscillations. The measurements we obtain with elastance curves allow us to explore the complex relationship of the distribution of energy associated with volume-dependent tissue deformation and pressure.
We find an increase in SC lumen area as SC pressure increases, indicative of TM tissue deformation with the accrual of elastic energy (Figure 5A). Lumen area configuration depends on the biomechanical properties of the surrounding tissues. We develop volumetric stress-strain curves (elastance curves) using volume as the dependent variable. However, we add rapidity of volume change as a second experimentally controlled variable (Figure 5B). We find synchronous SC and CC lumen dimension changes (Figure 4), indicative of synchrony of TM motion and the tissue surrounding CCs. The correlated movement of the TM and CC entrance tissues is consistent with the findings of prior studies [15].
Our pilot study has clinical relevance because it is the first to use OCT to demonstrate changes in biomechanical properties of the outflow system in response to MIGS-like procedures that dilate SC (Figure 4). To regulate IOP, elastance properties of the outflow system must be kept in a narrow range. Either too high or too low elastance will result in loss of TM motion, a loss thought to be a hallmark of the glaucoma process [4,13].
Increased elastance/stiffness will prevent the motion necessary for the TM to alter outflow channel dimensions in response to IOP changes and reduce the ability to respond to the ocular pulse. Compliance is the inverse of elastance, so reduced elastance results in increased compliance. High compliance alters the ability of the TM to withstand the force of IOP, and herniation and apposition to SC external wall can result [27]. In the absence of normal elastance, the TM will also not undergo appropriate excursions in response to the ocular pulse or pressure transients associated with blinking and eye movement.
Our steady-state experiments demonstrate that SC areas increase, but CC areas decrease, compared with their original dimensions at an individual level of perfusion pressure in response to experimental SC dilation that disrupts TM-CC connections. MIGS-like procedures that dilate SC were thus found to alter the biomechanical properties of both the TM and the tissue surrounding the CCs in our experiments. Our study, as well as prior viscoelastic studies, demonstrates disruption of the valvular structures traversing the SC lumen and their attachments at CC entrances. We hypothesize that disruption of connections between the TM and CC entrances reduces the ability of the CC entrances to move in response to pressure-dependent TM motion [5].
Volumetric stress-strain (elastance) curves explore the effects of volume increases on biomechanics before and after expansion of SC with a fluid bolus in all four quadrants of an eye (Figure 6). The elastance curves varied markedly between quadrants, consistent with segmental and regional variability. However, in each quadrant the elastance curves shifted toward the right and steepened, indicating that the biomechanical properties of the TM change with SC overexpansion. Whether the altered elastance will be beneficial depends on the interplay between the initial enlargement of the outflow channels and the disruption of connections between the TM and CC entrances. Disruption of TM-CC entrances may reduce the ability of the recoiling TM to maintain appropriate tension to hold CC entrances open.
Conclusions
Our study explores morphologic relationships and biomechanical properties of the outflow system in ex vivo human eyes using our HR-OCT platform. Using 3D morphometric imaging, we identify the ability of SC, CC, and ISCC to undergo synchronous changes in shape in response to changes in SC pressure. We find the CCs exit SC in a relatively uniform plane to then enter the circumferentially oriented ISCC.
Volumetric stress-strain studies demonstrate the ability to develop elastance curves. The curves identify synchronous SC and CC dimension increases with increasing SC pressures. Assessment of these biomechanical properties can improve our understanding of normal mechanisms of IOP control and the abnormalities in glaucoma.
Our study finds that MIGS-like overexpansion of SC causes alterations in elastance properties of the TM and tissues surrounding the CC. The canal overexpansion causes changes in the biomechanical responses of the TM, resulting in a pressure-induced increase of the motion that determines SC dimensions. At the same time, there is a decrease in the motion response of the tissues surrounding the CC, resulting in reduced lumen dimension changes. The reduced CC response may result from the disruption of attachments between the TM and the structures that control CC entrance dimensions.

Institutional Review Board Statement: No living humans or animals were involved in this study. Not applicable.
Informed Consent Statement: No living humans or animals were involved in this study. Not applicable.
Conflicts of Interest: The authors declare no conflict of interest.
"year": 2021,
"sha1": "30ef69bfb54982fc9d9e91825ebc17adb4a170d3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-1729/11/2/176/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "30ef69bfb54982fc9d9e91825ebc17adb4a170d3",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
ERP evidence of heightened attentional response to visual stimuli in migraine headache disorders
New findings from migraine studies indicate that this common headache disorder is associated with anomalies in attentional processing. In line with these previous explorations, this study provides evidence that visual attention is affected by migraine headache disorders. 43 individuals were initially recruited into the migraine group and 33 people with non-migraine headache disorders into the control group. Event-related potentials (ERPs) of the participants were calculated using data from a visual oddball paradigm task. In the analysis of the N200 and P300 ERP components, migraineurs, as compared to controls, had an exaggerated oddball response with increased amplitude in the N200 and P300 difference scores for oddball vs. standard stimuli, while the latencies of the two components were similar in the migraine and control groups. We then considered two classifications, migraine with and without aura, compared to non-migraine controls. One-Way ANOVA of the two migraine groups and the non-migraine control group showed that the difference in N200 and P300 mean amplitudes was greater between migraineurs without aura and the control group, while the latencies of these components remained relatively unchanged across the three groups. Our results provide further neurophysiological support that people with migraine headaches have altered processing of visual attention.
Introduction
Migraine is a well-known neurological headache disorder that is often characterized by many physiological symptoms, including light sensitivity, nausea, and throbbing headaches during headache attacks, also known as ictal periods. Among all different types of this primary headache disorder, migraines with aura cause more pronounced clinical symptoms and have diverse manifestations that vary from visual disturbances to paresthesia or speech disturbances, with visual auras as the most common symptom, occurring in 90% of this subcategory (Dodick 2018). Migraines without aura are a more recurrent type of migraine with fewer visual phenomena and include menstrual migraines (Headache Classification Committee of the International Headache Society 2018). Migraines are predominantly reported to cause not only adverse pain but also subjective impairments in cognition (Vuralli et al. 2018). Although the memory and cognitive interruptions are usually found before (preictal) and during (ictal) headache attacks, studies show the cognitive dysfunctions can persist between the main attacks (known as the interictal phase) (Mickleborough et al. 2014).
Visual attention is one of the most critical cognitive processes and is frequently found to be impaired based on the subjective clinical reports of migraine sufferers. Attention is the way we focus our cognitive resources on aspects of the visual world that are behaviourally relevant to our current task while diminishing the extent to which we are distracted by less relevant inputs. Research from the past decade has started to paint a picture that while controls readily identify behaviourally relevant stimuli and suppress non-relevant stimuli, migraineurs have a decreased ability to suppress behaviourally irrelevant stimuli (Mickleborough et al. 2011). Specifically, it is found that the migraine brain shows altered habituation to repeated visual stimuli (Fong et al. 2020; Guo et al. 2019a, b; Coppola et al. 2019). For example, Mickleborough et al. (2013) used logos as visual stimuli for 25 migraineurs and 25 non-migraineurs to compare their post-sensory processing. While viewing the repeatedly presented images, the migraine group had an amplified cortical response to the repeated visual stimuli while the controls showed almost no change in brain response across the same repeated stimuli. Yet the current literature shows contradictory results regarding whether migraine causes an amplification (Mickleborough et al. 2013; Coppola et al. 2009; Chen et al. 2020; Kam et al. 2015) or an attenuation of attentional processing (Guo et al. 2019a, b; Raggi and Ferri 2020), suggesting that more supporting research is needed to show the potential impairment and attentional dysfunction in migraineurs. Recent research has identified a need for continued research assessing altered habituated responses in the evoked potentials of migraineurs vs. non-migraine controls to explore whether the failure of habituated responses is reproducible in different contexts, including with different stimulation parameters (Sand et al. 2008; Omland et al. 2016).
Evidence is building that migraineurs have attentional issues specific to the decision to suppress responses to behaviourally irrelevant stimuli. As an example, Chen et al. (2020) described dysfunctional inhibitory control in migraineurs when suppression of response is expected, and Mickleborough et al. (2011) report that migraineurs have a significant decrease in normal suppression of cortical responses to visual events outside their zone of attention. It is not surprising then to find that migraineurs also have increased attentional orienting to sudden-onset stimuli as compared to controls (Mickleborough et al. 2011). Adding to this, previous neuroimaging studies exhibit altered cortical activities in the attentional control network of migraineurs during visuo-spatial tasks, including regions such as the frontal eye field, superior parietal lobule, superior temporal gyrus, and superior temporal sulcus (Kelley et al. 2007; Mickleborough et al. 2016). During visual spatial-orienting tasks, Mickleborough et al. (2016) found that controls have more activation than migraineurs (in between attacks) in the right temporal-parietal junction (rTPJ), a key area in the visual attentional network with a suggested role of assessing unattended stimuli for behavioural relevance before sending the signal to redirect attention to behaviourally relevant sensory stimuli outside the focus of attention (Corbetta et al. 2000; Giesbrecht et al. 2006; Kelley et al. 2007). Therefore, the decreased activity in the rTPJ (in migraineurs as compared to controls) supports the theory that migraineurs have attentional issues related to processing the behavioural significance of stimuli and selecting irrelevant stimuli to ignore. Given this growing evidence that the migraine brain may not be adequately assessing the relevance of behavioural stimuli, impacting the extent to which they are attended to, suppressed or habituated to, a sustained attention paradigm is an appropriate task to further understand migraineurs' dysfunctional attention.
The Hillyard sustained attention paradigm is a common experimental design which includes a sequence of auditory or visual repetitive stimuli that are randomly interrupted by a deviant stimulus; such an interruption causes a measurable change in event-related potential (ERP) components with predictable negative and positive peaks (i.e., amplitude) and timing (also called latency) after the onset of the stimulus (Luck and Kappenman 2017). While the amplitude of each component indicates the intensity of its relevant sensory or cognitive response, the shortened or prolonged duration of a component is shown in its latency, which can provide separate information about the individual's response. The visual oddball paradigm task is an example of this experimental design, where participants are given multiple visual stimuli (such as letters, shapes, or colours) that may appear at the central fixation point or in other visual fields depending on which specific type of attention (such as visuospatial attention or involuntary attention) is investigated. Right after the onset of the stimulus in an oddball paradigm, the early components with negative and positive polarity are attributed to the motor response and sensory processing of the input, always followed by the emergence of other components that indicate higher cognitive processing as the main focus of attention studies.
The ERP analysis of these cognitive components could help us explore whether migraineurs have an amplified or attenuated excitatory mode in response to the repeated environmental input. ERPs are prominent non-invasive measurements of attention-related brain activities and include components of the continuous EEG recording that occurs immediately after the onset of a stimulus in an attention task. If an individual is shown a series of stimuli, ERP components could be measured by averaging the recurrent EEG activity that occurs after each stimulus, resulting in positive or negative polarities time-locked to the stimulus onset (Luck and Kappenman 2017). ERPs are an appropriate choice for our attention research in migraineurs, given their clear temporal resolution of brain activities (Luck and Kappenman 2017).
The N200 component of the ERP is the negative polarity that usually indicates cognitive processing before the motor response to a stimulus as well as object recognition (Woodman 2010) after the onset of a visual stimulus. This early exogenous component is generally associated with involuntary and unwanted processing of the stimuli (Patel and Azzam 2005) but, compared to other components such as the P300, it is not broadly discussed in the migraine population and needs more investigation as a component of interest, especially in the context of visual attention. An early study by Drake et al. (1989) focused on auditory event-related potentials, giving 30 unmedicated migraineurs and 20 controls various tones at 1000 and 3000 Hz; it showed no difference in N200 amplitude (the minimal peak) or latency (the time difference between the stimulus onset and the peak amplitude) in migraineurs. In contrast, Coppola et al. (2019) and de Tommaso et al. (2014) focused on somatosensory stimulation in migraine without aura and showed that when migraineurs without aura are given high-frequency repetitive transcranial magnetic stimulation (HF-rTMS), the high-frequency somatosensory evoked potentials show a heightened N200 and P250, while these components showed a decreasing response to HF-rTMS in healthy controls. On the other hand, Fong et al. (2020) associated the increase in N200 amplitude with cortical hyperexcitability that can occur in some non-migraineurs comparable to migraineurs.
Here in this study, we will explore whether there is an alteration of the N200 component during a visual oddball task when we compare migraineurs to non-migraine headache sufferers. Reports on N200 latency are mainly controversial, indicating shortened, prolonged or unchanged N200 timing among migraineurs; such inconsistency could be associated with a broader sensory processing imbalance in visual pathways (Guo et al. 2019a, b; Fong et al. 2020).
The P300 component of ERP data is a key component of interest when assessing late exogenous ERPs and visual attention processes (Krigolson et al. 2017). The P300 is best elicited in oddball paradigms and is especially sensitive to active attentional processing, which makes it a clear variable for examination in a study focused on group differences in visual attention (Krigolson et al. 2017). As indicated by its name, the P300 is shown graphically as a positive fluctuation as early as 300 ms after the presentation of a stimulus. This component is mostly associated with stimulus encoding, identification and categorization (Picton 1992). Furthermore, the P300 can indicate whether an individual is attending to a stimulus through activity near the 300 ms window (Krigolson et al. 2017). For example, when someone is mind wandering, their P300 amplitude is reduced compared to when they are given a task-relevant stimulus (Kam et al. 2015). Accordingly, examining this component can help us discover how much the participants are engaged in attentional processing. Some studies have focused on this component in migraineurs, and the findings controversially show increased (Mickleborough et al. 2013) or decreased P300 peak amplitude (Guo et al. 2019b). Both attenuation and amplification of the P300 component indicate altered attentional processing in migraineurs. In addition to P300 amplitude, the latency of this component is used as an indication of the time individuals spend discriminating or categorizing the standard stimulus from the oddball stimulus (Fong et al. 2020). Although Drake et al. (1989) reported earlier that the P300 could be prolonged in migraineurs, suggesting longer stimulus processing time, we measured migraineurs' and non-migraineurs' P300 latency in an oddball paradigm.
While recent studies with a focus on attentional processing in the oddball paradigm have compared migraineurs with healthy controls, showing alterations in N200 and P300 components in terms of their amplitude and latency (Vilà-Balló et al. 2021; Guo et al. 2019a, b), we further explore whether there are any differences between the migraine population and other non-migraine headache sufferers.
The current study utilizes a 15-min visual oddball paradigm task presented along with an electrode headband (the MUSE EEG system) to analyze the frontal and temporal ERP activities of participants. This system has been previously used by Krigolson et al. (2017) for the same oddball task, and we are now applying it to migraine vs. non-migraine groups. Such a portable system could provide more context-based information about attentional processing in different everyday life settings. This study is an opening exploration of a portable EEG headband to collect the fronto-temporal activities of migraineurs during a visual attention task. Given that migraineurs have an altered response to repeated visual stimuli, we hypothesize that they will show amplification of the normal oddball response. We would like to address the current gap of a clear temporal picture of how cognitive processing is altered in migraineurs and whether such predicted hyperexcitability differs between different types of headaches, comparing migraineurs to non-migraine headache sufferers as well as migraineurs without aura to migraineurs with aura, who usually report more neurological symptoms. Regarding the collected ERP data, we expect that the migraine brain will show an even greater increase in attentional response, as measured via the ERP N200 and P300 components, compared to controls when oddball stimuli appear in a series of standard stimuli. Additionally, we will explore how these components differ when comparing two migraine headache categories (with and without aura) with the non-migraine headache controls.
Study design
We chose a cross-sectional study design to observe the attentional response of migraineurs compared to non-migraine headache sufferers in response to a visual oddball paradigm task. We subsequently categorized migraineurs into migraineurs with and without aura to learn how these two groups would show different levels of response compared to the control group. Our dependent variables included N200 and P300 components of the collected ERP while the respondents were given a repetitive visual oddball paradigm task.
Participants
The current study recruited undergraduate students from the University of Saskatchewan. All the participants were rewarded with 1 course credit or a gift card honorarium for their voluntary participation. All the participants provided written informed consent, approved by the Biomedical Research Ethics Board at the University of Saskatchewan (Bio-REB 652), before they filled out a self-report questionnaire on their headache characteristics and had a quick tutorial (less than 30 min) on the MUSE headband and their oddball paradigm task. A total of 85 participants were initially recruited for this study, all of whom reported some headache experiences. After data collection, five participants were eliminated due to missing or poor EEG data, one who had taken medication less than 24 h before testing, two participants with headache experience within the 48 h prior to data collection, and two individuals with probable migraine but not enough symptoms to be considered in the migraine group. Based on the headache criteria of the International Classification of Headache Disorders guide (Headache Classification Committee of the International Headache Society 2018), we analyzed 75 participants (57 female, 15 male and 3 unspecified; all aged over 18, with a mean age of 13 years at the onset of their headache symptoms), who were placed in the migraine (n = 42) and non-migraine headache control groups (n = 33). 33 identified as female (78.5%) and 8 as male (19.04%) in the migraine group, and 24 identified as female (72.7%) and 7 as male (21.2%) in the non-migraine group. The female to male ratios in our groups reflect reports that migraine is almost three times more prevalent among women than men (Al-Hassany et al. 2020). The groups were matched as a function of age and education.
Reported headache characteristics
The participants initially filled out a self-administered questionnaire which included 19 customized open-ended and closed-ended questions based on the ICHD (2018) migraine criteria to collect headache characteristics and assign the participants to the migraine vs. non-migraine group (Mickleborough et al. 2016).
In the migraine group, headache attacks lasted a mean of 12.10 h (SEM = 4.17); in non-migraine headache sufferers, the mean duration of each headache attack was 12.23 h (SEM = 5.55). Mean total headache frequency in the past three months was 5.19 (SEM = 1.22) for the migraine group and 2.26 (SEM = 0.41) for the non-migraine headache sufferers. We also looked at headache frequency among those migraineurs who could be categorized into the subtypes of migraine with aura and migraine without aura. The mean reported headache frequency was 6.35 in migraineurs with aura (SEM = 2.19) and 4.12 (SEM = 0.98) in migraineurs without aura. More information on participants' headache characteristics in the migraine and non-migraine headache control groups is found in Tables 1 and 2.
EEG recording apparatus
This study used a portable EEG headband called MUSE (Interaxon, Ontario, Canada) with a 500 Hz sampling rate and no onboard data processing. The MUSE EEG system comprises 4 electrodes (AF7, AF8, TP9, and TP10) referenced to electrode Fpz (Fig. 1). The device is lightweight and does not require gel application; however, the skin of the forehead and mastoids was dampened to enhance electrical conductance. Like a standard non-portable EEG, the MUSE headband is sensitive to motor movements; therefore, the participants were asked to remain still throughout the whole process of data collection. The EEG data were processed in MATLAB R2020a based on the protocol earlier utilized by Krigolson et al. (2017) using Brain Vision Analyzer software. The final data were then transferred to SPSS 27.0 for analysis.
The oddball task procedure
The oddball paradigm task was performed in our lab's soundproof booth on an iPad mini (Apple Inc., California, USA). After completing the self-administered questionnaire, the participants were instructed by a researcher on how to perform the task. They were asked to remain still while alone in the booth. All the participants were given an identical oddball task while wearing a MUSE EEG headband. A black fixation cross appeared in the center of the screen (RGB value = [0, 0, 0]) and participants were directed to limit eye movements away from the center of the screen. The participants saw a series of random orange (RGB value = [0 0 255]) and purple (RGB value = [0 255 0]) coloured circles that replaced the fixation cross at random intervals, each lasting 800-1200 ms in the center of the dark gray screen (RGB value = [108 108 108]). The standard stimulus (orange circle) occurred 75% of the time and, comparatively, the target oddball stimulus (the purple circle) was less frequent, with 25% of the appearances on the screen (Fig. 2). The circles were presented in random sequence order. Participants were prompted to tap the left side of the screen when viewing the purple stimuli (oddball stimulus) and to tap the right side of the screen when the circle was orange (standard stimulus). All stimuli were presented on a grey background to prevent attentional distraction or discomfort to the participants' eyes. All stimulus circles had the same brightness and dimensions. We used 10 blocks of 50 trials, which on average took 15 min per participant to complete. All data were recorded for each session of the study and evaluated on a time-continuum scale.
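A minimal sketch of how such a trial sequence could be generated is shown below. This is illustrative Python (the actual task ran on an iPad), and all names and the exact counting convention are our assumptions, not the authors' code:

```python
import random

# Sketch parameters taken from the task description above.
N_BLOCKS, TRIALS_PER_BLOCK, P_ODDBALL = 10, 50, 0.25

n_trials = N_BLOCKS * TRIALS_PER_BLOCK              # 500 trials overall
n_oddball = int(n_trials * P_ODDBALL)               # 125 oddball (purple) trials
labels = ["oddball"] * n_oddball + ["standard"] * (n_trials - n_oddball)
random.shuffle(labels)                              # random sequence order

trials = [{"condition": lab,
           "duration_ms": random.randint(800, 1200)}  # random stimulus duration
          for lab in labels]
blocks = [trials[i:i + TRIALS_PER_BLOCK]            # split into 10 blocks of 50
          for i in range(0, n_trials, TRIALS_PER_BLOCK)]
```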
ERP analysis
We used MATLAB for raw data analysis and processing in this study (Krigolson et al. 2017). Data were collected from four electrodes: AF7, AF8, TP9 and TP10. The data from the AF7 and AF8 electrodes were pooled for analysis. The analysis began with data filtering, followed by extracting epochs of data from the continuous EEG individually (for the oddball and the standard conditions) from 200 ms prior to until 600 ms after stimulus onset. Baseline correction was based on the 200 ms before stimulus onset. We subsequently used an artifact rejection algorithm to discard any segment with an absolute difference of more than 60 μV. The remaining segments were averaged for the conditional oddball and standard trials for each participant, then combined to obtain grand conditional waveforms (oddball and standard) for the migraine and control groups separately. Subsequently, we calculated the grand difference waveform by subtracting the average standard from the average oddball waveform. The N200 peak component latency was defined (N200: 270 ms) and a time window was chosen for calculating the mean peak amplitude of the P300 with regard to its double-peaked shape (P300: 330-408 ms). To this end, the voltage amplitudes and latencies at the location between the two peaks of the P300 were averaged.
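A compact sketch of this pipeline, assuming a single pooled channel in microvolts, might look as follows. The original analysis was done in MATLAB; the names, the peak-to-peak reading of the 60 μV rule, and the epoch arithmetic here are our illustrative assumptions:

```python
import numpy as np

FS = 500                                  # sampling rate of the MUSE headband (Hz)
PRE, POST = int(0.2 * FS), int(0.6 * FS)  # 200 ms pre- and 600 ms post-stimulus
REJECT_UV = 60.0                          # rejection threshold from the text (60 uV)

def conditional_average(eeg, onsets, labels, condition):
    """Epoch, baseline-correct, reject artifacts, and average one condition.

    `eeg` is a 1-D filtered channel (e.g. pooled AF7/AF8) in microvolts;
    `onsets` are stimulus onsets in samples, `labels` their conditions.
    """
    epochs = []
    for onset, label in zip(onsets, labels):
        if label != condition or onset < PRE:
            continue
        seg = eeg[onset - PRE:onset + POST].astype(float)
        seg -= seg[:PRE].mean()                # baseline on the 200 ms pre-window
        if seg.max() - seg.min() > REJECT_UV:  # one reading of "absolute difference
            continue                           # > 60 uV": peak-to-peak within epoch
        epochs.append(seg)
    return np.mean(epochs, axis=0)             # participant-level conditional average

# Grand difference waveform: oddball average minus standard average, e.g.
# difference = conditional_average(eeg, onsets, labels, "oddball") \
#            - conditional_average(eeg, onsets, labels, "standard")
```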
Results
This study used SPSS 27 for statistical analysis. To verify the validity of the collected ERP data, we conducted a factorial ANOVA to examine the main effects of ERP condition (standard vs. oddball) and group (migraineurs vs. controls). Looking at the within-subject effects, we found a significant main effect of condition on the collected ERP responses across the migraine and control groups [F(1,816) = 5.097, p = 0.024].

Fig. 2 The oddball paradigm task: each block of trials included 75% standard stimuli (orange) and 25% oddball stimuli (purple), presented at random intervals after the fixation cross; each stimulus lasted 800-1200 ms at the center of the dark gray screen
For the N200 component, we looked at the second minimum peak found on the grand difference waveform where the N200 is usually expected. As explained earlier, we extracted the mean peak amplitude of the P300 by averaging the voltages in a time window between its two emerged peaks on the grand ERP difference waveform. As can be seen in Figs. 3 & 4, while the standard waveforms were similar for both groups, the oddball condition waveform was greater in amplitude for the migraine group compared to the control group. In other words, while both groups showed the typical oddball response, the migraine group showed an even larger difference between oddball and standard, reflecting an amplified oddball effect in the N200 and P300. Figure 5 shows the N200 peak and P300 time window grand average difference waveforms in the two groups; for both the N200 and P300 amplitudes, the difference scores between oddball vs. standard were statistically greater in migraine participants when compared to non-migraine controls [t(74) = −2.406, p = 0.019 and t(74) = 2.169, p = 0.033, respectively].
Fig. 4 The individual ERP waveforms for Standard and Oddball conditions in migraine
For further exploration of the findings, we aimed to discover whether the N200 and P300 amplitudes of the grand average difference waveform differed between migraineurs with and without aura (Table 5) compared to the non-migraine controls. The mean amplitudes of the N200 and P300 were subsequently compared between the three groups by running a One-Way ANOVA (Table 6). As shown in Table 6, the mean score differences were statistically significant for both N200 amplitude (F(2,72) = 3.180, p = 0.048) and P300 amplitude (F(2,72) = 3.168, p = 0.048). Finally, Fisher's LSD test was chosen as the post-hoc test in this study to discover which groups' mean scores showed significant differences for the N200 and P300 amplitudes. While the mean difference in N200 amplitude between migraineurs with aura (N = 29, Mean = −2.43, SD = 0.23) and the control group (N = 33, M = −1.85, SD = 0.18) was close to the significance level (Mean difference = −0.58, p = 0.052), migraineurs without aura (N = 13, M = −2.69, SD = 0.30) had a larger N200 amplitude when compared to the control group.
Discussion
Previous research suggests that migraineurs have an amplified attentional response to visual stimuli (Coppola et al. 2009; Mickleborough et al. 2014), and neuroimaging research suggests this may be due to the migraine brain not adequately assessing the relevance of behavioural stimuli, leading to an unwanted increased processing of stimuli (Mickleborough et al. 2016). For example, migraineurs show an increase in ERP amplitude (late positive potential) to repeatedly presented visual stimuli across trial blocks while controls showed no significant effect of block (Mickleborough et al. 2013). Given this, we hypothesized that migraineurs would show an amplified response to the oddball visual stimuli in a visual oddball task. In other words, we expected the migraine cortical activity would show a greater increase in attentional response to an oddball, as measured via the ERP N200 and P300 components, compared to controls. As hypothesized, both migraineurs and control participants demonstrated the expected relationship, with the oddball having a larger amplitude than the standard (Figs. 3 & 4), with migraineurs showing an amplification of these N200 and P300 difference scores (Fig. 5), suggesting migraineurs have an exaggerated attentional response to the oddball stimuli. This provides supporting evidence for the theory that migraineurs could find it more difficult not to attend to unwanted, amplified processing of visual stimuli. This could directly affect top-down attentional processing with regard to inhibition of irrelevant visual stimuli. Our findings align with previous research that has indicated reduced sensory habituation, attentional deficits, and increased cortical excitability between headache periods in migraineurs (Mickleborough et al. 2011, 2014; Coppola et al. 2009; Siniatchkin et al. 2003). Below we discuss what the N200 and P300 represent in oddball paradigms and what this might mean for migraineurs.
N200 amplitude as indication of an exaggerated attentional shift in migraineurs
The N200 component indicates perceptual processes by showing how a subject shifts attention to a visual stimulus (Papaliagkas et al. 2008; Piras et al. 2003; Kappenman et al. 2021). In addition, based on Hoffman (1990), the N200 is a stimulus-oriented component that refers to how individuals distinguish the stimulus before giving a motor response. Accordingly, our results suggest that our sample of migraineurs detected and shifted attention to the oddball stimuli more excessively than the controls. Fong et al. (2020) suggested that the increase in N200 could reflect an impairment of inhibitory control over cortical pyramidal cells, resulting in widespread neural activity in the visual cortex during visual stimulation (Fong et al. 2020; Sand et al. 2009). This fits with the proposed abnormality in GABAergic inhibitory interneurons in migraineurs (Chronicle et al. 1994). Our results are consistent with those of de Koning et al. (2001), who also found that in the interictal phase, N200 amplitude was heightened in migraine without aura. In previous research, specific high-frequency patterns such as gratings and checkerboard stimuli resulted in a higher spike of the N200 in migraineurs (Oelkers et al. 1999); our findings showed that even a visually simple oddball paradigm could cause a distinct increase in N200 amplitude. Therefore, this component could be considered a potential indicator of anomalies in migraineurs' visual hyperexcitability regardless of the choice of stimulus. This implies that migraineurs may find it more challenging to disregard even the simplest irrelevant information in the given input; such an involuntary attentional shift to unwanted stimuli could further be discussed in relation to migraineurs' pain intensity and avoidance behaviour during and between migraine attacks. The failure of attentional control could result in an overload of irrelevant and unnecessary data to process, which may be presumed to be closely associated with the gradual wear and tear of the brain (also described as allostatic load) that is hypothesized to progressively cause migraine chronification.
P300 amplitude indicates intensified active attentional processing in migraineurs
As explained earlier, the P300 could be considered the most important component at the post-sensory (i.e., cognitive) level of the ERP when individuals are given visual or auditory stimuli. The P300 is reliably reflective of active attentional processing, a term that refers to the state of identifying, encoding, and categorizing a stimulus during an oddball paradigm task (Krigolson et al. 2017). By measuring the grand average difference waveforms in migraineurs compared to their control counterparts, we found an evident heightened migraine-related amplitude in the P300, indicating intensified active attentional processing of the oddball stimuli. Compared to the N200, the P300 has been more investigated in migraine studies and stands as a vivid representation of selective attention and information processing. Previous research suggests that migraineurs have a decrease in attenuation of unattended external (environmental) stimuli when attention is located in another region of visual space (covert visuospatial orienting) (Mickleborough et al. 2016), but migraineurs do show normal attenuation of visual stimuli when attention is directed to an internal train of thought (mind wandering). Kam et al. (2015) used a sustained attention to response task (SART) for migraine and non-migraine individuals, asking them to stay attended or unattended to some selected visual tasks. This study showed that the P300 was attenuated in migraineurs relative to their non-migraine counterparts when given the SART. Our detected increase in P300 amplitude is evidence of alterations in cognitive processing of external visual stimuli. This supports the subjective complaints that migraineurs have about feeling overwhelmed by most visual stimuli (such as when presented with flashing lights or specific checkerboard or striped patterns). We showed that P300 latency was the same for migraineurs and non-migraineurs. Some studies reported a prolonged P300 in the migraine population compared to healthy subjects (Huang et al. 2017), indicating that cognitive performance could be delayed in migraineurs. Yet our results did not show a difference in P300 timing when migraineurs were compared to non-migraine headache sufferers. Similar to the findings of Titlic et al. (2015), we speculate that migraineurs spend a similar amount of time on stimulus evaluation as non-migraine headache sufferers.
Migraine with vs. without aura
Looking at the differences between the headache categories, we found that although the N200 amplitude difference between migraineurs with aura and non-migraineurs nearly reached significance (p = 0.052), the migraineurs without aura showed a significant increase in both N200 and P300 amplitudes when compared to the non-migraine controls. Regarding this discrepancy between our study's migraine categories (with and without aura), we would like to bring attention to paucisymptomatic or "symptom-free" auras, a pathophysiological experience that could impact the cortical mechanisms in migraineurs without a symptomatic emergence (Hadjikhani and Vincent 2019), and how they are associated with cortical hyperexcitability. Some migraineurs do not report any evident aura-related symptoms, so their situation is classified as migraine without aura while their cortical activities may indicate that they are affected in the same way as migraineurs with aura. A recent study by Hadjikhani and Vincent (2019) indicated that some silent neurological symptoms could specifically occur in the frontal lobe even in the absence of apparent symptomatic lesions, indicating the potential for "silent auras" and suggesting these two groups may not differ physiologically. Although we propose that the discovered neurophysiological responses are a potential means to track migraine irrespective of the reported visual symptomology, there is a need to probe neurophysiological characteristics of migraine, such as intercellular inhibitory processes, for better detection and classification of migraine anomalies. Future investigations could help us learn whether or not all migraine headaches involve intrinsic cortical dysfunctions with phenomena similar to cortical spreading depression (CSD), a gradual neuronal and glial depolarization wave that is more frequently found in migraineurs with aura but has recently been hypothesized as a migraine pathogenesis (Harriott et al. 2019). In accordance with the attention that the International Classification of Headache Disorders (ICHD) brought to further neuroscientific and clinical studies of migraine without aura (Headache Classification Committee of the International Headache Society 2018), we encourage exploration of glial and intracortical hyperactivity found in prefrontal areas compared to extrastriate visual cortex activities and how they could be associated with the frequency of reported visual phenomena such as scotoma. In addition, future studies could explore in more detail how attentional processing in migraine, as a predominant primary headache disorder, is similar to and different from secondary headache due to neurological diseases or post-concussion/mTBI, which may also involve reported visual disturbances.
Finally, while we used a portable EEG device that can give convenient output about attentional processing in everyday settings, we decided to keep our environment homogeneously controlled for all the participants during this study. Our oddball paradigm was run in a soundproof booth with controlled visual and auditory distractors. Additionally, compared to the conventional 10-20 EEG system, our apparatus is easy to wear and presumably does not cause any discomfort for headache sufferers, even those diagnosed with allodynia. We accordingly suggest that future studies explore attentional processing in everyday life contexts in individuals with headaches, including migraineurs and extending to individuals with secondary headache from concussions, TBIs, and other neurological disorders. This could shed light on the prospect of modifying urban light pollution for those who suffer from light sensitivity, specifically migraineurs who are dealing with an ongoing cause of disability and may need lifelong support and adjustments in their living conditions. Along with the neurophysiological measurement of attentional processing, neuropsychological evaluations of executive functions could be added in order to learn about migraineurs' behavioural responses as well.
Limitations of the study
In our study, participants were instructed to hold the iPad in a comfortable manner so as to limit movement of the head during the experiment. However, this may present a confound, as the iPad, and therefore the presented stimuli, was not at a homogeneous distance from the participants' faces, both within and between subjects. Therefore, data relevant to small muscle and eye movements could not be appropriately collected and analyzed. This may have negatively impacted the study's results, as some effects may be a result of neglecting eye movements, which have an impact on the resulting ERP data. Furthermore, we conducted our research in a laboratory setting using university students, which should be taken into consideration for generalizability. Finally, more information could have been gathered regarding participants' depression or anxiety, and a more diverse sample with different genders, age groups, and neurological conditions could have been recruited, along with demographic information such as education, race/ethnicity, and socioeconomic status. Moreover, no information was collected about the frequency of migraine vs. non-migraine attacks in the migraine group.
Conclusion
We explored visual attention processing of migraineurs compared to non-migraine headache sufferers by focusing on the N200 and P300 components of ERP data in response to the oddball paradigm, a method for assessing sensory and post-sensory processes following the onset of a stimulus. The migraineurs presented an amplified difference in their post-sensory processing of the visual oddball stimuli in the N200 and P300 as compared to controls, indicating that migraineurs detected, shifted attention to, and actively attended the oddball stimuli more excessively than the controls. Overall, we provide further evidence for a heightened cortical response when attending to visual stimuli in migraineurs. Our results imply that migraineurs go through more complicated attentional processing in response to what they see around them every day, even during the headache-free phases of their lives. This study adds to the growing picture that abnormal attentional processing may contribute to an increased brain response to ordinary visual stimuli; the extent to which this response is related to hyperexcitability of the cortex and headache pain in migraine, as well as how it may relate to other neurological conditions with similar headache pain and visual sensitivity, such as post-concussion syndrome, warrants future work.
Funding: This research was supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada in the form of an NSERC Discovery Grant to the senior author M. Mickleborough (RGPIN-2016005811) and an NSERC Undergraduate Student Research Award to G. Sun.
"year": 2022,
"sha1": "9648b28022a7e9ef7fc5b86097f61abd3bc2dc3e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00221-022-06408-5.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "9a43444ca281c7e417fb93c1f15984bbfae50a90",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Assessing the psychosocial work environment in the health care setting: translation and psychometric testing of the French and Italian Copenhagen Psychosocial Questionnaires (COPSOQ) in a large sample of health professionals in Switzerland
Background: Measuring work-related stress in a reliable way is important in the development of appropriate prevention and intervention strategies. Especially in multilingual studies the use of comparable and reliable instruments is crucial. Therefore, the aim of this study was to translate selected scales and single items from the German version of the Copenhagen Psychosocial Questionnaire (COPSOQ) into French and Italian and psychometrically test them in a sample of health professionals. Methods: This study used cross-sectional data from health professionals at 163 randomly selected health organisations in Switzerland. Selected COPSOQ items/scales were forward- and backward-translated and cross-culturally adapted from German to French and Italian. Reliability was assessed with Cronbach's alpha and intraclass correlation coefficients, and construct validity with confirmatory factor analysis (CFA) and structural equation modelling as well as the comparative fit index. Results: Responses from 12,754 health professionals were included in the analysis. Of the overall 24 scales, 20 in the German version, 19 in the French version and 17 in the Italian version attained sufficient internal consistency with a threshold of 0.7 for Cronbach's alpha. Predominantly high factor loadings at the scale level are reported (> 0.35), as well as good and satisfactory fit values with RMSEA below 0.1, SRMR below 0.08 and CFI above 0.95. For 10 out of 15 scales, the test for factor invariance revealed a significant difference regarding the psychological constructs of the scales across the language versions. Conclusions: The psychometric properties verify the underlying theoretical model of the COPSOQ questionnaire, which is to some extent comparable across the three language versions. Of the 10 scales with significant factor variance, four showed large differences, implying that revision is needed for better comparability. Potential cultural issues as well as regional differences may have led to the factor variance and the different reliability scores per scale across language versions. One known influencing factor for regional differences is culture, which should be considered in scale development. Moreover, emerging topics such as digitization should be considered in further development of the questionnaire. Supplementary Information: The online version contains supplementary material available at 10.1186/s12913-022-07924-4.
Background
Stress at work is becoming an increasingly relevant issue, with one in six European employees reporting chronic health problems [1]. The resulting costs of stress at work are internationally considered a significant financial burden on society (US$ 221.13 million to 187 billion) [2]. In Switzerland, for example, work-related stress accounts for 24% of total health-related production losses due to absenteeism as well as presenteeism, which corresponds to 3.2% of employees' average monthly earnings [3]. Work-related stress is defined as 'a pattern of reactions that occurs when workers are presented with demands or pressures (stressors) that are not matched to their knowledge, abilities and skills and which challenge their ability to cope' [4,5].
Health professionals in particular are frequently affected by various stressors at work, such as work-private life conflicts, understaffing, long working hours, high quantitative and emotional demands and reward frustration [6][7][8][9][10]. Stress at work potentially leads to lower job satisfaction and commitment to the organization, and is associated with health professionals' intention to leave their profession prematurely [11][12][13]. In consequence, work-related stress may exacerbate the issue of workforce shortage of qualified health professionals in several countries [14]. In Switzerland, the healthcare system is also struggling with such a shortage [15].
Assessment tools that capture stressors and consequences of stress at work among health professionals in a reliable and valid way are essential in developing appropriate prevention and intervention strategies. Several studies have been conducted to assess work-related stress and intention to leave among health professionals, such as the European longitudinal Nurses' Early Exit study [16][17][18] or the RN4CAST [19] study, using selected scales of the Copenhagen Psychosocial Questionnaire (COPSOQ) to cover relevant topics among health professionals. The COPSOQ developed by Kristensen [20] is one of the most widely used instruments and has been translated into more than 25 languages [21][22][23]. The COPSOQ is a self-report questionnaire that assesses psychosocial stressors and stress reactions as well as individual health and wellbeing [5], and has the advantage of a scientifically grounded theoretical background [24]. The COPSOQ is available in a short, middle or long version and is designed for workplace surveys, analytic research and international comparisons [5,20,22]. The scales and single items included in the COPSOQ are used to assess various stressors at work, such as demands (e.g. quantitative demands, sensorial demands), work organisation and content (e.g. influence at work, opportunities for development, meaning of work), social relations and leadership (e.g. predictability of work, role clarity, role conflicts, quality of leadership, social support at work), the person-work interface (e.g. job insecurity) as well as the home-work interface (e.g. work-private life conflict, demarcation). In addition, scales assessing employees' stress reaction (e.g. behavioural or cognitive stress symptoms) and possible long-term consequences of stress at work (e.g. burnout-symptoms) are included [22].
The COPSOQ has already been used in the healthcare sector, translated and validated in German, French and Italian and tested in previous studies [17,[25][26][27][28]. The current version, number 3, of the COPSOQ developed by the International COPSOQ Network [29] consists of so-called core items that are mandatory in any national version and further items that can be added. Thus, every national version differs in these further questions. Consequently, since the available translated versions have been adapted to the cultural conditions of the country for which they were designed and differ greatly in terms of topics and item selection, comparable French, Italian and German versions of the questionnaire for multilingual studies are currently lacking. As an outlook for further developments of the questionnaire, the COPSOQ international network strives for international comparability and calls for examining validity across countries [25]. A comparable version in German, French and Italian is especially important for countries with these national languages, such as Switzerland (66% German-speaking, 23% French-speaking, 8% Italian-speaking). In multilingual samples like Switzerland, cultural adaptation is important to understand whether the linguistic groups interpret and understand the items in the same way. Therefore, comparable items/scales are essential [30].
This study aims to present selected scales and single items from the German COPSOQ Version translated into French and Italian and to analyse their psychometric properties in a large and heterogeneous sample of health professionals in Switzerland.
Design
This study was conducted in two phases. First, the selected scales and single items from the COPSOQ were translated from German into French/Italian, culturally adapted and tested using 'cognitive debriefing' in interviews.
Second, the translated scales and single items were psychometrically validated in a large group of health professionals as part of the STRAIN project (work-related stress among health professionals in Switzerland). Briefly, STRAIN is an ongoing cluster randomized controlled trial (ClinicalTrials.gov identifier: NCT03508596) that is based on three measurements: the baseline measurement T0, the first measurement T1 and the second measurement T2. The results presented in this study are based on the cross-sectional data from the STRAIN baseline measurement T0 (September 2017 to March 2018) and the first measurement T1 (January to May 2019). Since cases with repeated measurements were identified and removed (e.g. if a person filled out the questionnaire at T0 and T1, the case at T1 was removed), the study is based on cross-sectional data only. Further details regarding the STRAIN project are published in Peter, Schols [31].
Recruitment and study sample
Health organisations were randomly selected from all hospitals, nursing homes, and home care organisations registered by the Swiss Federal Statistical Office in 2016. These included Swiss acute care, rehabilitation and psychiatric hospitals, nursing homes and home care organizations from all language regions of Switzerland. A total of 100 hospitals, 100 nursing homes, and 100 home care organisations were randomly selected from the German, French, and Italian-speaking regions of Switzerland using a web-based randomization approach [32] also ensuring a geographically representative sample for Switzerland. Overly small (average number of beds < 20, < 7 employees) or specialised organisations (e.g. in gynaecology or neonatology) were excluded.
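To make the sampling design concrete, the following Python fragment sketches a per-stratum simple random draw. The organisation counts are invented placeholders, and the study itself used a dedicated web-based randomization tool [32] that additionally ensured geographic representativeness:

```python
import random

# Hypothetical registry; the real list came from the Swiss Federal
# Statistical Office (2016) and the counts here are placeholders.
registry = {
    "hospitals":     [f"hospital_{i}" for i in range(300)],
    "nursing_homes": [f"nursing_home_{i}" for i in range(1500)],
    "home_care":     [f"home_care_{i}" for i in range(600)],
}

random.seed(2016)                               # reproducibility of the draw
sample = {stratum: random.sample(orgs, 100)     # 100 organisations per stratum
          for stratum, orgs in registry.items()}
```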
Selected organisations were invited to participate and provided with information about the study. A total of 36 acute care, rehabilitation or psychiatric hospitals (23 German-speaking, 12 French-speaking, 1 Italian-speaking), 86 nursing homes (56 German-speaking, 24 French-speaking, 6 Italian-speaking) and 41 home care organisations (36 German-speaking, 3 French-speaking, 2 Italian-speaking) agreed to take part in the study [31].
Content and use of the questionnaire
Using the German COPSOQ version from 2005 and the extended German standard version from 2017 [26, 33], we selected scales for translation and validation that previous studies [34] considered relevant to the work environment and demands at work in the healthcare sector. Table 1 shows the seven domains and 29 selected COPSOQ scales that were translated and validated for this study. All questions (i.e. items) for the three languages are available in Supplement A. For all scales used in the questionnaire, consent for their use was obtained from the original author. The COPSOQ versions are not under license. The scales we included from the COPSOQ revealed satisfactory to good construct validity, criterion validity, diagnostic power and reliability (Cronbach's alpha 0.64-0.89) in previous studies [22,25,26].
The item responses are scored on a five-point Likert scale (1 = always, 2 = often, 3 = sometimes, 4 = seldom, 5 = never/hardly ever, or 1 = to a very large extent, 2 = to a large extent, 3 = somewhat, 4 = to a small extent, 5 = to a very small extent). The polarity of the Likert scales differs between the scales: for scales on demands at work, high scores indicate a higher risk of work-related stress, while for the scales on opportunities for development or influence at work, low scores indicate a higher risk of work-related stress. Total scale scores are computed as the average of the item responses and transformed to a value range from 0 (never/hardly ever or to a very small extent) to 100 (always or to a large extent), taking reverse-scored items into account as well. This transformation of items from 1-5 to 0-100 is done in most publications using the COPSOQ to allow comparability of results when different COPSOQ versions are used [22]. According to the original author of the COPSOQ [22], scale scores can be calculated if at least half the items are not missing (e.g. for a scale with 5 items, the mean is calculated if at least 3 of the 5 items are completed). No imputation procedure for missing values was performed.
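A minimal sketch of this scoring rule is given below (hypothetical function and variable names; the linear transform and the missing-data rule follow the text above):

```python
def scale_score(items, reverse_flags):
    """COPSOQ-style 0-100 scale score; `items` are 1-5 or None if missing."""
    recoded = [(6 - v) if rev else v                 # reverse-score flagged items
               for v, rev in zip(items, reverse_flags) if v is not None]
    if len(recoded) * 2 < len(items):                # require >= half the items
        return None                                  # no imputation of missings
    mean_1_to_5 = sum(recoded) / len(recoded)
    return (mean_1_to_5 - 1) * 25                    # map 1..5 linearly onto 0..100

# One missing item out of five still yields a score; item 3 is reverse-scored.
print(scale_score([2, None, 4, 1, 3], [False, False, True, False, False]))  # 25.0
```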
Translation and cultural adaption
Items from selected German-COPSOQ scales were translated and cross-culturally adapted to French and Italian in accordance with established guidelines for scientific translation processes "SPOR Principles of Good Practice" [35]. Figure 1 presents the stages of the translation process. In stage one, all items were independently forward translated by a native French/Italian-speaking health professional and a native French/Italian-speaking professional translator. After translation, the two versions were compared, discussed (peer group stage 1: two first authors and translators native French/Italian-speaking), and a common final version 1 was created. In stage two, the translated items were independently back translated into German by a French/Italian-speaking health professional and a translator, who were native German-speakers. Afterwards, language discrepancies were resolved by discussion (peer group stage 2: two first authors and translators native German-speaking), and a final version 2 was created. If questions arose regarding the comprehensibility of individual items, the original author of the German COPSOQ scale was involved. In a last step, the translated items were tested using 'cognitive debriefing' [35], to determine acceptability, understandability and clarity of translation. For this purpose, interviews with 5 native French-speaking and 5 native Italian-speaking health professionals were conducted and all items tested. After those interviews, a few adjustments were made in the translation-team (two first authors, native French/ Italian-speaking, and German-speaking translators). Afterwards a final version was created and proofread by a translation agency (Final Version).
Data collection
For data collection, all health professionals (nurses, midwives, medical-technical, medical-therapeutic professionals, physicians) in the participating organisations were invited to participate. The questionnaire was available in an online and a paper version (including a direct reply envelope) in German, French and Italian. Participation was voluntary for organisations as well as for health professionals, and they could choose the version of the questionnaire they preferred (online or paper).

Table 1 Selected COPSOQ domains and scales (number of items per scale in brackets)

Demands at work: Sensorial demands (precision, vision, attention) [5]; Work environment (being exposed to noise, cold, chemicals) [5]; Demands for hiding emotions (hiding feelings) [2]

Work organisation & content: Opportunities for development (opportunity to develop skills) [3]; Influence at work (degree of influence concerning work) [3]; Scope for breaks/holidays (decide when to have a break / holidays) [2]; Meaning of work (perceiving work as meaningful / important) [2]; Commitment to the workplace/organisation (being proud to belong to this organisation) [2]

Social relations & leadership: Predictability (being informed in advance about decisions, changes) [2]; Rewards (work is recognised and appreciated by one's superior) [1]; Role clarity (clear work tasks, objectives, area of responsibility) [3]; Role conflicts (contradicting role requirements) [3]; Quality of leadership (superior is good at work planning, solving conflicts) [4]; Social support at work (support received from colleagues/superior) [4]; Feedback (feedback received from superior) [2]; Social relations at work (possibility to talk to colleagues during work) [1]; Social community (atmosphere, co-operation) [2]; Unfair behaviour / mobbing (feeling unjustly criticized by colleagues/superior) [1]

Person-work interface: Job insecurity (worry about becoming unemployed) [4]; Insecurity of the working environment (changes in shift schedules) [2]

Home-work interface: Work-private life conflict (conflict between work and private life) [5]; Demarcation (being available in leisure time for work issues) [2]

Stress symptoms & long-term consequences: Behavioural stress symptoms (not having time to relax or enjoy life) [4]; Cognitive stress symptoms (problems concentrating, taking decisions) [4]; Job satisfaction (being pleased with work prospects, conditions) [6]; Intention to leave the organisation (thoughts on job changes) [1]; Intention to leave the profession (thoughts on career change) [1]; Burnout-symptoms (emotionally, physically exhausted)
Psychometric and statistical analysis
Participants' characteristics and validation statistics for all scales were stratified by language group. Since not all scales contain a sufficient number of items to calculate all psychometric coefficients (e.g. single-item scales), reliability was calculated only for scales with at least two items [36] and construct validity only for scales with at least three items [37]. Reliability was investigated using Cronbach's alpha and intraclass correlation coefficients. Although Cronbach's alpha can be calculated for two items, it may underestimate true reliability in that case [36]. Floor and ceiling effects were calculated as the proportion of respondents choosing the lowest and highest response options for all items within a scale, adhering to the procedure of comparable studies [23,38]. Furthermore, we calculated intraclass correlations, ICC(3,1), in accordance with the recommendation of Shrout and Fleiss [39] that ICC(3,1) be used to measure the consistency of multiple ratings (two-way mixed effects analysis of variance (ANOVA); each subject is measured by a fixed set of items), using the psych package in R [40]. For Cronbach's alpha, values > 0.7 indicate scale suitability, whereby a higher number of items normally results in a higher coefficient [41]. ICC values of less than 0.40, between 0.40 and 0.59, between 0.60 and 0.74, and greater than 0.75 are indicative of poor, fair, good, and excellent reliability, respectively [42].
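For reference, Cronbach's alpha for a k-item scale follows directly from the item and total-score variances. A minimal MATLAB sketch of the standard formula (the authors used the psych package in R; the function name here is hypothetical):

function a = cronbach_alpha(items)
    % items: respondents x k matrix of complete item responses
    k = size(items, 2);
    item_var = var(items);               % variance of each item (column)
    total_var = var(sum(items, 2));      % variance of the scale sum
    a = (k / (k - 1)) * (1 - sum(item_var) / total_var);
end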
Construct validity and associations between latent constructs were estimated using confirmatory factor analysis (CFA) and structural equation modelling with latent variable analysis in R [43,44]. CFA tests a given theoretical model and quantifies its quality [45]. Construct validity was estimated (a) on the scale level, using single items as indicators, and (b) on the domain level, using the mean values of the scales as indicators. For the latter, we used structural equation modelling to assess the strength of association between the different psychological domains. Standardized loadings/coefficients (β), corresponding standard errors (S.E.), and R-squared values (the amount of scale variance explained by the latent variable) are shown. Factor loadings above 0.4 were considered satisfactory [46]. Various measures were used to estimate model fit. A root mean square error of approximation (RMSEA) below 0.05 was considered good (below 0.08 acceptable); a standardized root mean square residual (SRMR) below 0.08 and a comparative fit index (CFI) above 0.95 were considered to indicate satisfactory fit [43,47,48]. In multilingual studies, comparability of the data from different language versions is crucial. Hence, the assumption that the instrument measures the same psychological construct across language groups was tested. To compare CFA models (on the scale level) across language groups, likelihood ratio tests were conducted [49]. Analyses were performed using R (version 3.5.1) [50].
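For orientation, the two chi-square-based fit indices cited here are conventionally defined (standard textbook definitions, not formulas given by the authors; software differs on using N versus N-1) as

$$\mathrm{RMSEA}=\sqrt{\frac{\max(\chi^2_M-df_M,\,0)}{df_M\,(N-1)}},\qquad \mathrm{CFI}=1-\frac{\max(\chi^2_M-df_M,\,0)}{\max(\chi^2_B-df_B,\;\chi^2_M-df_M,\;0)},$$

where M denotes the fitted model, B the baseline (independence) model, and N the sample size.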
Study sample description
A total of 12,754 health professionals completed the questionnaire, with a mean age of 41.48 years (SD 12.47). A total of 10,738 (84.2%) were German-, 1788 (14.0%) French-, and 228 (1.8%) Italian-speaking. Most of the respondents were female (81%), nurses (58%), and worked in the acute care setting (42.8%). Participants' characteristics are shown in Supplement B. The percentage of missing values on the scale level was between 7 and 13%. Most of the scales had low floor and ceiling effects, except for the scales "unfair behaviour", "intention to leave the profession" and "intention to leave the organisation". Table 2 shows the results for the reliability of the scales stratified by language group. Scales that include at least two items were considered for calculation. In the German version, 20 of the 24 scales with at least two items exceeded the conventional threshold of 0.7 for Cronbach's alpha, indicating sufficient internal consistency, whereas in the French version 19 and in the Italian version 17 scales reached this threshold. The scales "Quantitative demands", "Opportunities for development", "Scope for breaks and holidays", "Feedback", and "Demarcation" failed to show desirable levels of Cronbach's alpha in some or all language groups, ranging from 0.39 to 0.68. The vast majority of scales showed fair (0.40-0.59) or good (0.60-0.74) scale consistency as measured by the ICC. Figure 2 illustrates the mean values (between 0 and 100) on the domain level (demands at work, work organisation & content, social relations & leadership, home-work interface and stress symptoms) as well as for the scales on job satisfaction, intention to leave (the organisation / the profession) and burnout symptoms. The figure demonstrates that the mean values for the German, French and Italian versions show similar relative tendencies (low or high) for each dimension/scale.
Construct validity on scale level
In Table 3, the results of the CFA for each scale by language, using single items as indicators, are presented. Standardized loadings and R-squared values were predominantly satisfactory, with factor loadings higher than 0.40 in all language groups. In Table 4, the corresponding model-fit estimates for each scale and language version are presented. The majority of the scales indicated a good to satisfactory fit, with an RMSEA below 0.1, SRMR below 0.08 and CFI above 0.95. The scale Social support at work did not meet any of the criteria in any language version.
Factor invariance
Measurement invariance testing assesses the psychometric equivalence of a construct across groups. Table 5 presents the findings of the invariance test. The test for factor invariance indicates variance across the language versions, with p-values of < 0.05. For 10 out of 15 scales, a significant difference regarding the psychological construct across the language versions is therefore to be expected. All dimensions included scales that showed variance across language versions. In particular, the dimensions Work organisation & content as well as Home-work interface consisted solely of scales with variance across the languages. Model fit was acceptable for RMSEA (FR 0.08, IT 0.08) and SRMR (FR 0.07, IT 0.07), respectively. The models did not show a satisfactory fit with regard to CFI (FR 0.82, IT 0.82) in either language.
Discussion
Valid versions of the COPSOQ are already available in German [25,26], French [27] and Italian [28]. However, for the first time, a questionnaire for measuring stressors and consequences of work-related stress among health professionals is available for multilingual studies in the three languages German, French and Italian, and it is, to some extent, comparable across those languages.
Most of the translated and tested scales showed acceptable to good internal consistency. The CFA largely supports the underlying theoretical model of Nübling, Stößel [25], which has already been tested for concurrent validity [51]. It also confirms the strong relationships between the dimensions, as well as the low values for the scales social relations and sensorial demands; we therefore underline the proposition to remove or revise those scales [21].
Moreover, the results are comparable to a recently published study in which the latest version of the underlying questionnaire (COPSOQ III) was validated, without an Italian version, for international comparability [29]. However, there are differences regarding the reliability of some scales. In Burr, Berthelsen [29], the scales Predictability (0.62), Meaning of work (0.62) and Job insecurity (0.66) fell below the threshold value of 0.7, whereas in this study the scales Quantitative demands (0.56-0.62), Opportunities for development (0.65-0.68), Scope for breaks and holidays (0.39-0.43), Feedback (0.62-0.65) and Demarcation (0.39-0.40) did not achieve the threshold. However, the scales Feedback and Demarcation are no longer included in the COPSOQ III, which makes a comparison of those two scales with the study of Burr, Berthelsen [29] impossible and highlights the diversity of the included scales within the national versions. Hence, the scales Feedback and Demarcation can be excluded in accordance with the latest COPSOQ III version. Furthermore, the COPSOQ III includes the dimension Control over working time, which consists of 4 items with a Cronbach's alpha of 0.69 [28]. Two of these items match the items of the scale Scope for breaks and holidays, which was found to have low reliability in this study as well as in the study evaluating the German COPSOQ version [52]. The authors of the German COPSOQ version have acknowledged this issue and stated that they would monitor it in further studies [52]. In the meantime, pending further development of the COPSOQ by the responsible COPSOQ network, researchers must decide in each case when using the current version whether international comparability or reliability is to be prioritised. When opting for international comparability, it should be noted that the comparison would then rest on scales of limited reliability.
Furthermore, the data used in the study of Burr, Berthelsen [29] are company-specific and were collected across a multitude of branches, whereas in this study the data come from health professionals working in the healthcare system and are thus expected to differ to a large extent with regard to working conditions and occupational culture.
Independently of the language version, short scales were affected by lower reliabilities. This finding may reflect the discussed dependency of Cronbach's alpha on the number of items [53]. In addition, some findings call for an evaluation of the scales as to whether they should be enriched with additional items or excluded from the questionnaire. Cultural and regional differences may have led to the different reliability per scale across language versions, and therefore to a significant factor variance in 10 out of 15 scales. Given its poor model fit, one should also not assign too much significance to the results of the scale social support at work. In Switzerland, researchers have to deal with a heterogeneous population when surveying nationally, owing to the different language regions, despite the country's small size. It is known that linguistic differences often go hand in hand with cultural differences and should therefore be considered when developing a measurement across languages and/or cultures [54]. Several questionnaires have been found to struggle with invariance across language versions [30]. One reason for the statistical differences across the language versions could be that the French and Italian language regions in Switzerland have higher numbers of foreign health professionals, such as cross-border workers [55], whose evaluation criteria might differ from those of domestic personnel, for example in terms of job insecurity (e.g. migration policy). An analysis of the missing values at the item level could indicate cultural issues, which should be addressed in order to enhance comparability. Moreover, the enormous change in healthcare systems brought about by digitization [56] implies the emergence of new influencing factors from the interaction of health professionals with these technologies.
Strengths & limitations
Besides a structured and carefully implemented translation process, one strength of the study is the large sample size across all health professions, settings and language regions, which allows a generalization of the findings. This study delivers important information for further research, enabling multilingual measurement of stressors and consequences of stress at work among health professionals in Switzerland. It provides an extensive amount of information on the scales, which is expected to be helpful in future research aimed at advancing scale development and choosing appropriate scales. For the first time, language versions of the COPSOQ were comprehensively statistically analysed for their consistent measurement of the underlying construct.
Although the strengths are promising, they must be considered in the context of the limitations, since two-thirds of the scales differ significantly in the measured psychological construct across the language versions. In addition, the results presented in this study are limited to the healthcare sector. Therefore, further psychometric testing of the new multilingual COPSOQ versions in Italian and French should be carried out in other work sectors to further confirm our results. Hence, interpretation of the results across language regions must be made in the context of these differences. The findings could also have originated in the floor or ceiling effects that were identified, which indicate limited discrimination properties of some scales. Moreover, the study included data sets from two measurement periods, which may have led to duplicates; owing to possible misstatements, some duplicates may have remained undetected. Future research should make it possible to assign two measurement points to one individual, which would enable an analysis of test-retest reliability. This type of analysis has been found to be more appropriate for assessing the reliability of psychosocial work environment scales [57]. Finally, several scales were measured with single items or two items; it is thus possible that the construct to be measured was not sufficiently covered by these items.
Conclusions
This article presents the psychometric properties of a trilingual questionnaire that measures stressors and consequences of stress at work among health professionals. The COPSOQ is known as a generic instrument across branches. An adaptation to working conditions in the healthcare sector could optimize the psychometric properties of the instrument. Hence, future investigation to optimize internal and construct validity of some scales and dimensions is needed to improve the questionnaire. The identified variances across language versions imply re-evaluating the questionnaire to determine whether it is biased by cultural factors, which should be identified in advance.
"year": 2022,
"sha1": "1a9ba59e7b72c42db2bf35cae226166b288809ac",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "1a9ba59e7b72c42db2bf35cae226166b288809ac",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Parallel Simulation of Population Balance Model-Based Particulate Processes Using Multicore CPUs and GPUs
Computer-aided modeling and simulation are a crucial step in developing, integrating, and optimizing unit operations and, subsequently, entire processes in the chemical/pharmaceutical industry. This study details two methods of reducing the computational time needed to solve complex process models, namely population balance models, which, depending on the source terms, can be very computationally intensive. Population balance models are also widely used to describe the time evolutions and distributions of many particulate processes, and their efficient and quick simulation would be very beneficial. The first method illustrates utilization of MATLAB's Parallel Computing Toolbox (PCT) and the second method makes use of another toolbox, JACKET, to speed up computations on the CPU and GPU, respectively. Results indicate a significant reduction in computational time for the same accuracy using multicore CPUs. Many-core platforms such as GPUs are also promising for computational time reduction for larger problems, despite the limitations of lower clock speed and device memory. This lends credence to the use of high-fidelity models (in place of reduced-order models) for control and optimization of particulate processes.
Introduction
Modeling and simulation are powerful tools universally employed in designing, analyzing, and controlling particulate processes. These particulate processes, such as crystallization, granulation, milling, and polymerization, are some of the major unit operations carried out in the manufacture of bulk commercial products like pharmaceuticals, detergents, fertilizers, and polymers. Research work focusing on the modeling and simulation of these particulate processes, specifically those involving granular materials, has been growing at a steady pace over the last few decades [1-3]. This is a significant achievement in itself, considering the fact that these systems are inherently dynamic in behavior and are driven by complex microscale phenomena [4]. Although the underlying mechanisms of such processes are yet to be thoroughly grasped, granulation, a particle design process, is one area where substantial progress has been made over the years [5]. The approaches for modeling such systems are as numerous as they are varied: Discrete Element Modeling (DEM) [6], Population Balance Modeling (PBM) [3,7-13], hybrid models combining PBM with DEM [14], PBM with Volume of Fluid (VoF) methods [15], and PBM with Computational Fluid Dynamics (CFD) [16], to name a few. Of the aforementioned, the most widely used are the DEM and PBM methods. Population Balance (PB) models are more suited to simulating a very large number of particles over lengthy time periods due to the semimechanistic approach (compared to more mechanistic approaches such as DEM and VoF) they utilize to describe the dynamics of granulation processes [17]. Because of these advantages, PBM offers a very efficient manner of developing a comprehensive model of a granulation process, which can be simulated within a realistic time frame to be further used in control and optimization [18], since it provides a convenient mathematical framework whereby the detail of the model is user specific and depends on the kernel formulations specified by the user [7].
However, as with all model-based simulations, the utility of the PBM technique depends on the computational expenses it incurs in terms of run time and hardware resources. In addition to increased numerical accuracy, the need for speed has always been a demand of the scientific community, to handle larger and more complex problems, as well as of the industrial community, who use process models to study their processes in silico. This is demonstrable by the fact that even in a high-level language environment like MATLAB, the computational load increases almost polynomially on increasing the dimensionality of a system, leading to longer run times. MATLAB is one of the preferred languages of development for scientific computing due to the ease with which algorithms can be developed and prototyped, which in turn is enabled by its array-based semantics, powerful visualization capabilities, and subject-specific toolboxes, all encased in an integrated framework [19]. While MATLAB excels on the "ease of programmability" and "portability" fronts, it has been found to be lacking in the "performance" department [20]. This is partly due to the abnormally high memory requirements of modern scientific applications, and partly due to the fact that MATLAB itself consumes a sizable portion of the system memory. In addition, a MATLAB code for a distributed process such as granulation typically has several nested for loops and multiple operations over large data sets that are executed "serially" or "sequentially" by default, drastically bringing down the rate of simulation. Further discussion of observed execution bottlenecks in a PBM code can be found in the next section. Although researchers are continuously upgrading their hardware to include the latest CPUs (Central Processing Units) and higher amounts of RAM (Random Access Memory) in an attempt to improve calculation efficiencies, most of them do not develop codes that fully leverage the parallel processing capabilities of the current generation of multicore/multiprocessor CPUs. Furthermore, due to limitations on the power density that can be supplied, the attainable peak CPU clock frequency is restricted (6 GHz for an Intel Core i5 [21]).
By parallelizing an existing code, the programmer is able to circumvent the restriction of running a code sequentially on one core and, in addition, exploit other massively parallel processors like the GPU (Graphics Processing Unit) [22]. Since 2006, MATLAB has come standard with a toolbox for this purpose, called the Parallel Computing Toolbox (PCT), although for GPU computation the toolbox from Accelereyes Inc. named JACKET is chosen here, as it clearly outperforms MATLAB's built-in capabilities [23]. There has been some recent work on the parallelization of PBM simulations prior to this study. Gunawan et al. formulated an efficient way of parallelizing High-Resolution Finite-Volume-(HRFV-)solved PBEs by assigning the more computationally intensive operations, on particles in the first half of the size range, and the operations of decreasing load intensity, on the other half of the size range, to processors of appropriate rank [24]. Their strategy enabled efficient load distribution and resulted in a near-linear speedup. More recently, Ganesan and Tobiska [25] built upon this work by developing a finite element approach of splitting the PBE dimensionally into spatial and internal coordinates, permitting the problem to be parallelized easily without the need for load balancing. Both papers outlined innovative techniques for the parallelization of a PBM across multiple processing units. This paper intends to provide a means of significantly mitigating the handicaps of PB simulations by demonstrating how the parallel processing capabilities of multicore CPUs, as well as GPUs, can be harnessed within a high-level language environment like MATLAB by using both built-in functionalities and third-party toolboxes. The focus here will not be on developing specialized parallel codes from the ground up that will eventually be application- and/or hardware-limited, but rather on providing the modeling and research community at large with the aforesaid tools to parallelize their codes with minimum effort. GPU computing has also been applied to mixing processes described by DEM, as seen in the work of Radeke et al. [26], thus confirming its usefulness in mitigating the computational times of complex process models.
Background
2.1. Population Balance Models

Population balance models have traditionally been one-dimensional, described by a single intrinsic property such as particle size [27]. A general form of the population balance equation, highlighting the temporal variation of the distribution of one or more intrinsic properties, is given as follows [28]:

\frac{\partial F(\mathbf{x},t)}{\partial t} + \frac{\partial}{\partial \mathbf{x}}\left(\frac{d\mathbf{x}}{dt} F(\mathbf{x},t)\right) = \Re_{\text{formation}} - \Re_{\text{depletion}}, (1)

where F is the particle number distribution and x is the vector of internal coordinates, which are of interest for studying the process. \Re_{\text{formation}} and \Re_{\text{depletion}} represent the net formation and depletion rates of particles occurring from all discrete granulation mechanisms such as aggregation, nucleation, and breakage. However, dependence on particle size only was found to be inadequate in characterizing the variability in granulation behavior, and thereafter other factors like granule porosity were also found to exert a dominating effect on the process [29,30]. Consequently, in addition to granule size, binder content and granule porosity are typically selected as decisive factors in optimizing and controlling the process, as evidenced in the current research on granulation [11], which involves model development within a three-dimensional population balance framework. Verkoeijen et al. [31] had previously described an efficient way of implementing such a framework by expressing the intrinsic properties of granules, that is, the volume of solids s, volume of liquid l, and volume of gas g, as a vector in volume space with three coordinates, that is, s, l, and g. Particle internal coordinates are now represented as

\mathbf{x} = (s, l, g), (2)

where each of these three coordinates (s, l, g) comprises unique distributions of phase volumes (solid, liquid, or gas) of all particles belonging to a predefined volume class, and can therefore be represented as three separate discretized domains or "grids" containing the distributions. These grids are composed of "bins" that denote the volume classes of the particles present in the population. The first bin in the solid volume grid represents the particles that have the least solid content. This way, the individual solid volumes of the particles can be represented by allocating them to the corresponding bins. The same principles apply to the other two phase fraction grids, liquid (l) and gas (g). From now on, the term "grid size" will be used to refer to the total number of bins in a grid. This approach has two important benefits: (a) it enables decoupling of individual mesoscopic processes like aggregation, consolidation, and layering; (b) it improves the numerical solution of the aggregation model due to the mutually exclusive nature of the internal coordinates [7]. This three-dimensional model can now describe changes in the distribution of particle volume with respect to time [3], as follows:

\frac{\partial F(s,l,g,t)}{\partial t} + \frac{\partial}{\partial s}\left(F \frac{ds}{dt}\right) + \frac{\partial}{\partial l}\left(F \frac{dl}{dt}\right) + \frac{\partial}{\partial g}\left(F \frac{dg}{dt}\right) = \Re_{\text{aggregation}} + \Re_{\text{breakage}} + \Re_{\text{nucleation}}, (3)

where F(s,l,g,t) represents the population density function such that F(s,l,g,t)\,ds\,dl\,dg is the moles of granules with solid volume between s and s + ds, liquid volume between l and l + dl, and gas volume between g and g + dg. The partial derivative term with respect to s accounts for the layering of fines onto the granule surfaces; the term with respect to l accounts for the drying of the binder and the rewetting of granules; the term with respect to g accounts for consolidation, which, due to compaction of the granules, results in a continuous decrease in pore volume and an increase in pore saturation. On the right-hand side, the \Re_{\text{breakage}} term comprises a breakage kernel and a breakage function; \Re_{\text{nucleation}} accounts for the rate of nucleation of new particles. \Re_{\text{nucleation}} and \Re_{\text{breakage}} are not utilized
in this study and are therefore not described in more detail; the authors would like to direct readers to Poon et al. [8] and Ramachandran et al. [3] for their descriptions. \Re_{\text{aggregation}}, given by (4)-(6), takes into account the formation/depletion of granules due to aggregation, for which the terms have been defined in the literature [32] as

\Re_{\text{aggregation}}(s,l,g,t) = \Re_{\text{formation}}(s,l,g,t) - \Re_{\text{depletion}}(s,l,g,t), (4)

\Re_{\text{formation}}(s,l,g,t) = \frac{1}{2} \int_{s_{\text{nuc}}}^{s - s_{\text{nuc}}} \int_{0}^{l} \int_{0}^{g} \beta(s', s-s', l', l-l', g', g-g')\, F(s',l',g',t)\, F(s-s', l-l', g-g', t)\, dg'\, dl'\, ds', (5)

\Re_{\text{depletion}}(s,l,g,t) = F(s,l,g,t) \int_{s_{\text{nuc}}}^{\infty} \int_{0}^{\infty} \int_{0}^{\infty} \beta(s, s', l, l', g, g')\, F(s',l',g',t)\, dg'\, dl'\, ds', (6)

where s_{\text{nuc}} is the solid volume of nuclei and \beta(s', s-s', l', l-l', g', g-g') is the size-dependent aggregation kernel that signifies the rate constant for aggregation of two granules of internal coordinates (s', l', g') and (s-s', l-l', g-g').
2.2. Parallel Computing for the CPU and GPU
To speed up calculations, a single problem or "task" is split into multiple subtasks, which are executed simultaneously on multiple processors. This method of program execution is termed "parallel computing" or "parallel processing", as opposed to "serial" or "sequential" execution, and developing scripts that leverage this style of execution is called "parallel programming" [33].
In the current generation of multicore processors, there are multiple independent processing units called "cores" which carry out a set of instructions. A single processor can consist of many such cores, with each core capable of executing an instruction set. Although a "core" refers to the physical component providing parallelism, in general it can also mean a thread (a piece of software), a processor, or even a machine (on a network) executing a stream of instructions [34]. For instance, the Intel Core i7-2600K processor has four physical cores, each with two threads, raising the number of (logical) cores to eight [35]. The fundamental expectation behind parallel programming is that n cores/processors should provide a peak speedup of n times over just one core/processor. However, such gains in simulation time are at best theoretical, simply because the time needed for data transfer and synchronization to, from, and between cores negates any benefit in speedup [36]. Depending on the hardware architecture of the parallel computer and, implicitly, the level of communication required, several parallel programming models have been established, an elucidation of which can be found in the literature [37,38]. The most widely used approaches are task parallelism, data parallelism, and the distributed memory/message passing model. An elaborate explanation of each model is beyond the scope of this paper, so simple definitions and possible modes of implementation are provided instead [39,40].
(i) Task parallelism is achieved by assigning each task (or subtask) to a unique core or thread (threads model) and finally splitting or combining the data stream at the end. Implementation: POSIX threads, OpenMP.
(ii) Data parallelism involves dividing a large amount of data into sections across cores, each of which is then operated upon by the same task within each core. Implementation: Fortran 90 and 95.
(iii) In message passing, each subtask has its own local memory on the core and exchanges data between cores through messages. The programmer must explicitly determine the level of parallelism. Implementation: Message Passing Interface (MPI).
CPU Architecture and Parallel Programming

Computing architecture can be classified, based on Flynn's scheme [41], as single instruction, single data (SISD); multiple instruction, single data (MISD); single instruction, multiple data (SIMD); or multiple instruction, multiple data (MIMD) systems. The current generation of Intel processors like the Core i7 falls into the MIMD category but utilizes SISD (single instruction, single data) processing units at the lowest level [42]. There are three common approaches to implementing parallel execution on these systems: SIMD (SSE) instructions operating on multiple data sets in parallel with a single instruction stream; simultaneous multithreading (SMT), popularly called "hyperthreading"; or, as is now generally preferred, through custom libraries or "toolboxes" like MATLAB's Parallel Computing Toolbox (PCT). PCT enables the developer to take advantage of multicore processors, GPUs, and computer clusters by making available high-level constructs such as parallel for loops (parfor), specialized arrays (distributed and codistributed arrays), and preparallelized numerical algorithms. This allows the programmer to focus on building the algorithm and not on micromanaging parallel communication between cores, which is taken care of by MATLAB behind the scenes. These parallel programming constructs function in the same way independent of the underlying hardware component being used, whether a multicore desktop via PCT, or a network of computers (computer cluster) via PCT with the MATLAB Distributed Computing Server (MDCS) package [43].
In MATLAB, each task is handled by an independent instance of MATLAB called a worker or lab that runs as a separate system process. Communication to, from, and between these workers is handled by the client instance of MATLAB. These labs are executed on cores, but their number need not correspond to the number of cores present on a device. In addition to the implicit low-level multithreading that is built into MATLAB, there are explicit methods of parallelism available to the developer as well [44]. The parfor keyword is perhaps the easiest way to achieve parallelism in an existing code with little modification. Just replacing the (preferably) outermost for with a parfor in a for-loop results in substantial speedup, sometimes proportional to the number of cores, depending on the problem. The job of distributing iterations and collecting end results is handled by MATLAB without any requirement for explicit commands from the programmer. But this gain is soon lost when the number of labs exceeds the number of cores, since communication overhead is always higher between threads than between cores [45]. Another explicit approach to parallelizing a code is the SPMD keyword. SPMD (Single Program Multiple Data) is a high-level construct that can be built upon a combination of the aforementioned task, data, and message passing types of parallelism. Each MATLAB worker is assigned the same program, which operates on different arrays or different sections of a very large data array, hence the term "Single Program Multiple Data". Furthermore, if data exchange and synchronization between the workers are desired, functions based on the Message Passing Interface (MPI) library [46], like labSend() and labReceive(), are available in conjunction with the SPMD keyword. Other forms of explicit parallelism include distributed and codistributed arrays, which will not be considered in this paper. For any of the constructs described above to be implemented, a matlabpool open command must be issued beforehand in order for the client session to establish a connection with the available workers. For further information on PCT constructs and their implementation, we refer to the appropriate section of the MATLAB manual [43].
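As a minimal illustration of the parfor construct (a sketch under assumed variable names, not the authors' production code; shown here for a one-dimensional aggregation balance):

n = 32;
F = rand(1, n);                  % hypothetical number density per size bin
beta = ones(n, n);               % hypothetical constant aggregation kernel
matlabpool open 4                % one worker per core (newer releases: parpool(4))
dFdt = zeros(1, n);
parfor i = 1:n                   % iterations are distributed across workers
    acc = 0;
    for j = 1:i-1                % inner loop runs serially on each worker
        acc = acc + beta(j, i-j) * F(j) * F(i-j);
    end
    % formation (1/2 of the pairwise sum) minus depletion for bin i
    dFdt(i) = 0.5 * acc - F(i) * sum(beta(i, :) .* F);
end
matlabpool close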
GPU Architecture and Parallel Programming

The GPU is an excellent example of the SIMD design paradigm. A GPU is organized as an array of many cores or, as NVIDIA describes them, "streaming multiprocessors" (SMs). Each SM has a certain number of ALU (arithmetic and logic) units called streaming processors (SPs), which share a common control logic and instruction cache. While the CPU design paradigm boasts excellent performance in sequential operations, the presence of a complex control logic and large cache memory limits the maximum speed achievable in gigaflops [47]. The GPU control logic systems, on the other hand, are not as bulky, with the GPUs themselves fabricated as relatively wide SIMD vectors, increasing their parallel processing capacity. Owing to their architecture, GPUs are specialized for data-parallel calculations, unlike MIMD-based platforms like the Core i7, which are suitable for task-parallel, data-parallel, and message passing applications. The GPU card used in this investigation was a GeForce GTX 280 with 240 SPs/ALUs, each of nearly 1.3 GHz frequency. Each SM supports 1024 threads, bringing the total to 30,720 threads within a single GPU [48].
To program these massively parallel architectures, NVIDIA developed the Compute Unified Device Architecture (CUDA), which was released in 2006, permitting high-level programmability within the C language [49]. CUDA was built upon the three key abstractions of a hierarchy of thread groups, shared memories, and barrier synchronization. CUDA, in conjunction with an Application Program Interface (API), greatly simplifies the process of GPU programming by transforming CPU code written using C, CUDA FORTRAN, OpenCL, or DirectCompute into GPU primitives. There are a number of custom libraries available to a GPU programmer in addition to MATLAB's own built-in support via PCT, like GPUmat by the GP-you group and JACKET by the Accelereyes corporation. For this investigation, we decided to go with JACKET because of its extensive collection of GPU-ready functions and better performance when compared to the other products [50,51]. JACKET is a third-party MATLAB toolbox acting as a wrapper around CUDA, transforming MATLAB functions into GPU functions at the basic level by converting CPU data structures into GPU types. This retains MATLAB's interpretive programming style while providing real-time, transparent access to the CUDA compiler [52]. Of all the available constructs, the gfor construct (similar to the PCT's parfor) was applied, as it offered the easiest and most efficient way of parallelizing for loops to run on the GPU. It executes for loops in parallel by distributing the values of all loop iterations across GPU cores and subsequently executing calculations on each core in a single pass, resulting in considerable speedup. It must be kept in mind that for both the CPU and GPU, ideal parallelism is attained only if a task can be divided into a number of mutually exclusive subtasks, which can then be executed independently of each other on separate cores. This kind of problem is termed "embarrassingly parallel" [36]. In reality, most problems lie somewhere between this extreme and the "annoyingly sequential" extreme.
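As a sketch of the gfor idiom (based on JACKET's documented gfor/gend syntax and g-typed arrays; array names are hypothetical, and the snippet assumes a machine with JACKET installed):

n = 512;
A = gdouble(rand(n));            % copy input data from host to GPU memory
B = gdouble(rand(n));
C = gzeros(n, n);                % allocate the result on the device
gfor k = 1:n                     % all iterations are launched in one pass
    C(:, k) = exp(-A(:, k)) .* B(:, k);  % elementwise work per iteration
gend
C_host = double(C);              % copy the result back to host memory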
3. Model Parallelization Strategy
Parallelization of the code with respect to both the CPU and GPU involved the SPMD approach (outlined in the previous section), combining both data- and task-parallel styles of programming. Though the bulk of the PBM code is "annoyingly sequential" in nature, it is less computationally intensive than the aggregation kernel, which is where the potential for parallelism exists. The aggregation kernel (assuming the three-dimensional form required by particulate processes such as granulation) typically comprises 6 nested for loops, with two sets of three loops each, to account for interactions between the s, l, and g fractions of two colliding particles in a bin. Since each MATLAB worker is designed to operate independently of the others, with all communications handled by the client instance, the best approach is to decompose the index space adequately by a process known as loop slicing [53]. The first step in the process is to identify loop axes (ranges of loop index values) capable of functioning as indices for parallelism, followed by assigning these loop axes to the available MATLAB workers (numlabs), preferably equal in number to the cores on the parallel device, by means of labindex. numlabs returns the number of workers open in a given matlabpool session, while labindex returns the currently executing worker's index. Loop orders may be switched for efficient memory access patterns, and axes may be further sliced if the device memory is found to be insufficient for a given loop size.
For the purpose of this study, a 3D population balance model based on (3) was developed with the following simplifications: (i) aggregation is the only source term, eliminating the breakage and nucleation terms (s_{\text{nuc}} = 0); (ii) the "growth terms", drying/rewetting, layering, and consolidation, are neglected; (iii) an empirical aggregation kernel proposed by Madec et al. [54] is used, yielding the following PBE:

\frac{\partial F(s,l,g,t)}{\partial t} = \Re_{\text{aggregation}}(s,l,g,t). (7)

Please see Appendix A for details of the aggregation kernel used and Appendix D for the numerical solution of the PBM, based on a hierarchical two-tiered algorithm proposed by Immanuel and Doyle III [55]. This was done to highlight the improvement in simulation speed achieved by parallelizing only the formation/depletion code blocks of a PBM script, which tend to be the most computationally intensive. It was observed that removing the formation and depletion terms associated with aggregation from a PBM code (one that considered all mechanisms) resulted in an only 20% faster simulation time, proving that aggregation is indeed the primary computational bottleneck. This is due to the presence of multiple nested for loops, prominently those that account for the integral equations (5) and (6), running sequentially on a single CPU core. In other words, broadening the range of each loop index causes individual iterations to run slower. Additionally, there are numerous such for-loops and sequential sections of code performing calculations independently of each other that can be parallelized. Increasing the number of bins in each of the grid dimensions s, l, and g, while raising the dimensionality of the system, also slows down the code execution considerably. This is also termed the curse of dimensionality phenomenon. Although it is preferred to use a higher grid size for an accurate representation of the system, the aforementioned shortcomings curb the degree of flexibility available to a researcher and/or industrial practitioner. Therefore, there is much potential for speedup in parallelizing these loops to run simultaneously on all cores/processors present on the device.
Following the procedure just described, execution of the aggregation kernel can be parallelized by "slicing" the outermost loop across the workers; a sketch of this slicing is given after this paragraph. JACKET's gfor employs a similar algorithm to distribute sections of a for-loop on a GPU, so the programmer does not have to explicitly manage communication to, from, and between workers. The approach just described, loop slicing, allows greater control of data distribution across workers, reducing the demand for system resources over time in a "smoothed-out" fashion [56]. Besides the data-parallel approach, another, more straightforward divide-and-conquer method involves task parallelism. Implementations of task parallelism are generally done through the fork-join model, described in Refianti et al. [57], which relies on multiple threads executing blocks of sequential code to achieve parallelism. Here, a multiprogramming style was adopted in order to easily achieve coarse-grained (meaning fewer, but larger, tasks) parallelism, with consecutive but independent sections of the code being mapped onto different threads and task scheduling done at the time of compilation, that is, statically. A major shortcoming of this approach is the static nature of the task distribution, which leaves the granularity of the tasks unbounded. A task with unbounded or variable size means inefficient CPU usage, since every task runs for a different period of time depending on the size of the problem and consequently exits its worker at a different time [58].
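A minimal sketch of the loop slicing just described (hypothetical variable names, reusing F, beta, and n from the earlier parfor sketch):

spmd
    % split the outermost loop axis 1:n evenly across the open workers
    chunk = ceil(n / numlabs);
    i_lo = (labindex - 1) * chunk + 1;
    i_hi = min(labindex * chunk, n);
    local = zeros(1, n);
    for i = i_lo:i_hi            % each worker sweeps only its own slice
        for j = 1:i-1
            local(i) = local(i) + beta(j, i-j) * F(j) * F(i-j);
        end
    end
end
% 'local' returns as a Composite; gather and reduce on the client
formation = 0.5 * sum(cat(1, local{:}), 1);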
Both the task- and data-parallel approaches follow the same algorithm: at the start of the simulation, only the MATLAB client instance is actively processing code, sequentially. On seeing an SPMD keyword, the code then forks off function calls onto idle workers in a parallel manner. With every worker active, execution of the allocated serial tasks now begins asynchronously. After all the workers have completed their respective tasks, they return their results to the client instance as Composite types, which can then be cast back to regular CPU single or double types and subsequently rejoined. To sum up, the procedure followed herein for parallelizing PBMs involved three steps: locating the portions of the code that are most time consuming with tools like the MATLAB profiler; applying one of the aforementioned approaches for parallelism as appropriate; and finally optimizing for minimal variable transfer overhead.
Comparing CPU-for, GPU-for, and GPU-gfor Execution.
The first set of simulations was conducted to compare the speed gains obtained by running the aggregation-only PBM code, based on (7), first on a GPU and then on a single CPU core. For the GPU, two parallel versions of this code were investigated: in one case, standard for-loops were executed on the GPU (termed the "GPU-for" version), and in the other, termed the gfor version, JACKET's gfor constructs were used instead. The CPU version was left unparallelized, that is, with regular for-loops, to execute sequentially on a single MATLAB worker. Simulation was carried out on a machine with a Core2Quad Q6600 processor (2.4 GHz clock, 4 cores, no hyperthreading), 4 GB of RAM (2 GB x 2 sticks), and an NVIDIA GeForce GTX 280 GPU (240 CUDA cores, 1296 MHz processor clock, 1 GB memory). Results from the simulation of each of these three cases were first validated by comparing bulk property plots of total number of particles versus time, total volume versus time, and average diameter versus time after the final time step to verify uniformity. This was followed by plotting the time taken to simulate each case versus grid size and then the speedup ratio versus grid size. The ratios were calculated as

Ratio = (single-CPU time) / (gfor time), (10)

Ratio = (GPU-for time) / (gfor time). (11)
From the curves depicting the temporal evolution of granule properties (Figure 1), it is clear that the numerical accuracy of the computations was not compromised during execution on either the CPU or GPU, as the curves in each plot coincide perfectly with one another. As expected, the total number of particles (Figure 1(a)) decreases at a constant rate due to aggregation, wherein the collision (and therefore depletion) of two particles leads to the formation of a new one by coalescence [8]. An analysis of the total volume plot, Figure 1(b), predictably reveals constant-value lines, considering the fact that total mass/volume is conserved in the system; that is, no particles are either added to or removed from the system during the process. The volume of a new, larger granule is equal to the sum of the volumes of the smaller coalescing particles that formed it. By extension, this is the reason why the average granule diameter plot, Figure 1(c), shows a proportional increase in the size of granules over time. Calculations for these bulk properties are the same for all simulation cases and can be found in Appendix E.
The simulation time versus grid size curves, Figure 2(a), show the single-worker CPU version of the code to be much faster than its GPU counterparts, with the slowest of the set being the code with GPU for-loops, followed by the gfor-loop version. It must be noted that the GPU is a stand-alone device and does not share its memory with the host (CPU) or provide a means for virtual memory addressing. In other words, data will not be communicated automatically between the host and the device memories, but rather must be explicitly moved. This causes severe memory transfer overheads each time a variable is copied to and from the GPU across the PCI-E bus [59], which is why the GPU versions are drastically slower than their CPU counterparts. Furthermore, while the Intel Core2Quad Q6600 CPU can achieve processor clock speeds of 2.4 GHz, the GPU core clocks in significantly lower at 1.3 GHz, forcing the same computations to take longer on the GPU. As anticipated, the code with gfor ran faster than the one with plain for on the GPU, owing to gfor's inherent ability to schedule and control loop distribution. This speedup is readily discerned in Figure 2(c), with the ratio calculated by (11). Although these preliminary results indicate that CPUs are better than GPUs for this program, the trend quickly reverses as we increase the size of the grid (and, implicitly, the resolution of the system) beyond 11, as suggested in Figure 2(b). The steady increase in the ratio (10) curve implies that the simulation time curves for gfor and CPU-for are converging and will eventually meet at some particular grid size, after which the GPU will perform significantly better than the CPU in a progressive manner. Beyond a grid size of 20 it became impractical to run the code for extensive periods of time, and therefore further investigations were not carried out. The initial drop seen in the CPU-for curve in Figure 2(a) and in Figure 2(b) is an anomaly shown to be reproducible even after initiating the simulation at various grid size values and is probably due to an initial memory overhead during the "warm-up" of the CPU before commencing code execution. In addition to the aforementioned hardware limitations of the GPU, JACKET's execution of a script is not transparent to the programmer, and capabilities in terms of benchmarking, assigning tasks to specific thread blocks, and controlling memory access patterns are nonexistent. Future work will involve building a GPU-efficient code from the ground up, which will lend itself better to parallelization in conjunction with constructs like gfor.
Comparing Single CPU and SPMD Execution.
Next, a comparison of the simulation times for the PBM code running on a single worker sequentially and then on multiple workers was made, followed by plotting the speedup gained. Prior to execution, the code was "streamlined" to efficiently search for and perform computations on the relevant particle-containing bins in a grid, unlike the version employed in the previous section, which looped over all bins irrespective of whether particles were present. This optimization was carried out to eliminate the time spent on unnecessary calculations, specifically with respect to empty bins. The GPU version could not be streamlined, since our version of JACKET did not allow for conditional branching within gfor-loops [52]. Parallelism was attained with the loop slicing technique described in Section 3. The formation and depletion loops were sliced in accordance with the pool of MATLAB workers available (one, two, four, six, and eight) to analyze the gain in speedup and the effects of transfer overhead. The new streamlined code was run on an Intel Core i7-870 CPU (4 cores, 8 threads, 2.93 GHz clock speed) with 8 GB of RAM. To determine the most appropriate index range for loop discretization, different combinations of sliced formation and depletion loops were tested for simulation efficiency (see Table 1). Although formation is the primary computational bottleneck requiring loop slicing, initial test runs in conjunction with MATLAB's Profiler tool affirmed that it was also necessary for depletion to occupy at least one worker for the gain in speedup to outweigh the memory transfer overhead. Consequently, certain combinations based on grid size and number of workers were discarded, with only the pertinent ones being retained.
Within these combinations, the ones yielding the lowest simulation times for a grid size of 36 were chosen from each worker pool class for comparative analysis: 0 formation, 0 depletion (1 worker); 1 formation, 1 depletion (2 workers); 3 formations, 1 depletion (4 workers); 4 formations, 2 depletions (6 workers); and 6 formations, 2 depletions (8 workers). As done previously, the plots of the granule physical properties, Figures 3(a)-3(c), were examined to ensure the validity and numerical precision of the results. Having confirmed that, the simulation times, the parallel speedup, and the efficiency curves were plotted for the five worker pool classes selected (Figures 4(a)-4(c)).
The speedup factor and parallel efficiency were calculated as given in Wilkinson and Allen [36]:

Speedup S(n) = (execution time on a single worker) / (execution time on n workers), (12)

Efficiency E = S(n) / n x 100%. (13)

Simply put, the speedup factor directly quantifies the gain in performance of a multiprocessor system over a single-processor one. As observed in Figure 4(b), the maximum speedup achieved with 8 workers was 2.2 times, leading to an average per-worker efficiency of 27.35%. Parallel efficiency is a measure of computational resource usage, with lower values implying lower utilization and higher values implying higher utilization on average. Although the speedup achieved for a grid size of 36 was marginal, it was theorized that an increase in the problem size would improve not only the speedup, but also the parallel efficiency. As expected, an increase in grid size to 60 positively affected both the speedup and the efficiency of parallel execution, as seen in Figure 4(c). Furthermore, it was also observed that the most efficient way of parallelization for 6 cores was by splitting formation 5 times and depletion 1 time, as opposed to the previous strategy of splitting formation 4 times and depletion twice. Since depletion is much less computationally intensive than formation and only becomes challenging at higher grid sizes, this finding is in line with our expectation that each worker has to have sufficient work for parallelism to pay off. That is, for a fixed pool of workers, an increase in problem (grid) size will mean improved speedup. This also explains the drop in efficiency as well as speedup from 4 to 6 workers (i.e., with formation sliced 4 and depletion 2 times, Figures 4(a) and 4(b)). Currently, we have restricted ourselves to a grid size of 60 due to MATLAB's limit on the maximum possible array size (proportional to the available system RAM), but future work will involve working around these memory limitations using distributed data types and employing constructs like labSend and labReceive for better memory read/write patterns. Of the three main factors that might have impacted the efficiency of our parallel algorithm, load balancing and data dependency were ruled out, as the for loops were split evenly across workers, with each loop capable of independent execution on a worker. Thus, the only possible reason could be overheads resulting from communication between workers. These overheads are generally the result of the computational costs of cache coherence; memory conflicts inherent to a shared-memory multiprocessing architecture like the Intel Core i7 [60]; and memory conflicts between operating system services [61]. Moreover, since MATLAB looks to the operating system to open a pool of workers, it does not guarantee proper assignment of each worker to a single physical core/thread, which would result in exaggerated overheads from worker instances trying to communicate with (or waiting for) another instance on the same thread.
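Applying (12) and (13) to measured wall-clock times is a one-liner; a minimal sketch (the vector of run times is hypothetical, ordered by worker pool size):

workers = [1 2 4 6 8];
t = [412 298 190 205 187];       % hypothetical run times in seconds
S = t(1) ./ t;                   % speedup relative to one worker, eq. (12)
E = 100 * S ./ workers;          % parallel efficiency in percent, eq. (13)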
Speeding Up a PBM Code Integrating More Mechanisms.
Finally, a more complex, integrated form of the PBM code incorporating terms for consolidation, aggregation, and liquid drying/rewetting was parallelized and executed (see Appendices A, B, and C, respectively, for the kernels used). These mechanisms, in addition to breakage/attrition, are fundamental in describing the granulation process accurately to a greater extent. Although breakage is a crucial element, the focus is still on aggregation, as it remains the most computationally intensive term and therefore the primary target for parallelization in a full-fledged PBM code. Parallelization was achieved with the fork-join technique, a type of task parallelism. The SPMD keyword is used to force consecutive but independently executing sections of code to be split among the available pool of workers, followed by collection of the calculated data at the end; a sketch is given below. The functions parallelized were those computing the drying/rewetting, consolidation, and finally the aggregation terms (formation and depletion), each of which was assigned to run on an individual worker, thereby improving parallelism. The simulation was carried out on the Intel Core2Quad Q6600, utilizing all four cores. The temporal evolution of the physical properties was plotted for both the SPMD and the single-CPU versions of the code (Figure 5), after which the time required for the parallel and sequential executions was plotted to display the speedup (Figure 6).
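A sketch of this fork-join assignment (the per-case expressions are simple stand-ins, not the model's actual term evaluators, which operate on the full 3D grids):

F = rand(1, 64);                 % hypothetical number density
matlabpool open 4
spmd
    switch labindex              % one mechanism per worker
        case 1
            out = 0.5 * F.^2;            % stand-in: aggregation formation
        case 2
            out = F .* sum(F);           % stand-in: aggregation depletion
        case 3
            out = 0.01 * ones(size(F));  % stand-in: drying/rewetting
        case 4
            out = -0.005 * F;            % stand-in: consolidation
    end
end
% out returns as a Composite; rejoin the contributions on the client
dFdt = out{1} - out{2} + out{3} + out{4};
matlabpool close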
As can be seen from Figure 5(a), the total number of particles predictably decreases over time due to aggregation by coalescence. The total volume of the particles, Figure 5(b), on the other hand, rises steadily as a result of continuous liquid binder addition over time, which is also the reason why the average granule diameter increases gradually in Figure 5(c). The tendency of these curves to level off after a certain period of time is due to the limited number of bins in the grid, 15, which restricts the extent of granule aggregation and growth. This further stresses the need for faster simulations through parallelization, in order to circumvent these restrictions and run the code for longer times and higher numbers of bins. Data for both the SPMD and single-CPU versions are in good agreement with each other, affirming the numerical precision and validity of the SPMD version's results. The grid size was then increased and the corresponding simulation times plotted. Even for a grid size of just 15, a speedup of 15.5 times was achieved, which is significant considering that only four workers were used. This is an example of superlinear speedup, where for n processors a speedup of greater than n is produced [62]. Superlinear speedup may happen if the problem size per processor is small enough to fit into registers, data caches, or other smaller, yet faster, memory banks instead of the RAM [63]. Since some of the parallelized functions, like drying/rewetting and consolidation, utilize just a few variables per processor, the causes of parallel inefficiency (load imbalance, interprocessor communication) are offset, resulting in faster multiply-add (MAD) operations than on a uniprocessor machine, where bandwidth consumption would be higher than the rate at which the RAM could deliver.
Conclusions and Future Work
Parallel computing has been studied for several years, but its application to particulate processes described by population balance models has been limited to a few studies in crystallization [24,25]. The procedure followed herein for parallelizing PBMs involved three steps: locating the portions of the code that are most time consuming with tools like the MATLAB profiler; applying one of the two approaches to parallelism as appropriate; and finally optimizing for minimal variable-transfer overhead. We have proposed here two methods of efficiently parallelizing the integrodifferential equations comprising the aggregation term for the CPU: either loop slicing or, alternately, a brute-force, fork-join method, both in conjunction with MATLAB's Parallel Computing Toolbox. This approach to parallelism provides the modelling and research community with the necessary tools to reduce simulation times with minimal effort. The results show a speedup of 2.6 times with 8 workers over a sequential code, with potential for improving the speedup by increasing the problem size; the corresponding increased demand for RAM can be handled algorithmically by using distributed data types and MPI-based constructs like labSend and labReceive for better memory read/write patterns. For the first time, a method describing the utility of GPU computing for PBMs is demonstrated. For the GPU, we utilized the JACKET toolbox for MATLAB to efficiently parallelize for loops across the 240 cores of a single NVIDIA GTX 280 card. Although the performance advantage of the GPU over the CPU initially did not seem encouraging, due to lower clock frequency and on-board memory and JACKET's own restrictions, a closer analysis of the speedup ratios revealed that the GPU has the potential to outclass the CPU at very large grid sizes, given the significant advances made in its architecture with each new generation. Finally, a relatively more complex code integrating several mechanisms was parallelized with the aid of the SPMD keyword, yielding a superlinear speedup of 15.5 times. Future work will include building better parallel algorithms with efficient task scheduling for better speedup; parallelization across processors on multiple networked machines using the MATLAB Distributed Computing Server package for CPUs; and utilizing next-generation Tesla-architecture NVIDIA GPUs to attain significant speedup, considering that, once parallelized, PBMs are well suited for execution on massively parallel architectures. Furthermore, as computational power in terms of RAM and processing speed increases, CPU and GPU parallel computing will show increased efficiency, owing to the enhanced capacity to store and process the large amounts of data arising from an increased number of grids and/or dimensionality of the problem. The methods developed (for CPU and GPU parallel computing) can also be easily extended to other particulate processes that are described by PBMs, such as crystallization, milling, and polymerization, with the potential to aid in computer-aided modeling and simulation and offer economic benefit to industries that deal with such processes [64].
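The loop-slicing approach summarized above can be sketched for a one-dimensional aggregation term as follows; the constant kernel, grid size, and dummy densities are our own illustrative assumptions, not the study's three-coordinate formulation:

```matlab
% Loop slicing of the aggregation terms with parfor (1-D sketch).
n     = 60;                  % number of bins (grid size)
F     = rand(n, 1);          % number density per bin (dummy data)
beta0 = 1e-3;                % constant aggregation kernel (placeholder)

formation = zeros(n, 1);
parfor i = 1:n               % outer-loop slices are distributed to workers
    acc = 0;
    for j = 1:i-1
        acc = acc + 0.5 * beta0 * F(j) * F(i-j);   % births into bin i
    end
    formation(i) = acc;
end
depletion = beta0 * F * sum(F);                    % deaths from every bin
dFdt = formation - depletion;
```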
A. Aggregation Kernel
For our simulation purposes, we have considered the empirical aggregation kernel proposed by Madec et al. [54]. It takes into account parameters such as the particle size and binder volume, and can be considered a more appropriate empirical kernel for our multidimensional PBE; the full expression is given in Madec et al. [54].
B. Consolidation
Consolidation, a negative growth process representing the compacting of granules due to the escape of air from the pores, has been modeled using an empirical expression proposed by Verkoeijen et al. [31]. It can be given as

dε/dt = −c (ε − ε_min),

where the porosity is ε = (l + g)/(s + l + g). Here ε_min is the minimum porosity of the granules and c is the compaction rate constant.
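A minimal sketch of integrating this law, assuming the first-order relaxation form written above and placeholder parameter values:

```matlab
% Consolidation: first-order relaxation of granule porosity toward eps_min
% (parameter values are placeholders, not fitted constants).
c       = 1e-2;                             % compaction rate constant [1/s]
eps_min = 0.2;                              % minimum granule porosity
rhs     = @(t, por) -c * (por - eps_min);   % d(porosity)/dt

[t, por] = ode45(rhs, [0 600], 0.5);        % start from porosity 0.5
plot(t, por); xlabel('time [s]'); ylabel('porosity');
```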
C. Drying/Rewetting
Liquid binder is added to the granulating system in order to catalyze the process of forming aggregates. Drying/rewetting is associated with the change in the amount of liquid in the granulation system due to the addition of more liquid or its removal by evaporation. The liquid rate can be obtained from a mass balance over the binder spray and evaporation terms, in which ṁ_spray is the binder spray rate, the binder term is the concentration of solid binder in the added slurry, ṁ_evap is the rate of liquid evaporation (in this work ṁ_evap = 0, for the sake of simplicity), the solid fraction is the volume of solid for the particles in each bin, and l is the liquid content. Due to liquid addition, the liquid content of each particle changes from l to l + Δl, which cannot in general be represented by the values of liquid volume on the grid. Thus, a fraction is incorporated which distributes the new volume of liquid contained in the particle into the two adjacent grids, such that the liquid volume is conserved. The fraction can be written as

fraction(i) = (v − l(i)) / (l(i+1) − l(i)),    (C.3)

where v = l + Δl, l(i) is the representative liquid volume in the i-th grid and l(i+1) is the representative liquid volume in the (i+1)-th grid.
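A small numerical check of this two-point redistribution, Eq. (C.3); the grid values below are placeholders:

```matlab
% Two-point redistribution of added liquid between adjacent grid nodes
% following Eq. (C.3); grid values are placeholders.
l    = [1 2 4 8 16];                   % representative liquid volumes per bin
v    = 5.0;                            % new liquid volume, l_old + delta_l
i    = find(l <= v, 1, 'last');        % bin just below v
frac = (v - l(i)) / (l(i+1) - l(i));   % share assigned to bin i+1

% Splitting (1-frac) into bin i and frac into bin i+1 conserves liquid volume:
assert(abs((1 - frac)*l(i) + frac*l(i+1) - v) < 1e-12);
```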
D. Numerical Solution
Using a multidimensional population balance with an appropriate kernel ensures an improved analysis/prediction of the granulation process. However, besides developing the model, incorporating an efficient numerical technique for the solution of such an integro-partial differential equation is yet another difficult task. The multiple time scales and multiple dimensions introduce various complexities into the solution technique. Hence, it is crucial to develop robust models with efficient solution techniques for such a framework. Our approach to obtaining a solution to such equations is based on a hierarchical two-tier algorithm, as proposed by Immanuel and Doyle III [55]. This involves using the finite volume approach for discretization with respect to each individual solid, liquid, and gas volume, followed by integration of the population balance over the domain of these subpopulations. Neglecting layering, Eq. (3) can be expressed in discrete form over the bins, where s_i is the value of the solid volume at the upper end of the i-th bin along the solid volume axis, l_j is the value of the liquid volume at the upper end of the j-th bin along the liquid volume axis, and g_k is the value of the gas volume at the upper end of the k-th bin along the gas volume axis; Δs_i, Δl_j, and Δg_k are the sizes of the i-th, j-th, and k-th bins with respect to the solid, liquid, and gas volume axes. Using this technique, the population balance equation is reduced to a system of ordinary differential equations in terms of the rates of nucleation (R_nuc(s, l, g)), aggregation (R_agg(s, l, g)), and breakage (R_break(s, l, g)). The triple integral for the aggregation term can thereby be evaluated by casting it into simpler addition and multiplication terms. Using this approach, the numerical steps are hard-coded in MATLAB to obtain the final, efficient solution of the population balance equation. Nucleation and breakage are included for clarity but are not considered in this study.
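To make the second tier concrete, the sketch below reduces a one-dimensional analogue of the discretized balance to ODEs and integrates it; the constant kernel and the single internal coordinate are our simplifying assumptions, not the study's three-coordinate formulation:

```matlab
% Tier 2: the discretized PBE becomes dF/dt = R_agg per bin (nucleation and
% breakage omitted, as in this study). 1-D grid and constant kernel assumed.
n     = 15;                         % bins, matching the grid size used here
beta0 = 1e-3;                       % placeholder constant kernel
F0    = exp(-(1:n)'/3);             % dummy initial number distribution

Ragg = @(F) arrayfun(@(i) 0.5*beta0*sum(F(1:i-1).*F(i-1:-1:1)), (1:n)') ...
            - beta0 * F * sum(F);   % formation minus depletion per bin
[t, F] = ode45(@(t, F) Ragg(F), [0 500], F0);

plot(t, sum(F, 2));                 % total particle count decays by coalescence
xlabel('time'); ylabel('total number of particles');
```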
E. Calculation of Output Properties
Bulk properties such as the average diameter, total number of particles, and total volume are obtained from the simulation results in order to qualitatively and quantitatively analyse the macroscopic properties. The total number of particles in the system is calculated by summing the number distribution over all bins, N_total = Σ_i Σ_j Σ_k F(i, j, k) Δs_i Δl_j Δg_k.
Figure 1: Comparison of temporal evolution of granule physical properties simulated using gfor, GPU-for, and CPU-for: (a) evolution of the total number distribution of particles over time; (b) evolution of the total volume of particles over time; (c) evolution of the average diameter of a particle over time.
Figure 2: Comparison of simulation times and speedup ratios of the PBM code incorporating gfor, GPU-for, and CPU-for: (a) semilog plot comparing simulation times of the gfor, GPU-for, and CPU-for versions; (b) speedup ratio of gfor over the CPU-for version (CPU-for time/gfor time); (c) speedup ratio of gfor over the GPU-for version (GPU-for time/gfor time).
Figure 3: Comparison of temporal evolution of granule physical properties simulated for different worker pool classes, grid size = 36.
Figure 4: Plots of simulation times and obtained speedup of the PBM code incorporating the SPMD construct; panel (c) shows the speedup and efficiency obtained for a grid size of 60.
Figure 5: Comparison of temporal evolution of granule physical properties for a sequential and parallel PBM code, grid size = 15: (a) evolution of the total number distribution of particles over time; (b) evolution of the total volume of particles over time; (c) evolution of the average diameter of a particle over time.
Figure 6: Comparison of simulation times for a sequential and parallel PBM code: (a) plot comparing simulation times of the SPMD and single-worker versions with increasing grid size; (b) semilog plot of the same comparison, highlighting positive speedup after a grid size of 6.
"year": 2013,
"sha1": "03ed1095f6878102bfa682ede3cd6cf2577ad3b1",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/mse/2013/475478.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "03ed1095f6878102bfa682ede3cd6cf2577ad3b1",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
14225932 | pes2o/s2orc | v3-fos-license | Baseline characteristics of an incident haemodialysis population in Spain: results from ANSWER—a multicentre, prospective, observational cohort study
Background. The ANSWER study aims to identify risk factors leading to increased cardiovascular morbidity and mortality in a Spanish incident haemodialysis population. This paper summarizes the baseline characteristics of this population. Methods. A prospective, observational, one-cohort study, including all consecutive incident haemodialysis patients from 147 Spanish nephrology services, was conducted. Patients were enrolled between October 2003 and September 2004. Sociodemographic, clinical, laboratory and health care characteristics were collected. Results. Baseline characteristics are described for 2341 incident haemodialysis patients [mean (SD) age 65.2 (14.5) years, 63% males]. The main cause of renal failure was diabetic nephropathy (26%). The majority of patients (57%) had a Karnofsky score of 80–100 and 27% were followed up by a nephrologist for ≤6 months. In total, 86% of the patients had hypertension, 43% had dyslipidaemia and 44% had a history of cardiovascular disease. Initial vascular access was obtained via a temporary catheter in 30% of patients, via a permanent catheter in 16% and via an arteriovenous fistula in 54%. Albumin levels were <3.5 g/dl in 43% of patients. Immediately prior to the onset of haemodialysis, the mean (SD) glomerular filtration rate (GFR) was 7.6 (2.8) ml/min/1.73 m2, and only 6.7% of the patients were within the K/DOQI guidelines for all four bone mineral markers. In addition, a high proportion of patients had anaemia markers outside the EBPG guidelines (haemoglobin <11 g/dl, 59%, ferritin <100 or >500 ng/ml, 41% and saturated transferrin <20 or >40%, 50%) despite previous treatment with erythropoiesis-stimulating agents in 41% of cases. Conclusions. There is excessive use of temporary catheters and a high prevalence of uraemia-related cardiovascular risk factors among incident haemodialysis patients in Spain. The poor control of hypertension, anaemia, malnutrition and mineral metabolism and late referral to a nephrologist indicate the need for improving the therapeutic management of patients before the onset of haemodialysis.
Introduction
Haemodialysis has become an increasingly safe and well-tolerated therapy for patients with end-stage renal disease (ESRD). Nevertheless, the life expectancy of dialysis patients remains significantly shorter than that of the general population with similar demographics [1]. There is also a high incidence of cardiovascular morbidity and mortality in this population [2,3]. Large, prospective, observational studies, including the Dialysis Outcomes and Practice Patterns Study (DOPPS) [4] and the United States Renal Data System Dialysis Morbidity and Mortality Wave 2 study [5,6], have provided important insights into the characteristics and likely prognosis of haemodialysis patients. A number of prospective epidemiological studies from several European countries have also described the incident haemodialysis population [7][8][9][10][11][12][13][14][15], which can help to assess the influence of a multitude of risk factors on the increased mortality among these patients. In this regard, the ANSWER study is currently underway in a large incident haemodialysis population in Spain.
The primary objective of the ANSWER study is to determine and quantify the risk factors influencing cardiovascular morbidity and mortality in incident haemodialysis patients in Spain. In addition, the study also aims to provide information on the baseline characteristics of the incident haemodialysis population; in this paper, we report these data and make comparisons with other incident and prevalent populations reported in the literature.
Subjects and methods
ANSWER is a multicentre, prospective, observational cohort study in incident haemodialysis patients all over Spain. Most dialysis facilities from Spain (n = 235) were invited to participate in the study, of which 147 (62.5%) centres agreed to participate. The local ethics committees approved the study and all patients enrolled in the study provided informed consent.
Patients
All incident haemodialysis patients (i.e. patients starting chronic haemodialysis treatment, who had received haemodialysis for ≤30 days) aged ≥18 years were eligible for inclusion in the study. Patients were excluded if they had undergone renal replacement therapy previously, were already receiving haemodialysis (≥30 days) or peritoneal dialysis, or had received a kidney transplant.
Following initiation of the study at each site in October 2003, patients were consecutively enrolled as they started haemodialysis treatment. Enrolment was stratified by region according to the incidence of haemodialysis in a reference population [16], in order to obtain a sample in which all Spanish regions would be represented in the same proportion as in the target population.
Patient assessments
Sociodemographic, clinical, laboratory (maximum 30 days before start of haemodialysis) and health care (concomitant drug therapy and haemodialysis characteristics) variables were recorded at baseline (within first 30 days of haemodialysis) and assessed at regular intervals during the study period, with all the study patients followed up for at least 2 years.
Variables recorded at baseline included waist measurement, smoking status (active smoker, non-smoker, ex-smoker), alcohol consumption (grams of alcohol [17]), employment status and education. The clinical variables assessed included history of renal failure and various comorbidities: diabetes, dyslipidaemia [cholesterol >220 mg/dl or low-density cholesterol (LDL-C) >100 mg/dl or treatment with statins], hypertension [systolic blood pressure (SBP) ≥140 mmHg or diastolic blood pressure (DBP) ≥90 mmHg or treatment with antihypertensives], parathyroidectomy, malnutrition (physician's subjective assessment) and cardiovascular disease (heart failure, left ventricular hypertrophy, cardiac arrhythmia, ischaemic heart disease, cerebrovascular disease, peripheral vascular disease and any other diseases of the circulatory system). The Charlson age-comorbidity index [18,19], performance status [Karnofsky score (KS)] and health-related quality of life (QoL), assessed with the Medical Outcomes Study Short Form (SF-36) questionnaire [20], previously validated for the Spanish population [21], were also recorded.
Parameters describing the patients' initial haemodialysis experience (first month after starting) were also obtained. Dialysis intolerance was defined as hypotension recorded at >50% of the dialyses performed during the past month. The urea reduction ratio (URR) and Kt/V were calculated for each patient according to a standard formula (second-generation Daugirdas formula for eKt/V [22]). The glomerular filtration rate (GFR) was estimated according to the MDRD equation [23].
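For reference, these indices are commonly written as follows (standard published forms; the exact coefficient choices in this study follow the cited sources [22,23]):

\[
\mathrm{URR} = \frac{C_{\mathrm{pre}} - C_{\mathrm{post}}}{C_{\mathrm{pre}}} \times 100\%,
\qquad
\mathrm{spKt/V} = -\ln\left(R - 0.008\,t\right) + \left(4 - 3.5R\right)\frac{UF}{W},
\]

where \(R = C_{\mathrm{post}}/C_{\mathrm{pre}}\) is the post-/pre-dialysis urea ratio, \(t\) the session length in hours, \(UF\) the ultrafiltration volume in litres and \(W\) the post-dialysis weight in kilograms; and

\[
\mathrm{GFR} = 186 \times S_{\mathrm{cr}}^{-1.154} \times \mathrm{age}^{-0.203} \times 0.742\ (\text{if female}) \times 1.210\ (\text{if black})\ \ \mathrm{ml/min/1.73\,m^2}.
\]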
Statistical analysis
Summary statistics were calculated for continuous and categorical endpoints. Differences between subgroups were assessed using chi-square tests for categorical variables and Student's t-test or the Mann-Whitney U-test for continuous variables (according to normality). The Bonferroni method [26] was applied for adjusting the significance level in these analyses. Differences were considered significant at P < 0.00022 (0.05/228). Differences between means and odds ratios with respect to the reference subgroup, together with their 95% confidence interval, are displayed only for the variables with significant results. The calculations were performed using SPSS 14.0.
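As a one-line illustration of the correction as applied here (the p-values below are dummies, not study results):

```matlab
% Bonferroni adjustment for the 228 subgroup comparisons.
n_tests     = 228;
alpha_adj   = 0.05 / n_tests;         % = 2.19e-4, reported as P < 0.00022
p_values    = [1e-5, 3e-4, 0.02];     % dummy p-values from subgroup tests
significant = p_values < alpha_adj;   % -> [true false false]
```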
Sociodemographic characteristics and aetiology of kidney disease
A total of 2406 incident patients undergoing dialysis were enrolled from 147 hospital nephrology services and associated haemodialysis centres throughout Spain between 1 October 2003 and 30 September 2004. Sixty-five patients were excluded from analysis, as they did not meet the inclusion criteria. The resulting sample, 2341 patients, accounts for ∼58% of the total incident patients during the study period (according to the 2003 and 2004 estimates of the incidence of haemodialysis in Spain of the National Registry [27,28]). Table 1 summarizes the patient demographics and baseline characteristics. Most patients were elderly (29% over 75 years), male (63%) and overweight [59% had body mass index (BMI) >25 kg/m 2 ]. The education level was low (38% had no primary education). The most common reason for renal failure was diabetic nephropathy (26%), and 27% of patients had been followed up by a nephrologist for <6 months prior to the onset of haemodialysis (24% in the subgroup with diabetic nephropathy and 26% in the subgroup with vascular nephropathy). The prevalence of hepatitis C virus positive patients was 5.4%.

Table 1 footnotes: (b) excluding left ventricular hypertrophy; (c) cholesterol >220 mg/dl or LDL-C >100 mg/dl or treatment with statins; (d) on a scale 0-100, with 100 = the normal ability to carry out daily activities; (e) age adjusted, on a scale 0-37, with 37 = the highest comorbidity; (f) SF-36 Physical Component Summary Scale (PCS) and Mental Component Summary Scale (MCS) are calculated based on T transformations, so that the mean score of the general Spanish population is 50 and the standard deviation is 10 (a value between 45 and 55 is considered 'normal', between 40 and 45 'somewhat worse' and <40 'worse' than 70% of the general population); (g) short daily haemodialysis or nocturnal haemodialysis.
Comorbidities, functional status, medications and quality of life
Comorbidities were common, particularly hypertension (86%), with almost all hypertensive patients being non-controlled (89% with SPB ≥140 mmHg or DBP ≥90 mmHg), despite most of them receiving antihypertensive treatment (80%). There was also a high frequency of previous cardiovascular disease (44%) and dyslipidaemia (43%) ( Table 1). The prevalence of diabetes mellitus was 10% higher than that of diabetic nephropathy. Approximately 1 in 10 patients had developed a tumour. As expected, the use of concomitant medications reflects the comorbidities in this population ( Table 2). Half of the patients were treated with iron supplements either before (47%) or after (53%) the initiation of haemodialysis. Most patients were receiving or were starting treatment with erythropoiesis-stimulating agents (ESA, 80%) and phosphate binders (71%). Of the patients on ESAs, 52% were treated prior to dialysis initiation (62% in the subgroup with >6 months of predialysis nephrological care versus 38% in the ≤6 months group, P < 0.0001) and 48% began ESA treatment at the time of dialysis initiation. The use of beta-blockers was lower than expected in view of the comorbidities (24% of total sample, 16% as antihypertensive treatment and 8% as cardiovascular therapy).
The presence of comorbidities [mean Charlson Index of 6.2 (SD 2.4)] resulted in a severely decreased quality of life when compared with the general Spanish population (Table 1). Over half of the patients (57%) had a Karnofsky score between 80 and 100. Younger patients had a better functional status [mean KS of 82 (SD 14) for patients <65 years] than the older patients [71 (16) for patients ≥65 years, P < 0.0005].

Table 3 summarizes the patients' baseline blood chemistry values. A high proportion of diabetic patients had uncontrolled glycaemia (48% >126 mg/dl, 34% with HbA1c >7%), whereas LDL-C was mostly within the normal range and high-density lipoprotein (HDL)-cholesterol was below the normal range in one-third of cases. The nutritional status of the patients was quite poor (43% had albumin levels <3.5 g/dl) and the inflammation status was highly variable (SD 6.2 mg/dl for C-reactive protein). The mean GFR prior to dialysis onset was 7.6 (SD 2.8) ml/min/1.73 m 2 and the mean 24-h diuresis was 1602 (SD 920) ml.
Anaemia and mineral metabolism
A large proportion of patients were outside the EBPG targets for haematological parameters related to the management of anaemia (haemoglobin <11 g/dl in 59%). Ferritin and saturated transferrin levels were decreased in 31% and 39% of patients, respectively. Most patients were also outside the K/DOQI guideline target ranges for bone mineral markers (Table 4). Overall, only 6.7% of the patients were within all four K/DOQI target ranges at the same time. The population means were also outside the K/DOQI guideline target ranges for iPTH and phosphorus, but not for total albumin-adjusted calcium [adjusted Ca = calcium + 0.8 × (4 − albumin)] or Ca × P, probably due to the large percentage of patients with low total calcium levels.
Baseline haemodialysis characteristics
Baseline haemodialysis variables are detailed in Table 1. The majority of patients received three haemodialysis sessions per week, with a mean of 3.6 h of dialysis per session. Similar proportions of patients had high-flux or low-flux membranes, and similar proportions received low-molecular-weight or standard heparin. Vascular access in patients at the start of haemodialysis was achieved using either a catheter (46%) or an arteriovenous fistula (AVF) (52%), and in a small proportion of patients using a polytetrafluoroethylene AVF (2%).
Characteristics of the patients with initial vascular access via a catheter
The patients with initial vascular access via a permanent catheter were older and had worse nutritional status, more comorbidities (higher Charlson index) and worse residual renal function (lower 24-h diuresis) than patients with a temporary catheter or an AVF (Table 5); the 2% of patients with a polytetrafluoroethylene graft were not included in this subgroup analysis. The subgroup with temporary catheters was characterized by greater use of low-molecular-weight heparin and a higher degree of anaemia, hypocalcaemia and hyperphosphataemia (Table 5).
Characteristics of patients with late referral to the nephrologist
In the subgroup analyses, patients who were referred to the nephrologist <6 months before the start of dialysis had worse functional and nutritional status, a lower (and more recently diagnosed) degree of dyslipidaemia and hypertension, and worse residual renal function (higher creatinine and lower 24-h diuresis) than patients referred >12 months before (Table 6). As expected, systemic aetiologies (e.g. myeloma and vasculitis) were also related to late referral to the nephrologist. Anaemia, hyperferritinaemia and uncontrolled mineral metabolism (hypocalcaemia and hyperphosphataemia) were much more frequently observed in the late referral group. Vascular access was obtained via an AVF in only 25% of patients who were referred late, compared with 52-64% in the other subgroups.
Characteristics of patients with previous ischaemic cardiovascular disease
The presence of previous ischaemic cardiovascular disease in the incident dialysis population was related to all the classic cardiovascular risk factors in the general population (advanced age, male gender, former or current smoking, diabetes mellitus, history of dyslipidaemia or hypertension) except obesity (Table 7). It is notable that, despite a higher percentage of dyslipidaemia and lower HDL-C levels, the mean total cholesterol was lower in the patients with previous cardiovascular disease. This inverse relationship was not due to the greater use of statins in the cardiovascular group (42% versus 58% in the non-cardiovascular group). Table 7 also shows greater catheter use, worse residual renal function and nutritional status, and a lower degree of hyperphosphataemia in this subgroup of patients.
Discussion
ANSWER is the first large, prospective, observational study of incident haemodialysis patients in Spain, which will help to clarify, together with other recent ongoing studies in Europe (the Netherlands [7,8], France [10][11][12], Italy [13,14] and Sweden [15]) and North America (CHOICE [29], Wave-2 USRDS [30][31][32]), the risk factors associated with cardiovascular morbidity and mortality in these patients. The ANSWER study enrolled all consecutive incident haemodialysis patients, whereas most other haemodialysis studies have excluded patients who did not survive the first 3 months [7,14,29,31] or have included 'prevalent' patients (DOPPS [33], MAR [34]). Studies of 'incident' populations are needed to verify the previously described associations for 'prevalent' populations, because those studies suffered from the bias of not enrolling patients with higher cardiovascular risk, that is, those who die in the first months after dialysis onset. The sociodemographic characteristics of our cohort are similar to those reported for other European incident populations. The mean age and percentage of patients older than 65 or 75 years in our sample are similar to those reported in other European countries [8][9][10][14] and the USA [35].
About a quarter of the patients developed renal failure due to diabetic nephropathy. This figure is similar to that reported in other European studies [12,13,16,[36][37][38]. The prevalence of vascular nephropathy in the present study is also similar to that reported in other Spanish and Italian studies [13,16], but it seems slightly lower than the prevalence reported from the Netherlands [8] or France [9,10,12]. Our results support the findings of López Revuelta and colleagues [16] that the aetiology of chronic kidney disease in European incident haemodialysis populations is different from the aetiology among the incident population in the USA, where diabetes and hypertension account for >70% of cases, compared with <50% in Europe.
Due to the high mean age of the study population and the significant prevalence of comorbidities, the functional status was moderately affected, consistent with the findings of previous Spanish studies in the incident haemodialysis population [38]. Interestingly, the functional status of our patients is better than that of incident patients in the UK of similar mean age [39,40], but is similar to that of American patients, who were an average of 10 years younger [41]. The gender distribution in both the UK and US samples was different from ours (more males in the UK and US samples), but the worse functional status of the UK sample may be related to the higher proportion of unplanned initiation of haemodialysis in that population (44-47%) [39,40]. The QoL results revealed severely affected physical and mental health, similar to previous reports of Spanish incident haemodialysis patients [42,43].
Regarding the use of catheter as first vascular access, we found fewer shunts than reported in the DOPPS study for Spain. This may be attributed to the differences among recruiting facilities [44,45]. The fact that there are many more facilities participating in the ANSWER study (147 compared with 20 in the DOPPS) probably provides a more confident estimate of the real situation of vascular access in Spain. Furthermore, our results are in agreement with previous studies in Spain, in which between 46% and 51% of incident patients were found not to have permanent AVF access [46,47], and this proportion has remained stable during the past few years [48].
Although a minimum nephrological follow-up of 6 months prior to haemodialysis onset is recommended, late referral has been reported for about a quarter of Spanish incident patients, similar to previous findings from other European countries [9,49,50]. The high proportion of catheter use in the late referral group, also described in DOPPS [51], highlights the need for early referral as far as possible. A shorter time of nephrologist follow-up has been associated with higher mortality in haemodialysis patients independent of catheter vascular access [52], indicating the presence of other negative factors in these patients. The worse clinical status at the onset of haemodialysis in the late referral subgroup may contribute to this phenomenon [53,54].
With respect to kidney function at haemodialysis onset, the GFR was lower in our patients compared with reports from previous studies in Spain and other European countries [49,50,55], and 8 in 10 patients were below the limit of 10 ml/min recommended by the K/DOQI guidelines [56]. These data suggest a delayed onset of haemodialysis in our setting. Initiation of haemodialysis above this limit may prolong survival, according to the NECOSAD study [57]. Other haemodialysis quality indicators, such as the low initial eKt/V, also suggest inadequate haemodialysis onset, although almost half the patients were using high-flux membranes.
About 10% of the population had a history of neoplasia. This figure is somewhat higher than that reported for the Spanish DOPPS cohort (6%), but agrees with the 9% of EURO-DOPPS [36] prevalent patients and also with the 11% reported for the incident French population [10] (although that study considered only active neoplasia).
The antecedents of cardiovascular disease in our sample were, as expected, very common. The prevalence of ischaemic heart disease was similar to that in Italy, France and Sweden [11,14,15,58], but lower than that in the UK, Germany and the USA [58,35]. The prevalence of peripheral vascular disease was similar to that in Sweden and the USA [14,35], but lower than the prevalence in France, Germany, Italy and the UK [9,11,58]. The prevalence of heart failure was similar to that in France, Germany and Italy [9,58], but lower than the prevalence in the UK and the USA [58,35]. Finally, the prevalence of cerebrovascular disease was a little higher than or similar to that in Italy, France, Sweden and the USA [9,11,13,15,35]. The differences in these prevalences must be viewed with caution, as they may be related to different disease definitions or methods of collection.
With regard to the prevalence of classic cardiovascular risk factors, the majority of patients had uncontrolled hypertension, despite almost all patients receiving antihypertensive treatment. Diabetes affected one-third of our patients, similar to other European studies [12,15], and far from the one-half reported in the USA [35,59]. Glycaemic control was poor, and 1 in 5 patients were obese. Both the proportion of obese patients and the mean BMI were highly consistent with the findings from almost all previously described incident European and North American populations [7,15,32,35,50,59]. However, malnutrition was less prevalent in our sample than in the Netherlands or Sweden [7,15]. This discordance is probably due to an underestimation of malnutrition by Spanish physicians, as one-third of our patients had low albumin levels. The high prevalence of other emergent cardiovascular risk factors (hyperhomocysteinaemia, hyperfibrinogenaemia and elevated lipoprotein (a)) in our sample with respect to the general Spanish population [60] agrees with previous results in maintenance [61] and incident [10] haemodialysis patients. Since this is a cross-sectional analysis, the causal relationship between all these findings and cardiovascular status cannot be verified. Future data will produce more reliable results regarding the predictive value of the collected variables.
Anaemia-related target ranges, which are strongly predictive of reduced mortality in chronic kidney disease [2,6], were achieved by a relatively low percentage of patients. Despite almost 1 in 2 patients receiving ESAs prior to haemodialysis onset, >50% were anaemic, and more than one-third had iron deficiency, which suggests incorrect ESA administration and insufficient correction of iron stores. Less than 1 in 10 patients were within the K/DOQI targets for all four bone and mineral metabolism parameters, iPTH being the most uncontrolled. These findings correlate with those from the NECOSAD study [7]. Data from prevalent populations indicate that the degree of control of bone mineral disease is not better after the onset of haemodialysis [62]. Studies of the recently available therapies (calcimimetics, calcium-free P-chelating agents or new vitamin D analogues) may help to resolve this issue in the near future.
With regard to the classic cardiovascular risk factors, the expected associations with previous smoking, dyslipidaemia, hypertension and diabetes were observed in patients with a history of cardiovascular disease. However, the cholesterol levels were lower in the group with cardiovascular disease, which could be related to the worse nutritional status in those patients.
This cohort study has some limitations. The non-random (but consecutive) patient selection may have resulted in some selection bias. However, the extended inclusion period and the fact that the final sample represents more than half of all incident Spanish patients during this period [27,28] support the validity of the recruited cohort. In addition, as enrolment at each site was stratified according to the incidence of haemodialysis in a reference population, the 2341 patients in the study were considered to be representative of the target population in Spain.
For the purposes of international comparisons, our study confirms that there are important differences in the prevalence of cardiovascular and mortality risk factors, especially with respect to North American populations. Spain has an extremely high rate of renal transplantation (47% of patients on renal replacement therapy in 2004 [28] versus 29% in the USA [63]). This should be taken into account when comparing the prospective cardiovascular morbidity and mortality results in the future.
In summary, the ANSWER study provides valuable new data, thus adding to our knowledge of the characteristics of incident haemodialysis patients in Spain and Europe. Most patients present at an advanced age and have hypertension, diabetes and previous cardiovascular disease. Their functional status is moderately affected, considering the high mean age. Our results also show that, in the Spanish setting, haemodialysis is started too late and patients are referred too late to the nephrologist (late referral for 1 in 4 patients with diabetic and vascular nephropathy). Also, not enough effort was made to place a permanent AVF before haemodialysis onset in patients referred more than 6 months in advance, and such efforts must especially be made in older and diabetic patients.
This study has also revealed an extremely high prevalence of emergent and uraemia-related cardiovascular risk factors: poor glycaemic control, low HDL-C, hypertension, anaemia, malnutrition, hypo- and hyperparathyroidism, hyperphosphataemia and hypo- and hypercalcaemia. These results reflect the need for improving the therapeutic management of incident dialysis patients before the onset of haemodialysis.
"year": 2008,
"sha1": "f91aae4f140627864310233a094d680cfe3463d5",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.1093/ndt/gfn464",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "47e3c7f04d57eafcf3b0221f2ef8253bad67b74a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
17438410 | pes2o/s2orc | v3-fos-license | Identifying involvement of Lys251/Asp252 pair in electron transfer and associated proton transfer at the quinone reduction site of Rhodobacter capsulatus cytochrome bc1
Describing the dynamics of proton transfers in proteins is challenging, but crucial for understanding processes which use them for biological functions. In cytochrome bc1, one of the key enzymes of respiration or photosynthesis, proton transfers take part in the oxidation of quinol (QH2) and the reduction of quinone (Q) at two distinct catalytic sites. Here we evaluated by site-directed mutagenesis the contribution of the Lys251/Asp252 pair (bacterial numbering) to electron transfers and the associated proton uptake at the quinone reduction site (Qi site). We showed that the absence of a protonable group at position 251 or 252 significantly changes the equilibrium levels of electronic reactions, including the Qi-site-mediated oxidation of heme bH, the reverse reduction of heme bH by quinol and the heme bH/Qi semiquinone equilibrium. This implicates a role of the H-bonding network in binding of quinone/semiquinone and in defining the thermodynamic properties of the Q/SQ/QH2 triad. The Lys251/Asp252 proton path is disabled only when both protonable groups are removed. With just one protonable residue from this pair, the entrance of protons to the catalytic site is sustained, albeit at lower rates, indicating that protons can travel through parallel routes, possibly involving water molecules. This shows that proton paths display engineering tolerance for change as long as all the elements available for functional cooperation secure efficient proton delivery to the catalytic site.
Introduction
Proton translocation across energy conserving membrane is crucial for generation of proton motive force. In Peter Mitchell's redox loop mechanism, proton translocation is achieved by a functional coupling of two reactions: an oxidation of quinol with release of two protons at one side of the membrane and a reduction of quinone with uptake of two protons at the opposite side of the membrane [1][2][3]. The quinol oxidation and quinone reduction sites can be located in two separate enzymes (bacterial examples [4]), or they can be assembled within one enzyme. The latter case concerns cytochrome bc 1 , a key component of many photosynthetic and respiratory systems including mitochondrial respiration [5,6].
Cytochrome bc 1 is a functional dimer [7]. The quinol oxidation and quinone reduction sites are located within cytochrome b subunit, which together with cytochrome c 1 and iron-sulfur (ISP) subunit form the catalytic core of the monomer [8]. The quinol oxidation and quinone reduction sites are named the Q o and Q i sites, respectively. In the Q o site, the oxidation of quinol releases two protons to the intermembrane space. The electrons from this reaction are directed into two separate cofactor chains. The high potential c-chain transfers one electron to cytochrome c via iron-sulfur cluster [2Fe-2S], while the low potential b-chain delivers the second electron through hemes b L and b H to the Q i site. The sequential reduction of quinone to quinol through a semiquinone intermediate (SQ i ) is associated with an uptake of two protons from the mitochondrial matrix or cytoplasm [9,10]. It follows that a complete reduction of one quinone molecule at the Q i site requires oxidation of two quinol molecules at the Q o site. In addition, the electron transfer between two hemes b L is possible [7,[11][12][13]. This secures functional connection of the two Q o and two Q i sites in the dimer.
While the electron paths within cytochrome bc 1 are well defined, the proton paths are much less known. This is in part due to the lack of methods that can directly monitor proton transfers. While uncertainties related to proton transfers concern both the Q o and Q i sites, here we focus just on the Q i site. Before X-ray structures of cytochrome bc 1 were known, early site-directed mutagenesis successfully identified several key protonable residues associated with the operation of the Q i site [10,14,15]. However, the majority of models incorporating the protonation/deprotonation steps at this site were inferred from the inspection of X-ray structures [16][17][18]. Complementary studies based on electron paramagnetic resonance spectroscopy provided information on the paramagnetic semiquinone bound to the Q i site [19][20][21]. In addition, Poisson-Boltzmann electrostatic calculations described redox-linked protonation state changes for this site [22]. All these studies point towards several important polar residues (His217, Asp252, Lys251, Asn221 in bacterial numbering) that can potentially be involved in substrate binding (Q and SQ i ) and/or its protonation/deprotonation. Besides these amino acid side chains, cardiolipin (CL) was also postulated to facilitate proton transfers at the entry point from the protein exterior (dimer interface) to the Q i site. In this scenario, CL together with a neighboring lysine residue (Lys251) and water molecules can form the CL/K pathway delivering protons to the site [16,23,24].
Our recent MD simulation study [25] suggests that the role of Lys251 is more direct than the prior CL/K pathway hypothesis implied. After acquiring a proton from the dianionic CL head group, the positively charged Lys251 could rotate into the Q i site to form a salt bridge with the deprotonated and negatively charged Asp252 side chain. This fully bent Lys251 conformation, which is not seen in any substrate-bound X-ray crystal structures, results from semiquinone binding in the simulations, but pKa calculations indicate that the switch-like motion would be pH-dependent and possible even without a bound substrate at the Q i site.
The rotation of the Lys251 side chain implicates the possibility of a functional connection between Lys251 and Asp252 for proton transfers to the Q i site. In view of this new finding, we examined the consequences of replacing Lys251 and Asp252 with non-protonable residues for the functioning of cytochrome bc 1 in vivo and for the kinetics of electron and proton transfers. Comparative analysis of separate replacements of either the Lys251 or Asp252 side chain (single mutants) and simultaneous replacements of both side chains (double mutants) supports the idea that functional cooperation between Lys251 and Asp252 facilitates proton transfers to the Q i site. It also reveals a limited plasticity of this path, which accommodates the lack of one, but not both, of the protonable groups of the Lys251/Asp252 pair.
Mutant preparation
Rhodobacter (R.) capsulatus cells containing substitutions at positions 251 and 252 in the cytochrome b subunit were obtained using a genetic system originally developed by Dr. F. Daldal [26]. Mutations K251M, D252A and D252N were introduced into the cytochrome b gene using the QuikChange site-directed mutagenesis system (Stratagene) with mutagenic PCR primers. As template DNA, the pPET1 plasmid containing the wild-type (WT) petABC operon was used. The BstXI-XmaI fragment of the operon containing the desired mutation, and no other mutations, was inserted into the pMTS1 vector and introduced into the MT-RBC1 R. capsulatus strain using triparental crossing [26]. The presence of the introduced mutations was confirmed by sequence analysis of the petB gene on plasmids isolated from the mutated R. capsulatus strains. R. capsulatus bacteria were grown under semiaerobic or photoheterotrophic conditions as described previously [27]. To test for the occurrence of reversion mutations, 100 μl of a 1 l overnight liquid culture of the mutant strains was spread on mineral-peptone-yeast extract (MPYE) plates and kept under selective photosynthetic conditions for 12 days. Single colonies that acquired the Ps+ phenotype (photosynthetic competence) were isolated, and reversion mutations were identified by sequencing the entire petABC operon.
Isolation of chromatophores and protein purification
The procedure described previously [28] was used to obtain chromatophore membranes from R. capsulatus cells grown under semiaerobic conditions. After isolation, chromatophores were homogenized and suspended in MOPS pH 7.0 or Tris pH 9.0 buffer (for light-induced electron transfer measurements) or in 50 mM Tris pH 8.0 buffer containing 100 mM NaCl, 0.01% DDM and 20% glycerol (for protein purification). Cytochrome bc 1 complexes were isolated from detergent-solubilized chromatophores using ion-exchange chromatography (DEAE-BioGel A) as described [28].
Light-induced electron transfer measurements
A double-wavelength time-resolved optical spectrophotometer [29] was used to measure the kinetics of electron transfer through the hemes of cytochrome bc 1 in chromatophores. Transient kinetics of hemes b were measured at 560-570 nm after activation by a single saturating flash (~10 μs). Measurements were performed at pH 7.0 (50 mM MOPS, 100 mM KCl, 1 mM EDTA) or pH 9.0 (50 mM Tris, 100 mM KCl, 1 mM EDTA) under conditions of low (100 mV) or high (200 mV, 250 mV) ambient redox potential. Experiments were performed under anaerobic conditions in the presence of redox mediators and valinomycin as described in [29], except for the carotenoid bandshift measurements, for which valinomycin was omitted. The rates of the flash-induced electron transfer reactions were calculated from a single-exponential function fitted to: heme b H reduction in the presence of antimycin; heme b H re-oxidation without inhibitors; and heme b H reduction via the reverse reaction in the presence of myxothiazol (Table 1).
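A minimal sketch of this rate extraction, assuming a simple rising exponential; the synthetic trace below stands in for the 560-570 nm data, and lsqcurvefit requires the Optimization Toolbox:

```matlab
% Rate extraction by single-exponential fit, y(t) = A*(1 - exp(-k*t)) + y0,
% applied to a synthetic flash-induced heme bH reduction transient.
t = (0:0.05:30)';                                      % time after flash [ms]
y = 0.8*(1 - exp(-0.4*t)) + 0.02*randn(size(t));       % dummy transient

model = @(p, t) p(1)*(1 - exp(-p(2)*t)) + p(3);        % p = [A, k, y0]
p0    = [max(y), 0.1, 0];                              % initial guess
p     = lsqcurvefit(model, p0, t, y);                  % least-squares fit

fprintf('fitted rate constant k = %.3f /ms (t_half = %.2f ms)\n', ...
        p(2), log(2)/p(2));
```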
EPR measurements of semiquinone
CW EPR spectra of semiquinone were obtained for isolated cytochrome bc 1 complexes. Samples of WT and mutants were measured at 200 K in 50 mM Tris buffer pH 8.0 containing 100 mM KCl, 0.01% DDM and 1 mM EDTA. All spectra were obtained using the following parameters: microwave frequency -9.39 GHz, sweep width -180 G, modulation amplitude -10 G, microwave power -1.9 mW. Semiquinone was generated in samples by incubation of 50 μM cytochrome bc 1 with myxothiazol (Q o site inhibitor) and subsequent addition of 2,3-dimethoxy-5-methyl-6-decyl-1,4-benzohydroquinone (DBH 2 ) as a substrate. The negative control was obtained by addition of antimycin (Q i site inhibitor) to samples treated previously with myxothiazol and DBH 2 . Both DBH 2 and myxothiazol were used at final concentration of 200 μM while antimycin was used at 400 μM. Quantitative EPR analysis of the semiquinone was performed using 4-Hydroxy-TEMPO (TEMPOL) as a standard as described in [30]. To obtain the calibration curve, TEMPOL was measured under the same buffer, temperature and EPR parameters conditions as those used for SQ i measurements.
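A sketch of the double-integration and TEMPOL-calibration step described above; the spectrum and the standard concentrations are placeholders, not measured data:

```matlab
% Spin quantitation: double-integrate the first-derivative CW EPR spectrum
% and read the spin concentration off a TEMPOL calibration line (dummy data).
B  = linspace(3300, 3480, 1024);             % magnetic field axis [G]
dI = gradient(exp(-((B - 3390)/8).^2), B);   % mock first-derivative signal

area = trapz(B, cumtrapz(B, dI));            % double integral of the signal

% Calibration: double-integral areas of TEMPOL standards of known concentration
tempol_conc = [10 25 50 100];                % [uM], placeholder standards
tempol_area = [0.9 2.2 4.6 9.1];             % measured areas (dummy values)
fit_coeff   = polyfit(tempol_area, tempol_conc, 1);  % linear calibration
sq_conc     = polyval(fit_coeff, area);      % semiquinone concentration [uM]
```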
General biochemical and phenotypic properties of mutants of D252 and K251
Conclusions drawn from MD simulations described by Postila et al. [25] and other studies [10,18,19] point to four important side chains in SQ binding: Lys251, Asp252, Asn221 and His217 (Fig. 1B). From these we chose Lys251 and Asp252 for experimental testing through site-directed mutagenesis. For this purpose we constructed three single mutants, K251M, D252A and D252N, and two double mutants, K251M/D252A and K251M/D252N. The rationale behind the substitutions of Lys to Met and Asp to Asn was to change the protonable side chains into non-protonable ones with minimal structural distortion. The substitution of Asp to Ala also tested the removal of the protonable group with, possibly, additional structural effects. The properties of these mutants and the most insightful kinetic data are summarized in Table 1 and discussed below.

The electrophoretic analysis of the isolated complexes indicated that in all cases the mutant cells expressed cytochrome bc 1 with all three catalytic subunits (SDS-PAGE profiles showed the presence of three bands corresponding to cyt c 1 , cyt b and the FeS subunit). The difference optical spectra of all mutated complexes in the isolated form were similar to that of the native complex. The ability to grow under photosynthetic (Ps) conditions, which tests the functionality of cytochrome bc 1 in vivo [12,26,31,32], indicated that among the mutants only K251M showed a Ps+ growth rate comparable to WT (Table 1). D252A showed very weak Ps growth, indicating severe functional impediment. The Ps growth of D252N was better than that of D252A, but still less robust than that of WT. Neither double mutant grew under photosynthetic conditions, indicating that cytochrome bc 1 is not functional in vivo (Table 1).
Incubation of D252A under photosynthetic conditions allowed us to isolate single colonies that exhibited faster Ps growth than the original D252A. DNA sequence analysis of these cells revealed that Ala at position 252 was replaced by Glu. In addition, the reversions are summarized in Table 1.

Table 1 legend: +++, Ps growth comparable to WT; ++, Ps growth slower than WT (colonies appear on Ps plates with approximately one day of delay compared with WT); −(+), very weak Ps growth (small colonies appear with approximately five days of delay compared with WT); (b) nd, not determined.
Kinetics of light-induced electron transfer
To assay the Q i site function in the mutants, we analyzed the rates and amplitudes of light-induced electron transfer in chromatophore membranes under various redox conditions, in the absence or presence of inhibitors specifically inactivating the Q o or Q i sites [29,33,34]. Kinetic transients shown in Fig. 2 compare redox changes of heme b H (measured at 560-570 nm) under an ambient redox potential that sets hemes b oxidized and the quinone pool half-reduced prior to flash activation. Under these conditions, heme b H in the native enzyme undergoes light-induced reduction followed by re-oxidation (Fig. 2A, black trace). The reduction phase is associated with the oxidation of quinol at the Q o site. The re-oxidation phase occurs through the action of the Q i site (reduction of quinone to semiquinone and then semiquinone to quinol) and is blocked by antimycin, a potent inhibitor of this site (Fig. 2A, red trace) [35]. In the presence of both antimycin and myxothiazol (an inhibitor of the Q o site [36]) the enzyme is fully blocked and changes in the redox state of heme b H do not occur (Fig. 2A, blue trace). The kinetic transients shown in Fig. 2 indicate that the mutants do not impede the reduction phase observed in the presence of antimycin (red traces in Fig. 2, and rates in Table 1). However, the re-oxidation phase observed in the absence of any inhibitor is clearly slowed down or blocked (Fig. 2, black traces, and rates in Table 1). Among the single mutants, D252A and D252N showed an approximately six-fold decrease in the rate of this phase compared with WT, while in K251M the slowing was less severe (it did not exceed two-fold). In the double mutants (K251M/D252A, K251M/D252N), re-oxidation of hemes b did not occur on a millisecond timescale (Table 1).
Kinetic transients shown in Fig. 3 compare redox changes of heme b H under an ambient redox potential that sets hemes b and the quinone pool oxidized prior to flash activation. Under these conditions, the amount of quinol molecules after flash activation is limited and only approximately one quinol is oxidized in every Q o site. This leads to reduction of heme b H , which equilibrates with the occupant of the Q i site. This equilibration is reflected in the difference in amplitudes of heme b H reduction in the absence and presence of antimycin (black and red traces, respectively). While the reduction rates in the presence of antimycin in all mutants are similar and comparable to WT (Table 1), the level of heme b H reduction in the absence of any inhibitors is elevated in the mutants. In the single mutants (K251M, D252A, D252N), this level approaches approximately 70% of the maximum reduction level (seen in the presence of antimycin); in the double mutants, it reaches the maximum reduction level (the amplitudes of the black and red traces are comparable).
Kinetic transients shown in Fig. 4 (blue traces) monitor the electron transfer from QH 2 to heme b H at the Q i site (reverse reaction) under conditions where the Q o site is blocked by myxothiazol and the reducing power of the Q pool is increased (by increasing pH). Reduction of heme b H under these conditions is not observed on a millisecond time scale in D252A or in either double mutant. In D252N this reaction is 70 times slower than in WT (see the rates in Table 1). In K251M, the slowing of the rate is not as severe as in D252N (5 times). At the same time, the amplitude of reverse heme b H reduction in K251M is much higher and, unlike in WT, exceeds the amplitude of heme b reduction in the absence of inhibitors (compare blue vs black traces in WT and K251M).
Monitoring electrogenic reactions associated with cytochrome bc 1
To get information on proton uptake from the bulk solution to the Q i site, we conducted a series of measurements of electrogenic reactions associated with the operation of cytochrome bc 1 by following the antimycin-sensitive phase of the carotenoid bandshift (Fig. 5 and Table 1) [37,38]. In K251M this phase is comparable to WT. D252A and D252N show a decrease in the amplitude of this phase, which in D252A additionally has a clearly slower rate. In contrast to the single mutants, both double mutants (K251M/D252A, K251M/D252N) do not reveal an antimycin-sensitive phase of the carotenoid bandshift.
Testing the SQ i levels by EPR
Semiquinone in the Q i site is observed by EPR as an antimycin-sensitive radical signal with a g x transition at 2.004 (Fig. 6). Typically, the signal is generated in samples of isolated cytochrome bc 1 exposed to an excess of quinol in the presence of myxothiazol. These conditions favor the reverse reaction in the Q i site, in which reduction of heme b H by QH 2 leads to formation of stable SQ i [9,19,[39][40][41]]. Fig. 6 shows that under these conditions (and with comparable concentrations of cytochrome bc 1 ) a clear SQ i signal can be observed only in WT and D252N (Fig. 6A, C).
Experimental evidence for involvement of Lys251 and Asp252 in electron/proton reactions in the Q i site
The roles of Lys251 and Asp252 in proton management of the Q i site, suggested by MD simulations [25], are supported by the effects of the mutations observed here and in previous studies [16,18,19]. The results consistently indicate that mutating Lys251 and/or Asp252 alters the operation of the Q i site without much influence on the Q o site.
That the Q o site was unaffected was inferred from the small influence of the mutations on the rates of Q o site-mediated heme b H reduction (Figs. 2-3, red traces). The influence of the mutations on the Q i site was revealed by various changes in both the electron transfer reactions associated with redox reactions of the Q i site and cytochrome bc 1 -related proton translocation. The observation that the rate of re-oxidation of heme b H (Fig. 2, black traces) was slowed down (single mutants) or blocked (double mutants) indicates impediments in the electron and proton reactions that involve a first electron transfer from heme b H to Q and a subsequent electron transfer from heme b H to SQ to complete Q reduction.
A similar slowing of the re-oxidation of heme b H was observed in the K251M mutant of R. sphaeroides, but not in the other mutant at this position (K251I), for which kinetics comparable to WT were reported [14]. The two mutants of Asp252 (D252A and D252N) in that species exhibited a lack of heme b H re-oxidation in the light-induced kinetics in the absence of inhibitors [14]. This was clearly a more severe impediment compared with the respective mutants shown here. The redox equilibrium between heme b H and Q or SQ was shifted in the mutants towards reduction of heme b H in comparison to WT (Fig. 3, black vs red traces), indicating that heme b H in the mutants has difficulty delivering an electron to the quinone occupying the Q i site. This effect is apparently not a result of a change in the redox midpoint potential (E m ) of heme b H , given the values of E m determined by redox potentiometry (Table 1). These changes of equilibrium are also evident from the measurements of reverse reactions at the Q i site, associated with electron transfer from quinol to oxidized heme b H (Fig. 4).
For all these mutants, proton uptake from the bulk solution to the Q i site was inferred from measurements of the blue shift of the absorption spectra of carotenoids (carotenoid bandshift) upon generation of the transmembrane electric field. The antimycin-sensitive phase of the carotenoid bandshift is associated with the action of the cytochrome bc 1 complex. Considering previous studies [38,[42][43][44] and our results, we assume that this phase reflects the reactions associated with the uptake of two protons from the aqueous phase into the Q i site after full quinone reduction is completed. This concerns protonation of the oxygen atoms at both the C-1 (through the K251/D252 path) and C-4 (through the H217 path) carbonyl groups of the reduced quinone.
In light of this assumption, the diminished amplitude of the carotenoid bandshift phase in D252A and D252N, and the additional slowing in D252A, reflect an overall difficulty in the uptake of protons to the Q i site, while the elimination of this phase in the double mutants indicates a much more severe block of this process. The single K251M mutation does not influence proton uptake much, as indicated by the similar rate and amplitude of the carotenoid bandshift phase in this mutant (compared with WT). The mutants of Asp252 in R. sphaeroides also affected this phase: D252N showed a slowing, with diminished amplitude, while in D252A this phase was abolished. K251M showed a slower phase without amplitude change. In all three cases, the changes in the carotenoid bandshift appear to be more severe in R. sphaeroides than in the respective mutants shown here [14]. They, however, seem to reflect the same phenomenon: perturbed proton transfer to the Q i site.
This, in view of the electron transfer measurements, MD simulations and crystal structure data, is most likely associated with a hampered K251/D252 path affecting protonation of the quinone C-1 carbonyl. The role of His217 in C-4 carbonyl protonation is inferred from previous studies, which showed that replacing His217 with Asp or Arg yielded enzymatically active complexes functional in vivo, whereas replacement with Leu deactivated the enzyme, leading to loss of its functional competence in vivo [10]. Interestingly, H217L fully abolished the antimycin-sensitive phase of the carotenoid bandshift, similarly to the effects of the double mutants reported here. Thus, the lack of this phase in H217L or the double mutants suggests that blocking just one proton path (either the K251/D252 path or the H217 path) eliminates proton uptake through both paths, implicating a functional coupling (connection) between them.
We note that, when this and other mutational works, including [9,14], are considered, there is a correlation between the occurrence of the antimycin-sensitive carotenoid bandshift phase and the functionality of cytochrome bc 1 in vivo: only mutants that show this phase at measurable rates and amplitudes are able to grow photosynthetically. This is understandable if one considers that the efficiency of proton transfers ultimately defines the proton-motive-force-generating capacity of the enzyme in vivo. This further substantiates the notion that this phase reflects the uptake of protons from the aqueous phase into the Q i site.
An additional indication of the involvement of D252 in proton transfer came from the observation that the barely functional D252A and the non-functional K251M/D252A and K251M/D252N mutants regained functionality upon restoration of a protonable group (either E or D) at position 252 (Table 1).
The role of H-bonding network in binding of quinone/semiquinone and defining thermodynamic properties of Q/SQ/QH 2 triad
Considering all the kinetic traces shown in Figs. 2-4, the data from the measurements of the carotenoid bandshift (Fig. 5) and the EPR data on SQ i (Fig. 6), we may draw general conclusions on the influence of the mutations on the equilibrium of electron transfer and the associated protonation/deprotonation within the Q i site. The most obvious results are found for the double mutants, for which the mechanistic picture is rather simple. Removal of the two important protonable side chains within the Q i site exerts a synergistic effect on both electron transfer (there is neither Q/SQ reduction in the forward mode (Fig. 2E, F), nor QH 2 oxidation via the reverse reaction (Fig. 4E, F), nor detectable SQ i (Fig. 6E, F)) and proton transfer (no observable cytochrome bc 1 -mediated proton transfer from outside the protein to the Q i site (Fig. 5E, F)). All these effects could result from a lack of, or improper, binding of substrate at the site.
More complex effects are associated with the single replacements of either K251 or D252 with non-protonable amino acids. Although the reactions associated with electron transfer between Q or QH 2 and heme b H are generally similar for K251M, D252A and D252N, we notice some differences that result from the different effects of Lys and Asp on Q/SQ/QH 2 binding and on proton transfer between the protein interior and exterior. The sharpest differences between the Lys and Asp mutants become visible when analyzing traces in which, in principle, only one-electron reactions are involved. It is clear that when Q is awaiting an electron from heme b H , in all three mutants (K251M, D252A and D252N) the electron is mostly retained at the level of heme b H , as if the potential of the Q/SQ couple were lowered. For K251M, this may reflect a higher degree of deprotonation of the Asp carboxyl group, which cannot be stabilized by interaction with the amine group of Lys, leading to destabilization (weaker binding) of Q or SQ within the Q i site. This destabilization seems to be even more severe for the mutants having Asp replaced with non-protonable residues (D252A and D252N), for which there is no direct partner that can deliver a proton to quinone or semiquinone and stabilize its binding.
Interestingly, when considering the reverse reaction (QH 2 oxidation by heme b H at the Q i site), the differences between the mutants shed light on the proton reactions associated with the SQ/QH 2 couple. A lack of QH 2 oxidation in the D252A mutant indicates that deprotonation of QH 2 is blocked when the direct proton exchanger (Asp) is replaced by a hydrophobic residue. As a result, semiquinone at the Q i site cannot be effectively formed (Fig. 6B), nor is detectable heme b H reduction observed (Fig. 4B), even though a proton path from the site to the bulk still exists (with the help of Lys251). The D252N mutant encounters a similar difficulty, yet the reverse reaction still proceeds, albeit at a very slow rate compared with WT. In contrast to Ala in D252A, the polar Asn does not repel water molecules from the vicinity of the quinone. These water molecules may partially compensate for the missing COO− group of Asp; however, they are not as efficient in proton exchange as the K251/D252 pair. Thus, the reverse reaction leads to the reduction of heme b H . This reaction is two orders of magnitude slower than in WT but proceeds to a higher level (Fig. 4C, Table 1). Correspondingly, a clear EPR signal of SQ i can be detected in this mutant, although its amplitude is lower than in WT (Fig. 6C). In K251M, unlike in D252A or D252N, the efficiency of the reverse reaction is unexpectedly high, exceeding the level in WT, as if the interior of the protein were much more alkaline. To explain this, we assume that the amine group of Lys251 in WT stabilizes the "proper" protonation of the Asp carboxyl group, and removal of the amine group in the mutant promotes fast deprotonation of SQ/QH 2 within the site. Consequently, protons from QH 2 are sequentially removed with the help of Asp, and full deprotonation then promotes the transfer of two electrons to the b-chain, yielding a high level of reduced hemes b. This apparent lowering of the redox potential of the QH 2 /SQ/Q triad, induced by very efficient deprotonation, leads to the disappearance of the semiquinone EPR signal (Fig. 6D) because, upon the reverse reaction, the Q i site is overwhelmingly occupied by Q rather than by QH 2 or SQ.
In summary, the changes in electron transfer drawn from the reverse reactions, associated with different deprotonation reactions, allow us to build a general picture of the possible equilibration states of the Q i -site occupant and heme b H (Fig. 7). The single mutant D252A and the double mutants K251M/D252A and K251M/D252N show neither a semiquinone signal nor reduced heme b H , as the impaired deprotonation of QH 2 prevents any efficient reactions at the site. In WT, the Asp252 side chain, interacting with K251, allows deprotonation of QH 2 , promoting generation of a relatively high level of SQ and a moderate level of heme b H reduction. It can be envisaged that in this case the amount of QH 2 oxidized to SQ equals the amount of reduced heme b H . In D252N the deprotonation is even more efficient than in WT; however, this is not associated with an elevated level of SQ. This is simply because the electronic equilibrium is shifted from SQ to heme b H , yielding a lower amplitude of SQ and a higher level of reduced heme b H . In this case more than one electron from QH 2 is transferred to the b-chain. In K251M, two protons are removed from the vicinity of the bound QH 2 or SQ, which leads to the most efficient reverse reaction: two electrons from QH 2 eventually go to the b-chain. Thus, in equilibrium the Q i site is occupied by Q instead of SQ, while the level of reduced heme b H is the highest among the tested cytochrome bc 1 forms.
Parallel routes for proton transfer to the Q i site
In several studies, Lys251 and Asp252 have been considered good candidates for residues securing proton delivery from the peripheral CL to the C-1 carbonyl of quinone [16,[18][19][20]22,24]. The possible cooperation of these two residues in proton transfer became most evident in recent MD simulations, which demonstrated that the side chain of Lys251 can rotate from the periphery of the complex towards the Q i site, where formation of a salt bridge with the side chain of Asp252 is possible. In view of this observation, the most obvious scenario leading to protonation of the C-1 carbonyl of quinone involves a sequential protonation of Lys251 and Asp252, as described in detail by Postila et al. [25].
We emphasize, however, that in light of the experimental results, any scenario assuming a sequential mechanism of proton transfer involving Lys251 and Asp252 should be considered a possible, but certainly not the unique, path available for protons to enter the Q i site. Alternative pathway(s) omitting either Lys251 or Asp252 must exist in the single mutants having non-protonable side chains at either of these positions (K251M or D252N), as these mutants still retain much of the electron and proton transfer capability and remain functional in vivo. This could be the result of other protonable group(s), possibly water molecules, taking over the function of the original side chains that are missing in the mutants, or it could reflect the natural existence of parallel (multiple) paths for protons in the native protein [45]. The latter explanation is quite reasonable in light of the multiplicity of proton paths considered in the case of other quinone binding sites, such as the Q B site of the photosynthetic reaction center [46][47][48]. However, the double mutants show that the simultaneous presence of non-protonable side chains at both positions (K251M/D252A, K251M/D252N) effectively deactivates proton entry to the Q i site, which yields mutants that are non-functional in vivo with a fully inactive Q i site. This indicates that at least one protonable side chain at either position 251 or 252 must be present. In addition, in R. sphaeroides it was observed that the inversion of charges at positions 251 and 252 (double mutant K251D/D252K) had little effect on enzymatic activity and did not affect the function of the enzyme in vivo [49]. This all indicates that the proton paths in this system display an engineering tolerance for change, as long as the elements available for functional cooperation secure efficient proton delivery to the catalytic site.
Transparency document
The Transparency document associated with this article can be found in the online version.
"year": 2016,
"sha1": "fa74e477a66959230c2faa27e15cc6ac3810b03d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.bbabio.2016.07.003",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "fa74e477a66959230c2faa27e15cc6ac3810b03d",
"s2fieldsofstudy": [
"Chemistry",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
Realized Measures to Explain Volatility Changes over Time
We studied (i) the volatility feedback effect, defined as the relationship between contemporaneous returns and market-based volatility, and (ii) the leverage effect, defined as the relationship between lagged returns and current market-based volatility. For our analysis, we used daily measures of volatility estimated from high frequency data to explain volatility changes over time for both the S&P500 and FTSE100 indices. The period of analysis spanned from January 2000 to June 2017, incorporating various market phases, such as booms and crashes. Based on the estimated regressions, we found evidence that the returns of the S&P500 and FTSE100 indices are well explained by a specific group of realized measure estimators, and that returns negatively affect realized volatility. These results are directly relevant to financial analysts dealing with high frequency data and volatility modelling. JEL Classification: F65; G11; G12; G24
Introduction
Asset pricing in the financial literature is predicated on the importance of volatility in asset returns. Hence, financial market volatility is a key factor in everything from investment decisions to derivatives pricing and financial market regulation (Poon and Granger 2003). Volatility, as a measure of market risk, is always an anxiety-generating factor for market participants, not only in terms of its nature but also its level (Giot et al. 2010). Admittedly, volatility is an unobservable variable and its effects on financial markets are hard to anticipate. This is one of the reasons that asset return volatilities (henceforth, volatility) are of utmost importance to empirical finance. In other words, volatility remains a key ingredient in assessing asset or portfolio risk, playing an important role in asset pricing models, which depend heavily on underlying asset return dynamics. Asset management and asset pricing models require proper volatility modeling of financial assets.
The existing literature has laid emphasis on time-varying volatility, suggesting various tools for its modeling (see Andersen and Bollerslev 1998; Andersen et al. 2001a; Barndorff-Nielsen and Shephard 2002, 2004; Andersen et al. 2007; Barndorff-Nielsen et al. 2008; Hansen and Horel 2009; Meddahi et al. 2011; Alghalith et al. 2020; among others). Andersen and Bollerslev (1998) showed that ARCH models produce strikingly accurate inter-daily forecasts of latent volatility in most financial applications. Andersen et al. (2001a) found evidence that realized volatilities and correlations move together in a manner broadly consistent with a latent factor structure.

To give full answers to the above questions, we empirically examine the magnitude and the sign of the linear relationship between market-based volatility and contemporaneous returns, the so-called volatility feedback effect, for both the S&P500 and FTSE100. Consequently, we study (1) the existence of a positive relationship between market-based volatility and contemporaneous returns and (2) the existence of a negative relationship between lagged returns and volatility. According to Carr and Wu (2017), "this effect can show up as a negative correlation between the index return and its volatility, regardless of the market's financial leverage level". Furthermore, we test the relationship between lagged returns and the current market-based volatility, the so-called leverage effect. We refer to the study by Zumbach (2013) for a detailed analysis of the leverage effect. The literature often refers to a negative relation between equity return and return volatility as a "leverage effect" (Carr and Wu 2017).
In this paper, we provide empirical results from intraday data to derive a negative relation between returns and volatilities; to the best of our knowledge, this is the first study that considers several realized measures to examine these hypotheses. Asset management and asset pricing models require proper volatility modeling of financial assets. High frequency data contain more information about the market, such as intraday changes and the market microstructure, making realized measures more accurate than those constructed at lower frequencies (see Hansen and Huang 2016). For modelling volatility with high-frequency data, see Degiannakis and Floros (2015). However, market participants face problems when modelling volatility using intraday data. They should choose the correct sampling frequency as well as whether to sample prices in calendar time (every n seconds) or tick time (every m trades). (The relation between asset returns and expected volatility has motivated research on volatility forecast models using several techniques; see Xu 1999; Bali and Theodossiou 2007; Wang and Zhu 2010; Kiliç and Ugur 2018; Shang et al. 2018; among others.) Nevertheless, when both quotation and transaction prices are available, the question of which price to use arises too. What is more, some realized measures require choices of tuning parameters, such as the kernel bandwidth. All realized measures are based on minute-by-minute intraday data of the S&P500 (US) and FTSE100 (UK) indices.
For the analysis, we examined a large number of realized measures for the S&P500 and FTSE100 indices; we considered thirteen realized estimators from three classes, and we applied these to 17 years of data. Our objective was to compare a large number of available realized measures to examine the volatility feedback and leverage effects, providing a comparison of realized measures in environments with varied price processes and market microstructures. The existing literature shows that realized measures estimated from high frequency data are the most appropriate tools to deal with good and bad volatility (see Floros 2015, 2016). The importance of examining volatility also emerges from upside and downside volatility associated with positive and negative price increments (see Bollerslev et al. 2019). In volatility modeling, it is necessary to decompose volatility into two parts: the directional, persistent component (good volatility) and the jumpy, relatively hard-to-anticipate component (bad volatility; see Giot et al. 2010). As a consequence, many estimators of asset return volatility constructed using high-frequency price data have been suggested by Andersen and Bollerslev (1998), Andersen et al. (2001a), Barndorff-Nielsen and Shephard (2002, 2004), Andersen et al. (2007), Barndorff-Nielsen et al. (2008), Hansen and Horel (2009), and Meddahi et al. (2011), among others; these include realized variance, realized bi-power variation, and median realized variance. In this study, we also considered a simple realized variance estimator with a reasonable choice of sampling frequency, namely the 5-min or 10-min realized variance, and tested whether it was a "good" estimator, following the recent literature (see Gkillas et al. 2019b). Although both frequencies are "good" for constructing realized estimators, Liu et al. (2015) found that the adequate balance between high and low sampling frequencies varies across different assets. However, the comparison of estimation accuracy shows that "it is difficult to significantly beat RV 5 estimator" (Liu et al. 2015); in this paper we also used the realized bi-power variation and median realized variance to deal with good volatility. On the other hand, Low et al. (2016) found that, by using copula-based models, several mean-variance-based rules exhibit statistically significant and superior performance improvements. For superior portfolio optimization performance, they follow Low (2018) by applying the Clayton canonical vine copula. For a pairs trading strategy, see Rad et al. (2016).
The results for the S&P500 index show that eight estimators explain approximately 72% of the returns of the S&P500 index. As for the FTSE100 index, six estimators explain approximately 74% of its returns. Also, for both indices, returns negatively affect realized volatility. Such findings are recommended to high-frequency financial analysts and volatility modelers. The findings are robust, based not only on several realized measures estimated from high frequency data but also on different sub-samples, providing evidence that realized measures have a significant impact on returns and vice versa. This paper is organized as follows: Section 2 details the data selection and introduces the realized measures used in this study. Section 3 introduces the hypotheses and discusses the estimation results. Section 4 discusses further practical implications of the results. Section 5 concludes by summarizing the results of this study.
Data and Methodology
In this section, we present the methodology used to compute the realized measures and describe the data and data adjustments.
Realized Measures
We first provide some theoretical considerations on the realized volatility estimators. Second, we present the methodology for constructing realized measures and classify the realized measures considered in this study into three classes.

Theoretical Considerations

Realized measures are theoretically sound, high frequency, nonparametric estimators of the variation of the price path of an asset during the times at which the asset trades frequently on an index. The background to realized measures can be found in the survey articles by McAleer and Medeiros (2008) and Barndorff-Nielsen and Shephard (2006).
In this section, we present the theory of quadratic variation (QV). For a discrete price process, the volatility at the given time of trade t can be estimated by the quadratic variation (QV) as:

QV_t = \int_{t-1}^{t} \sigma_s^2 \, ds + \sum_{t-1 < s \le t} \kappa_s^2,

where \int_{t-1}^{t} \sigma_s^2 \, ds denotes the continuous variation and \sum_{t-1 < s \le t} \kappa_s^2 denotes the discontinuous (jump) variation. The QV is considered a biased estimator of integrated volatility.
Under weak regularity conditions and as N \to \infty, realized volatility RV is a consistent estimator of the quadratic variation QV_t:

RV_t \xrightarrow{p} QV_t.

Following Barndorff-Nielsen and Shephard (2004, 2006), the quadratic variation is separated into its continuous and discontinuous components. The continuous sample path variation \int_{t-1}^{t} \sigma_s^2 \, ds can be estimated by the realized bi-power variation (BPV) or by the median realized variance (MRV). The difference between realized variance and realized bi-power variation estimates the discontinuous component \sum_{t-1 < s \le t} \kappa_s^2 of the quadratic variation (Barndorff-Nielsen and Shephard 2004).
Classes of Realized Measures
The first class includes the realized variance (RV) measures constructed as the sum of squared intraday returns, introduced by Andersen et al. (2001b). Under certain conditions on the market microstructure noise, these estimators are consistent at the optimal rate (Liu et al. 2015). The realized variance is given by:

RV_t = \sum_{i=1}^{N} r_{i,t}^2.

The RV estimator is a consistent nonparametric estimator (see Barndorff-Nielsen and Shephard 2002); however, Hansen and Lunde (2006) concluded that the RV estimator is biased at high sampling frequencies. This class also includes the two-scale realized variance (TSRV) and the multi-scale realized variance (MSRV), introduced by Zhang et al. (2005) and Zhang (2006), respectively. These estimators combine subsampled RV at a slower sampling frequency with RV at a higher sampling frequency.
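As a minimal illustration of the estimator above, the following sketch computes RV_t from one day of intraday log prices; the variable names are ours and the price grid is assumed to be already aligned to the chosen sampling frequency.

```python
import numpy as np

def realized_variance(log_prices):
    """RV_t: sum of squared intraday log returns for one trading day."""
    r = np.diff(np.asarray(log_prices))  # intraday returns r_{i,t}
    return np.sum(r ** 2)
```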
To capture the sign asymmetry of the price process, downside and upside semi-variances (realized volatilities) are constructed. Following Barndorff-Nielsen et al. (2010), the downside realized semi-variance (RSV^D_t) can be defined as:

RSV_t^D = \sum_{i=1}^{N} r_{i,t}^2 \, \mathbf{1}(r_{i,t} < 0).

(Our description is compact because the details of the realized methodology used in this study have been laid out in recent contributions by Demirer et al. (2019) and Gkillas et al. (2019a, 2019b, 2020a, 2020b, 2020c).) The second class is a generalization of the realized volatility RV. It includes the realized kernel (RK) estimators introduced by Barndorff-Nielsen et al. (2008). This class includes three realized measures using several different kernels: first, the "two-scale" Bartlett (RK Barlet) estimator; second, the modified Tukey-Hanning kernel (RK TH2) estimator; and third, the "non-flat" Parzen (RK Parzen) estimator. All these estimators accommodate a wider variety of microstructure effects (i.e., noise), leading to consistent estimation.
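A minimal sketch of the semi-variance decomposition just defined; the function names are ours, and the upside version is shown only to make the identity RSV^D + RSV^U = RV explicit.

```python
import numpy as np

def downside_semivariance(r):
    """RSV^D: sum of squared negative intraday returns
    (Barndorff-Nielsen et al. 2010)."""
    r = np.asarray(r)
    return np.sum(r[r < 0] ** 2)

def upside_semivariance(r):
    """RSV^U: complementary sum over positive returns; RSV^D + RSV^U = RV."""
    r = np.asarray(r)
    return np.sum(r[r > 0] ** 2)
```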
Following Barndorff-Nielsen et al. (2008), realized measures using several different kernels can be defined as:

RK_t = \sum_{h=-H}^{H} k\!\left(\frac{h}{H+1}\right) \gamma_h, \qquad \gamma_h = \sum_{j=|h|+1}^{n} r_{t,j} \, r_{t,j-|h|},

where k(x) is a kernel weight function such that k(0) = 1 and k(1) = 0, and the optimal bandwidth parameter H is estimated using the procedure of Barndorff-Nielsen et al. (2009). The realized kernel measure is guaranteed to be non-negative, which is quite important, as some of our time series methods rely on this property. The first special case, the so-called flat-top Bartlett kernel with k(x) = 1 - x, is particularly interesting as its asymptotic distribution is the same as that of the two-scale estimator. The second special case is the so-called Tukey-Hanning kernel with k(x) = \sin^2\!\left(\frac{\pi}{2}(1 - x)^p\right).

The third class comprises the jump-robust realized measures: the bi-power variation (BPV) introduced by Barndorff-Nielsen and Shephard (2006), which is a scaled sum of products of adjacent absolute returns, and the median realized variance (MedRV) estimator introduced by Andersen et al. (2012), which is a scaled square of the median of three consecutive intraday absolute returns. As BPV is influenced by the sampling frequency, MedRV is characterized as an estimator more robust to jumps and microstructure noise than BPV (see Barndorff-Nielsen and Shephard 2004; Andersen et al. 2012). Barndorff-Nielsen and Shephard (2004, 2006) separated the quadratic variation into its continuous and discontinuous components. The continuous part of QV can be estimated by the realized bi-power variation BPV_t as follows:

BPV_t = \mu_1^{-2} \sum_{i=2}^{N} |r_{i,t}| \, |r_{i-1,t}|,

where \mu_\alpha is equal to E(|Z|^\alpha), with Z \sim N(0,1) and \alpha > 0; thus \mu_1 is equal to \sqrt{2/\pi}. Andersen et al. (2012) proposed the median realized variance MedRV_t as a robust estimator of the continuous variation which attenuates the effect of noise, instead of BPV_t, which is influenced by the sampling frequency.
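The sketch below gives simple implementations of the estimators just defined. The Bartlett weight with a user-supplied bandwidth H stands in for the optimal-bandwidth procedure of Barndorff-Nielsen et al. (2009), which is not reproduced here; the MedRV scaling constant is the standard one from Andersen et al. (2012).

```python
import numpy as np

MU1 = np.sqrt(2.0 / np.pi)  # mu_1 = E|Z| for Z ~ N(0,1)

def realized_kernel(r, H):
    """Realized kernel with the Bartlett weight k(x) = 1 - x
    (so k(0)=1, k(1)=0); H is a user-supplied bandwidth."""
    r = np.asarray(r)
    n = len(r)
    gamma = lambda h: np.sum(r[h:] * r[:n - h])   # realized autocovariance
    rk = gamma(0)
    for h in range(1, H + 1):
        rk += (1.0 - h / (H + 1.0)) * 2.0 * gamma(h)  # gamma_h = gamma_{-h}
    return rk

def bipower_variation(r):
    """BPV_t = mu_1^{-2} * sum of |r_i||r_{i-1}| over adjacent returns."""
    a = np.abs(np.asarray(r))
    return MU1 ** -2 * np.sum(a[1:] * a[:-1])

def median_rv(r):
    """MedRV_t: scaled sum of squared medians of three consecutive
    absolute returns (Andersen et al. 2012)."""
    a = np.abs(np.asarray(r))
    n = len(a)
    med = np.median(np.column_stack([a[:-2], a[1:-1], a[2:]]), axis=1)
    scale = np.pi / (6.0 - 4.0 * np.sqrt(3.0) + np.pi) * n / (n - 2.0)
    return scale * np.sum(med ** 2)
```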
Three estimators which are somewhat robust to market microstructure noise have been suggested in the literature: pre-averaging by Jacod (2007), multi-scale by Zhang et al. (2005) and Zhang (2006), and the realized kernel by Barndorff-Nielsen et al. (2008).
Data and Data Adjustments
We followed Patton (2011) by using the data-based ranking method. This method makes no assumption with regard to the market microstructure noise properties, apart from standard moment and mixing conditions. In this study on the relative performance of estimators of intraday quadratic variation from three classes of realized measures, we used transaction and quotation prices from January 2000 to June 2017, sampled in calendar time and tick time, for sampling frequencies ranging from 5-min to 10-min (see Table 1). We used intraday measures because they shed light on information not easily seen at lower sampling frequencies (see Hansen and Huang 2016). The source of our data is the Oxford-Man Institute of Quantitative Finance (http://www.oxford-man.ox.ac.uk/).
Table 1 lists the realized estimators, codes, and full names of the 13 estimators of realized measures from the three classes. All realized measures require a choice of sampling frequency (e.g., 5-min or 10-min sampling), sampling scheme (calendar time or tick time), and whether to use transaction prices or mid-quotes. The source of our data is the Oxford-Man Institute of Quantitative Finance.
We considered tick-time sampling by means of samples that yield average durations matching the values for calendar-time sampling, as well as a "tick-by-tick" estimator that simply uses every available observation. By using subsampling, we aimed to improve the efficiency of some sparsely sampled estimators (Zhang et al. 2005). The sub-sampled version of RV is called the "average RV" estimator and is given by Andersen et al. (2011) and Ghysels and Sinko (2011). Furthermore, when reporting RV estimators, we always subsampled them to the maximum degree possible from the data, as this averaging is beneficial on theoretical grounds, particularly in the presence of modest amounts of noise.
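A minimal sketch of the "average RV" idea, assuming one day of 1-min log returns and a sparse factor of 5 (yielding 5-min sampling on each of five offset grids); the function name and the handling of the ragged final block are our choices.

```python
import numpy as np

def average_rv(returns_1min, k=5):
    """Sub-sampled ('average') RV: the mean of k sparse RVs computed on
    offset k-period grids of the 1-min return series."""
    r = np.asarray(returns_1min)
    rvs = []
    for offset in range(k):
        # aggregate 1-min returns into k-min returns starting at `offset`
        idx = np.arange(0, len(r) - offset, k)
        sparse = np.add.reduceat(r[offset:], idx)  # final partial block kept
        rvs.append(np.sum(sparse ** 2))
    return float(np.mean(rvs))
```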
The database contains intraday financial returns denoted as:

\{ r_{i,t} \}_{i=1}^{N},

where r_{i,t} is the i-th return, i = 1, \ldots, N, with N the total number of intraday observations on the given time of trade t, defined as r_{i,t} = R_{t_{j,t}} - R_{t_{j-1,t}}, with R being the financial intraday return and t_{j,t} the times of trades or quotes (or a subset of them) on the t-th time of trade. The theoretical justification of this measure is that, if prices are observed without noise, then as \min_j (t_{j,t} - t_{j-1,t}) \downarrow 0 it consistently estimates the quadratic variation of the price process on the t-th time of trade. It was formalized econometrically by Andersen et al. (2001a) and Barndorff-Nielsen and Shephard (2002). Then, a corresponding sequence of intraday realized measures estimated from high frequency returns is denoted as:

\{ RM_t \},

where RM is the realized measure considered.
Empirical Results
We adopted a hypothesis-based strategy to discuss our results. First, we discuss the relationship between returns and contemporaneous volatility (the volatility feedback effect). Second, we examine the relationship between lagged returns and the current market-based volatility (the leverage effect).
Volatility Feedback Effect
According to Bekaert and Wu (2000), "If volatility is priced, does an anticipated increase in volatility raise the required return on equity, leading to an immediate stock price decline?". The empirical evidence on the relationship between returns and contemporaneous volatility has often concluded that the volatility feedback effect is statistically insignificant. According to Bollerslev and Zhou (2006), the trade-off between monthly returns and volatilities in a regression framework shows that b < 0, whereas the "continuous-time" volatility feedback effect is captured directly by the risk-return trade-off parameter (i.e., b > 0). A similar volatility asymmetry effect for other markets and time periods is documented in other studies (see Schwert 1990; Nelson 1991; Gallant et al. 1997; Engle and Ng 1993; Duffee 1995; Bekaert and Wu 2000; Wu 2001, among others).
We examined these relationships using the S&P500 and FTSE100 datasets. The three classes of realized measures for the S&P500 and FTSE100 are depicted in Figures 1 and 2, respectively. These figures show the realized measures for the S&P500 and FTSE100 indices (from left to right) as follows: (i) the realized bi-power variation (5-min sub-sampled) estimator, (ii) the realized bi-power variation (5-min) estimator, (iii) the index closing price, (iv) the median realized variance (5-min) estimator, (v) the realized kernel variance estimator (two-scale/Barlett), (vi) the realized variance (5-min sub-sampled) estimator, (vii) the realized variance (10-min) estimator, (viii) the realized variance (10-min sub-sampled) estimator, (ix) the realized semi-variance (5-min) estimator, (x) the realized semi-variance (5-min sub-sampled) estimator, (xi) the realized variance (5-min) estimator, and (xii) the return of the index. Against this backdrop, we tested the first hypothesis, namely the "volatility feedback effect". To this end, we estimated the following equation, which presents the returns as a linear function of the realized measures:

R_t = \beta_0 + \sum_{j=1}^{10} \beta_j \, RM_{j,t} + \varepsilon_t, \qquad (8)

where R_t are the daily returns of either the S&P500 or FTSE100 index and RM_{j,t} runs over the selected realized measures: BPV_t is the realized bi-power variation (5-min) measure, BPV^SS_t is the realized bi-power variation (5-min sub-sampled) measure, MedRV_t is the median realized variance (5-min), RK^Barlet_t is the realized kernel variance (two-scale/Barlett), RSV^D_t is the realized semi-variance (5-min), RSV^{D,SS}_t is the realized semi-variance (5-min sub-sampled), RV_t is the realized volatility, RV_{10,t} is the realized variance (10-min), RV^SS_{10,t} is the realized variance (10-min sub-sampled), and RV^SS_{5,t} is the realized variance (5-min sub-sampled). To deal with multicollinearity issues that may exist across the realized measures, we employed the variance inflation factor (VIF). We selected 10 out of the 13 realized variance estimators for which multicollinearity does not affect our estimation results.
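A hedged sketch of how Equation (8) and the VIF screening could be implemented, assuming a pandas DataFrame rm_df holding the 13 candidate realized measures (column names ours) and an aligned returns series; the VIF cutoff of 10 is our assumption, as the paper does not state the threshold it used.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def fit_feedback_regression(returns, rm_df, vif_cutoff=10.0):
    """OLS of daily returns on realized measures (Eq. (8)), after
    iteratively dropping the measure with the largest VIF above cutoff."""
    X = rm_df.copy()
    while X.shape[1] > 1:
        vifs = pd.Series(
            [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
            index=X.columns,
        )
        if vifs.max() <= vif_cutoff:
            break
        X = X.drop(columns=vifs.idxmax())  # drop the most collinear measure
    model = sm.OLS(returns, sm.add_constant(X)).fit()
    return model, list(X.columns)
```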
Table 2 reports the results on the impact of the realized measures on returns. Panel A refers to the results for the S&P500, where MedRV, RSV^{D,SS}, RV and RV^SS_10 have a statistically significant positive impact on the return R. Additionally, BPV, RK^Barlet, RSV^D, RV_10 and RV^SS_5 have a statistically significant positive impact on the return R. Thus, the hypothesis of a volatility feedback effect is accepted in the case of the S&P500. Furthermore, Panel B refers to the results for the FTSE100. The results show that BPV^SS, MedRV, RSV^{D,SS}, RV and RV_10 have a statistically significant positive impact on the return R. Further, RSV^D and RV^SS_5 have a statistically significant positive impact on the return R. Thus, the hypothesis of a volatility feedback effect is accepted for the case of the FTSE100. Our results show that the realized measures which most affect returns are: (i) RSV^D, RSV^{D,SS} and RV, with marginal impacts equal to 0.2244, 0.1341 and 0.0896, respectively, for the S&P500, and (ii) RSV^D, MedRV and RV^SS_5, with marginal impacts equal to 0.0409, 0.0231 and 0.0226, respectively, for the FTSE100. We found that BPV, RK^Barlet and RV^SS_10 have an impact only on the S&P500 index. Further, BPV^SS has an impact only on the FTSE100 index. The rest of the realized measures have an impact on both indices. Note: Table 2 reports the results of Hypothesis 1, the volatility feedback effect: "If volatility is priced, does an anticipated increase in volatility raise the required return on equity, leading to an immediate stock price decline?". The following estimators, in absolute values, are reported: BPV is the realized bi-power variation (5-min) measure, BPV^SS is the realized bi-power variation (5-min sub-sampled) measure, MedRV is the median realized variance (5-min), RK^Barlet is the realized kernel variance (two-scale/Barlett), RSV^D is the realized semi-variance (5-min), RSV^{D,SS} is the realized semi-variance (5-min sub-sampled), RV is the realized volatility, RV_10 is the realized variance (10-min), RV^SS_10 is the realized variance (10-min sub-sampled), and RV^SS_5 is the realized variance (5-min sub-sampled). Standard errors are in parentheses. Panel A refers to the results for the S&P500 index, and Panel B to the results for the FTSE100 index. ***, ** and * indicate significance at 1%, 5% and 10%, respectively.
Leverage Effect Results
According to Black (1976), "Does a drop in the value of the stock (negative return) increase financial leverage, so that it makes the stock riskier and increases its volatility?". In the context of the empirical assessment of the leverage effect, many volatility models and volatility-return regressions have been used in the existing literature (see Bekaert and Wu 2000). Several studies confirmed the hypothesis that aggregate stock market volatility reacts asymmetrically to past negative returns (i.e., d < 0). The leverage effect delineates a negative relation between returns and volatilities; it was first discussed by Black (1976) and Christie (1982). Other studies include Schwert (1990), Nelson (1991), Gallant et al. (1992), Glosten et al. (1993), Engle and Ng (1993), Duffee (1995), Bekaert and Wu (2000), and Bollerslev and Zhou (2006).
Against this backdrop, we tested the second hypothesis, namely the "leverage effect". To this end, following Bekaert and Wu (2000), we estimated a regression that presents realized variation as a linear function of lagged returns:

RV_t = c + d \, R_{t-1} + \varepsilon_t,

where RV is the realized volatility and R is the return. Table 3 reports the impact of returns on realized volatility. Panel A shows that past S&P500 returns have a statistically significant negative impact on realized volatility, with the coefficient d equal to −1.1040 × 10^−3. Moreover, Panel B shows that past FTSE100 returns have a statistically significant negative impact on realized volatility, with the coefficient d equal to −5.6477 × 10^−4. Hence, in both cases, the hypothesis of a leverage effect is accepted.
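A minimal sketch of this leverage regression, assuming a one-day lag (the paper does not spell out the lag structure) and aligned daily arrays of RV and returns; names are ours.

```python
import numpy as np
import statsmodels.api as sm

def fit_leverage_regression(rv, returns):
    """RV_t = c + d * R_{t-1} + e_t; a negative, significant d is
    evidence of the leverage effect."""
    y = np.asarray(rv)[1:]                         # RV_t
    x = sm.add_constant(np.asarray(returns)[:-1])  # R_{t-1}
    return sm.OLS(y, x).fit()
```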
Practical Implications
In this section, we investigate the out-of-sample accuracy of our model in forecasting the returns of the S&P500 and FTSE100. First, we provide the out-of-sample forecasting results; second, we proceed to a portfolio exercise.
Out-of-Sample Analysis
We studied the forecasting accuracy of our model based on Equation (8), which presents the returns as a linear function of the realized measures. We provide four forecast evaluation measures: (i) the mean forecast error (MFE), (ii) the mean absolute deviation (MAD), (iii) the tracking signal and (iv) the forecast error. We used one-step-ahead rolling estimation windows, using a fixed number of the most recent observations at each point in time, on the daily equity returns of the S&P500 and FTSE100 indices. To this end, we used a rolling estimation window of 1000 observations.
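The sketch below illustrates one way to produce the one-step-ahead rolling forecasts and the evaluation measures just listed; the window length of 1000 follows the text, while the OLS refit at every step and the tracking-signal formula (cumulative error over MAD) are our assumptions.

```python
import numpy as np

def rolling_one_step_forecasts(y, X, window=1000):
    """Re-estimate Eq. (8) by OLS on the most recent `window` days and
    forecast the next day's return; returns errors and summary measures."""
    y, X = np.asarray(y), np.asarray(X)
    errors = []
    for t in range(window, len(y)):
        Xw = np.column_stack([np.ones(window), X[t - window:t]])
        beta, *_ = np.linalg.lstsq(Xw, y[t - window:t], rcond=None)
        forecast = np.concatenate(([1.0], X[t])) @ beta
        errors.append(y[t] - forecast)
    e = np.asarray(errors)
    mfe = e.mean()            # mean forecast error
    mad = np.abs(e).mean()    # mean absolute deviation
    tracking = e.sum() / mad  # tracking signal
    return e, mfe, mad, tracking
```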
As for the S&P500, the mean forecast error was equal to 7.83 × 10^−3, and the mean absolute deviation was equal to 2.61 × 10^−6, with a tracking signal equal to 107.3424. As for the FTSE100, the mean forecast error was equal to −6.10 × 10^−3, and the mean absolute deviation was equal to 2.03 × 10^−6, with a tracking signal equal to −222.2500. Figure 3 depicts the forecast error for both the S&P500 and FTSE100 indices.

We further assessed the forecasting strength of our model (Equation (8)) when periods of high and low volatility are treated separately. In particular, we estimated Equation (8) within high and low volatility regimes. Figure 4 depicts the various volatility regimes obtained for the S&P500 (Panel A) and FTSE100 (Panel B). In both panels, green and red colors depict periods of high and low volatility, respectively. Further, Figure 5 depicts two sets of figures: (i) the volatility regimes in returns (upper figure) and (ii) the probability of the high and low volatility regimes (bottom figure) for the S&P500 and FTSE100 indices; Panel A refers to the S&P500 sample and Panel B to the FTSE100 sample. We then estimated Equation (8) in low and high volatility regimes. In particular, we picked 10 regimes (obtained from Figures 4 and 5), five lower volatility regimes and five higher volatility regimes (we selected the regimes with the higher number of observations). The adjusted R (R adj.) of Equation (8) for the low and high volatility regimes is reported in Table 4, where again Panel A refers to the S&P500 index and Panel B to the FTSE100 index. As for the S&P500 (Panel A), in low volatility regimes we found that R adj. had a maximum of 0.8274 and a minimum of 0.7377, while in high volatility regimes R adj. had a maximum of 0.8809 and a minimum of 0.7452. As for the FTSE100 (Panel B), in low volatility regimes we found that R adj. had a maximum of 0.7934 and a minimum of 0.6416, while in high volatility regimes R adj. had a maximum of 0.8731 and a minimum of 0.7697. Such evidence indicates that the predictability of our model is qualitatively similar between periods of low and high levels of volatility.
Portfolio Implications
We now proceed to a portfolio exercise to provide practical implications of the results obtained from our study. We take the point of view of investors in equity markets and construct two-asset portfolios: the first includes the actual returns of the S&P500 and FTSE100 indices, while the second includes the expected returns of the S&P500 and FTSE100 indices obtained from Equation (8). Consequently, we analyze the optimal portfolio weights using actual and expected returns. We employed mean-variance portfolios based on Markowitz (1952) to compute the optimal weights (see Gkillas and Longin 2019). Considering the portfolio with the actual returns, the weights of the FTSE100 and S&P500 assets were equal to 0.7172 and 0.2825, respectively, while in the portfolio with the expected returns, the weights of the FTSE100 and S&P500 assets were equal to 0.7055 and 0.2939, respectively. Again, such evidence indicates that our estimates are reasonably accurate.
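As a simple illustration, the sketch below computes two-asset Markowitz weights in the minimum-variance special case; the paper does not state the exact mean-variance objective (target return, risk aversion), so this is only one plausible reading, with variable names ours.

```python
import numpy as np

def min_variance_weights(r1, r2):
    """Two-asset minimum-variance weights (Markowitz 1952):
    w1 = (s2^2 - s12) / (s1^2 + s2^2 - 2*s12), w2 = 1 - w1."""
    cov = np.cov(r1, r2)
    s11, s22, s12 = cov[0, 0], cov[1, 1], cov[0, 1]
    w1 = (s22 - s12) / (s11 + s22 - 2.0 * s12)
    return w1, 1.0 - w1
```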
Summary and Conclusions
In this study, we investigated the impact of realized measures on returns and vice versa, considering the S&P500 and FTSE100 indices for the period spanning January 2000 to June 2017. Specifically, we empirically examined two research questions: "If volatility is priced, does an anticipated increase in volatility raise the required return on equity for the S&P500 and FTSE100 indices?" and "Does a drop in the value of the stock (negative return) increase financial leverage, so that it makes the stock riskier and increases its volatility for the S&P500 and FTSE100 indices?". We considered 10 realized measures from the Oxford-Man Institute of Quantitative Finance database to examine two hypotheses associated with financial modelling and decision-making: (i) the volatility feedback effect, the relationship between contemporaneous returns and market-based volatility, and (ii) the leverage effect, the relationship between lagged returns and the current market-based volatility.
As for the first research question, we found a positive relationship between volatility and returns. The results for the volatility feedback effect were as follows. For the S&P500, most of the realized measures had a significant positive effect on daily returns: BPV, MedRV, RK^Barlet, RSV^D, RSV^{D,SS}, RV, RV_10, RV^SS_10 and RV^SS_5. For the FTSE100, most of the realized measures had a significant positive effect on daily returns: BPV^SS, MedRV, RSV^D, RSV^{D,SS}, RV, RV_10 and RV^SS_5. Neither RK^Barlet nor RV^SS_10 had an effect on returns for the FTSE100. An increase in "continuous-time" volatility raised the required return in both equity markets, capturing the risk-return trade-off effect, which is consistent with Bollerslev and Zhou (2006). Furthermore, as for the second hypothesis, we found a negative relationship between lagged returns and volatility. With regard to the leverage effect hypothesis, both the S&P500 and FTSE100 indices show that returns negatively affect realized volatility, which indicates that downside returns make the stocks riskier and increase their volatility. We confirmed the stylized fact of the leverage effect, in which returns are negatively correlated with realized volatility (Corsi et al. 2012). Such evidence for the S&P500 and FTSE100 indices is consistent with the existing literature. Overall, we conclude that realized measures affect the daily returns of financial indices in different ways.
Future research should examine forecasting accuracy, that is, whether implied volatilities provide unbiased and informationally efficient forecasts of the corresponding future realized volatilities.
Author Contributions: Authors have equal contributions. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
"year": 2020,
"sha1": "a16c1570ca0205a7d9dbdff8414b35196c8a1d2d",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1911-8074/13/6/125/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c5767e7695aefc4db28cc6924a47db28d08d7729",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Economics"
]
} |
Coverage and system efficiencies of insecticide-treated nets in Africa from 2000 to 2017
Insecticide-treated nets (ITNs) for malaria control are widespread but coverage remains inadequate. We developed a Bayesian model using data from 102 national surveys, triangulated against delivery data and distribution reports, to generate year-by-year estimates of four ITN coverage indicators. We explored the impact of two potential 'inefficiencies': uneven net distribution among households and rapid rates of net loss from households. We estimated that, in 2013, 21% (17%–26%) of ITNs were over-allocated and this has worsened over time as overall net provision has increased. We estimated that rates of ITN loss from households are more rapid than previously thought, with 50% lost after 23 (20–28) months. We predict that the current estimate of 920 million additional ITNs required to achieve universal coverage would in reality yield a lower level of coverage (77% population access). By improving efficiency, however, the 920 million ITNs could yield population access as high as 95%. DOI: http://dx.doi.org/10.7554/eLife.09672.001
Introduction
Insecticide-treated nets (ITNs), which comprise conventional ITNs (cITNs) and long-lasting insecticidal nets (LLINs), are the single most widely used intervention for malaria control in Africa, proven to significantly reduce morbidity and mortality via direct protection and community-wide reductions in transmission (Lim et al., 2011; Lengeler and Lengeler, 2004; Eisele et al., 2010; Killeen et al., 2007).
The World Health Organization (WHO) promotes a target of universal coverage for all populations at risk with either ITNs or indoor residual spraying (IRS), with the former representing the primary vector control tool in nearly all endemic African countries (WHO, 2013a). The international community has invested billions of dollars in the provision of at least 700 million LLINs since 2004 (WHO, 2013a). While these investments have led to enormous scale up in population access to ITNs (Noor et al., 2009;Monasch et al., 2004), the target of universal coverage remains distant and millions of African households at risk remain unprotected (WHO, 2013a).
Bridging this gap is a key component of future strategies to reduce further the burden of malaria in Africa (WHO, 2014), and will require sustained commitment from donors, policy makers and national programmes. Central to these efforts is the capacity to monitor reliably current levels of ITN coverage in populations at risk and evaluate the systems that give rise to this coverage. This, in turn, enables progress towards international goals to be tracked and opportunities for efficiency gains to be identified. Such information is essential for evaluating the existing commodity and financing shortfalls and assessing future requirements if the target of universal coverage is to be achieved.
Modelling coverage
To facilitate standardised and comparable monitoring of ITN coverage through time, WHO and the Roll Back Malaria Monitoring and Evaluation Reference Group (RBM-MERG) have over the past decade defined a series of indicators to capture two different aspects of ITN coverage: access and use. Gold standard measurements of these indicators are provided by nationally representative household surveys such as Demographic and Health Surveys (DHS) (Measure, 2014), Multiple Indicator Cluster Surveys (MICS) (UNICEF, 2012), and Malaria Indicator Surveys (MIS) (RBM, 2014a). These surveys are carried out relatively infrequently, however, meaning they cannot be used directly for evaluating year-on-year coverage trends or for generating timely estimates of continent-wide coverage levels. In contrast, programmatic data such as the number of ITNs delivered and distributed within countries, while not describing coverage directly, are available for most countries and years (WHO, 2013a). In an earlier study, Flaxman and colleagues (Flaxman et al., 2010) used a compartmental modelling approach to link these programmatic and survey data, generating annual estimates of the two ITN indicators recommended at that time on access (% households with at least one ITN) and use (% children < 5 years old who slept under an ITN the previous night).

eLife digest

Malaria is a major cause of death in many parts of the world, especially in sub-Saharan Africa. Recently, there has been a renewed emphasis on using preventive measures to reduce the deaths and illnesses caused by malaria. Insecticide-treated nets are the most prominent preventive measure used in areas where malaria is particularly common. However, despite huge international efforts to send enough nets to the regions that need them, the processes of delivering and distributing the nets are inefficient. This problem is compounded by the fact that little information is available on how many nets people actually own and use within each country. Bhatt et al. have now created a mathematical model that describes the use and distribution of nets across Africa since 2000. This is based on data collected from national surveys and reports on the delivery and distribution of the nets. The model estimates that in 2013, only 43% of people at risk of malaria slept under a net. Furthermore, 21% of new nets were allocated to households that already had enough nets, an inefficiency that has worsened over the years. Nets are also lost from households much more rapidly than previously thought.
It's currently estimated that 920 million additional nets are required to ensure that everyone at risk from malaria in Africa is adequately protected. However, Bhatt et al.'s model suggests that given the current inefficiencies in net distribution, the extra nets would in reality protect a much smaller proportion of the population. Taking measures to more effectively target the nets to the households that need them could improve this coverage level to 95% of the population. The next challenge is to devise distribution strategies to send nets to where they are most needed.
Since that study, there has been increasing recognition that a richer set of indicators is required to identify the complex nature of ITN coverage. An intra-household 'ownership gap' may exist whereby many households with some nets may not have enough for one net between two occupants (the recommended minimum level of protection; WHO, 2013b). Similarly, a 'usage gap' may exist whereby individuals with access to a net do not sleep under it. In response, the measurement of two additional indicators was recommended: % households with at least one ITN for every two people and % population with access to an ITN within their household (assuming each net can be used by two people) (RBM, UNICEF, WHO, 2013; RBM, 2011). In addition, the indicator on usage was extended to include the entire population rather than only children under 5 years old. This updated set of four indicators, used individually and in combination, has the potential to provide a nuanced picture of ITN access and use patterns that can directly guide operational decision making. To achieve this, there is a need to develop modelling frameworks to allow all four to be tracked through time.
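To make the four indicators concrete, the sketch below computes them from hypothetical household-survey microdata; the column names and the use of survey weights are our assumptions, while the one-net-between-two logic follows the definitions above.

```python
import numpy as np

def itn_indicators(df):
    """Four RBM/WHO ITN indicators from household rows with (assumed)
    columns: 'itns', 'members', 'slept_under' (count), 'weight'."""
    w = df['weight']
    hh = w.sum()
    own_any = ((df['itns'] >= 1) * w).sum() / hh              # >= 1 ITN
    own_1per2 = ((df['itns'] >= df['members'] / 2) * w).sum() / hh
    pop = (df['members'] * w).sum()
    # population access: each net taken to protect up to two people
    access = (np.minimum(2 * df['itns'], df['members']) * w).sum() / pop
    use = (df['slept_under'] * w).sum() / pop                 # slept under ITN
    return own_any, own_1per2, access, use
```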
Evaluating efficiency
Countries have an ongoing struggle to maintain high LLIN coverage in the face of continuous loss of nets from households due to damage, repurposing, or movement away from target areas. In response, systems need to be responsive to emerging coverage gaps by ensuring nets are distributed to households that need them and avoiding over-allocation (i.e. distribution of nets to those that already have them). Together, the rate of net loss and the degree of over-allocation of new nets play a key role in determining how efficiently delivery to countries will translate into household coverage levels. These factors are not currently well understood but triangulation of survey and programmatic data allows new insights into both.
Estimating future needs
The WHO defines universal access to ITNs on the basis that two people can share one net. Using the working assumptions of a 3-year ITN lifespan and a 1.8 person-per-net ratio (one-between-two but allowing for odd-numbered households), a simple calculation yields an indicative estimate of 150 million new nets required each year to provide universal coverage to an African population at risk of around 810 million (WHO, 2013a). To support country planning and donor application processes (RBM-HWG, 2014), a more elaborate needs assessment approach has been developed by the RBM Harmonization Working Group (RBM-HWG) and implemented by 41 of the 47 endemic African countries (RBM, 2014; Paintain et al., 2013). The tool takes into account the size and structure of national target populations, a 1.8 person-per-net ratio for mass campaigns, additional routine distribution mechanisms employed by countries, and volumes of previously distributed nets and their likely rates of loss through time. Countries have used these inputs to calculate requirements for new nets to achieve national coverage targets, leading to an estimated continent-wide need for 920 million ITNs over the 2014-2017 period (approximately 230 million per year) (RBM, 2014). This tool provides a transparent, intuitive and standardised mechanism for comparing forecasted needs against current financing levels and identifying likely shortfalls. However, calculated needs are sensitive to assumptions about how a given volume of new nets will translate into population coverage, and inefficiencies in the system such as over-allocation and the rate of net loss are not accounted for explicitly in the current needs assessment exercise.
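The indicative 150-million-per-year figure follows directly from the stated assumptions, as this short calculation shows (the inputs are exactly those given above):

```python
population_at_risk = 810e6   # Africans at risk of malaria (WHO, 2013a)
persons_per_net = 1.8        # one-between-two, allowing odd-sized households
lifespan_years = 3           # working assumption on ITN lifespan

standing_crop = population_at_risk / persons_per_net  # 450 million nets in use
annual_need = standing_crop / lifespan_years          # replacement rate
print(f"{annual_need / 1e6:.0f} million nets per year")  # -> 150 million
```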
The purpose of this study is to define a new dynamic modelling approach, triangulating all available data on ITN delivery, distribution and coverage in sub-Saharan Africa, in order to (i) provide validated and data-driven time-series estimates for all four internationally recommended ITN indicators; (ii) explore and quantify different aspects of system efficiency and how these contribute to reduced coverage levels; and (iii) estimate future LLIN needs to achieve universal access by 2017 under different efficiency scenarios, and how these compare to existing needs assessment estimates.

Results

Net crop

Figure 1A summarises the main inputs to and outputs from the stock-and-flow model for LLINs when aggregated at the continental level. Some 718 million LLINs have been delivered across the 40 endemic countries since their introduction in 2004. As is well documented (WHO, 2013a), annual LLIN deliveries increased year-on-year from 2004 to 2010, reaching 145 million in that year, but then declined dramatically in 2011 and 2012 to less than half that amount before rising again to 143 million in 2013 (green line). Taking into account rates of loss in households, these LLIN deliveries led to the continental net crop shown by the red line. We estimate that there were 252 million LLINs in sub-Saharan households by the end of 2013, with that net crop growing approximately linearly from 2004, with the exception of a slow-down resulting from the reduced supply of nets in 2011-2012. Figure 1B shows equivalent distribution and resulting net crop estimates for cITNs, which constituted nearly all ITNs prior to 2005 but diminished rapidly in importance following the introduction of LLINs thereafter.

Coverage estimates

Figure 2 shows continent-level time-series estimates of the four internationally recommended ITN indicators, along with the 'ownership gap' indicator. All four indicators show a similar temporal trend: very low coverage levels and modest year-on-year increases for the first 5 years from 2000, with a marked inflection point in 2005 and much more rapid gains thereafter. Importantly, however, all four indicators show that the overall pace of increase has slowed in more recent years. By the end of 2013, we estimate that around two-thirds (66%, 95% CI 62%-71%) of households at risk owned at least one ITN. However, less than one-third (31%, 29%-34%) owned enough for one ITN between two people. This much lower level of adequate ownership is reflected in the levels of access and use, with 48% (45%-51%) of people at risk having access to an ITN within their household (on a one-between-two basis) and 43% (39%-46%) sleeping under an ITN the previous night. Comparison of Figure 2A,B demonstrates that many households that own some ITNs do not own enough for one-between-two, and this is captured in the time-series for the 'ownership gap' (Figure 2E). Encouragingly, this gap has narrowed, from 77% (76%-78%) of net-owning households having insufficient nets in 2000 to 56% (54%-57%) in 2013. Analysis of the 'use gap' suggested a large majority (89%, 84%-93%) of those with access to an ITN in the household slept under it the previous night, and we found no evidence of significant change in this proportion through time.
The relatively smooth temporal trends seen at continental level obscure a great deal of complexity in the patterns of ITN scale-up occurring at national level (Figure 3). Nearly all countries began with very low coverage levels in 2000 and display a marked inflection point towards the middle of the decade, although there was considerable variation in the timing of onset of concerted scale-up activities. Importantly, the monotonic increases in coverage seen at the aggregated continental level are often replaced at national level with pronounced periods of rise and fall, and in many cases 2013 does not represent the peak year. Variation in contemporary levels of coverage remains stark: the proportion of the population with access to ITNs within the household, for example, was at or below 15% in seven countries in 2013, while exceeding 70% in the top four.
The year-on-year increase in over-allocation is to some extent an expected consequence of the overall growth in ITN provision: we found that over-allocation increased approximately 15 percentage points for each one-ITN-per-capita increase in net crop. Over-allocation also varied substantially between countries, ranging in 2013 from 50% (36%-65%) in the Republic of the Congo to 11% (9%-15%) in Côte d'Ivoire.

[Figure 2 caption: (B) % households with at least one ITN for every two people; (C) % population with access to an ITN within their household; (D) % population who slept under an ITN the previous night; (E) 'ownership gap', the % of ITN-owning households with insufficient ITNs for one-between-two. Black circles are the annual estimates; pink envelopes denote the 95% posterior credible interval. ITNs, insecticide-treated nets. DOI: 10.7554/eLife.09672.004]
Net loss
Averaged over all years and all countries, we found the median retention time for LLINs in households was 23 (20-28) months. We found no statistically significant evidence of continent-wide temporal trends in retention times, but substantial between-country variation. Figure 5 plots the LLIN loss function representing the most recent three years (2011-2013) for each country individually (blue lines), along with the aggregated continental-level curve (red line). For reference, we also overlay on Figure 5 some alternative loss functions that have been proposed. Flaxman et al. (orange line) fitted very small annual loss rates (5%) for years 1, 2 and 3, with all LLINs then assumed lost after 3 years (Flaxman et al., 2010). The RBM-HWG proposed rate of loss (green line) is for 8%, 20% and 50% of LLINs to be lost after 1, 2 and 3 years, respectively, with all nets being lost thereafter (Networks, 2014). As can be seen, we found rates of loss for the first 3 years to be greater than both these alternatives for all countries. Both alternatives impose a three-year maximum retention time, and our decision not to do so meant that we modelled a small proportion of LLINs lasting some years beyond that point.

[Figure 3 caption: Each plot shows the four ITN coverage indicators: % households with at least one ITN (black); % households with at least one ITN for every two people (red); % population with access to an ITN within their household (green); % population who slept under an ITN the previous night (blue). CAR = Central African Republic; DRC = Democratic Republic of Congo; ITNs, insecticide-treated nets; HH = household. DOI: 10.7554/eLife.09672.005]

[Figure 4 caption: Over-allocation refers to insecticide-treated nets distributed to households already owning enough nets for one-between-two, measured as the percentage of over-allocated nets among all nets in households. DOI: 10.7554/eLife.09672.006]

ITN requirements to achieve universal coverage

Figure 6 shows the projected levels of coverage that we estimate would be achieved by the end of 2017 with LLIN deliveries for the 2014-2017 period varying from zero to 2.5 billion and under a range of different efficiency scenarios. The most important characteristic of our results is the pronounced shallowing of the delivery-coverage curves: proportionately smaller gains are made in coverage as more LLINs are delivered, in an archetypal law of diminishing marginal returns. This means that under a business-as-usual scenario, where current levels of over-allocation and LLIN loss persist, very large increases in LLIN delivery would be required to achieve high coverage. Under this scenario, we estimate that 1 billion LLINs (i.e. an average of 250 million per year) would be required for 80% of the population to have access to an LLIN in the household by the end of 2017, although this would only translate into 70% population use.
The extent to which coverage gains diminish as deliveries increase is mitigated substantially when over-allocation and the ITN loss rate are reduced. In a scenario with minimised over-allocation (where over-allocation is set to zero), 80% population access in 2017 would be achievable with just 700 million nets (175 million per year). Reducing the ITN loss rate to a 3-year median retention time would, acting in isolation, have a broadly similar impact to minimising over-allocation. If these two hypothetical efficiency gains were combined, however, 80% access could be reached in 2017 with around 560 million nets (140 million per year). We found that the relative importance of the over-allocation and LLIN loss rates changed as more LLINs were introduced. Increasing LLIN retention times was the most important factor at low levels of net delivery, but as more and more nets were provided, over-allocation became progressively more important. This is intuitive, since it becomes increasingly difficult to avoid over-allocation as more households obtain adequate numbers of nets.

[Figure 6 caption: (E) 'ownership gap', the % of ITN-owning households with insufficient ITNs for one-between-two. For each indicator, we project likely coverage under four scenarios: current levels of over-allocation and net loss (i.e. 'business as usual'); with minimised over-allocation; with longer average net retention (3-yr median); and with both minimised over-allocation and longer net retention. The vertical dashed lines indicate the number of LLINs calculated as required over the period under the country programmatic needs assessment supported by the Roll Back Malaria Harmonization Working Group. LLINs, long-lasting insecticidal nets; ITNs, insecticide-treated nets. DOI: 10.7554/eLife.09672.008]
For reference, we also plot on Figure 6 the 920 million additional LLINs calculated by countries as required for universal coverage of targeted populations by 2017 under the RBM-HWG needs assessment exercise. Under current levels of over-allocation and net loss, we estimate that by the end of 2017 this quantity of new LLINs would translate into 77% access (among those populations targeted by countries for ITN coverage) and, assuming current behaviour patterns continue, 68% sleeping under an ITN. Under the combined efficiency scenario with minimised over-allocation and 3-yr median ITN retention time, however, the 920 million nets would approach universal access (slightly over 95%).
Discussion
By linking manufacturer, programme and national survey data using a conceptually simple model framework, the intention has been to provide a transparent and intuitive mechanism for tracking net crops and resulting household coverage that reflects the input data while simultaneously providing a range of insights about the system itself. In doing so we have been able to (i) provide a new approach for estimating past trends and contemporary levels of ITN coverage; (ii) explore the effects of uneven net distribution between households and the rates of net loss once in households; and (iii) use these insights to estimate how many LLINs are likely to be required to achieve different coverage targets in sub-Saharan Africa.
We have, for the first time, extended dynamic model-derived estimation of ITN coverage to all four internationally recognised indicators, along with the two 'gap' metrics. Our results reinforce a simple message: while gains in ITN coverage have been impressive, there remains an enormous challenge if the goal of universal access is to be achieved and sustained. The importance of the new expanded suite of indicators is also exemplified: while an encouraging two-thirds of households now own at least one ITN, less than half of these have enough to protect everyone who lives there. This ownership gap is narrowing, but the disparity remains evident across nearly all countries. Conversely, there is little evidence that non-use of available nets contributes substantially to low coverage levels. We therefore reinforce earlier studies suggesting that the overwhelming reason for not sleeping under an ITN is lack of access rather than non-use of available nets (WHO, 2013a; Eisele et al., 2009, 2011). Of course, non-use may be important in certain local contexts, and finer-scale analysis can support identification of areas where behaviour change communication interventions may be appropriate to reduce it.
We found substantial over-allocation of nets to households already owning a sufficient quantity, and that this became more pronounced as overall ownership levels increased through time. Mass distribution campaigns can, in principle, be designed to minimise over-allocation and maximise evenness by allocating nets to households strictly on the basis of household members and pre-existing nets. As other studies have highlighted, however, any commodity savings achieved by such strategies must be weighed against the operational cost of these more complex distribution mechanisms (Yukich et al., 2013). What is certain is that over-allocation becomes a major barrier to achieving universal coverage when levels of ITN provision are high, because most new incoming nets simply lead to surpluses in many households while elsewhere there remains a shortfall. This may have a disproportionately high public health impact if those surplus nets are concentrated in households at lowest risk. Wealthier, better educated and more urban households may be better placed to obtain available nets but are often located in regions of lower transmission (Steketee and Eisele, 2009; Webster et al., 2005). While beyond the scope of the present study, the approaches we have developed here could be extended to consider these issues of equity in coverage versus risk in more detail.
One of the most important observations in our study is that LLINs may be lost from households at a substantially faster rate than is currently assumed. Importantly, we assess loss by comparing total inputs to countries (from deliveries) with total numbers in households (net crop), and so we measure real losses rather than, for example, reallocation of nets between relatives. Longer retention times of the sort observed in some local studies are not supported by the body of evidence we have assembled by triangulating large-scale net distributions and household survey data. This more rapid loss rate has potentially important implications for existing guidelines. Current RBM guidance is for mass ITN campaigns to be conducted every 3 years, complemented by continuous distribution of nets via routine channels in order to maintain coverage levels between those campaigns. However, whatever levels of coverage are achieved by a given campaign, we estimate that one-half of the campaign nets distributed, on average, will no longer be present in households just 2 years later. Our coverage time-series for many countries suggest that routine distribution channels are not yet compensating fully for this rate of loss, often displaying pronounced dips in coverage levels between mass campaigns. Maintaining higher continuous coverage therefore clearly requires some combination of more frequent campaigns, greater ongoing distribution between campaigns, or more durable nets and improved care behaviour by users that lead to longer overall retention times.
We considered nets in households as simply present or absent, with no allowance for their condition. In reality, of course, nets may be retained by households (and thus 'present' in our calculations) even when they are badly torn, or have diminished insecticidal properties. As such, our estimates of 'coverage' would be revised downwards if additional measures of net efficacy were included. Our model is able to provide an estimate for every country and every year of the age-profile of ITNs in households. This raises the possibility of extending the predictions to incorporate modelled or observational data on average rates of net degradation in different contexts (Briët et al., 2012) to explore measures of entomologically effective coverage.
Tools developed to assist countries to calculate LLIN requirements have tended to define need using a simple ratio to populations at risk (such as 1.8 people per net) and have made allowances for net loss from households using pre-defined rates of loss. We have been able to show that true LLIN requirements are likely to be considerably larger when the more rapid rates of loss are taken into account, along with the additional effect of likely over-allocation patterns. This more realistic framework not only provides the basis for more accurate needs assessments but also identifies the relative importance of these different factors in determining the coverage that can be achieved for a given delivery level. Our analysis of future LLIN needs from the present time to 2017 demonstrates how these factors lead to a pronounced law of diminishing returns: as more nets are introduced to a population, proportional increases in coverage diminish, with over-allocation a particular problem at high net provision levels.
Under business-as-usual, the number of nets required to approach full coverage is prohibitively large. Clearly, however, reducing current system inefficiencies and increasing net retention are not straightforward, and both are already the subject of much attention by countries and international partners. Over-allocation is the complex result of different distribution strategies and varying levels of population access to services, and any solution comes with its own cost. Net retention can doubtless be increased by improved LLIN technology coupled with behaviour-change communication efforts, although it is also feasible that retention times may fall as overall net provision increases (with new nets displacing older ones). Additionally, we consider only the RBM definition of use and ignore the effectiveness of nets in repelling mosquitoes once they are in use; this is potentially an important confounder when considering retention times. While not aiming to provide solutions to these complex challenges, the results we present here provide an analytical framework in which the impact of theoretical efficiency gains can be assessed, and this could be extended to include formal cost-effectiveness analysis.
In conclusion, our results provide evidence that LLIN requirements to achieve universal coverage have been underestimated. If obtaining higher coverage remains an accepted goal of the international community, then larger LLIN volumes must be considered and planned for at national and international levels. We emphasise, however, that this would be best achieved in parallel with a renewed focus on maximising the efficiency of coverage achieved for each new net financed. Given that the pattern of diminishing coverage returns for each dollar spent is likely to be unavoidable, the cost-effectiveness of pursuing universal coverage rather than a lower operational target must ultimately be weighed against alternative malaria control investments.
Materials and methods
Overview

Two important preceding studies have sought to model national-level ITN delivery, distribution, and coverage: the Flaxman et al. study (Flaxman et al., 2010) and the work of Albert Kilian culminating in the NetCALC tool (Networks, 2014) and a series of related publications (Paintain et al., 2013; Yukich et al., 2013). Although very different in implementation, both approached the problem in a similar two-stage process. First, a mechanism was defined for estimating net crop (the total number of ITNs in households in a country at a given point in time), taking into account inputs to the system (e.g. deliveries of ITNs to a country) and outputs (e.g. the discard of worn ITNs from households). Second, empirical modelling was used to translate estimated net crops into resulting levels of coverage (e.g. access within households). We have adopted a similar analytical outline, but the models we have developed for each stage differ structurally and conceptually from these earlier efforts. Our underlying principle has been to represent the ITN system in a simple and intuitive way and to parameterise that system using a data-driven approach that minimises reliance on assumptions or small external datasets. In this Methods section, we describe: (i) the main data sources used; (ii) a new compartmental model for estimating net crop that also offers insights into rates of ITN loss from households; (iii) a new coverage model linking net crop to household net access and use that also assesses the efficiency of between-household distribution (i.e. the extent of over-allocation); and (iv) the use of our models to predict future ITN requirements to meet the goal of universal access. A schematic overview of our analytical framework is provided in Figure 7, and additional methodological detail is provided in the Supplementary Information.
Data
We used three principal sources of data to fit our models. These are described briefly below and in more detail in Supplementary Information.
i. LLINs delivered to countries: data provided to WHO by Milliner Global Associates on the number of LLINs delivered by approved manufacturers to each country each year (WHO, 2013a; AMP, 2014). These were complete for each country from 2000 to 2013 inclusive.

ii. ITNs distributed within countries: data provided to WHO by National Malaria Control Programmes (NMCPs) on the number of cITNs and LLINs distributed annually within each country (WHO, 2013a). Data were available for 365 of the 560 country-years addressed in the study. We treated these data as only partial records of distribution activities because the extent to which NMCP reporting captures distribution by non-government agencies is not known for all countries.

iii. Nationally representative household surveys: we assembled 99 national surveys from 39 sub-Saharan African countries from 2001-2013, covering 18% of all possible country-years since 2000 (Figure 8). More recent surveys provided household-level data on the number of cITNs, LLINs, people within each dwelling, and people sleeping under nets the previous night. RBM-MERG guidelines detail the conversion of these data into the standardised ITN indicators (RBM, UNICEF, WHO, 2013) and, in combination with national population data (UNPD, 2012), they can yield an estimate of national net crop (see Supplementary Information). Older surveys had less information: providing data on use but not ownership, for example, or for cITNs but not LLINs (see Supplementary Information). For most surveys (95/99), we were able to access the underlying data, while for the remaining four we used only the survey report.
Countries and populations at risk
Our main analysis covered 40 of the 47 (WHO, 2013a) malaria endemic countries of sub-Saharan Africa. We excluded six endemic countries on the basis that ITNs do not form an important part of their vector control programme, as reported by the respective NMCPs to the African Leaders Malaria Alliance (ALMA) (M. Renshaw, pers. comm. 3rd August 2014). These were Botswana, Cape Verde, Namibia, São Tomé and Príncipe, South Africa and Swaziland. We also excluded the small island territory of Mayotte, for which no ITN delivery or distribution data were available. We limited all analyses to those populations categorised as being at risk by NMCPs (WHO, 2013a). When interpreting NMCP distribution and household ownership data, we made the simplifying assumption that all reported ITNs were distributed among, and owned within, households situated in malaria endemic regions (Burgert et al., 2012). Additionally, we used data from ALMA on the proportion of populations at risk targeted for ITNs versus IRS, and downscaled targeted populations at risk accordingly. It should be noted that restricting the distribution of ITNs to populations at risk assumes that no ITNs are distributed to populations not at risk.
Estimating national net crops through time
Like Flaxman et al. (Flaxman et al., 2010), we represented national ITN systems using a discrete-time stock-and-flow model. In this structure, a series of compartments were defined that contained a given number of nets at each time-step, with possible movement of nets from one compartment to another between time-steps (see Supplementary Information). Nets delivered to a country by manufacturers were modelled as first entering a 'country stock' compartment (stored in-country but not yet distributed to households). Nets were then available from this stock for distribution to households by the NMCP or other distribution channels. Years where NMCP distributions were smaller than available country stock represented potential 'under-distribution', with nets left to stockpile rather than reaching households. However, because of the uncertainty associated with NMCP distribution data, these discrepancies could simply reflect under-reporting of distribution levels. To accommodate this uncertainty, we specified the number of nets distributed in a given year as a range, with all available country stock as one extreme (the maximum number of nets that could be distributed) and the NMCP-reported value (the assumed minimum distribution level) as the other.
New nets reaching households joined older nets remaining from earlier time-steps to constitute the total household net crop, with the duration of net retention by households described by a loss function. In this representation, the net crop simply reflected the differences over time between inputs to and outputs from households. This meant that distribution, net crop, and the loss function together formed a closed system: the three must triangulate exactly, and knowledge of any two components allows the third to be calculated directly. Flaxman et al. (Flaxman et al., 2010) assembled data from six studies on ITN durability and rates of loss. Using a loss function fitted to these data, however, they found that the three components tended not to triangulate: net crops observed in surveys were too small, given the data on nets distributed to households and their modelled rate of loss. Their interpretation was that the number of ITNs distributed each year may be systematically over-reported by NMCPs, and a 'bias parameter' was included in the model, adjusting downward the volume of nets entering households in each country compared with reported levels. As described above, we took a different approach, with no a priori expectation that NMCP distribution reports exaggerate distribution levels. Rather than fitting the loss function to a small external dataset, we fitted this function directly to the distribution and net crop data within the stock-and-flow model itself. Conceptually, this reflected the view that the 560 country-years of distribution data triangulated against the 102 survey-derived national net crop values represented a more impartial and data-driven way of inferring rates of loss than using limited data from local ITN retention studies. Loss functions were fitted on a country-by-country basis, allowed to vary through time, and defined separately for cITNs and LLINs. We compared these fitted loss functions to existing assumptions about rates of net loss from households. The stock-and-flow model was fitted using Bayesian inference and Markov chain Monte Carlo (MCMC), providing time-series estimates of national household net crop for cITNs and LLINs in each country along with evaluation of under-distribution, all with posterior credible intervals. A complete technical description is provided in the Supplementary Information.
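To illustrate the closed-system logic (deliveries enter stock, distributions move nets into households, households lose nets over time), here is a minimal sketch in Python. All numbers are made up, and the exponential survival curve is a placeholder standing in for the fitted loss function, not the model actually used:

```python
import numpy as np

# Minimal annual stock-and-flow sketch (illustrative values only).
# Deliveries enter country stock; distributions move nets from stock
# into households; households then lose nets via a loss function.

years = np.arange(2000, 2014)
deliveries = np.full(len(years), 2.0e6)      # hypothetical LLINs delivered per year
nmcp_dist = np.full(len(years), 1.5e6)       # hypothetical reported distributions

half_life_years = 23 / 12                    # median retention from the main text

def surviving_fraction(age_years):
    """Fraction of nets still in households after age_years
    (exponential decay used purely as a placeholder loss function)."""
    return 0.5 ** (age_years / half_life_years)

stock = 0.0
cohorts = []            # (year distributed, number of nets) for each cohort
net_crop = []
for i, year in enumerate(years):
    stock += deliveries[i]
    distributed = min(nmcp_dist[i], stock)   # cannot distribute more than stock
    stock -= distributed
    cohorts.append((year, distributed))
    # Net crop = sum over all past cohorts of nets still surviving.
    crop = sum(n * surviving_fraction(year - y) for y, n in cohorts)
    net_crop.append(crop)

print(dict(zip(years.tolist(), np.round(net_crop, -3))))
```

Because the system is closed, any two of (distributions, net crop, loss function) pin down the third; the fitted model exploits exactly this to infer loss rates from distribution and survey data.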
Estimating national ITN access and use indicators from net crop
Levels of ITN access within households depend not only on the total number of ITNs in a country (i.e. net crop), but also on how those nets are distributed between households. In simple terms, a more even distribution yields a greater proportion of households owning nets than if those same nets are concentrated in fewer households. Many recent national surveys report the number of ITNs observed in each surveyed household. This allows a histogram to be generated that summarises the net ownership pattern (i.e. the proportion of households with zero nets, one net, two nets and so on). By analysing such data from multiple surveys, previous studies have demonstrated that histograms for different countries vary in a broadly predictable way according to national net crop (Flaxman et al., 2010; Yukich et al., 2013). By representing these histograms using a formal statistical distribution (such as the negative binomial), and linking its parameters to net crop, predicted histograms can be generated for any country-year for which a net crop estimate is available (Flaxman et al., 2010; Yukich et al., 2013). These histograms, in turn, allow direct calculation of the first access coverage indicator (% households owning one or more ITNs). We took the view that this approach (linking net crop to a statistical distribution, and using the distribution to calculate access indicators) is preferable to the alternative of regressing the access indicators against net crop directly. The latter approach, used in the NetCALC tool (Networks, 2014), is simpler but provides less direct insight into the patterns of between-household ITN distribution that ultimately link net crops to access levels.
One aspect that is known to strongly influence the relationship between net crop and household ownership distribution is the size of households found in different countries (Networks, 2014; Yukich et al., 2013), which varies greatly across sub-Saharan Africa (Swaziland, for example, has an average household size of around three members, while in Senegal the average is nearly ten). Household size also, of course, determines whether a given number of owned nets will be sufficient to provide access to all residents. We extended earlier analyses (Flaxman et al., 2010; Yukich et al., 2013) to explicitly account for household size, using a bivariate (i.e. two- rather than one-dimensional) histogram model to link net crop to ownership distributions for each household size stratum (see Supplementary Information). We replaced the negative binomial distribution with a 2-d zero-truncated Poisson distribution and, for each household size stratum, fitted the distribution using two parameters: (i) the proportion of households with zero ITNs and (ii) the mean number of ITNs per ITN-owning household. Using the household-level data from 83 national surveys, we found that both parameters were strongly related to national net crop, allowing bivariate histograms to be generated for every country-year that were closely representative of the true ITN ownership distribution.
Stratifying our analysis by household size had three important advantages over earlier approaches. First, the distribution of net ownership tended to vary substantially between households of different sizes within a given country, and this variation would be missed if all households were considered together. Accounting for this enabled better fits to the data. This makes sense: all else being equal, larger households would be expected to own more nets than smaller ones, and so distribution patterns would differ systematically. Second, the bivariate ownership histograms predicted for each country-year could be used to directly calculate all three indicators of household access. While a simple univariate histogram allows calculation of % households with at least one ITN, a bivariate histogram means the number of both ITNs and people in every household can be considered jointly, which, in turn, allows direct calculation of the two additional indicators: % households with at least one ITN for every two people and % population with access to an ITN within their household, along with the 'ownership gap' (see Supplementary Information). Linking these bivariate histograms to our annual net crop estimates for each country meant we could predict time-series of the access indicators at the national level from 2000-2013, with all parameters fitted in a Bayesian framework providing posterior credible intervals around each time-series. We also combined the country-level results to generate a set of continent-level indicator time-series, representing overall coverage levels among populations at risk in the 40 endemic countries. Third, the bivariate histograms allowed analysis of over-allocation: certain cells of the histogram represented households owning more ITNs than were required to achieve access on a one-between-two basis, and the proportion of the total net crop falling in this category was examined through time for every country.
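To make the link from a bivariate ownership table to the access indicators concrete, the sketch below computes the three household indicators from a hypothetical households-by-nets count table. The toy counts and the helper name `access_indicators` are illustrative, not the paper's fitted model:

```python
import numpy as np

def access_indicators(counts):
    """counts[h, n] = number of surveyed households with household size h+1
    owning n ITNs (toy cross-tabulation standing in for the fitted
    bivariate histograms described in the text)."""
    n_sizes, n_nets = counts.shape
    households = counts.sum()
    pop = people_with_access = 0.0
    own_any = own_enough = 0.0
    for h in range(n_sizes):
        size = h + 1
        for n in range(n_nets):
            c = counts[h, n]
            pop += c * size
            # Each net is assumed to protect up to two people (one-between-two).
            people_with_access += c * min(2 * n, size)
            if n >= 1:
                own_any += c
            if 2 * n >= size:
                own_enough += c
    return {
        "pct_hh_at_least_one_itn": own_any / households,
        "pct_hh_one_itn_per_two_people": own_enough / households,
        "pct_pop_with_access": people_with_access / pop,
    }

# Hypothetical table: rows = household size 1..4, columns = 0..3 ITNs owned.
toy = np.array([[20,  5,  1, 0],
                [30, 15,  5, 1],
                [25, 20, 10, 2],
                [15, 18, 12, 4]], dtype=float)
print(access_indicators(toy))
```

The over-allocation analysis falls out of the same table: cells with 2n strictly greater than the household's need hold the surplus nets.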
We took a different approach for the final indicator, % population who slept under an ITN the previous night. ITN use is less directly linked to national net crop and is primarily determined by the availability of nets within households (Eisele et al., 2009). A total of 83 of the 102 national surveys contained data allowing the relationship to be explored between ITN use and each of the three access indicators, with, perhaps unsurprisingly, % population with access to an ITN within their household displaying the strongest correlation (adjusted R² = 0.96). We fitted this relationship across the 83 surveys using a simple Bayesian regression model (see Supplementary Information) and used it to predict time-series of the ITN use indicator for every country. The ratio of population use to access revealed the 'usage gap' (the fraction of the population with access to ITNs not using them), and between-country variation in this ratio was also explored.
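As a rough stand-in for that regression step, the snippet below fits an ordinary least-squares line of use on access across synthetic survey points. The paper fits a Bayesian model; the data here are fabricated purely for illustration:

```python
import numpy as np

# Hypothetical survey-level points: access and use proportions.
rng = np.random.default_rng(0)
access = rng.uniform(0.05, 0.9, size=83)
use = 0.89 * access + rng.normal(0.0, 0.02, size=83)   # ~89% of those with access use nets

# Ordinary least squares (the paper uses a Bayesian regression instead).
slope, intercept = np.polyfit(access, use, 1)
predicted_use = np.clip(intercept + slope * 0.48, 0, 1)  # use implied by 48% access
print(f"slope={slope:.2f}, intercept={intercept:.3f}, use at 48% access={predicted_use:.2f}")
```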
Estimating ITN requirements to achieve universal access
Our two-stage modelling framework represented the pathway from ITN delivery into countries through to resulting levels of net access and use in households. It also accounted for two potential factors that act to reduce access levels, and allowed these to be quantified through time for each country. Using this architecture, it was possible to simulate delivery of any hypothetical volume of ITNs to a given country over a given future time period, to predict the levels of access and use that would result, and to examine the impact of different amounts of over-allocation and net loss. The current needs assessment exercise that countries are undertaking (RBM-HWG, 2014; RBM, 2014) is designed to identify the number of LLINs required to achieve coverage targets by 2017. We used our model to estimate the levels of access likely to be achieved if these forecast LLIN commodity needs were met across the 2014-2017 period under a 'business as usual' scenario, that is, with current levels of over-allocation and net loss, and compared these predicted levels with the objective of universal access among target populations. We then generalised this experiment to predict the likely level of coverage (for all four indicators) achievable by 2017 under a broad spectrum of LLIN delivery levels, equivalent to a total for sub-Saharan Africa of between zero and 2.5 billion nets across the 4-year period. (Two of the 40 endemic countries in our study, Djibouti and Equatorial Guinea, did not participate in the RBM-HWG needs assessment exercise, and so our scenario analysis is based on the remaining 38 countries; to maintain comparability through time, we combined needs assessment data for mainland Tanzania and Zanzibar, and for Sudan and South Sudan.) Further, we ran these simulations under four scenarios: (i) 'business-as-usual' (current levels of over-allocation and net loss maintained); (ii) no over-allocation (new LLINs distributed preferentially to households with zero LLINs, then to those with less than one-between-two); (iii) reduced LLIN loss by households (using a modelled 3-year median retention time); and (iv) both no over-allocation and a 3-year median retention time.
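A toy comparison of the four scenarios can be sketched as below. The saturating access function and every number here are crude placeholders, not the paper's fitted models; the sketch only illustrates why diminishing returns and the efficiency levers interact:

```python
# Toy scenario comparison (all numbers hypothetical): how delivered volumes
# might translate into access under different over-allocation and loss settings.
def access_from_delivery(nets_delivered_m, over_alloc, median_retention_yr,
                         pop_at_risk_m=810, horizon_yr=4):
    """Crude saturating approximation, not the paper's fitted model:
    effective nets = delivered nets surviving to the horizon and not
    over-allocated; access saturates as households fill up."""
    avg_age = horizon_yr / 2                      # nets arrive uniformly over the period
    surviving = 0.5 ** (avg_age / median_retention_yr)
    effective = nets_delivered_m * surviving * (1 - over_alloc)
    needed = pop_at_risk_m / 2                    # one net between two people
    return 1 - (1 - 1 / needed) ** effective      # diminishing marginal returns

for label, oa, ret in [("business-as-usual", 0.30, 23 / 12),
                       ("no over-allocation", 0.00, 23 / 12),
                       ("3-yr retention", 0.30, 3.0),
                       ("combined", 0.00, 3.0)]:
    print(label, round(access_from_delivery(920, oa, ret), 2))
```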
Acknowledgements

We thank the Surveillance, Monitoring and Evaluation Technical Expert Group (SME-TEG) for their feedback and suggestions. We thank Clara Burgert of the DHS Program for her assistance with DHS Survey access and interpretation.

Additional information
Competing interests

SIH: Reviewing editor, eLife. The other authors declare that no competing interests exist.

Appendix 1. Supplementary information on data and methods
Outline of document
In this Supplementary Information document, we augment the main manuscript by providing additional details on the data and modelling architecture developed in this study to allow prediction of insecticide-treated net coverage, system efficiency and future needs. In Section 2, we describe the acquisition and processing protocols for data from national household surveys, NMCPs and ITN manufacturers. In Section 3, we describe the Bayesian compartment model that estimates, at quarterly intervals, the total number of ITNs in households in each country (the net crop). In Section 4, we describe a second modelling stage that predicts national-level ITN coverage indicators as a function of predicted net crop. Finally, in Section 5, we describe the use of the modelling framework to predict future ITN coverage levels under a range of hypothetical ITN delivery volumes and efficiency scenarios.
Data collection
The various modelling stages were fitted to data from three principal sources.
ITN manufacturer reports
Manufacturer reports provided information on the number of LLINs delivered to each country each year by international manufacturers. These data were provided to the WHO by Milliner Global Associates and were complete for each country from 2000 to 2013 inclusive.
National malaria control program reports
NMCP reports provided information about the number of LLINs and cITNs distributed in a country within a given year. These data were provided to WHO by NMCPs and were available for 365 of the 560 country-years addressed in the study. We treated these data as only partial records of distribution activities because the extent to which NMCP reporting captures distribution by non-government agencies is not known for all countries.
Household survey reports
We identified and obtained data for the ownership and use of ITNs from household surveys conducted in sub-Saharan countries since 2000, including DHS, MIS, MICS, AIDS indicator surveys (AIS) and a malaria and anaemia prevalence survey (EA & P) (Supplementary file 1). Data at the household level were acquired from 95 national surveys from 39 countries from 2001 to 2014. In addition, we acquired national-level data from four household survey reports for which we were unable to obtain the household-level dataset. The range and number of the household survey data collected are depicted in Appendix figure 7. For those surveys where household-level data were available on the type of nets owned (see Supplementary file 1), the number of ITNs owned was determined by summing the ITNs in each household. A net was considered an ITN if it was an LLIN, a pre-treated net obtained within the past 12 months, or a net that had been soaked with insecticide within the past 12 months. ITNs were then subdivided into the two classes, LLINs and cITNs.
For the surveys where data on the type of net were only available for one net in each household (see Supplementary file 1), the overall survey-level proportion of total nets in each net classification (non-ITN, LLIN, cITN) was determined and multiplied by the number of nets in each surveyed household to estimate the number of LLINs and cITNs owned by that household.
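A minimal sketch of that apportioning step (toy proportions; the variable names are placeholders):

```python
# Apportion untyped nets in a household using survey-level type proportions
# (toy example of the protocol described above).
type_props = {"LLIN": 0.6, "cITN": 0.3, "non-ITN": 0.1}  # from single-net descriptions

def apportion(total_nets_in_household, props=type_props):
    """Estimate LLIN/cITN counts for a household that reported only
    its total number of nets."""
    return {k: total_nets_in_household * p for k, p in props.items()}

print(apportion(3))   # e.g. {'LLIN': 1.8, 'cITN': 0.9, 'non-ITN': 0.3}
```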
Compartment model

3.1 Introduction
This section describes our implementation of a Bayesian hierarchical model to impute, at quarter-annual intervals, the total number of insecticide-treated bed nets (which can be LLINs or cITNs) in a given country.
To achieve this goal, we built a model incorporating the three distinct sources of information on ITNs described in the preceding section: manufacturer reports on ITNs delivered to countries, NMCP reports on ITNs distributed to households within countries, and household survey data providing direct cross-sectional estimates of net crop in households at a given time point.
The key challenge in our model was linking the delivery and distribution data sources to the net crop measurements. Appendix figure 8 shows our schematic representation of the system and the evidence synthesis of these three data sources. In a given year, a given volume of nets is delivered to a country from manufacturers (Appendix figure 8, green arrows), giving rise to an (as yet undistributed) country stock (Appendix figure 8, orange arrow). Nets from this stock may then be distributed to households in that or subsequent years, as captured by NMCP distribution data (Appendix figure 8, blue arrows). To link the NMCP-distributed nets to the direct estimates of the net crop existing within households, we needed to account for the rate of loss of nets from households. That is, once nets are distributed, they remain within households for a given length of time until they are discarded. By tracking the net loss from NMCP distributions (Appendix figure 8, brown arrows), we were able to estimate the total number of nets in a country at a given time point (e.g. in 2008, when a household survey was conducted; Appendix figure 8, red arrow) by summing the nets of all ages (Appendix figure 8, purple line). The compartment model was therefore calibrated by parameterising the loss function in such a way that the net crop observed in each national survey was consistent with the known influx of nets to households (following manufacturer delivery and NMCP distribution) and an estimated rate of loss from households. Our loss function was modelled to be temporally varying for a given country and parameterises the proportion of nets discarded through time.
Model specification
3.2.1 Observed data
Conceptual description
Household survey reports provided a direct measure of how many ITNs, of any age, existed in households in a country over a sampling period. For use in fitting the compartment model, household-level survey data on ITN ownership needed to be translated into a representative estimate of the total number of nets in a given country. To accomplish this, we summarised each household survey into a small set of summary statistics: the average numbers of LLINs and cITNs per household and the average household size (i.e. number of residents). The total number of ITNs was then calculated as the product of the population of the country at the time of the survey and the estimated ITNs per capita as observed in the survey data, taking into account the survey weighting to ensure the arithmetic mean was nationally representative. The calculation of the total number of nets was completed by specifying probability distributions that allowed propagation of uncertainty from the household surveys.
Formal description
To estimate national net crop (total ITNs in households) from each national survey, three summary statistics were required:

a) The average household size, $\bar{h}$, with an associated standard error, $\sigma_{\bar{h}}$.

b) The average number of LLINs per household, $\overline{\mathrm{AvgLLIN}}$, with an associated standard error, $\sigma_{\mathrm{AvgLLIN}}$.

c) The average number of cITNs per household, $\overline{\mathrm{AvgcITN}}$, with an associated standard error, $\sigma_{\mathrm{AvgcITN}}$.
It should be noted here that, throughout this Supplementary document, we use the term standard error when referring to the standard deviation of the sample mean or the variability of an estimator.
Survey data came from four different sources: the DHS Program, MICS4, MICS3, and other surveys for which only the published report (and not the underlying data) was available, mainly MIS undertaken unilaterally by NMCPs or supporting partners. Different protocols were required to obtain summary statistics (a), (b) and (c) above for each survey type.
DHS and MICS4 data: these household surveys provided direct data about the number of LLINs and cITNs per household and the household size. Therefore, means across all households were calculated using a weighted average that incorporated the published survey sample weights, with standard errors obtained through Taylor linearisation (Kish, 1965).
MICS3 data: these household surveys provided direct data about the household size and number of bed nets per household, but did not provide information on the type of each net observed (e.g. LLIN, cITN or untreated bed net). The surveys did, however, provide a full description of a single net within each household. Therefore, across all such single nets in each survey, we determined the mean proportion of nets that were LLINs, cITNs or untreated nets, along with the standard errors. We then multiplied these proportions by the computed average number of nets (of any kind) to determine the average number of LLINs and cITNs per household. Propagation of uncertainty was achieved using MCMC sampling (Plummer, 2003; Gelman et al., 2013).
Other reports: these surveys contained no disaggregated household data and reported only the averages of household size and numbers of LLINs and cITNs, with no standard errors. We therefore assumed a small 1% error on these estimates, consistent with the magnitude of sampling errors seen in other surveys.
Using metrics (a), (b) and (c) obtained from the 95 processed surveys, the total number of LLINs in a country reported by a survey, $\mathrm{SURVEYLLIN}_{t,c}$, and its associated standard error, $\sigma_{\mathrm{SURVEYLLIN}_{t,c}}$, were defined probabilistically, and similarly the total number of cITNs, $\mathrm{SURVEYcITN}_{t,c}$, with standard error $\sigma_{\mathrm{SURVEYcITN}_{t,c}}$, where $E[\cdot]$ is the expected value, $\sqrt{\mathrm{VAR}[\cdot]}$ is the standard error, $c$ is a given country and $t$ is the mean sampling time of the survey. Through Equations 1-4, $\mathrm{SURVEYLLIN}_{t,c}$, $\sigma_{\mathrm{SURVEYLLIN}_{t,c}}$, $\mathrm{SURVEYcITN}_{t,c}$ and $\sigma_{\mathrm{SURVEYcITN}_{t,c}}$ were therefore observed data inputs into the compartment model.
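Under the definitions above, the survey-based net crop amounts to population times weighted nets per capita. A minimal sketch with illustrative inputs (all values hypothetical):

```python
import numpy as np

# Survey-weighted estimate of national LLIN crop (illustrative values).
weights = np.array([1.2, 0.8, 1.0, 1.5])      # survey sample weights
llins = np.array([2, 0, 1, 3])                # LLINs per surveyed household
people = np.array([5, 3, 4, 7])               # residents per surveyed household

avg_llin = np.average(llins, weights=weights)     # weighted LLINs per household
avg_hh = np.average(people, weights=weights)      # weighted household size
population = 12.0e6                               # national population at survey time

survey_llin_crop = population * avg_llin / avg_hh  # nets per capita times population
print(f"estimated LLIN crop: {survey_llin_crop:,.0f}")
```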
Manufacturer reports of LLIN deliveries
As shown in Appendix figure 8, manufacturer reports provided data on the number of LLINs delivered to a country in a given year. There were no corresponding reports for cITNs.
Conceptual description
The main purpose of the manufacturer data was to determine stock levels and 'cap' estimated NMCP distributions in each year (i.e. more nets could not be distributed than were potentially available in country stock). The manufacturer reports were complete (no missing values) and assumed to be of high fidelity. We therefore modelled the likelihood of the manufacturer data in a given country-year as a normally distributed random variable. The manufacturer reports did not report standard errors in the delivery numbers and therefore we assumed a uniformly distributed prior probability on the error. Because the manufacturer data served only to inform stock and cap NMCP distributions, no considerations of sub-annual timing were required.
Formal description
We define $\mathrm{Manufacturer}_{c,t}$ (observed data) as the number of manufacturer LLINs delivered to a country $c$ in year $t$. The number of manufacturer LLINs sent, $\mu_{c,t}$, was modelled as:

$$\mathrm{Manufacturer}_{c,t} \sim \mathrm{Normal}\left(\mu_{c,t},\; \mathrm{Manufacturer}_{c,t}\,\sigma^2_{m,t}\right)$$

with error $\sigma_{m,t} \sim \mathrm{Uniform}(0, 0.075)$.
NMCP reports of ITN distributions
As shown in Appendix figure 8, NMCP reports provided information on the number of both LLINs and cITNs distributed in a given country $c$ in year $t$.
Conceptual description
As with the manufacturer data, the NMCP reports were assumed to be of reasonably high fidelity. Unfortunately, NMCP reports were not complete and contained missing values where countries failed to report the number of nets distributed in a year. To impute these missing values, we defined an informative prior probability distribution on the NMCP distributions for both LLINs and cITNs. We tested multiple different parameterisations of the NMCP prior distributions and evaluated the performance of each parameterisation using out-of-sample cross-validation to choose the best model.
Our final choice of NMCP prior distributions was data-driven, using combined reports across all African country-years that had NMCP data. We scaled the reports to per-capita NMCP distributions to remove country-specific differences. We observed that this combined per-capita distribution approximately followed an exponential distribution with a zero-inflation hurdle (to account for years with no distributions). Additionally, when looking at the combined reports across time, it was clear that the combined distribution varied temporally. We therefore modelled NMCP LLINs and cITNs separately for each year by disaggregating the continent-wide per-capita distribution temporally into separate exponentials with zero inflation. Finally, to account for variability due to sample size, we fitted splines through these time series.
Formal description
We defined $\mathrm{NMCP\,LLIN}_{c,t}$ (observed data), $\mathrm{NMCP\,cITN}_{c,t}$ (observed data) and $\mathrm{NMCP\,TOTAL}_{c,t}$ (observed data) as the number of NMCP LLINs, the number of NMCP cITNs and the sum of LLIN and cITN NMCP nets reported distributed in a country $c$ at time $t$.
As described above, the NMCP reports did not have standard errors and contained missing values. We therefore defined informative prior distributions on $\mathrm{NMCP\,LLIN}_{c,t}$ and $\mathrm{NMCP\,cITN}_{c,t}$. First, we defined two sets, $L_t$ and $I_t$, as the combined sets of per-capita rates of NMCP LLIN and cITN distribution across all African countries at time $t$, respectively. The sets $L_T$ and $I_T$ with $T \in \{2000, 2001, \ldots, 2012\}$ therefore contained all NMCP LLIN and cITN distributions across all country-years.
To characterise the hurdle exponential distribution, we used two parameters: the zero-inflated components ($p^{\mathrm{LLIN}}_{0,t}$, $p^{\mathrm{cITN}}_{0,t}$), which defined the probability that in a given country-year no nets were distributed, and the exponential components ($p^{\mathrm{LLIN}}_{1,t}$, $p^{\mathrm{cITN}}_{1,t}$) which, given that some distribution of nets took place, gave the per-capita rate of distribution (i.e. how many nets per capita were distributed).
We therefore defined the proportion of zero NMCP deliveries for LLINs and cITNs (the zero-inflated components) and the mean delivery rate at time $t$ (the exponential components), and used these two parameters to define the prior distributions on NMCP LLIN and NMCP cITN. The terms $rate_{\mathrm{LLIN},t}$ and $rate_{\mathrm{cITN},t}$ therefore characterised the prior distributions on the per-capita rates of NMCP LLIN and cITN distributions, respectively. The values of $rate_{\mathrm{LLIN},t}$ and $rate_{\mathrm{cITN},t}$ also contained missing values for some years and showed variability due to different sample sizes. Therefore, we fitted penalised regression splines through the time series for each parameter to create smooth prior parameters. The spline fitting was done using restricted maximum likelihood, with rigorous selection to find the optimal number of basis functions for the spline (see Appendix figure 1 for the spline fits).
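A sketch of how such a zero-inflated ('hurdle') exponential prior could be characterised from per-capita distribution data; the data are synthetic and the moment-based fit is only one simple way to do it:

```python
import numpy as np

# Characterise a hurdle-exponential from per-capita NMCP rates for one year
# (synthetic data standing in for the combined country reports L_t).
rng = np.random.default_rng(1)
rates = np.where(rng.random(40) < 0.25, 0.0,            # ~25% of country-years: none
                 rng.exponential(scale=0.08, size=40))  # otherwise exponential rates

p0 = np.mean(rates == 0)                 # zero-inflation component p_0
p1 = rates[rates > 0].mean()             # exponential mean, given distribution occurred

def sample_prior_rate(n, p0=p0, mean=p1, rng=rng):
    """Draw per-capita distribution rates from the fitted hurdle exponential."""
    zeros = rng.random(n) < p0
    return np.where(zeros, 0.0, rng.exponential(scale=mean, size=n))

print(f"p0={p0:.2f}, mean rate given >0: {p1:.3f}")
print(sample_prior_rate(5))
```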
To form the likelihoods, these prior rates needed to be scaled by the population in a given country-year. Therefore, we defined the terms $d_{c,t} = \mathrm{Population}_{c,t} \cdot rate_{\mathrm{LLIN},t}$ and $p_{c,t} = \mathrm{Population}_{c,t} \cdot rate_{\mathrm{cITN},t}$ as the final prior distributions on the NMCP distributions (not per capita).
Using these prior distributions and allowing for some uncertainty, we modelled the likelihoods for the observed $\mathrm{NMCP\,LLIN}_{c,t}$, $\mathrm{NMCP\,cITN}_{c,t}$ and $\mathrm{NMCP\,TOTAL}_{c,t}$ as normal random variables with added error:

$$\mathrm{NMCP\,LLIN}_{c,t} \sim \mathrm{Normal}\left(d_{c,t},\; \sigma^2_{d1,t}\right)$$

$$\mathrm{NMCP\,cITN}_{c,t} \sim \mathrm{Normal}\left(p_{c,t},\; \sigma^2_{d2,t}\right)$$

$$\mathrm{NMCP\,TOTAL}_{c,t} \sim \mathrm{Normal}\left(d_{c,t} + p_{c,t},\; \sigma^2_{d3,t}\right)$$

where $\sigma_{d1,t} = \sigma_{d2,t} = \sigma_{d3,t} \sim \mathrm{Uniform}(0, 0.01)$. Additionally, all distributions (Equations 14 and 16) were zero-truncated to prevent negative numbers of nets.
It should be noted that NMCP distribution numbers are not always accurate and potentially under- or over-estimate the number of nets distributed. This may result, for example, from nets being distributed through other sources and therefore not contributing to the total NMCP distribution sum. To account for this uncertainty, we allowed the number of nets distributed to take a uniformly distributed value with a lower bound of the NMCP distribution and an upper bound of the number of nets able to be distributed (found from the net stock; see Equation 22).
Compartment model structure
Following the methods outlined above, we arrived at a set of observed data, with standard errors, comprising the number of nets delivered for all country-years (manufacturer), the number of nets distributed within-country for all country-years (NMCP) and sparse estimates of the number of nets of any age in households (i.e. net crop) in the 95 country-years with available surveys. Our compartment model linked all these processes together by modelling four distinct processes: (i) using delivery and distribution information to allow LLIN stock to accumulate, thereby allowing more LLINs to be distributed in a country-year than were delivered in it; (ii) disaggregating the total nets distributed in a country-year into quarter-yearly intervals to allow more realistic modelling of the temporal dynamics; (iii) linking year-by-year distributions through a loss function that accounted for the rate of nets being lost from households after distribution, as a function of the time since distribution; and (iv) calibrating the compartment model's quarter-yearly estimates against the observed survey reports. These four modelled processes are now described in turn.
Conceptual description
Following the methodology described in Flaxman et al. (2010), we define a net stock variable $\mathrm{Stock}_{c,t}$ for a given country-year. $\mathrm{Stock}_{c,t}$ links together LLIN manufacturer deliveries and NMCP LLIN distributions by allowing a surplus of nets to be built up in a country. Essentially, $\mathrm{Stock}_{c,t}$ added a 'cap' on the number of nets NMCPs could report as distributed, and therefore created an upper bound on erroneously reported distributions.
We also allowed for NMCP data to represent an under-estimate of true distribution levels. This could occur, for example, if the NMCP reporting system did not capture those nets being distributed by non-governmental agencies. To accommodate this uncertainty, we specified the number of nets distributed in a given year as a range, with all available country stock as one extreme (the maximum number of nets that could be distributed) and the NMCP-reported value (the assumed minimum distribution level) as the other. It should be noted that, due to the lack of manufacturer data for cITNs, this uncertainty was only incorporated for LLINs.
Formal description
First we defined $\hat{d}_{c,t}$, the adjusted $d_{c,t}$, as the modelled parameter for the number of NMCP LLINs distributed, with country stock evolving as deliveries minus distributions:

$$\mathrm{Stock}_{c,t} = \mathrm{Stock}_{c,t-1} + \mu_{c,t} - \hat{d}_{c,t}, \qquad \hat{d}_{c,t} \le \mathrm{Stock}_{c,t-1} + \mu_{c,t}$$

As shown in the above equations, if a country did not distribute as many nets as were delivered, stock levels can increase, but with the limit that a country cannot distribute more nets than stock permits. $\hat{d}_{c,t}$ has a probabilistic interpretation, reflecting our uncertainty about whether the NMCP values reported the total number of LLINs that were able to be distributed, or whether the calibration of the stock-and-flow model on the survey data required more nets to be distributed. For cITNs, it was not possible to include a stock component in the compartment model as there were no manufacturer reports for cITNs.
Conceptual description
Modelled variables $\hat{d}_{c,t}$ and $p_{c,t}$ defined the number of LLINs and cITNs distributed within a country-year. However, these variables were modelled on data that did not provide any information about when in the given year nets were distributed. This led to the potential for temporal inconsistency when calibrating survey estimates, made at an average time within a country-year, against NMCP distribution information with no sub-annual temporal resolution. Therefore, we needed to specify a prior on when NMCP distributions occurred within a given country-year. Striving to keep the model as parsimonious as possible, we first modelled a scenario where all NMCP nets were distributed at either the start or the end of the year. This, however, did not represent reality adequately, and led to poor calibrations with survey estimates in some instances. We then relaxed this assumption to allow all nets to be distributed at a random point in the year, but again this led to poor calibrations. Finally, we opted for a more realistic distribution scenario where we disaggregated distributions to a quarter-yearly temporal resolution. We then assigned priors on NMCP quarterly net distributions to allow any proportion of nets to be distributed by the first, second or third quarter, or by the end of the year. This scheme allowed for maximum flexibility in the model with minimal subjective prior assumptions, and yielded excellent calibrations with survey estimates.
Formal description

We disaggregated the modelled variables $\hat{d}_{c,t}$ and $p_{c,t}$ (the numbers of LLINs and cITNs distributed within a country-year) into intervals $Q \in \{0.25, 0.5, 0.75, 1\}$, representing the number of nets distributed by the first, second and third quarter or the end of a given year. We defined the proportions of $\hat{d}_{c,t}$ and $p_{c,t}$ falling in each interval as $q_i$, where $i \in Q$ and $\sum_{i \in Q} q_i = 1$, so that $\sum_{i \in Q} \hat{d}_{c,t}\, q_i = \hat{d}_{c,t}$ and $\sum_{i \in Q} p_{c,t}\, q_i = p_{c,t}$, i.e. the sum across the year is preserved.
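A sketch of the quarterly disaggregation; a symmetric Dirichlet is used here as one way to draw proportions summing to one, since the source does not reproduce the exact prior:

```python
import numpy as np

# Disaggregate an annual distribution total into four quarters with
# proportions q_i that sum to one (Dirichlet used as an illustrative prior).
rng = np.random.default_rng(2)
annual_total = 1.5e6                     # hypothetical LLINs distributed in a year

q = rng.dirichlet(alpha=np.ones(4))      # q_1..q_4, sum(q) == 1
quarterly = annual_total * q
assert np.isclose(quarterly.sum(), annual_total)   # sum across the year preserved
print(np.round(quarterly))
```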
Conceptual description
Our compartment model estimates NMCP distributions at quarterly intervals through time from manufacturer delivery data and the estimated stock accumulation. The final link needed to calibrate these quarterly distributions against survey observations of net crop is the rate of net loss. We model the net loss function as a smooth, compactly supported function defined previously as part of the NetCALC tool. We also model the loss function as non-stationary in time, representing this change through a moving average. By using a moving average, as opposed to individual loss functions for each year or quarter, we were able to learn temporal changes from the sparse data without over-representing the prior.
Formal description
We tried several different functional forms for net loss (Weibull, exponential, Hill) and decided on specifying the form using a smooth-compact function defined previously (Equation 25), where $k$ and $L$ are loss function parameters with $k, L > 0$. The smooth-compact loss function produced models with the lowest deviance information criterion (DIC) of the forms considered and has been validated in previous studies (Yukich et al., 2013).
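Since Equation 25 is not reproduced above, the sketch below uses one plausible smooth, compactly supported form consistent with the description: it equals 1 at age zero, decays smoothly, and is exactly zero for all ages beyond $L$.

```python
import numpy as np

def net_survival(t, k, L):
    """Fraction of nets still present t years after distribution, under an
    assumed smooth-compact bump form exp(k - k / (1 - (t/L)^2)) for t < L
    and 0 otherwise. At t = 0 this equals 1; it vanishes smoothly at t = L."""
    t = np.atleast_1d(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    inside = t < L
    out[inside] = np.exp(k - k / (1.0 - (t[inside] / L) ** 2))
    return out
```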
For both LLINs and cITNs we used the same functional form as in Equation 25 but restricted the bounds on parameter $k$, as uniform priors on this parameter produced strongly non-uniform loss functions (Bornkamp, 2012). Therefore, to achieve a diffuse, approximately uniform prior over the loss functions, we allowed $L$ to vary within large bounds, producing priors that admitted candidate loss functions with half-lives from 0.7 to 5 years. These priors were necessarily vague to allow adequate flexibility in fitting country-specific loss functions.
$k \sim \mathrm{Uniform}(16, 18)$

$L \sim \mathrm{Uniform}(4, 20.7)$

To model the loss function through time, we define a moving average on the parameters $k$ and $L$; the moving average on the loss function for both LLINs and cITNs is therefore taken over a lag window, where $n$ is the moving-average lag, $t \in \{2000, 2001, \ldots, 2013\}$, and any terms with $t < 2000$ are ignored. From out-of-sample cross-validation we found the optimal lag to be $n = 5$, i.e. a balance between over- and under-smoothing. It should be noted that $t$ is restricted to the range $2000 \le t \le 2013$, the range for which we have real data on NMCP reports, manufacturer reports and household surveys. For the future scenario predictions (described later), we assume that any future net loss behaviour is the same as that occurring in 2013.
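A sketch of the parameter smoothing follows, assuming a trailing window of lag $n = 5$ (the window alignment is an assumption, since the defining equation is not reproduced above):

```python
import numpy as np

def moving_average(values, years, n=5, first_year=2000):
    """Smooth a yearly parameter series (dict year -> value): each year gets
    the mean of its own and the previous n-1 values, dropping any terms
    before first_year, as described in the text."""
    smoothed = {}
    for t in years:
        window = [values[s] for s in range(t - n + 1, t + 1)
                  if s >= first_year and s in values]
        smoothed[t] = float(np.mean(window))
    return smoothed
```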
Conceptual description
Given the net distributions defined at quarterly intervals and the temporally varying loss function, a prediction of the number of nets of any age in a country can be obtained at quarterly intervals simply by summing across nets of all ages for a given quarter. These quarterly predictions of the number of nets of any age were then entered into a likelihood against the observed survey estimates. Where survey estimates were observed, this likelihood helped 'learn' values for all the prior probability parameters outlined in the compartment model. In the absence of survey information, the model defaulted to the prior probabilities for all parameters and relied on the NMCP and manufacturer reports.
Formal description
Consider the stock and flow model evaluated over the period 2000-2017, which yields 73 quarterly intervals. Now consider two $73 \times 73$ matrices labelled $M_{\mathrm{LLIN}}$ and $M_{\mathrm{cITN}}$. The rows and columns of these matrices represent the entire time period in quarterly intervals.
Consider the stock and flow model progressing column-wise through these matrices: at year $t$ and quarter $\hat t \in Q$, the column and row index $\mathrm{ind} = 4t + \hat t$ (e.g. 2004.5, i.e. year 5, quarter 3, gives index 23) stores $\hat d_{c,t}\, q_{\hat t}$ and $p_{c,t}\, q_{\hat t}$ (LLINs and cITNs distributed in year $t$ and quarter $\hat t$).
Then, for each quarter after the distributions $\hat d_{c,t}\, q_{\hat t}$ and $p_{c,t}\, q_{\hat t}$, the nets remaining in subsequent quarters were filled row-wise according to the loss function defined in Equation 25.
By summarising the stock and flow process in this manner, the total number of LLINs and cITNs of all ages in a given quarter is simply the corresponding column sum, $\sum_{j=1}^{73} M_{\mathrm{LLIN}}[j, \mathrm{ind}]$ and $\sum_{j=1}^{73} M_{\mathrm{cITN}}[j, \mathrm{ind}]$.
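A sketch of this matrix bookkeeping, assuming a survival curve like the one sketched earlier:

```python
import numpy as np

def net_crop(distributions, survival, n_quarters=73):
    """Row ind holds the cohort distributed in quarter ind, decayed along the
    row by the loss function; column sums give the total nets of all ages in
    each quarter. `distributions[ind]` and `survival(age_years)` are assumed
    given (see the loss-function sketch above)."""
    M = np.zeros((n_quarters, n_quarters))
    for ind in range(n_quarters):
        for j in range(ind, n_quarters):
            age_years = (j - ind) * 0.25      # cohort age in years
            M[ind, j] = distributions[ind] * survival(age_years)
    return M.sum(axis=0)                      # column sums: net crop per quarter
```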
Finally, we calibrated the estimates of the total number of nets of all ages against those reported in the household surveys, where the normal standard deviation is that found from the survey reports (Equations 2 and 4). Additionally, $P_{\mathrm{LLIN}}$ and $P_{\mathrm{cITN}}$ were defined by linear interpolation between the two quarters closest to the average survey time $\bar t$.
It should be noted that, when calculating nets per-capita, two scalings were used: (1) countries with a proportion of the population at risk of less than 1 were scaled according to the WHO-defined proportion of the population at risk; (2) countries partially dependent on IRS as a means of vector control were scaled by the proportion of the population at risk targeted with ITNs.
Indicators model

4.1 Introduction
Section 2 provides details on how, using yearly data on manufacturer deliveries and NMCP distributions calibrated against household survey reports, we estimated the number of LLINs and cITNs in households in each country at quarterly intervals. From these net crop estimates, we used population information to derive the nets per-capita for LLINs and cITNs. Standardized ITN coverage indicators were then estimated from net crop by combining the household survey information with the estimates of nets per-capita, yielding a set of indicators on net ownership and usage (Indicators 1-5, referred to throughout this section).
Indicator model structure
Previous models attempting to evaluate total nets and nets per-capita have used negative binomial models, unstratified by household size, to estimate Indicator 1 (Flaxman et al., 2010). However, with these approaches it was impossible to estimate Indicators 2-5. Here we introduce a new zero-truncated Poisson model stratified by household size, which can estimate all of Indicators 1-5 with excellent precision.
To begin the model derivation, consider a household survey $H$. From $H$ we can calculate a density/histogram of the number of households with a given number of ITNs (both LLINs and cITNs). Appendix figure 2 summarises this density plot; it is clear that Indicator 1 is trivially calculated from this histogram (the sum of the red bars divided by the total), but Indicators 2-5 are not. Previous modelling approaches (Flaxman et al., 2010) used this unstratified density and assigned a probability distribution (e.g. negative binomial or Poisson) parameterised such that the observed density could be recreated using a small number of model parameters (two for the negative binomial, one for the Poisson).
There are two key problems with this approach. First, as highlighted above, Indicators 2-5, which provide additional richness of information for decision makers, cannot be calculated directly from this one-dimensional histogram. Second, after experimenting with a large suite of probability distributions, we found that fitting these one-dimensional summary histograms to household survey data often produced very poor fits. The key to these poor fits is the lack of stratification of the number of ITNs by household size, the absence of which ignores an important determinant of the number of ITNs per household. A more useful summary of $H$ includes a second dimension for household size (Appendix figure 3). From this two-dimensional summary, it becomes possible to estimate Indicators 1-3 (Appendix figure 4). However, the problem remains: how do we recreate this two-dimensional density when we have no household survey information?
To accomplish this, we developed a model which, given a household size distribution, translates an estimate of nets per-capita (derived from the compartment model) into an accurate realisation of this two-dimensional histogram.
Zero-truncated Poisson model
The most logical model to recreate the density for a given household-size stratum is the Poisson distribution (or a negative binomial distribution for added overdispersion). However, we found that these models did not recreate the observed pattern accurately. We tried more complicated zero-inflated versions, but these did not improve the fits.
After looking across all 83 surveys for which we had the relevant information to recreate the two-dimensional histograms, we realised the process of creating the histograms had to be separated into two parts: (a) a process which, for a given household-size stratum, gives the density of households with no nets ($P_0$), and (b) a process which, for a given household-size stratum owning nets, gives the density of a given number of nets (1, 2, 3, ...) ($P_1$). Intuitively, consider a process that first fills the zero category of ITNs per household in Appendix figure 3, and then fills the categories 1, 2, 3, etc.
Consider a household-size stratum $h$ (e.g. households of three persons) from $H$; it is easy to calculate the proportion of households with no nets. This is the $P_{0,h}$ parameter. For the remaining households owning one or more nets, we calculated $P_{1,h}$ as the average number of nets in the stratum. The most logical probability distribution to fill the densities given $P_{1,h}$ is again the Poisson distribution, as $P_{1,h}$ is simply the mean of the Poisson.
However, because we have already filled the no-net density, the correct distribution is a zero-truncated Poisson distribution. Unfortunately, the mean of the zero-truncated Poisson distribution is no longer just the rate $\lambda$ but $\lambda/(1 - e^{-\lambda})$, which does not have the same useful interpretation. Therefore, we solved (using simple root finding) for the value of $\lambda$ that gives a zero-truncated Poisson with the same mean as a standard Poisson with mean $P_{1,h}$, but with the zero category excluded.
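A sketch of this root-finding step, using SciPy's brentq on the zero-truncated Poisson mean:

```python
import numpy as np
from math import exp, factorial
from scipy.optimize import brentq

def ztp_lambda(mean_nets):
    """Rate lambda of a zero-truncated Poisson whose mean equals mean_nets
    (= P_{1,h}, the average nets per net-owning household). The ZTP mean is
    lambda / (1 - exp(-lambda)) > 1, so a root exists only for mean_nets > 1."""
    f = lambda lam: lam / (1.0 - np.exp(-lam)) - mean_nets
    return brentq(f, 1e-9, 100.0)

def ztp_pmf(n, lam):
    """P(N = n | N >= 1) for n = 1, 2, ... under the truncated Poisson."""
    return (lam ** n) * exp(-lam) / (factorial(n) * (1.0 - exp(-lam)))
```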
Using this model, parameterised by just two parameters per household-size stratum, $P_{0,h}$ and $P_{1,h}$, we evaluated Indicators 1-3 across the 83 relevant surveys with correlation values above 0.98, showing that the model reproduced the complex density pattern in the two-dimensional histograms with excellent accuracy.
Zero-truncated Poisson model in the absence of survey information
The zero-truncated Poisson model, parameterised by $P_{0,h}$ and $P_{1,h}$, is able to re-create the two-dimensional densities in Appendix figure 4 from which Indicators 1-3 can be calculated. However, we still needed to estimate $P_{0,h}$ and $P_{1,h}$ for country-years without survey information.
Given the logical dependence of $P_{0,h}$ and $P_{1,h}$ on the underlying nets per-capita, we created two functions which, given a household-size stratum $h$, translate nets per-capita (npc) into $P_{0,h}$ and $P_{1,h}$, i.e. $f_0(\mathrm{npc}, h) = P_{0,h}$ and $f_1(\mathrm{npc}, h) = P_{1,h}$. After experimenting with non-parametric spline models, we found that simple polynomial surfaces worked remarkably well and had the added benefits of computational efficiency and compatibility with the compartment model.
We divided household sizes into 10 categories $(1, 2, 3, 4, 5, 6, 7, 8, 9, \ge 10)$ and then modelled $f_0(\mathrm{npc}, h)$ and $f_1(\mathrm{npc}, h)$ (Equations 34 and 35). The model for $f_0(\mathrm{npc}, h)$ is a two-dimensional surface that varies with household size, while the model for $f_1(\mathrm{npc}, h)$ is a separate straight-line function of npc for each household-size category.
Equations 34 and 35 were fitted using Bayesian linear regression, with the uncertainty in the coefficients propagated through the compartment model.
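Since Equations 34 and 35 are not reproduced above, the polynomial orders in the sketch below are assumptions, and ordinary least squares stands in for the Bayesian linear regression actually used:

```python
import numpy as np

def fit_f1(npc, p1, strata):
    """Per-stratum straight line, P_{1,h} ~ a_h + b_h * npc, for the 10
    household-size categories."""
    coefs = {}
    for h in range(1, 11):
        mask = strata == h
        coefs[h] = np.polyfit(npc[mask], p1[mask], deg=1)
    return coefs

def fit_f0(npc, h, p0, deg=3):
    """One polynomial surface in (npc, h); a cubic in npc with household-size
    interaction terms is used here purely as an illustrative basis."""
    X = np.column_stack([npc ** d for d in range(deg + 1)] +
                        [h * npc ** d for d in range(deg + 1)])
    beta, *_ = np.linalg.lstsq(X, p0, rcond=None)
    return beta
```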
When performing 10-fold out-of-sample cross-validation (leaving out entire surveys), we found that $f_0(\mathrm{npc}, h)$ predicted $P_{0,h}$ with a correlation of 0.98 and $f_1(\mathrm{npc}, h)$ predicted $P_{1,h}$ with a correlation of 0.97, indicating extremely good fits.
Given these two functions, which parameterise the zero-truncated Poisson model, we can calculate Indicators 1-3 from an estimate of nets per-capita for any country-year (whether or not a survey exists) via the resulting two-dimensional density.
4.2.3 Estimating the % of the population who slept under an ITN the previous night and the 'ownership gap'

The proportion of people who slept under an ITN the previous night (Indicator 4) was highly correlated with the proportion of people with access to an ITN (Indicator 3). Therefore, to evaluate Indicator 4, we used a simple linear relationship between access and use, evaluated across all 83 surveys with the relevant information (see Appendix figure 5). All that was then required to evaluate Indicator 4 was to take Indicator 3 (which contains all the rich information about household-size strata) and translate it through a linear relationship with noise:

$\mathrm{Indicator\ 4} \sim \mathrm{Normal}(0.8838889 \times \mathrm{Indicator\ 3},\ 0.06258131^2)$

Finally, Indicator 5 (the ownership gap) was calculated as $1 - (\mathrm{Indicator\ 4} / \mathrm{Indicator\ 3})$.
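A sketch of this final translation step, using the fitted coefficients quoted above:

```python
import numpy as np

rng = np.random.default_rng(0)

def indicators_4_and_5(indicator_3):
    """Draw use (Indicator 4) around 0.8838889 * access (Indicator 3) with the
    fitted residual standard deviation, then derive Indicator 5."""
    ind4 = rng.normal(0.8838889 * indicator_3, 0.06258131)
    ind5 = 1.0 - ind4 / indicator_3
    return ind4, ind5
```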
Additional note on household sizes
It should be noted that one missing piece in this analysis is the distribution of household sizes for every country-year. This information does not exist and is very difficult to model. Therefore, we made two assumptions. First, while populations are known to change over time, we assumed that within the 13-year window of our analysis the distribution of household sizes stays constant within each country. This assumption is, to some degree, warranted, as countries with serially sampled surveys showed extremely similar household-size distribution patterns, and the resulting indicators do not change significantly if the household-size distribution from a different time point is used. Second, for countries with no household-size information (due to a lack of relevant surveys), we used an average across all surveys.
Future predictions
Using the methods described in sections 2 and 3, we were also able to simulate the delivery of any volume of ITNs to a given country over a given future time period, predicting the nets per-capita and the full suite of indicators. Additionally, we were able to change the dynamics of this simulated future period to allow nets to be retained for longer (by varying the net loss function prior) and to account for over-allocation of nets (a skewed distribution of nets among households, with some households having too many nets and some too few).
When simulating forwards in time from 2013, we made several assumptions: 1. No cITNs were distributed or delivered. This is justifiable as, with the exception of Gabon in 2013, none of the 40 countries in our analysis delivered or distributed any cITNs in 2012 or 2013. It is therefore reasonable to assume that these countries continued using LLINs exclusively in future years. This assumption also follows the WHO recommendation that countries distribute LLINs rather than cITNs (Measure, 2014).
"year": 2015,
"sha1": "a74710442fb45c926dc33c1363d9f4455a791ab4",
"oa_license": "CC0",
"oa_url": "https://doi.org/10.7554/elife.09672",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ba77b4c5e73c2d39095949cf06614f1d6adf7dd0",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Medicine",
"Biology",
"Business"
]
} |
University of São Paulo "Luiz de Queiroz" College of Agriculture
Three-dimensional modeling of radiative transfer and canopy reflectance in Eucalyptus stands
Radiative transfer models (RTM) have been successfully used to simulate the effect of forest structural and biochemical characteristics, such as tree sizes and shapes, leaf area index (LAI) and leaf angle distribution (LAD), on the canopy radiative budget. One particular use of RTM is the analysis of the light reflected by the canopy, which can be measured by remote sensing techniques. RTM allow a physically based interpretation of the reflectance measured by satellite and can help disentangle the multiple sources of variation in the reflectance signal. The DART (Discrete Anisotropic Radiative Transfer) model is one of the most complex three-dimensional RTM, since it uses an accurate mathematical treatment of the physical processes and a highly realistic representation of the simulated landscapes. Its main simulation outputs are the reflectance of the scene (e.g. a forest stand) at particular spectral wavelengths from the bottom and top of the atmosphere, simulated satellite images, and simulated localized radiative budgets. Despite the potential of DART for analyzing biophysical parameters from remote sensing data, few studies report its application to forest plantations in Brazil, which can provide a large number of important field measurements to parameterize the model. The main objective of this project is to evaluate whether the DART RTM can help understand the satellite-measured canopy reflectance of Eucalyptus plantations and, in particular, whether DART can improve LAI estimation beyond purely empirical models such as spectral vegetation indices. The DART model was parameterized using extensive in situ data obtained from a clonal test, part of the EucFlux project. The specific objectives were: i) to parameterize the DART model at different growth stages and for different clonal materials of Eucalyptus plantations and to compare simulated reflectance with high-resolution satellite images acquired over the same site; ii) to analyze the relationship between the Leaf Area Index (LAI) and Spectral Vegetation Indices (SVI) based on empirical relationships, and then using the DART model; iii) to analyze the advantages and drawbacks of using a generic relationship or a clone-specific relationship between LAI and SVI, and to find other criteria for grouping genotypes within the same relationship. In Chapter 2, we demonstrated the good performance of DART in simulating the canopy reflectance of Eucalyptus forest plantations. The simulated reflectance was similar to that measured by very high resolution satellite images, despite some discrepancies in the near infrared region. Then, in Chapter 3, we showed that empirical relationships between LAI and SVI gave reasonable precision for generic relationships; however, genotype-scale relationships gave even better results. The same methodology applied to a DART-simulated dataset led to the same conclusions. An intermediate possibility of grouping the genotypes according to their litter or leaf optical properties gave intermediate performance. We finally concluded on the superiority of NDVI for estimating LAI using a genotype-specific calibration. Overall, the DART-simulated datasets created in this work make it possible to calibrate different LAI-SVI relationships in terms of genotypes, sensors and acquisition characteristics. Keywords: DART model; Reflectance; Satellite image; Forest plantations
INTRODUCTION
Brazilian forest plantations cover approximately 7.74 million ha, of which 5.6 million ha are Eucalyptus plantations, representing 71.9% of the total (INDÚSTRIA BRASILEIRA DE ÁRVORES - IBÁ, 2015). The area planted with Eucalyptus in Brazil is continuously increasing; for instance, there was an increase of 1.8% in 2014 compared to 2013 (IBÁ, 2015). The expansion of fast-growing forest plantations in tropical and subtropical regions is related to the increasing global demand for forest products (FAO, 2007). In this context, in 2008 forest plantation areas were predicted to double by 2020, reaching ~9 Mha (BRASIL, 2008), mainly with Eucalyptus and Pinus species. However, the slowing of Brazilian economic growth and the current economic crisis have revised these projections for 2020 downwards. Conversely, this increase in plantation areas also represents a significant increase in carbon sequestration and greenhouse effect mitigation (CERRI et al., 2010).
Because of the economic importance of Eucalyptus plantations in Brazil, there is a need to develop accurate and reliable methods to assess forest carbon stocks and to understand and monitor their ecological processes at large scales. Remote sensing is a powerful technique to monitor and predict forest dynamics (MARSDEN et al., 2010); thus, the development of remote sensing applications has become an active research field (BAKER et al., 2010).
Studies conducted by Roberts et al. (2007) and le Maire et al. (2011a) are examples of these possibilities, as they explore spatial variability, biomass production and the reactivity of ecosystems to disturbances in forestry-related activities by using passive and active sensors to estimate forest variables (e.g. LAI, crown structure).
A more comprehensive remote sensing approach includes the use of Radiative Transfer Models (RTM) to extract parameters from sensor data. These models simulate light transfer within the canopy (interception and scattering) and can also simulate the canopy light reflected towards the atmosphere, used to compute reflectance. Such models have been developed to describe the interaction of electromagnetic energy with several canopy components at crown and leaf levels (KUUSK; NILSON, 2000), allowing the effect of several vegetation characteristics (e.g. biophysical or biochemical parameters) to be quantified over space and time (GASTELLU-ETCHEGORRY et al., 2004a). Given the direct influence of the radiation regime on photosynthesis and growth processes, the expected advantage of this modeling is the exact distribution of intercepted and absorbed radiation within forest plantations, which can be linked to other ecophysiological models simulating, for instance, leaf photosynthesis.
Within the RTM domain, physically based radiative transfer models can be considered the most effective and robust for simulating canopies in terms of generalization ability and accuracy, especially when compared to empirical models (KIMES et al., 2002). While empirical relationships allow quick assessment of forest parameters, three-dimensional (3D) radiative transfer models, such as the Discrete Anisotropic Radiative Transfer (DART) model used in this thesis (GASTELLU-ETCHEGORRY; ROMIER, 1996), enable new domains of data evaluation, especially using high-resolution images and/or field measurements and the optical properties of canopy components. Therefore, 3D canopy reflectance models can be used to analyze the effects of the optical properties of different canopy elements (stems, leaves, twigs, soil and others) and their associations, along with different sun and view geometries.
Originally, the DART model was developed to simulate the bidirectional reflectance (BRF) behavior, remote sensing images and radiation budget of natural landscapes (e.g. trees, grass, soil and water) in the visible and infrared regions. Its approach involves accurate mathematical modeling of physical processes and realism of canopy simulations (http://www.cesbio.ups-tlse.fr/us/dart/dart_pourquoi.html), considering a robust earth-atmosphere system. Applications of DART in the forest domain meet the increasing demand to understand tree growth processes. The DART model can also be used to simulate remote sensing data, supporting sensor calibration and the evaluation of landscapes at different spatial-temporal scales, changing several calibration parameters, including object properties (e.g. tree dimensions and optical properties) and sensor characteristics (e.g. spectral and spatial resolution).

Research on the development of vegetation structure and canopy dynamics over large spatial and temporal scales is essential to predict the growth of Eucalyptus plantations and, consequently, to understand forest ecosystems. RTM approaches are suitable to address these issues, but they remain underexplored, mainly in the context of Brazilian Eucalyptus plantations. In this thesis, we analyzed the accuracy of the DART model and its outputs for different clones and ages. We worked with high spatial resolution satellite images acquired at different dates over the period of analysis, and with precisely field-measured structural, biophysical and biochemical parameters, to provide robust analyses of the forest structure. This is necessary for a better understanding of the carbon balance and of water and nutrient dynamics.
A description of the main concepts and parameters addressed before is shown in the next topics.
Eucalyptus plantations in Brazil
Species of the Eucalyptus genus are arboreal evergreen 'tropical' rainforest trees belonging to the myrtle family (Myrtaceae), endemic to Australia. This genus plays an important role in meeting the world's demand for woody products, accounting for 8% of the area of productive planted forests worldwide and one third of tropical forest plantations. In Brazil, of the 7.74 million hectares of forest plantations, 71.9% is planted with Eucalyptus (IBÁ, 2015). Most eucalypt planted areas are concentrated in the southern and southeastern regions, because the main industries of the forest sector (pulp and paper, wood panels, steel and processed wood) are based in these regions, and because of favorable climate conditions. Eucalypt plantations in Brazil started in 1904 with investments of the Paulista Railway Company, coordinated by Edmundo Navarro de Andrade (GONÇALVES et al., 2013). At first (around 1930), these plantations were primarily aimed at producing firewood used as fuel for locomotives and sleepers for railways; in the 1950s, large plantations were established for charcoal production. The pulp and paper industry, in this period, adopted eucalypt as its main source of raw material (LEMOS, 2012). However, the most significant increase of Eucalyptus plantation areas in Brazil occurred between the 1960s and 1980s, mainly as a result of tax incentives for reforestation and forest-based industries, increasing the planted area to 3 million hectares. The period between 1980 and 2000 was then marked by the consolidation of the Brazilian forest sector, including breeding programs, gains in productivity, area expansion, product diversification, increased competitiveness, and growing concern with social and environmental issues.
The consequence of the productivity gains of the Eucalyptus genus was an increase in the Mean Annual Increment from 15 m³ ha⁻¹ yr⁻¹ in the 1970s (QUEIROZ; BARRICHELO, 2008) to 40.7 m³ ha⁻¹ yr⁻¹ in 2012 (ASSOCIAÇÃO BRASILEIRA DE PRODUTORES DE FLORESTAS PLANTADAS - ABRAF, 2013). This productivity increase was achieved mainly through new eucalypt populations developed from inter-specific crosses, especially between E. grandis and E. urophylla, and, consequently, the use of cloning techniques to select superior individuals to be planted at commercial scale (LEMOS, 2012). In addition, there were significant advances in silvicultural practices, such as minimum tillage, control of weeds, pests and diseases, judicious fertilization recommendations and better control of forest fires. As a consequence, since the year 2000 Brazil has held the position of a major international player in the planted forest sector (GONÇALVES et al., 2013).
Nowadays, eucalypt plantations are grown for 6-8 years before the first clearcut, followed by another plantation or by a coppice rotation (1-4 years). The wood produced is used for different purposes: energy (electricity and charcoal), pulp and paper, and construction, with rotations extended to 20-25 years for sawmill wood (GONÇALVES et al., 2008).
Despite these silvicultural and genetic improvements and the recognized role in the international market, it is still necessary to improve the understanding of eucalypt plantations.
One of these issues concerns the analysis of the biophysical parameters that influence ecophysiological processes and the ability of different genetic materials to absorb radiation.
Forest biophysical parameters
Forest stands can be characterized by several biophysical parameters collected directly or indirectly during field measurements, estimated by empirical relationships, or extracted from remote sensing data. Some important parameters, linked to key processes in forest functioning, are: diameter at breast height (DBH), height, volume, biomass, leaf area index (LAI), leaf angle distribution (LAD), and the fraction of absorbed photosynthetically active radiation (fAPAR). In this work, we deal mainly with LAI and, indirectly, with LAD, as presented below.
Leaf Area Index (LAI)
The leaf area index (LAI) is a critical vegetation structural parameter for applications in biogeosciences (ZHAO et al., 2011) and varies with species, growth stage, site conditions, season and management practices (JONCKHEERE et al., 2004). This index was originally defined as the total one-sided area of photosynthetic tissue per unit ground surface area (WATSON, 1947). Although this definition is applicable to broad-leaved trees with flat leaves, because both sides of a leaf have the same surface area, it is more problematic for needles and non-flat leaves, for which the one-sided area is not clearly defined (JONCKHEERE et al., 2004). Other LAI definitions and interpretations have been proposed, and the diversity comes mainly from the measurement techniques; these different definitions can result in significant differences between LAI values. Besides Watson's definition, other possible LAI definitions are:

i. one-sided LAI: half the total developed area of leaves per unit of horizontal ground surface area (CHEN; BLACK, 1992; LANG, 1991). This definition is valid regardless of the shape of the vegetation elements (WEISS et al., 2004) and of topography, and is commonly used to represent gas exchange potential (BARCLAY, 1998);

ii. horizontally projected LAI: the sum of the shadow areas that would be cast by each leaf in the canopy with a vertical light source at infinite distance, summed over all leaves in the canopy (RUNNING et al., 1986). It is common in remote sensing applications since it represents the maximum leaf area that can be seen by sensors from overhead; however, it is lower than the 'real' LAI described above, because leaves are not all horizontal (see the LAD section below);

iii. inclined projected or silhouette LAI: the projected area of leaves accounting for individual leaf inclinations to the horizontal in their natural position on the tree (SMITH et al., 1991; STENBERG, 1996). It is useful for modeling the effects of light penetration through the canopy, light interception efficiency and remote sensing (BARCLAY, 1998), since it represents the area of intercepted light that would be observed by a nadir view from above. The LAI value calculated in this way lies between the two previous estimates;

iv. non-overlapping inclined projected LAI: the projected area of leaves accounting for leaf inclinations, counting overlapping leaf areas only once (BARCLAY, 1998). It represents the proportion of ground obscured by foliage in a remotely acquired image. Generally, we do not call this an LAI but refer to it as the 'direct intercepting surface'; it is similar to the 'fraction of intercepted direct radiation' (fIPAR when the radiation considered is photosynthetically active radiation).
LAI is directly related to vegetation gas exchange processes (GITELSON et al., 2014; CIGANDA et al., 2008) such as transpiration (CLEUGH et al., 2007; ROGERS, 2013) and rainfall interception (GHIMIRE et al., 2012); it can therefore be used to parameterize dynamic models estimating these variables. In particular, process-based ecophysiological models, such as MAESTRA (MEDLYN, 2004), use LAI directly as an input to simulate tree-scale photosynthesis and transpiration, together with meteorological variables and other structural variables. Quantifying LAI at different spatial and temporal scales is therefore important (le MAIRE et al., 2012), allowing a better understanding of dynamic changes in productivity and of climate impacts on forest ecosystems (ZHENG; MOSKAL, 2009). Another example is the use of LAI to examine relationships between environmental stress factors and forest damage caused by insects (EKLUNDH et al., 2009).
Two main procedures can be used to estimate LAI: direct and indirect methods. Direct measurements are more accurate, but they have the disadvantage of being extremely time-consuming and labor intensive (FASSNACHT et al., 1994), making long-term monitoring at large spatial and temporal scales hard to conduct. However, these procedures are still used to validate indirect methods (JONCKHEERE et al., 2004), and they can be used in 'simple' forest ecosystems like the Eucalyptus plantations under study. Some direct LAI measurements are leaf collection (harvesting and non-harvesting sampling) and planimetric and gravimetric techniques. Other measurements consist of measuring the length and diameter of each leaf, whose product is generally well correlated with individual leaf surface, or leaf counting, or the use of an average leaf size, among others. Indirect ground-based LAI measurements can be divided into contact measurements (inclined point quadrat and allometric techniques) and non-contact measurements based on light transmission analysis (DEMON, ceptometer, LAI-2000, hemispherical canopy photography, and others). Detailed descriptions of these methods are given in Fassnacht et al. (1994), Jonckheere et al. (2004) and Weiss et al. (2004). In eucalypt plantations, LAI estimations are based on direct measurement of the leaf area of a subset of trees of different sizes, which is used to calibrate a local allometric relationship further applied to the inventory.
Leaf Angle Distribution (LAD)
Leaf angle distribution is defined as the mathematical description of the angular orientation of leaves in the vegetation, represented as the probability of a leaf element having its normal vector within a specified angle. Since a uniform distribution of leaf azimuth angles ($\phi_L$) is adopted, the LAD becomes the probability density function of the zenith angle ($\theta_L$) of the leaf normal (ZOU et al., 2014) (Figure 1). A set of mathematical LAD functions is commonly used to classify measured leaf angle distributions: planophile, erectophile, plagiophile, extremophile, spherical, ellipsoidal, rotated ellipsoidal, elliptical and the two-parameter beta distribution (CAMPBELL, 1990; DE WIT, 1965; KUUSK, 1995; WANG et al., 2007) (Figure 2). Of these distributions, the erectophile has the largest mean zenith angle of the leaf normal, while the planophile has the smallest; the other distributions present intermediate zenith angles. The elliptical and two-parameter beta distributions have also been used in some studies (KUCHARIK et al., 1998; KUUSK, 1995; ZOU et al., 2014), as they allow the LAD to be parameterized to a given measured leaf distribution, instead of assuming a priori one of the well-defined distributions mentioned above. The ellipsoidal is the most widely used leaf angle distribution function. This distribution assumes that the leaf angle density function is the same as the angle density function of the area on an ellipsoid surface (CAMPBELL, 1990), and it has gained extensive use as it provides a reasonably accurate description of the empirical angle distributions of many different canopies.
Besides, it is described by only a single parameter, the average leaf angle (ALA). A more accurate variant, the rotated ellipsoidal distribution, has also been used (THOMAS; WINNER, 2000; WANG et al., 2007). This distribution corresponds to an ellipsoid in which small surface elements are rotated normally to the surface, and it better addresses the probability density of zero at a zero inclination angle (THOMAS; WINNER, 2000).
LAD is one of the most important biophysical parameters describing canopy structure and is necessary to accurately estimate absorbed, reflected and transmitted radiation fluxes (ROSS, 1981). LAD varies within and between species (HUTCHISON et al., 1986) and exhibits spatial and temporal variability (WIRTH et al., 2001). LAD plays an essential role in determining light competition between leaves and between trees within a canopy (HIKOSAKA; HIROSE, 1997), and therefore in the energy balance and microclimate (THANISAWANYANGKURA et al., 1997).
LAD can be measured directly with mechanical clinometers in contact with leaf surfaces. However, this is time-consuming, laborious and demands careful field work on a large number of representative leaf surfaces. Alternative indirect measurements using 3D-digitized canopy elements with specialized instrumentation (SINOQUET et al., 1998, 2005) are also time-consuming. Faster laser scanning (HOSOI et al., 2009, 2011) has also been used, but these methodologies demand resources and subsequent computing time. A photographic method, based on analyzing digital images of leaves in the canopies, has been applied and has shown fast and accurate results (PISEK et al., 2011; ZOU et al., 2014). Lang et al. (1985) proposed extracting LAD by inverting the radiation transmitted through the canopy at different view angles; however, this was inaccurate due to the difficulty of distinguishing the effects of leaf angles on canopy transmittance from the effects of other structural canopy parameters such as LAI. Huang et al. (2006) and Gao et al. (2003) found good results using bidirectional canopy reflectance models (section 1.2.4) to retrieve the leaf angle distribution.
Fraction of Absorbed Photosynthetically Active Radiation (fAPAR)
The fraction of Absorbed Photosynthetically Active Radiation (fAPAR) is defined as the fraction of incoming Photosynthetically Active Radiation (PAR) absorbed by the green elements of the canopy. PAR is the solar radiation reaching the vegetation in the wavelength region between 0.4 and 0.7 μm (FAO, 2009), the wavelengths useful for photosynthesis; fAPAR, along with other variables, can therefore be linked quantitatively to photosynthesis. Thus, fAPAR expresses the energy absorption capacity of the canopy, as shown in Equation 1 (GOWER et al., 1999):

$\mathrm{fAPAR} = \dfrac{(\mathrm{PAR}_{\downarrow AC} - \mathrm{PAR}_{\uparrow AC}) - (\mathrm{PAR}_{\downarrow BC} - \mathrm{PAR}_{\uparrow BC})}{\mathrm{PAR}_{\downarrow AC}}$ (1)

where $\mathrm{PAR}_{\downarrow AC}$ and $\mathrm{PAR}_{\uparrow AC}$ are, respectively, the incident and reflected PAR above the canopy, and $\mathrm{PAR}_{\downarrow BC}$ and $\mathrm{PAR}_{\uparrow BC}$ are the incident and reflected PAR below the canopy ($\mathrm{PAR}_{\uparrow BC}$ is the PAR reflected by the soil).
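A direct transcription of Equation 1, with the four PAR fluxes as arguments (the function name is chosen here for illustration):

```python
def fapar(par_down_ac, par_up_ac, par_down_bc, par_up_bc):
    """fAPAR from the four PAR fluxes of Equation 1 (Gower et al., 1999):
    radiation entering the canopy minus radiation leaving it (transmitted
    below, less what the soil reflects back), normalised by incident PAR."""
    absorbed = (par_down_ac - par_up_ac) - (par_down_bc - par_up_bc)
    return absorbed / par_down_ac
```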
The Global Terrestrial Observing System (GTOS) and the Global Climate Observing System (GCOS) state that fAPAR is one of the essential climate variables (ECVs), a critical parameter for analyzing the energy and carbon balance of ecosystems (FAO, 2008; PICKETT-HEAPS et al., 2014). It is directly linked to the photosynthesis process and, consequently, has been related to canopy chlorophyll content (ZHANG et al., 2009), canopy architecture (GUILLEVIC, 1999) and evapotranspiration rates. fAPAR is one of the few parameters that relate ecosystem function and structure (ASNER et al., 1998). Moreover, time series of fAPAR can be used to monitor vegetation and environmental indicators (GOBRON et al., 2006), drought events (GOBRON et al., 2005), land degradation (SENNA et al., 2005), phenology (VERSTRAETE et al., 2008) and biodiversity (COOPS et al., 2008), as well as to retrieve radiation fluxes for climate modeling (PINTY et al., 2006).
The basic direct fAPAR measurement requires the use of PAR sensors to measure each flux of Equation 1; several commercial instruments have been built and used for ground-based fAPAR measurements (FAO, 2009). This direct in-situ determination can be challenging in a heterogeneous forest ecosystem, since it requires simultaneous measurements of PAR above and within the canopy, adequate spatial sampling and representative daily averages. As an alternative, many studies of canopy light interception use the fraction of PAR intercepted by the canopy (fIPAR) instead of fAPAR, since it is easier to measure and provides almost the same value (GOWER et al., 1999).
Remote Sensing of forest stands
Many important ecological and silvicultural issues concern forest ecosystem processes over large areas. However, understanding of forest stand function has come fundamentally from intensive in-situ studies conducted in small experimental areas, owing to the difficulty of direct measurements at large temporal and spatial scales. In this context, remote sensing products are alternatives for assessing large forest stands and offer the potential to complement or even replace field measurements over larger areas (HOMOLOVÁ et al., 2013; KOKALY et al., 2009). Remote sensing can be defined as the means of obtaining information about an object without physical contact. According to Novo (2010), in studies of the terrestrial environment, remote sensing uses sensors and data-processing equipment to record and analyze interactions between incoming electromagnetic radiation and target objects on the earth's surface. In forestry, these objects are usually tree canopies and the gaps between canopies. Remote sensing data are collected using either passive or active sensors mounted on terrestrial, aerial or orbital platforms and represent a trade-off among spatial, spectral and temporal resolutions. An illustration of active and passive sensor schemes is shown in Figure 3. Passive optical sensors are the most commonly used; however, radar images are also used in forestry. In general, optical sensors sample the reflected light in the shortwave part of the electromagnetic spectrum, which includes the visible, near and middle infrared portions (350 up to 3000 nm). Some sensors also measure spectral bands in the long-wave part of the spectrum (i.e. thermal bands). Optical sensors can provide quantitative and qualitative information on foliage and biochemical properties (ROBERTS et al., 2007), as described above. On the other hand, active sensors, such as LiDAR and radar, emit pulses (laser or microwave) and record the backscatter from targets, providing information about biomass and forest structure (BOYD; DANSON, 2005).
Interactions between incident radiation and forest canopy elements are complex and are described by the absorption, reflection and transmission of the leaves and other objects (e.g. twigs, branches) that compose the canopy. The intensity of these processes depends on the physical-chemical properties of the objects and the intensity of the incident source.
Reflectance is the property of a specific object to reflect incident electromagnetic radiation and is expressed through reflectance factors (ρ) for given wavebands (PONZONI; SHIMABUKURO, 2007). The reflected radiation flux is also determined by the geometric characteristics of the incident and reflected radiation. Depending on these characteristics, the factors can be bidirectional (two geometries involved), in which one geometry describes the azimuth and zenith angles of the illumination (radiation source, e.g. the sun) and the other the azimuth and zenith angles of the sensor recording the intensity of the reflected flux (view angles). Reflectance can also be directional-hemispheric, measured under directional illumination with the reflected radiation recorded using integrating spheres (PONZONI; SHIMABUKURO, 2007). The directional-hemispheric reflectance factor of a green leaf is presented in Figure 4 and is characterized by the well-described absorption of foliar photosynthetic pigments (mainly chlorophylls) in the visible region (0.4-0.7 µm), by leaf structure in the near infrared region (0.7-1.3 µm, NIR), and by water and protein absorptions in the shortwave infrared region (1.3-2.5 µm, SWIR) (HOMOLOVÁ et al., 2013). Studies of forest ecosystems using remote sensing techniques can benefit from the wide variety of data provided by different passive and active systems at different spectral, spatial and temporal resolutions. A summary of ecological approaches and remote sensing spatial scales is shown in Figure 5. Satellite data have revolutionized research to characterize and monitor vegetation dynamics at global scales by using, for example, vegetation indices (NDVI, EVI, SR, among others, described in the next section) and by retrieving forest parameters.
Information on the main operational satellites and their spatial, temporal and spectral resolutions can be found at http://www.itc.nl/research/products/sensordb/searchsat.aspx. LAI is one of the parameters that can be obtained from remote sensing images, from active or passive sensors on board terrestrial, aerial or spatial platforms. Estimations of LAI from satellite images in the optical part of the spectrum generally rely on spectral vegetation indices or radiative transfer model inversions (le MAIRE et al., 2012). These methods are based mainly on the use of spectral wavelength bands measured by satellite in the form of images (multispectral images if a few broad bands are measured, hyperspectral images if many narrow bands), in the ~350 to ~2500 nm range. Some spectral bands (narrow or broad) are highly sensitive to vegetation structure, such as in the near infrared region; other bands are linked to canopy absorption by chlorophyll, like the red band. Combinations of these bands in spectral vegetation indices (SVI) are therefore correlated with LAI, but this relation is highly dependent on the SVI used and on the vegetation type, among other factors. These aspects concerning SVI and radiative transfer model inversion are discussed further in this thesis. LAI retrieval using remote sensing tools is a viable alternative allowing assessments at large scales (NORTH, 2002) and has been considered indispensable for modeling and simulating ecological variables and processes at large scales. In the literature, studies spanning more than 30 years have reported different remote sensing techniques (BANSKOTA et al., 2013; DELEGIDO et al., 2013; DUPUY et al., 2013; GITELSON et al., 2014; HERNÁNDEZ et al., 2014; le MAIRE et al., 2011a, 2012; MA et al., 2014; PROPASTIN, 2009).
As for LAI, empirical relationships calibrated between vegetation indices and field measurements are used to estimate fAPAR from satellite images (FENSHOLT et al., 2004). fAPAR can also be estimated by inversion of physically based radiative transfer models that use remote sensing data as input (D'ODORICO et al., 2014; FAO, 2009). Several space agencies and other institutional providers have created and delivered various fAPAR products at different temporal and spatial resolutions. However, in-situ validation of fAPAR products and estimation of their uncertainty are seen as critical tasks that remain incomplete (SEIXAS et al., 2009). Studies handling fAPAR products can be found in D'Odorico et al. (2014), Gobron et al. (2008), McCallum et al. (2010) and Pickett-Heaps et al. (2014).
In association with terrestrial land-surface models (HAVERD et al., 2013; KAMINSKI et al., 2012), remote sensing data can also be used to better understand carbon and water cycles (PICKETT-HEAPS et al., 2014).
Spectral Vegetation Indices (SVI's)
Canopy properties can be analyzed with empirical and physical remote sensing models (HOMOLOVÁ et al., 2013). Empirical methods are based on statistical relationships between field data and remote sensing data using regression techniques (SMITH et al., 2002). The sensitivity of remote sensing data to the properties of interest is often improved by calculating vegetation indices (CHEN et al., 2010) or spectral transformations in the case of contiguous hyperspectral data (SCHLERF et al., 2010).
Over the last decades, much remote sensing work has focused on collecting spectral measurements to characterize the presence and quality of vegetation elements, extracting and modeling several biophysical parameters of vegetation targets (BARET et al., 1987; DARVISHZADEH et al., 2008b; SCHLERF et al., 2005; WANG et al., 2005). Most of these efforts have used Spectral Vegetation Indices (SVI's): dimensionless radiometric metrics that indicate the relative abundance and activity of green vegetation, and that estimate spatio-temporal variations in the biophysical and biochemical parameters of vegetation, such as LAI, percentage of vegetation cover, fraction of absorbed photosynthetically active radiation (fAPAR) and canopy chlorophyll content, as well as supporting the estimation and forecasting of crop yields, crop types and conditions (DELEGIDO et al., 2013; le MAIRE et al., 2008, 2011a, 2011b; LIANG et al., 2015; WU; NIU; GAO, 2012; ZHAO et al., 2007).
SVI's were developed based on the characteristics of vegetation reflectance along the spectrum. These characteristics are primarily determined by pigments, especially chlorophyll concentration, which influence the vegetation reflectance spectra mainly in the visible domain (the blue, green and red regions of the spectrum). The near-infrared (NIR) region is also important for analyzing vegetation reflectance and is determined by the arrangement of cells within the mesophyll layer of leaves (influencing leaf reflectance) and by canopy structure (e.g. LAI). An ideal SVI for vegetation parameter retrieval should be well correlated with these biophysical parameters over a wide range of vegetation conditions. It should minimize external and internal effects and differences related to non-photosynthetic components and senescent leaves. It should also be related to field-measurable parameters for validation and quality-control purposes (LIANG et al., 2015), e.g. canopy structure, average leaf angle and others.
One of the main advantages of these SVI's is that they allow relevant information about vegetation cover to be obtained over wide areas and over time in a fast and easy way; besides, the underlying mechanisms are well understood (DELEGIDO et al., 2013). However, SVI's lack cause-effect relationships and, consequently, the statistical predictions often suffer from a lack of robustness and transferability, as they are usually site-, species- and time-specific (COLOMBO et al., 2003). Several SVI's have been developed over the last decades; some are more generalist, while others are specific to species and local conditions (e.g. EucVI, developed for LAI retrieval in eucalypt plantations, described in le Maire et al. (2012)).
These SVI's are primarily constructed using the inverse relationship between the red and NIR spectral regions. A summary of some of these SVI's is shown in Table 1 (table footnote: * L = 0.5 for SAVI; ** G = 2.5, C1 = 6.0, C2 = 7.5 and L = 1.0 for EVI; OSAVI after Rondeaux et al., 1996). The most widely known is the Normalized Difference Vegetation Index (NDVI) (ROUSE et al., 1973), which uses a normalized difference between the red and near-infrared regions. NDVI has been used to monitor vegetation activity over annual and seasonal growth stages and is strongly correlated with LAI. However, since the relationship between NDVI and LAI is exponential, NDVI often saturates under conditions of moderate-to-high LAI (e.g. > 3-5) (HABOUDANE, 2004; WANG et al., 2005). Other commonly used indices are the Soil Adjusted Vegetation Index (SAVI), the Enhanced Vegetation Index (EVI) and the Optimized Soil Adjusted Vegetation Index (OSAVI). More detailed descriptions of vegetation indices can be found in Jensen (2005).
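For concreteness, the standard formulas for three of the indices in Table 1, with their usual published constants (matching the table footnote), are sketched below; the band arguments are reflectance values.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index (Rouse et al., 1973)."""
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    """Soil Adjusted Vegetation Index with the usual soil factor L = 0.5."""
    return (1 + L) * (nir - red) / (nir + red + L)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    """Enhanced Vegetation Index with its standard coefficients."""
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)
```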
Radiative Transfer Models of vegetation
Physical remote sensing is another approach to retrieving biophysical parameters of forest canopies. It is based on radiative transfer models (RTM). This kind of model simulates light absorption and scattering inside vegetation canopies using the leaf biochemical composition and canopy structural properties as input (JACQUEMOUD et al., 2009; LAURENT et al., 2011b). The many different types of RTM use a variety of robust and precise mathematical and computational representations of radiative transfer in the environment (terrestrial surfaces and atmosphere) (GASTELLU-ETCHEGORRY, 2013). These models can be relatively simple or complex, requiring different input parameters to address the full radiative transfer problem and simulate the vegetation reflectance under any experimental conditions.
Other canopy structural properties (e.g. leaf aggregation, leaf angle distribution, clumping at different scales) present a substantial challenge for RTM parameterization and interpretation from remote sensing data and need to be investigated further (OLLINGER, 2011).
Normally, applying an RTM to recover canopy parameters uses inversion methods that identify the set of model parameters providing the best fit between the simulated reflectance and the remotely sensed reflectance. The accuracy of the inversion depends on the inversion method and the characteristics of the remote sensing measurements (radiometric and spatial resolution, view direction, spectral domain, among others). Due to the complexity of RTMs, the inversion procedure is not straightforward (LAURENT et al., 2011a). Many inversion methods are available; they can be divided into three major categories (KIMES et al., 2000): 1) traditional inversion methods that minimize the distance between simulations and measurements through minimization algorithms; 2) look-up table (LUT) methods, where a dataset of possible reflectances is pre-computed; and 3) machine learning methods, for example neural networks and random forests, which use non-linear, non-parametric regression between reflectance and parameters, calibrated on a simulated dataset.
The first method is robust but computationally intensive and not appropriate for large sets of remote sensing data. The other two methods are potentially more efficient and accurate, since they can use complex reflectance models with acceptable computational time and do not require the initial guesses of model parameters that traditional inversion methods do (GRAU; GASTELLU-ETCHEGORRY, 2013). Other interesting methods are the calibration of vegetation indices and model regression (le MAIRE et al., 2008). The look-up table is one of the most widely applied methods for remote sensing data inversion (GASTELLU-ETCHEGORRY et al., 2003; KIMES et al., 2002; LIANG et al., 2006); a minimal sketch is given below. Radiative transfer models of forest canopies can also be classified according to their properties regarding the interception of incident radiation as: isotropic media, azimuthally isotropic media and anisotropic media (VERHOEF, 1998). For isotropic media, radiation interception is independent of the incident direction (e.g. canopies with a spherical leaf angle distribution). For azimuthally isotropic media, radiation interception is independent of the azimuth angle but depends on the zenith angle of the incident direction (e.g. canopies with a LAD other than spherical). Finally, in anisotropic media, radiation interception depends on both the azimuth and zenith angles (e.g. heterogeneous forest canopies). Other specific classifications of radiative transfer models can be found in Verhoef (1998). An intercomparison of well-established radiative transfer models can be accessed through the RAMI initiative (http://rami-benchmark.jrc.ec.europa.eu/HTML/).
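A minimal nearest-neighbour LUT inversion, assuming a pre-computed table of RTM reflectance spectra and the parameter sets that generated them; operational implementations add noise models, regularisation and ensembles of best matches.

```python
import numpy as np

def lut_invert(measured, lut_reflectance, lut_params):
    """measured: (n_bands,); lut_reflectance: (n_entries, n_bands);
    lut_params: (n_entries, n_params), e.g. LAI and other canopy variables.
    Returns the parameters of the LUT entry whose simulated spectrum is
    closest (sum of squared differences) to the measured one."""
    cost = np.sum((lut_reflectance - measured) ** 2, axis=1)
    best = np.argmin(cost)
    return lut_params[best]
```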
The radiative transfer within a canopy usually depends on the spatial distribution of the canopy elements and the resulting complex radiative processes, such as multiple scattering, mutual shading of crowns and background shading (KOETZ et al., 2004). To simulate these complex light-element interactions, three-dimensional canopy radiative transfer models are required, especially for heterogeneous canopy structures (GASTELLU-ETCHEGORRY et al., 2003; GASTELLU-ETCHEGORRY; TRICHON, 1998; KOETZ et al., 2004). The major drawback of these physical methods is that different combinations of RTM input parameters may produce the same reflectance spectra, which makes the estimation of canopy properties by RTM inversion difficult (COMBAL et al., 2002).
DART model
Several bands (visible to thermal infrared) can be computed in a single simulation with three methods: flux tracking, Monte Carlo and LiDAR. The flux tracking method, also known as "ray tracing", has three modes: 'R' (reflectance), which simulates reflectance using the sun and/or atmosphere as radiation sources; 'T' (thermal), which simulates brightness temperature using the atmosphere and Earth scene as radiation sources; and 'R + T' (reflectance and thermal), which simulates brightness temperature using the sun, atmosphere and Earth scene as radiation sources. The Monte Carlo method works only in 'R' mode and without atmosphere. The LiDAR method is an active method and simulates only scattering processes (no thermal emission).
The cells that compose the scene array can contain turbid materials and triangles (GRAU; GASTELLU-ETCHEGORRY, 2013). Turbid material is used to simulate the interaction of radiation with three-dimensional (3D) volumes (vegetation and fluids, e.g. air and water), with attenuation obeying Beer's law (ROSS, 1981). In this concept, tree crowns are represented as juxtapositions of turbid-material cells, while surfaces (trunks, branches, topography, etc.) are simulated as triangles. By modeling the radiative transfer of the scenes, the model can generate results such as remotely sensed images, land cover maps, energy budgets (temperature, fAPAR, CO2 assimilation, and transpiration) and LiDAR waveforms, among others.
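A minimal sketch of Beer's-law attenuation through a single turbid cell is given below, assuming a leaf area density u_l, a leaf projection (extinction) coefficient G for the ray direction and a path length s; the values are illustrative and do not reproduce DART's internal scheme.

import math

def turbid_transmittance(u_l, G, s):
    # Fraction of a ray transmitted through a turbid cell (Beer's law):
    # u_l: leaf area density (m2/m3); G: leaf projection coefficient
    # for the ray direction; s: path length inside the cell (m).
    return math.exp(-G * u_l * s)

print(turbid_transmittance(u_l=1.5, G=0.5, s=0.8))  # ~0.55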
The simulated atmosphere in the scene is composed of cells in three regions (BA, MA and HA, as shown in Figure 6), whose size increases with altitude. Radiation propagates in a finite number of directions Ω_i with an angular sector width ΔΩ_i. Any set of N discrete directions may be used (not necessarily of equal solid angles, but such that Σ_{n=1..N} ΔΩ_n = 4π) (GASTELLU-ETCHEGORRY, 2008); a numerical check is sketched below. Radiation scattered along the direction Ω_i at a position r is described by a source vector W(r, Ω_i). Radiation interaction in the atmosphere corresponds to absorption and non-resonant (scattering, thermal emission) mechanisms that depend on the radiation (e.g. wavelength) and on the atmosphere (gas and aerosol volume density N, pressure P and temperature T) (CESBIO, 2013a). In DART, the atmospheric parameters can be input manually by the operator or specified using pre-computed databases (the most accurate approach) that store information derived from the Lowtran and Modtran atmosphere models (BERK, 1989).
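The sketch below illustrates the constraint that the solid angles of the N discrete directions sum to 4π. The zenith/azimuth grid used here is an illustrative assumption (DART builds its direction sets differently); the point is that the sector solid angles are unequal yet still sum to the full sphere.

import numpy as np

n_theta, n_phi = 10, 10                      # 100 discrete directions
theta_edges = np.linspace(0.0, np.pi, n_theta + 1)
dphi = 2.0 * np.pi / n_phi
# Solid angle of one (theta, phi) sector: dphi * (cos(theta1) - cos(theta2))
domega = dphi * (np.cos(theta_edges[:-1]) - np.cos(theta_edges[1:]))
total = domega.sum() * n_phi                 # sum over all azimuth sectors
print(total, 4.0 * np.pi)                    # both ~12.566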
These models simulate the atmosphere as a layer consisting of gases, aerosols, rain and other particles with vertical profiles (temperature, concentration) and specific optical properties.
More theoretical details on DART can be found in Gastellu-Etchegorry et al. (1999).
The DART model works with a graphical user interface (GUI) for the input parameters that describe the landscape and the illumination conditions of the scene. This interface uses four basic modules ("Direction", which calculates and stores the sun and view directions; "Phase", which calculates the general properties of the scene; "Maket", which builds the scene geometry and landscape dimensions; and "Dart", which pre-computes light beams and simulates radiation propagation) and five optional modules ("Vegetation", "SequenceLauncher", "DEMGenerator", "Hapke" and "PROSPECT") (CESBIO, 2013b). An example of input parameters and the modeling scheme is presented in Figure 7. Since its first release in 1996, DART has been successfully tested in studies on canopy vegetation against field measurements, such as the impact of canopy structure on satellite image texture (BRUNIQUEL-PINEL; GASTELLU-ETCHEGORRY, 1998) and reflectance (GASTELLU-ETCHEGORRY, 2008), the three-dimensional distribution of photosynthesis and primary production rates of canopies (MALENOVSKÝ et al., 2007), the influence of woody elements of a spruce canopy on nadir reflectance (MALENOVSKÝ et al., 2008), the classification of heterogeneous forests (COUTURIER et al., 2009) and the quantification of leaf chlorophyll content (MALENOVSKÝ et al., 2013), among others. DART is one of the most complex three-dimensional radiative transfer models (KIMES et al., 2002). It has been continuously improved in terms of accuracy, scene modeling (topography, vertical and horizontal tree canopy structure), radiative transfer (LiDAR, scene spectrum, band sensors) and functionality (SQL database) (GRAU; GASTELLU-ETCHEGORRY, 2013).
Objectives of the Thesis
The main hypothesis of this thesis was that simulating the interaction between solar radiation and the canopy with a three-dimensional radiative transfer model (DART) allows quantifying biophysical parameters and structural characteristics of forest canopies and evaluating the canopy reflectance of satellite images.
The main objective of this thesis was to analyze canopy reflectance of Eucalyptus plantations under different genotypes and ages using the DART radiative transfer model to simulate remote sensing data obtained from satellite images in visible and near infrared spectral bands.
The specific objectives were to: 1) parameterize the DART model at different ages and genotypes of Eucalyptus plantations and analyze the accuracy of the simulated images by comparing the top-of-atmosphere reflectance with real very high-resolution satellite images (hypothesis: reflectance in Eucalyptus plantations varies according to stand age and genotype composition, and these differences can be simulated with the DART (Discrete Anisotropic Radiative Transfer) model and verified against real satellite images); 2) analyze the relationship between the Leaf Area Index (LAI) and different Spectral Vegetation Indices (SVI's) using empirical methods and DART-simulated images over a variety of acquisition configurations, ages and genotypes of Eucalyptus stands.
Hypothesis: the accuracy of relationships between spectral vegetation indices and the leaf area index is improved by using a hybrid method combining SVI's and RTM to account for stand properties and the effects of satellite acquisition conditions.
Thesis Structure
The thesis was structured in two chapters, organized according to specific objectives.
The articles addressed the following general topics:
Paper 1: Accuracy of the DART model to simulate very high spatial resolution satellite images of Eucalyptus stands at different ages and genotypes.
Abstract
In this study, we parameterize and validate the DART radiative transfer model using extensive in situ measurements as input. Eucalyptus plantations of 16 different genotypes were simulated. The accuracy of the stand reflectance simulations was assessed by comparing the simulations with very high-resolution satellite images at three different ages. The study site was located in Itatinga Municipality, in the state of São Paulo, southeastern Brazil, where two different experiments were analyzed: the first experiment consisted of 4 plots of 84 trees each, chosen within an industrial stand under real planting conditions. The second experiment consisted of a "clonal test", where plots of 100 trees of 16 different genotypes were planted in 10 randomized blocks, totaling 16,000 trees. Regular inventories were conducted for the first experiment at 3, 5, 6, 9, 12, 15, 18, 21, 25, 31, 39, 44, 51 and 57 months after planting. For the second experiment, inventories were conducted at 6, 12, 19, 26, 38, 52 and 62 months. Leaf, soil and trunk spectral optical properties were collected in 2010 and 2015, and SPAD measurements (highly correlated with chlorophyll content) were collected in 2010 and 2014. The DART model was parameterized using measured tree dimensions interpolated from the field measurements to the 3 dates of satellite acquisition. The DART model was run with the atmospheric module, which allowed simulating images at the bottom and top of atmosphere (BOA and TOA) on the three dates of satellite acquisition (May 2010, August 2010 and July 2013), when the stand was 6, 9 and 44 months old, respectively. The accuracy of the simulations was evaluated by comparing the mean TOA reflectance of DART with the mean TOA reflectance measured by WorldView-2 on the three dates. The mean absolute error (MAE) was computed for eight multispectral bands on the three dates. The multispectral reflectance of the genotypes at all ages at BOA level was also analyzed for the DART and WorldView-2 images. Results showed a good simulation of the spectra, with MAE lower than 0.045 for all bands. DART was very accurate in simulating the reflectance of bands in the visible region (MAE < 0.016). However, some limitations were found in the simulation of the near infrared band (NIR1 band, 770-785 nm), mainly at 44 months of age. These results could be associated with limitations of the model in simulating the shadow effect. Despite these limitations, this systematic error in the near infrared bands does not preclude the use of DART, since post-processing techniques could be implemented to correct the simulations based on measurements. The similar reflectance hierarchy between genotypes for the DART and WorldView-2 multispectral bands at bottom of atmosphere (BOA) level reinforces the suitability of DART to describe the radiative transfer of forest landscapes at different ages. The more pronounced effect of genotypes in the NIR bands suggests that the structural variability of the stand, and not necessarily the chlorophyll content, was the main factor behind these differences. The higher reflectance in the near infrared region at BOA level for the genotypes with higher leaf areas at a given date reinforces the impact of this parameter on the canopy reflectance of Eucalyptus stands. This study shows the potential of DART to simulate reflectance spectra of Eucalyptus stands at different ages and opens perspectives for its use in inversion mode.
Introduction
Commercial Eucalyptus plantations in Brazil cover 5.6 million ha, which accounts for 71.9 % of the planted forests in Brazil (IBÁ, 2015). Currently, most areas are planted with a few Eucalyptus species but with a large variety of genotypes, mainly in clonal plantations, tested and selected for the distinct soil and climatic conditions widespread in Brazil (STAPE et al., 2014). These genotypes provide different phenotypes, with distinct canopy structures, leaf morphology and biochemical compounds, allocation patterns and growth rates and, consequently, different biomass production. Given their high economic importance in Brazil, understanding how the biophysical parameters of planted forests explain the spatial-temporal growth dynamics is of paramount importance. These biophysical parameters are also needed to address the resource use and ecological functioning of these plantations.
Among the several methods to characterize biophysical parameters of forest plantations, remote sensing applications have become a feasible and robust technique, since they are able to express, through the interaction of light with earth-atmosphere components, canopy variables of forest ecosystems over several temporal and spatial scales. Remotely sensed images, in particular those obtained from orbital platforms, can be converted into reflectance values for each spectral band of the image, and later used to retrieve biophysical parameters of the forest through empirical relationships or through radiative transfer models (RTM). This last method, despite being more complex, is based on a better understanding of the physical laws that control the transfer and interaction of solar radiation in a vegetated canopy, explaining the quantitative values of canopy reflectance (GASTELLU-ETCHEGORRY; BRUNIQUEL-PINEL, 2001). This physically-based approach is better suited for large-scale applications (GOBRON et al., 1997; VERSTRAETE et al., 2008) and can also make full use of the high-dimensional spectral and multi-angular information provided by many modern sensors (CHOPPING et al., 2008; DARVISHZADEH et al., 2008b).
Applications with physically-based RTM have become a reliable alternative to describe vegetation functioning, mainly through the use of three-dimensional (3D) models, which are able to accurately simulate the spectral behavior of the bidirectional reflectance distribution function (BRDF) of the Earth's surfaces. The expected advantage of these 3D RTM is to provide an accurate 3D distribution of the radiation that is intercepted and scattered within the canopy.
Study Site
The study site was located in Itatinga Municipality, in the state of São Paulo, southeastern Brazil. Figure 2 shows the location of the plots and clonal test blocks.
In-situ measurements
Complete forest inventories were conducted at 3, 5, 6, 9, 12, 15, 18, 21, 25, 31, 39, 44, 51 and 57 months after planting. The leaf angle distribution (LAD) was computed from the leaf angle orientations measured in the field on six felled trees of each genotype. In each tree, the inclination of 72 leaves was measured with a clinometer. These 72 leaves were selected according to their position within the crown: three crown heights (bottom, middle and top layers), four auxiliary branches at each height (two in the planting row and two in the inter-row), and six leaves along the length of each of these auxiliary branches.
The tree leaf area was determined for each tree using allometric relationships calibrated on 10 felled trees. The leaf area of young trees was calculated using a power equation relating the tree total leaf area LA (m²) to the crown radius C (m) and tree height H (m), with α and β as fitted parameters.
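For illustration only, a minimal sketch of such a power-type allometry is given below; the functional form LA = α·(C²·H)^β and the parameter values are assumptions for this sketch, not the fitted thesis equation.

def leaf_area(C, H, alpha=0.5, beta=0.9):
    # Tree total leaf area LA (m2) from crown radius C (m) and tree
    # height H (m); form and parameters are illustrative assumptions.
    return alpha * (C**2 * H) ** beta

print(leaf_area(C=1.2, H=8.0))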
The main characteristics of the plots during the years of analysis in the E. grandis stand and in the clonal test are shown, respectively, in Table 2 and Figure 3. These data were extracted by interpolating the field measurement data to the three dates analyzed in this study. For the leaf area, additional estimates were derived using auxiliary leaf area index values retrieved from MODIS images for the months after planting.
In November 2010, litter and trunk reflectance were collected only for the Eucalyptus grandis stand, directly in the field. In October 2015, litter and trunk samples were collected for each treatment (genotype) at three different locations in the field in order to generate one composite sample per treatment, which was measured in the laboratory using a contact probe at five different points of the composite sample.
Additionally, the chlorophyll content of the leaves in the clonal test was estimated with a SPAD device (Minolta Inc.) in November 2010 and May 2014, when the plants were 12 and 48 months old, respectively. As for the spectral measurements, three trees per treatment (genotype) were selected and SPAD values of six leaves per tree were recorded.
Spectral intervals
For the DART parameterization, we used the flux tracking ("ray tracing") method in 'R' (reflectance) mode in order to simulate bidirectional reflectance images for 21 spectral intervals (bands). These bands were defined to cover the main regions of canopy reflectance in the visible and near infrared, including the red edge (around 700 to 740 nm), where the leaf reflectance curve increases until reaching a plateau in the infrared region (Figure 4 in Section 1.3 of Chapter 1). From these 21 bands, it was possible to reconstruct a full reflectance spectrum by interpolating the images from each band.
Afterwards, these bands were convolved to broader spectral bands corresponding to the multispectral bands of satellite sensors, using their relative spectral responses, as sketched below.
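A minimal sketch of this convolution, assuming narrow simulated bands represented by their center wavelengths and a sensor relative spectral response (RSR); the band grid and RSR values are toy placeholders, not the WorldView-2 responses.

import numpy as np

def convolve_to_broadband(band_centers_nm, reflectance, rsr_wl, rsr_resp):
    # RSR-weighted average of narrow-band reflectances; bands outside
    # the RSR support get zero weight.
    w = np.interp(band_centers_nm, rsr_wl, rsr_resp, left=0.0, right=0.0)
    return np.sum(w * reflectance) / np.sum(w)

centers = np.arange(370.0, 1100.0, 35.0)            # ~21 narrow bands
refl = np.where(centers > 700.0, 0.45, 0.05)        # toy red-edge step
rsr_wl = np.array([755.0, 777.5, 800.0])            # toy NIR-like RSR
rsr_resp = np.array([0.6, 1.0, 0.6])
print(convolve_to_broadband(centers, refl, rsr_wl, rsr_resp))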
Illumination parameters
The simulation of the virtual eucalyptus plantations was run with 100 discrete illumination directions, which discretize the solid angle between the beam emitted by the solar source and the atmosphere surface. The number of directions was chosen to maintain accuracy while reducing processing time. The input solar zenith and azimuth angles (θs and φs, respectively) were computed from the exact local latitude (22°58'04''S) and the date and hour of the satellite overpass. The image acquisition geometry (θv, φv) was obtained from the metadata of the acquired satellite images.
Scene
The DART scene is horizontally delimited by the landscape extension (ΔX, ΔY) and vertically by the height of the objects within the scene and the atmosphere layers (ΔZ). Tree crowns can be simulated with several predefined shapes. The composed ellipsoid was the crown type used to simulate the trees; it consists of two half-ellipsoids (one for most of the crown and another for a possible small ellipsoid at the crown bottom).
Leaf Angle Distribution (LAD)
DART is able to use seven types of predefined leaf angle distribution (LAD): planophile, erectophile, plagiophile, extremophile, uniform, ellipsoidal and elliptical. The ellipsoidal LAD, which takes a mean angle value as input parameter, was chosen because it accurately reproduces the Eucalyptus leaf angle distribution (see the sketch below). The required parameter is the average leaf angle (ALA), which was computed from the leaf angles measured in situ and interpolated to the specific dates. A different ALA was used for each crown section (top, middle and bottom) in the E. grandis stand and for each treatment in the clonal test. A summary of the ALA values used as input in DART for each date and treatment is shown in Table 3. We observe a large variability of LAD between clones, with ALA values going from ≈27° (planophile type) to ≈60° (a more erectophile type of angle distribution).
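For intuition, the sketch below relates the ALA to the shape parameter x of an ellipsoidal LAD using Campbell's (1990) approximation ALA_rad ≈ 9.65·(3 + x)^(-1.65). This formula is stated here as an assumption for illustration; DART's internal parameterization may differ.

import math

def ellipsoidal_x_from_ala(ala_deg):
    # Invert Campbell's approximation: x = (9.65 / ALA_rad)^(1/1.65) - 3.
    ala_rad = math.radians(ala_deg)
    return (9.65 / ala_rad) ** (1.0 / 1.65) - 3.0

for ala in (27.0, 45.0, 60.0):   # range of ALA observed among the clones
    print(ala, round(ellipsoidal_x_from_ala(ala), 2))

Consistently with the text, small ALA (planophile, horizontal leaves) gives x > 1 and large ALA (erectophile) gives x < 1.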
Leaf Area Index (LAI)
In the DART version used, the leaf area index of the scene, LAI_scene = Σ LA_species / (ΔX · ΔY), with ΔX = 20 m and ΔY = 30 m, is an input parameter. The LAI can, however, be split between the different species (defined in DART as each tree having a specific LAI value) that constitute the scene. As we measured the leaf area of each tree in the field (computed from allometric relationships, Section 2.3.2), each tree inside the plots was considered a different species in DART, as in the sketch below. The LAI values used as DART input are shown in Table 4.
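A minimal sketch of the scene LAI computation just described, with toy leaf area values:

def scene_lai(tree_leaf_areas_m2, dx=20.0, dy=30.0):
    # Scene LAI: summed tree leaf areas divided by the scene area.
    return sum(tree_leaf_areas_m2) / (dx * dy)

print(scene_lai([10.0] * 84))  # 84 trees of 10 m2 each -> 840/600 = 1.4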
Leaves, trunk and litter optical properties
The leaf, trunk and litter optical properties measured in the field (described in Section 2.2.2) were used as input optical properties of the simulated objects in DART. For leaves (reflectance and transmittance) and litter (reflectance), we used the field measurements carried out in October 2015. The trunk optical properties (reflectance) were based on the field measurements of November 2010. Since we had several field measurements, we parameterized DART with optical properties per treatment and crown layer (lower, middle and upper levels) for leaves, per treatment for litter, and with one general set of trunk optical properties for all the eucalypt trees.
Atmospheric Correction
An atmospheric correction was necessary to simulate the bidirectional reflectance of images at the top of atmosphere (TOA) under local atmospheric conditions similar to those affecting the real acquired satellite images. For that purpose, we performed several simulations of atmospheric conditions to select the parameters that best described the real conditions. This procedure was done using the atmosphere module of DART, which relies on pre-defined simulations. For all these simulations, the sun and view zenith and azimuth angles corresponded to the angles of the real acquired satellite images (WorldView-2) on the three dates adopted to compare the DART simulations. The 21 simulated bands were convolved to the multispectral broadbands of these satellite images.
After the creation of three datasets (one per date), linear regressions between the BOA and TOA reflectances were adjusted for each atmospheric condition and applied to the simulated BOA images in order to convert them from BOA to TOA, as in the sketch below. This procedure was much faster than directly simulating the Eucalyptus stand together with the atmosphere in the same DART simulation. These TOA images were compared with the real TOA reflectance from the WorldView-2 satellite images. The best combination of parameters was selected as the one showing the smallest root mean square error (RMSE) over all bands.
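A minimal per-band sketch of this linear BOA-to-TOA conversion, fitted on toy reflectance pairs (the real fit would use the pre-computed atmosphere simulations):

import numpy as np

boa = np.array([0.02, 0.05, 0.10, 0.20, 0.35])  # simulated BOA reflectance
toa = np.array([0.04, 0.06, 0.10, 0.18, 0.30])  # matching TOA reflectance
slope, intercept = np.polyfit(boa, toa, 1)

def boa_to_toa(r_boa):
    # Convert a BOA reflectance to TOA with the fitted linear model.
    return slope * r_boa + intercept

print(boa_to_toa(0.15))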
After calibration, the best set of atmospheric parameters was used in the atmosphere module of DART to simulate BOA and TOA images for each of the 21 bands, with virtual directions related to the acquisition geometry of the real satellite images.
WorldView-2 Satellite images
The satellite data used in this study to validate the DART simulations were images from the very high spatial resolution WorldView-2 sensor. These images were obtained in May and August 2010 and in July 2013. The characteristics of the WorldView-2 satellite are shown in Table 5 and the main acquisition parameters in Table 6. All the satellite images were orthorectified and projected in the Universal Transverse Mercator (UTM) system.
Realism of simulated scenes in DART
To verify the realism of the DART simulations, we visually checked whether the DART three-dimensional views adequately represented the input datasets. In these analyses, we verified whether the shape, size and position of the trees inside the scenes for each date were visually compatible with the field measurements and input data, such as height, crown type and dimensions, and distribution inside the plots. The leaf area index (LAI) used by DART to model each plot (from the input LAI values) was compared to the whole-plot LAI measured in the field.
Comparison between simulated and satellite images
The accuracy of the simulated top of atmosphere (TOA) reflectance images from DART was checked against the TOA reflectance obtained from the real acquired WorldView-2 images, for all 8 bands (coastal, blue, green, yellow, red, red edge, NIR1 and NIR2), three ages (6, 9 and 44 months) and all 16 clones (average of the 10 blocks). This comparison was performed using the mean reflectance of the DART images convolved to the 8 WorldView-2 bands according to the sensor spectral response, and the mean reflectance of the WorldView-2 images at TOA level. The accuracy was expressed by the mean absolute error (MAE) (Equation 3), as suggested by Willmott and Matsuura (2005), to assess the average model performance and identify the best and worst simulated bands:

MAE = (1/n) Σ_i |ρWV2,λ,i − ρDART,λ,i|   (3)

where ρWV2,λ is the reflectance measured by the WorldView-2 satellite at wavelength λ, ρDART,λ is the reflectance simulated by DART at the same wavelength, and n is the number of samples (n = 480: 3 dates, 10 blocks and 16 clones).
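A minimal sketch of the MAE of Equation 3 over toy reflectance arrays for one band:

import numpy as np

def mae(r_wv2, r_dart):
    # Mean absolute error between measured and simulated reflectances.
    return np.mean(np.abs(np.asarray(r_wv2) - np.asarray(r_dart)))

print(mae([0.30, 0.28, 0.33], [0.26, 0.25, 0.29]))  # NIR1-like bias: ~0.037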
Additionally, a comparison between the reflectance of all clones from the DART simulations and the WorldView-2 bidirectional reflectance in BOA images was performed to analyze the differences between clones in their reflectance behavior on remote sensing images.
Optical properties and chlorophyll content
Considering the trunk optical properties (reflectance) measured in 2010 and the litter optical properties, it was not possible to confirm that the higher absorption was caused by the chlorophyll amount (TAGEEVA et al., 1960; THENKABAIL et al., 2000), since genotypes 3 and 5 did not show higher SPAD values, with some exceptions for genotype 3, which showed a relatively high chlorophyll amount. Some studies also found that the reflectance of evergreen Eucalyptus leaves in the green, blue and red regions was not as sensitive to variation in chlorophyll content as that in the red edge region (DATT, 1998). Since SPAD provides only relative chlorophyll content, other leaf pigments could explain the absorption differences (USTIN et al., 2009). Moreover, this comparison is also difficult to establish, since the SPAD measurements were not made on the same leaves from which the optical properties were collected. The absorption differences between clones in the visible region could also be explained by differences in leaf moisture (DATT, 1999) and thickness, especially because genotypes 3 and 5 showed relatively higher absorption throughout almost the entire analyzed spectrum.
It was not possible to clearly define a reflectance pattern between clones in the near infrared region, because it changed between layers. The top crown layer showed the highest variability between and within clones (Figure 7), probably because of the predominance of young leaves. These differences in optical properties can be related to internal differences in mesophyll structure between clones and, within each clone, to differences between leaf positions in the crown layers (ASHTON et al., 1998; CLARK et al., 2005; PANDITHARATHNA et al., 2008). Our results are consistent with the findings of Nogueira that more transparent upper leaves increase the photosynthesis of lower leaves and, when exposed to excess light, reduce the level of photochemical damage to chloroplasts and the energy required for their repair.
Structural analysis of simulated trees
To visually verify the positions, shapes and sizes of the trees in the DART simulations, the DART 3D view tool was used for one plot of the E. grandis stand and one block of the clonal test, for all dates (Figures 10 and 11, respectively). These images allowed checking and analyzing the structural realism of the simulated scenes.
Analysis of DART simulated images
The mean reflectance simulated by DART and acquired by the WorldView-2 images was compared for the eight multispectral bands (Table 5) across ages and genotypes. More pronounced discrepancies were found for the bands in the near infrared region (bands NIR1 and NIR2 in Figures 13, 14 and 15). The plots in the E. grandis stand (Figure 13) showed greater differences in the NIR1 band than the clonal test did, mainly for plot 1 and when the trees were 44 months old (July 2013). The clonal test (Figures 14 and 15) also presented greater differences in the near infrared in July 2013.
In terms of bidirectional reflectance, the comparison between simulated and real satellite images of forest stands remains a difficult task, since the average signal of the image is dominated by the macroscopic properties of the illuminated and shadowed crowns as well as the ground surface (COUTURIER et al., 2008). Considering this aspect, the pixel size and the model's capacity to capture the elements of forest heterogeneity of the crown and the understory spectral signature are important factors. In this study, the 0.25 m pixel size and the massive input information on the trees, such as location, crown size, parameters of the different layers, and optical properties of leaves, trunk and background surface (litter), were very representative of the actual stands, as shown by the simulated reflectance values compared with the real reflectance from the WorldView-2 images. The potential of the DART model to simulate the reflectance of heterogeneous natural landscapes has been successfully tested against other radiative transfer models for over 15 years throughout the RAdiative transfer Model Intercomparison exercise (RAMI) (WIDLOWSKI et al., 2015).
However, the simplifying assumption of crowns composed of ellipsoids, used here as DART input, could limit the resemblance of the simulated images to the real canopy structure and, consequently, to images from very high resolution satellites such as WorldView-2. The level of complexity required for a more exact description of the canopy must nevertheless be weighed against the need for detailed input parameters, which are very difficult to obtain in practical forest studies. We used a massive database of parameters as DART input, collected over several field campaigns. At the same study site, new measurements included terrestrial laser scanning for building mock-up objects representative of the trees to be used in the DART model, and a comparison between these two approaches should be performed in the future.
A numerical comparison between the mean reflectance simulated by DART and that from the WorldView-2 images was performed using the mean absolute error (MAE), computed jointly over all blocks and genotypes of the clonal test. The MAE values for all bands and dates are shown in Figure 16. Generally, MAE values were low for all bands. The lowest values were found for the bands in the visible region (< 0.016) and the NIR2 band (0.006), and the highest for the NIR1 band (0.045). These results reinforce the good ability of DART to model the reflectance of Eucalyptus forest stands at different ages and for different genotypes. Despite the continuous assessment of DART performance over the years in the RAMI exercise (PINTY et al., 2004; WIDLOWSKI et al., 2015), the comparison of our results with other studies is limited by the different approaches adopted under several forest types and local conditions. Regarding the larger discrepancies between the DART and WorldView-2 mean reflectance in the NIR1 band, a more detailed analysis of this relationship for the clonal test on all dates is shown in Figure 17. DART underestimated the mean reflectance on all dates in the NIR1 band (770-785 nm, Figure 17), more markedly at 44 months of age (July 2013). No trends were found for any specific genotype or block (results not shown here). An inaccurate simulation of the intra-crown and ground-projected shadow effects on the simulated images could be one explanation for this underestimation. This limitation in simulating the dark side of the scene may have arisen from a DART limitation and/or from difficulties in translating the vertical and horizontal leaf distribution inside the crown into input parameters and/or from the canopy porosity not being adequately captured by the turbid representation of the canopy.
In May and August 2010, the scenes were dominated by exposed soil, since the trees were small (6 and 9 months old, respectively), without overlapping crowns either within or between the planting rows. However, in July 2013, more complex tree structures, with greater tree heights and crown dimensions, led to a stronger shadow effect on neighboring trees and to self-shadowing, which contributed to decreasing the mean reflectance of the scene. A detailed analysis was carried out on this topic to precisely address the larger NIR1 band discrepancy. Despite the unsatisfactory results in the NIR1 band, since the underestimation was systematic for all simulations, with no specific tendency among genotypes or blocks, a post-processing could be applied to the simulated images to correct these reflectance values. The genotype with the highest average DBH and tree height values at these ages showed fast growth potential at the beginning of stand establishment, which was also translated into the BOA reflectance, with lower reflectance in the visible region and higher reflectance in the near infrared region. This reinforces the suitability of DART to describe the radiative transfer of forest landscapes at different ages. At 44 months of age (Figures 18c and f), it was not possible to establish a clear relationship between the DART and WorldView-2 reflectance of the clones in the near infrared bands, possibly due to the already mentioned limitations of the NIR1 band simulations. However, genotypes 10 and 11, which showed the highest reflectance in the near infrared region at 44 months of age (Figures 18c and f, for the DART simulations and WorldView-2 images respectively), also presented the highest leaf area values at this age, corroborating the effect of this parameter on reflectance.
Since the DART simulations performed well compared with the real acquired satellite images, the whole detailed simulation database, together with new simulations under different structural conditions of the eucalyptus stands and different satellite acquisition conditions, could be used to explore several aspects of these genotypes related to their biophysical parameters (e.g. estimating LAI, evaluating the leaf angle effect, estimating fAPAR, among others) and to satellite conditions (e.g. the effect of view direction and sun geometry). The evaluation of the DART simulations carried out in this study by comparison with real satellite images represents a first step towards further studies to better address the relationship between canopy reflectance and biophysical parameters using radiative transfer models such as DART.
Conclusions
This study analyzed the accuracy of DART simulations in terms of biological realism and canopy reflectance of eucalyptus plantations at specific ages, compared with very high-resolution satellite images from the WorldView-2 sensor.
The optical properties and structural variables of the trees used as input in the DART model showed great differences among the 16 clones described in this study and reinforce the potential for studying the effect of these characteristics on reflectance simulation.
DART allowed a robust and accurate atmospheric correction of the images (RMSE = 0.042).
DART was accurate in simulating the reflectance of bands in the visible region of the analyzed spectrum (MAE < 0.016). However, some limitations were found in the simulation of the near infrared band (NIR1 band, 770-785 nm), mainly at 44 months of age (MAE = 0.045).
It was not possible to determine the causes of the NIR1 underestimation, but it could be associated with errors in shadow modelling. Despite these limitations, the systematic errors in the near infrared bands do not preclude the use of DART, since post-processing techniques could be implemented.
The similar reflectance hierarchy between genotypes for the DART and WorldView-2 multispectral bands at bottom of atmosphere (BOA) level reinforces the suitability of DART to describe the radiative transfer of forest landscapes at different ages. The more pronounced effect of the genotypes on the NIR bands suggests that the structural variability of the stand, and not necessarily the chlorophyll content, was the main factor explaining these differences.
The higher reflectance in the near infrared region at BOA level for the genotypes with greater leaf areas at a given date reinforces the effect of this parameter on the canopy reflectance of eucalyptus stands.
Abstract
The leaf area index (LAI) of forest plantations is a key biophysical parameter involved in different carbon and water cycle processes of forest ecosystems, including biomass production. There are many methods to retrieve LAI from remote sensing images, divided into empirical methods (e.g. using spectral vegetation indices, SVI's, calibrated with in situ measurements) and methods based on radiative transfer models (RTM). Each method has its own advantages and limitations, which depend mainly on the type of ecosystem under study, the available in situ data, etc., and they reach different precision levels for estimating LAI. Some limitations can be overcome by combining these two methods, the so-called hybrid method. This study used a hybrid method to investigate the possibility of estimating a relationship between spectral vegetation indices and LAI for a variety of genotypes and ages of eucalyptus forest plantations from RTM simulations. The DART model was calibrated with extensive field measurements and was used to simulate the spectral reflectance of the canopy for different view configurations, sun angles and plantation ages. In this study, we tested: i) whether a single relationship can be used for all genotypes or whether a genotype-specific relationship is necessary, comparing both approaches using a parsimony criterion; ii) whether genotypes can be grouped according to given criteria without excessive loss of precision in estimating LAI; iii) whether different LAI-SVI relationships are needed for different satellite acquisition conditions (solar position, acquisition geometry); and iv) whether the LAI-SVI relationships obtained from the RTM provide good results on real acquired satellite images. The study site was located in Itatinga Municipality, in the state of São Paulo, southeastern Brazil. This area included a clonal test experiment with 16 different genotypes (treatments) obtained from different companies and regions of Brazil. Complete forest inventories were conducted for the plots and treatments at 6, 12, 19, 26, 38, 52 and 62 months after planting. Leaf, soil and trunk optical properties were collected in 2010 and 2015 with an ASD FieldSpec Pro spectrometer. The Discrete Anisotropic Radiative Transfer (DART) model was parameterized using leaf area (m²), tree dimensions and location data interpolated from field measurements. Simulations were made for nine dates covering the period between 2010 and 2014. Seven different SVI's were analyzed, combined with 11 types of LAI-SVI regression models and 8 different possibilities of grouping the genotypes or the stand and satellite acquisition variables. The best LAI-SVI relationships were chosen using the AIC and BIC criteria. The applicability of the relationships was evaluated using WorldView-2 images acquired on three dates. The influence of using other satellite sensors on the LAI estimates was also evaluated. The NDVI used with a power function gave the best LAI estimates. The clone-specific relationship outperformed the global adjustment as well as groupings based on stand or satellite acquisition conditions. However, due to a poor simulation of the NIR1 band, a recalibration of this band was necessary for better LAI estimates from the real acquired WorldView-2 satellite images. This shows the limitations of LAI estimations based on RTM compared to calibration with in situ data.
However, the work on RTM still allows a better understanding of the effects of the other biophysical parameters of the stand on SVI-LAI relationships, as well as of the effects of acquisition geometry.
Keywords: Leaf area index; Spectral vegetation index; Forest plantations; Remote sensing; 3D radiative transfer model; DART model
Introduction
The leaf area index (LAI) is a key biophysical parameter of forest plantations, related to several ecosystem processes, such as light interception, photosynthesis, water interception and transpiration, and biomass production. Its precise monitoring over time is essential to better understand forest ecophysiological processes. It can also be critical for economic issues, including the trading of CO2 quotas and the optimal balance between ecological preservation and forest exploitation (HERNÁNDEZ et al., 2014).
Field measurements of LAI are normally performed using direct and indirect methods, such as allometric equations, destructive sampling and optical measurements (e.g. using hemispherical photographs and the LAI-2000). Because field measurements are usually time-consuming, estimating LAI from remote sensing products is an alternative to accurately retrieve this parameter over large areas and long periods. Several studies have reported the use of remote sensing techniques for LAI estimation for more than 30 years, and they continue to be applied with different sensors and in different ecosystems (COLOMBO et al., 2003; le MAIRE et al., 2011; LEBOEUF et al., 2007). Many methods retrieve LAI from remote sensing images, separated between empirical methods and radiative transfer models (le MAIRE et al., 2011; LIANG et al., 2015).
LAI estimation through radiative transfer model (RTM) inversion deals with the simulation of the reflectance spectrum of forest landscapes from canopy and soil characteristics (KUUSK, 1995; LAURENT et al., 2011b), where these parameters are usually retrieved by inversion techniques. The three-dimensional Discrete Anisotropic Radiative Transfer (DART) model (GASTELLU-ETCHEGORRY et al., 1996) is an example of a physically based RTM successfully tested on canopy reflectance measurements (GASTELLU-ETCHEGORRY et al., 1999) and applied in several studies relating reflectance to forest canopy characteristics (GUILLEVIC et al., 2013; MALENOVSKÝ et al., 2013).
Both SVI and RTM methods have advantages and drawbacks. For instance, SVI's can be sensitive to non-vegetation factors (e.g. soil background and sensor acquisition conditions) (OKIN et al., 2013) or to other biophysical variables of the canopy that are not linked to LAI (leaf angle, leaf reflectance, etc.). RTM inversion techniques rely on the prerequisite that the model gives correct results in forward mode, which is not always the case and not always tested, potentially leading to uncertainties (JACQUEMOUD et al., 2009). These limitations can be overcome by combining these two methods in a hybrid method, which uses the RTM to simulate spectral reflectance databases from field measurements, followed by regression methods to determine the relationship between spectral and canopy parameters (JACQUEMOUD et al., 2009). Hybrid methods have the simplicity of empirical methods and the robustness of RTM inversion methods and can provide a comprehensive link between SVI's and estimated vegetation parameters, such as LAI (VERRELST et al., 2015). This approach can be used, for example, to test the effects of species, background surface, tree parameters (e.g. leaf angle distribution and age), and satellite image acquisition conditions (e.g. sun geometry and view angles) on the LAI-SVI relationship. Indeed, once the whole simulation database is generated, combining, isolating and varying parameters is straightforward and can be applied in future approaches.
Despite its potential, studies using the hybrid approach to retrieve forest biophysical parameters and to evaluate the factors driving their relationships with SVI's are scarce.
This study used a hybrid method to investigate the possibility of estimating a relationship between SVI and LAI over a variety of genotypes and ages of eucalyptus forest plantations. The DART model was calibrated with extensive field measurements and used to simulate the spectral reflectance of the canopies. The calibration of this relationship was also analyzed against field measurements and WorldView-2 multispectral satellite images. This work is based on the hypothesis that the accuracy of relationships between spectral vegetation indices and the leaf area index is improved when a hybrid method combining SVI's and RTM is used to account for stand properties and the effects of satellite acquisition conditions.
Study Site
The study site was located in Itatinga Municipality, in the state of São Paulo, southeastern Brazil. Inner plots (20 x 30 m) were analyzed and the other trees were considered border trees. Figure 1 shows the location of the clonal test blocks (different colors) and the plots with the treatments (rectangles) with different north orientations.
In-situ measurements of LAI and other stand biophysical properties
Complete forest inventories of all blocks and treatments were carried out at 6, 12, 19, 26, 38, 52, 62 and 73 months after planting. During these surveys, the trunk circumference at breast height (CBH) was measured, with occasional measurements of tree height. On each of these dates, 10 to 12 trees of different sizes were cut for each genotype among the border trees of blocks 2, 3 and 10. These destructive measurements were used to calibrate allometric relationships between CBH and tree height, CBH and canopy height, CBH and crown diameter in the planting row and inter-row directions, CBH and tree total leaf area, and CBH and the biomass of the different tree components (leaves, branches, bark and stem wood), as in the study of Laclau et al. (2008).
Worldview-2 images and creation of the experimental dataset
Three WorldView-2 multispectral satellite images were acquired over the study site; their acquisition configuration was described in Chapter 1. These images were used to prepare the experimental dataset (empirical approach), which includes the associated in situ LAI measurements.
The 3D DART radiative transfer model
The Discrete Anisotropic Radiative Transfer (DART) model is a comprehensive model that can be used for the retrieval of physically based canopy parameters. DART simulates radiative transfer in heterogeneous 3D landscapes with the exact kernel and discrete ordinate methods. Any landscape is simulated as a rectangular 3D array of cells.
Creation of a reflectance simulation dataset from the DART model
In this study, the flux tracking method in reflectance mode ('R') was used to simulate reflectance over 21 spectral intervals, with widths of 20, 30 and 40 nm, in the visible and near infrared region (360 to 1100 nm), covering the main region for canopy reflectance analysis. The ellipsoidal leaf angle distribution function was used, which requires the average leaf angle (ALA) as input parameter. This value was computed from the leaf angle orientations measured in the field campaigns. The LAI was calculated from field measurements for each tree (computed from the allometric relationships shown in Chapter 2, Section 2.2.2). The input solar zenith and azimuth angles (θs and φs, respectively) were computed from the local latitude and the date and hour of the satellite overpass.
Spectral vegetation indices and regressions with LAI
Spectral vegetation indices (SVI) are mathematical combinations of the reflectance in different spectral bands, which can be quantitatively correlated with biophysical characteristics of the vegetation, such as LAI. Their calculation is simple, based on the reflectance measured by the sensor, and no information about satellite acquisition and sun geometry is required (le MAIRE et al., 2012). In this study, we focused on vegetation indices used to predict LAI, aiming to identify which index best explains LAI in Eucalyptus plantations, mainly using the red and near infrared bands. The indices tested include the Normalized Difference Vegetation Index - NDVI (ROUSE et al., 1973), the Enhanced Vegetation Index - EVI (WANG et al., 2002), the modified Enhanced Vegetation Index - EVI2, the Soil Adjusted Vegetation Index - SAVI (HUETE, 1988), the Optimized Soil Adjusted Vegetation Index - OSAVI (RONDEAUX et al., 1996), the Generalized Soil Adjusted Vegetation Index - GESAVI (GILABERT et al., 2002) and the 'Eucalyptus' Vegetation Index - EucVI, calibrated for Eucalyptus plantations (le MAIRE et al., 2012). The formulations of these indices are reported in Table 3.
Notes to Table 3: (1) L is a soil adjustment factor and the parameters C1 and C2 describe the use of the blue band for the correction of the red band; their empirically determined standard values are, respectively, 1, 6 and 7.5; G is a gain factor, with standard value 2.5; ρblue is the mean reflectance of the blue band (450-510 nm).
(2) Z is a parameter with standard value 2.08; G is a gain factor, with standard value 2.5.
Since LAI estimation using these SVI's shows different relationships, linear or nonlinear, 15 regression models using 1 to 3 parameters were tested (Table 4). The SVI's were computed from convolved bands (using the relative spectral response of WorldView-2) of the DART simulations for each genotype, block, view direction and age. In the first chapter, we found a problem in the simulation of the near infrared band (NIR1). However, since it was a systematic problem (see Chapter 1), we considered that it did not affect the performance and analysis of the SVI-LAI relationships, but only the quantitative estimates of LAI. To fix this problem, we applied a direct correction of the simulated NIR1 reflectance based on the WorldView-2 image reflectance.
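For reference, a minimal sketch of three of the indices in Table 3, using their standard formulations (the GESAVI and EucVI formulations are left to the table); band reflectances are toy values:

def ndvi(nir, red):
    # Normalized Difference Vegetation Index.
    return (nir - red) / (nir + red)

def savi(nir, red, L=0.5):
    # Soil Adjusted Vegetation Index (Huete, 1988).
    return (1.0 + L) * (nir - red) / (nir + red + L)

def evi(nir, red, blue, G=2.5, C1=6.0, C2=7.5, L=1.0):
    # Enhanced Vegetation Index with the standard parameters of note (1).
    return G * (nir - red) / (nir + C1 * red - C2 * blue + L)

print(ndvi(0.35, 0.04), savi(0.35, 0.04), evi(0.35, 0.04, 0.03))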
Statistical analysis
Some intrinsic stand properties and the sun and view zenith and azimuth angles were analyzed to better understand their influence on the relationship between spectral vegetation indices and the leaf area index and, consequently, on the number of parameters and model calibration requirements. For both the experimental and simulation reflectance datasets, the analyzed stand, sun and satellite characteristics were arranged in groups, and the calibration of the SVI-LAI models (described in Section 3.2.6) was made for each group.
For the experimental dataset, we tested three different groupings: 1) all genotypes together, representing a global adjustment across all characteristics (no group); 2) each genotype separately, which should logically lead to a better individual modelling performance, but at the expense of the total number of parameters (16x1 to 16x3 parameters depending on the equation); 3) age groups, where one regression was performed for each image.
For the simulation dataset, nine groupings of parameters were analyzed: 1) all genotypes together; 2) each genotype separately; 3) four groups of genotypes according to the mean soil reflectance collected in field measurements and grouped after a clustering analysis; 4) four groups of genotypes according to the mean leaf reflectance collected in field measurements and grouped after a clustering analysis; 5) two groups for the view zenith angles (5° to 15° and 20° to 30°); 6) two groups for the view azimuth angles relative to the east-west and north-south row orientations; 7) four groups for the average leaf angle values grouped by the 25, 50 and 75% quartiles; 8) four groups for the sun zenith angles relative to the blocks' row orientation grouped by the 25, 50 and 75% quartiles; and 9) four groups for the sun azimuth angles relative to the blocks' row orientation grouped by the 25, 50 and 75% quartiles.
The SVI-LAI models were compared by means of their r-square (R²), root mean square error (RMSE) and the Akaike Information Criterion (AIC) (AKAIKE, 1973), computed in its least-squares form as

AIC = n ln(RSS/n) + 2k

where k is the number of free parameters in the model, n is the number of input data and RSS is the residual sum of squares between the original data and the fitted model.
The Bayesian Information Criterion (BIC) is another measure of goodness of fit, similar to the AIC but derived in a Bayesian framework and also adjusted for the sample size. It generally penalizes free parameters more strongly than the AIC does. It is calculated as

BIC = n ln(σ̂²) + k ln(n)

where σ̂² is the error variance, and k and n are the number of free parameters in the model and the number of input data, as in the AIC.
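A minimal sketch of both criteria as reconstructed above, with the error variance estimated as RSS/n; the RSS, n and k values are toy inputs:

import numpy as np

def aic(rss, n, k):
    # Least-squares form of the AIC.
    return n * np.log(rss / n) + 2 * k

def bic(rss, n, k):
    # BIC with the error variance estimated as RSS/n.
    return n * np.log(rss / n) + k * np.log(n)

rss, n, k = 12.5, 480, 2
print(aic(rss, n, k), bic(rss, n, k))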
We jointly compared the sets of regressions (all combinations of SVI's with all types of regression models) obtained with the nine grouping possibilities using the AIC and BIC criteria. A lower AIC (or BIC) for one of the grouping possibilities indicates a better model, taking the parsimony rules into account. This method was applied to both the experimental and the simulation datasets.
Finally, to evaluate the different best models obtained (either on the experimental or the simulation dataset), we applied the regressions to the WorldView-2 reflectance. In the case of the calibration on the experimental dataset, this is not a perfect validation, since the data are not independent. In the case of the calibration on the simulation dataset, however, the validation is made on an independent dataset. This final comparison between measured and estimated LAI was described in terms of RMSE.
LAI -SVI calibrations in the experimental dataset
The general behavior of the calibration of the SVI's on the experimental dataset, with a global adjustment curve, is shown in Figure 3 for six SVI's using the RED and NIR1 bands.
For each model and SVI, the AIC criterion is presented in Table 5. Here and in the rest of this study, we focus on the AIC, because the BIC provided very similar results. In Figure 3, only the best regression model is represented (dark line), corresponding to the model with the lowest AIC in Table 5. The best model adjustment was obtained for model 11 (a power function) with the NDVI (R² = 0.93 and RMSE = 0.51), followed by the GESAVI. The AIC values for each SVI and type of grouping are shown in Figure 4. In this figure we observe, as mentioned above, that the NDVI adjusted by model 11 was the best SVI for the global adjustment. When the grouping by age was analyzed, the SVI performance changed and the EucVI adjusted with model 10 (also a power function) became the best SVI. However, the best SVI result based on the most parsimonious regression (lowest AIC and BIC) was achieved by the adjustment for each genotype, using the NDVI and model 11 (Table 6, and the fitting sketch below). Grouping by image (age) does not systematically bring a better regression, but this analysis is limited because the different images were taken at different ages (and therefore LAI ranges), so the two effects (image and age) cannot be dissociated. In the next section, we check whether these results are corroborated by the RTM simulations, and we investigate further the possibilities of grouping variables.
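A minimal sketch of fitting the power model with scipy; the form LAI = a·NDVI^b is assumed here for model 11, and the NDVI/LAI arrays are toy values, not the thesis data:

import numpy as np
from scipy.optimize import curve_fit

def model11(ndvi, a, b):
    # Assumed power form of model 11: LAI = a * NDVI**b.
    return a * ndvi**b

ndvi = np.array([0.55, 0.65, 0.75, 0.82, 0.88])  # toy NDVI values
lai = np.array([1.2, 1.9, 2.8, 3.6, 4.5])        # toy LAI values
(a, b), _ = curve_fit(model11, ndvi, lai, p0=(5.0, 3.0))
print(a, b)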
LAI -SVI calibrations on simulation dataset
A great variety of AIC and BIC values was obtained depending on the chosen SVI, regression model and grouping of the genotypes. The AIC and BIC values were very similar, therefore only the AIC is presented (Figure 5). The best model, based on the parsimony rule, showed the lowest AIC values. The SVI's with the lowest AIC values were the NDVI, followed by the EucVI and the GESAVI. The SVI with the worst results was the SAVI, followed by the EVI2. The best regression model changed between SVI's, but with a predominance of the non-linear regression models 9 and 11 (Table 4), the former a logarithmic function and the latter a power function. For the NDVI, regression model 7 showed the minimum AIC values, but the results were very close to those of model 11, with AIC equal to 0.45066 × 10⁻⁵.
Both models showed realistic SVI × LAI scattering behavior, but model 11 behaved better and was simpler, so it was kept.
The best grouping for all the SVI's was the adjustment for each genotype, as for the experimental dataset. In this case, even if the total number of parameters to be adjusted is large (32 for a 2-parameter regression model), the gain in precision is even higher. The second best way of grouping the genotypes was based on the leaf optical properties (reflectance); in this case, the best SVI's were the GESAVI and the OSAVI.
Another grouping possibility that worked well was grouping the genotypes based on their litter reflectance; in this case, the NDVI, EucVI and SAVI performed best. The other groupings did not show great improvements in AIC and BIC values compared to the global adjustment (no group, a single regression for all clones). The average leaf angle (ALA) and the sun and view zenith and azimuth angles did not influence the regression calibration much, with results similar to those observed for the overall regression. As for the experimental dataset, regardless of the type of grouping and regression model, the ordering of the best SVI's was NDVI, EucVI, GESAVI, OSAVI, EVI, EVI2 and SAVI.
If one model had to be kept, our results show that it would be a single model for each clone, based on the NDVI and regression model 11 (Table 4). In this case, the overall R² and RMSE of the calibration were 0.97 and 0.29, respectively (Table 7). Table 7 also presents the adjusted parameters for each genotype, all significant at the 95% confidence level. Applying the regression calibrated for one genotype to another genotype can lead to significant errors (greater than 20%). In other words, for the same NDVI value, the true LAI can vary between genotypes. Genotype 10 presented, in general, higher LAI values than the other clones for a given NDVI.
Conversely, clones 12 and 15 presented the lowest LAI values for a given NDVI.
[Figure: genotype-specific NDVI-LAI curves for all genotypes (16), all blocks (10) and simulated dates (9), for one view direction angle (θv = 15°, φv = 30°); the numbers inside the figure identify the respective genotypes (G1 to G16).]
Figure 7 represents the calibration results, that is, the relationship between the LAI used as input in the DART simulation dataset and the LAI estimated from the reflectance of the DART simulation dataset, using the genotype-specific regressions presented above (model 11 with the NDVI), for all clones, blocks and dates, and one view direction angle (θv = 15°, φv = 30°).
Comparison of the experimental and simulation results
In the sections presented above, we observed that the experimental and simulation datasets yielded similar results for the regressions between SVI and LAI. The best models were obtained, in both cases, for a genotype-specific regression with the simple NDVI index and a nonlinear power regression model. The RTM modeling allowed us to confirm that the genotype-specific regressions are not merely a particularity of the experimental dataset. Indeed, the RTM dataset was built for more ages, sun angles and view conditions than the only three images used initially. We also confirmed that other satellites with other relative spectral responses would have yielded the same results (not shown here). We can therefore rely on the fact that the NDVI with power functions, applied at the genotype scale, provides a better overall result than the other groupings or a single regression.
The next step, as underlined in the introduction, is to try to use the regression obtained from the RTM simulation dataset directly on measured reflectance. The idea is that this regression, obtained from RTM, could easily be derived for different satellites and view configurations, and therefore would not require field measurements for each satellite acquisition. The direct use of an RTM-derived regression (similar to a model inversion) requires that the model first be accurate in the forward mode. We observed in Chapter 1 that the accuracy is adequate for most bands, except for the NIR1 band. This is shown in Figure 8 below, on a scatter plot of NIR against RED reflectance. It clearly shows that, while the simulated RED values are within the range of the measurements, the simulated NIR values are clearly underestimated. The NIR1 band is used to compute most SVIs; therefore, there is a large discrepancy between the SVIs of the simulated and experimental datasets (Figure 9). We can see in Figure 9 that while most indices remain within the range of the simulations, they are generally located at the extreme part of the simulation range, opposite to the nadir-viewing configuration. We can therefore conclude that the underestimation of the NIR band leads to an underestimation of the SVIs, and that the regression calibrated on the RTM output cannot be used directly on the experimental dataset. Indeed, applying it directly led to an overestimation of the predicted LAI (Figure 10a), for all dates and clones. As underlined in the previous section, this is attributed to the underestimation of the simulated NIR1 band in DART, leading to an underestimation of the DART NDVI compared to the measured WorldView-2 NDVI, which finally leads to an overestimation of the predicted LAI.
A post-processing re-calibration of the NDVI-LAI relationship was applied using the satellite red (630-690 nm) and NIR1 (770-785 nm) bands. A simple re-calibration of the simulated NIR1 band, based on a linear regression between the NIR1 obtained from WorldView-2 and the NIR1 simulated for the three dates (Chapter 1), was performed in order to remove the bias of the NIR1 band and estimate the prediction error of the method. Since the NDVI changed after this correction, the regression parameters were changed accordingly (Table 8). The comparison between the LAI estimated from NDVI after the NIR1 correction and the LAI obtained from field measurements is shown in Figure 10b, with R² and RMSE of 0.96 and 0.39, respectively (Table 8).
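A minimal sketch of this correction step, assuming plain reflectance arrays as inputs (all names are illustrative, not from the thesis):

import numpy as np

def recalibrate_nir1(nir1_dart, nir1_wv2):
    # Linear regression of WorldView-2 NIR1 on DART NIR1 (three dates pooled).
    slope, intercept = np.polyfit(nir1_dart, nir1_wv2, deg=1)
    return lambda nir1: slope * nir1 + intercept

def ndvi(red, nir1):
    # NDVI is recomputed with the corrected NIR1 band, after which the
    # LAI-NDVI regression parameters are re-fitted (cf. Table 8).
    return (nir1 - red) / (nir1 + red)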
Figure 10 - Comparison of observed LAI (measured in the field) and LAI estimated from NDVI using the parameters adjusted for model 11 and each genotype, with the images simulated by DART on nine different dates (a); and the same comparison after re-calibrating this model with WorldView-2 satellite images (b). The results include all genotypes and blocks for the dates May 2010 (plus signs), August 2010 (points) and July 2013 (diamonds).

The comparison between the LAI estimated from the NDVI simulated for the WorldView-2 image and the LAI estimated from the NDVI simulated for other satellites such as QuickBird, Landsat 5 TM and IKONOS-2 is shown in Figure 11. Despite the high correlation (R = 0.99), we can observe the influence of the sensor type on the estimated LAI, with variations in the slopes of the curves, meaning that the LAI-NDVI relationship should be re-calibrated for each satellite. Differences of up to one LAI unit can be caused by these differences in the satellite spectral responses.

Figure 11 - Comparison between the LAI estimated for the WorldView-2 satellite and for three other satellites (plus signs for QuickBird, crosses for Landsat 5 TM and squares for IKONOS-2), for all genotypes and dates, and one view direction angle (zenith = 15°, azimuth = 30°). Both LAI values were computed from the convolution of bands simulated by DART.
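The sensor comparison rests on convolving the DART-simulated spectra with each sensor's relative spectral response (RSR). A hedged sketch of that band-simulation step, assuming all arrays share one wavelength grid:

import numpy as np

def band_reflectance(wavelengths, spectrum, rsr):
    # Weighted average of the simulated spectrum over the sensor RSR;
    # one call per band (e.g., red or NIR1) and per sensor.
    return np.trapz(rsr * spectrum, wavelengths) / np.trapz(rsr, wavelengths)

Running this with the RSR of WorldView-2, QuickBird, Landsat 5 TM and IKONOS-2 yields the band values behind Figure 11.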
Discussion
The first part of the results presented the calibration of LAI-SVI relationships based only on in-situ LAI measurements and SVIs from real satellite images. The NDVI, with a two-parameter model and a calibration grouped by genotype, showed the best relationship. These regressions, obtained from a large range of genotypes, can therefore be used for application purposes. However, they were obtained from a single sensor, a given satellite configuration and only three ages; thus, their generality remains questionable (which is the main drawback of experimentally calibrated regressions). Also, the experimental datasets alone did not allow us to conclude whether this grouping per genotype has a physical basis, or whether the satellite images have adequate spatial resolution to analyze this type of very small plot. We therefore attempted to answer these questions using DART RTM modelling.
The RTM gives a physical basis to the relationship between reflectance and LAI, and between reflectance and other variables, which can help to understand why the genotypes should be grouped or not. Moreover, once DART was parameterized for different ages of the eucalyptus stand, it allowed the calibration of any type of SVI against LAI for any satellite type (e.g., different sensors and spectral resolutions), at any measurement date and sun configuration. However, this requires the model to simulate the SVI correctly. In our case, the study showed limitations of the SVIs from the DART simulations in reaching the SVI values of WorldView-2. These models are traditionally difficult to validate, and problems in simulating some wavelengths are part of the modelling uncertainties. The RAdiative transfer Model Intercomparison (RAMI) exercise focuses on benchmarking these types of models, and difficulties in describing complex canopy structures have been reported by users (WIDLOWSKI, 2015). In the RAMI exercise, a good performance was reported for the DART model. The discrepancies that we found between simulated and satellite SVI values were due to the unsolved problem of simulating the NIR1 band, which had to be corrected afterwards. This problem did not appear in the RAMI intercomparison, and we therefore expect that the underestimation of the NIR1 reflectance does not come from an internal problem of the DART model, but from an error in the model parameterization. Another study on a different RTM has shown that such discrepancies between modelled and measured reflectance often occur (ATZBERGER et al., 2013) and are dealt with differently each time: sometimes the poorly simulated band is discarded, sometimes it is corrected, and sometimes more uncertainty is added to the simulation of the band. One alternative would be the use of machine learning techniques on the simulated dataset to explore bands other than the NIR1 to retrieve the LAI. While a simple correction of the NIR1 was made in this work, we underline that it is not the most adequate procedure, and the reasons for this issue need to be better understood. However, since this problem causes a systematic error in the database, it did not necessarily affect the analysis of the LAI-SVI relationship, and a better understanding of this relationship for LAI estimation was achieved using the proposed RTM hybrid approach.
The best LAI-SVI relationship found with the DART dataset was the NDVI used at the genotype scale. This was the same result as for the empirical regression calibrated on the experimental dataset, which confirms the physical basis of this conclusion. A better performance of species-specific SVIs, such as EucVI and GESAVI calibrated for Eucalyptus, could normally be expected. Here, the better performance of the NDVI could be explained by its more generalist behaviour. Liang et al. (2015) and Nigam et al. (2014) also found better results using this less sensitive SVI to estimate the LAI of crop stands. In fact, we used the same EucVI and GESAVI equations on the whole simulated database, which could result in a lack of adaptability of these indices to the variety of other structural and biochemical variables of the canopy. The EucVI, which was calibrated for the same eucalyptus species in le Maire (2012), also achieved good results and could be applied to eucalyptus plantations, as in this work.
A genotype-specific calibration leads to better AIC values, which means that despite the large number of parameters to be calibrated, a genotype-specific relationship provides better accuracy than a global adjustment or other types of grouping. This could be explained by the fact that genotype-specific regressions integrate the combination of all the specific variables that affect the SVI calibration, instead of singling out one specific group of variables. Colombo et al. (2003) estimated LAI with different SVIs combined with image textural information and geostatistical parameters on different vegetation types, and concluded that the LAI-SVI relationship should be developed separately for each vegetation type. Other authors also suggested species stratification before estimating LAI with SVIs. However, grouping the genotypes in terms of leaf or litter optical properties provided intermediate results and could also be recommended. The effect of the average leaf angle, which was first suspected to define a relevant grouping, was less evident, probably because only a few clones had very different angles and the intra-group variability was too high.
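For readers wishing to reproduce the model ranking, the AIC can be computed from the residuals of each grouping strategy; a Gaussian-likelihood form is assumed below, since the thesis does not state the exact variant used:

import numpy as np

def aic(residuals, n_params):
    # Gaussian AIC: n*log(RSS/n) + 2k; the genotype-specific fit uses
    # k = 32 (16 clones x 2 parameters), the global fit k = 2.
    n = residuals.size
    rss = np.sum(residuals ** 2)
    return n * np.log(rss / n) + 2 * n_params

The genotype-specific grouping attains the lowest AIC despite the 2k penalty for its 32 parameters.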
Once obtained, the DART simulation dataset is a very powerful tool for further LAI estimation in eucalypt plantations, since it can be applied to any date, acquisition geometry and type of sensor, as shown in the application results. The results on the influence of the sensor type on LAI estimates, and the possibility of converting between sensors using the DART simulation dataset, enable sensor intercomparison and a better understanding of the differences and of the adaptability of the estimation to remote sensing images with different spectral sensor responses. Moreover, this hybrid method allowed inferring the physical basis of the LAI-SVI relationship, which is an advantage over the direct inversion technique. Even if a direct use of the regressions calibrated in DART is not possible in the current conditions, the model could be used to correct the experimental regression of satellite images and estimate the LAI with a relatively good precision (RMSE of 0.39).
Conclusions
This study proposed a methodology to estimate the relationship between vegetation indices and LAI from DART simulations in forest plantations. We showed, on both the experimental and the simulation datasets, that: 1) the NDVI gave the best LAI-SVI relationship using a power function (lowest AIC values); 2) the genotype-specific grouping outperforms the global adjustment and the other groupings, whatever the stand and satellite acquisition conditions (AIC = 0.45066 × 10^5, R² = 0.97 and RMSE = 0.29); 3) a correction of the NIR band was necessary to be able to use the simulated LAI-NDVI relationship; 4) once corrected, the regressions gave good estimates of independent in-situ LAI measurements; 5) the LAI estimated by NDVI changed between satellites, reinforcing the idea of having a generic simulated dataset able to calibrate different relationships in terms of sensors, acquisition characteristics and genotypes. DART can also be used to simulate the emission of LiDAR (Light Detection And Ranging) single or multiple pulses. LiDAR data simulation was one of the topics that we aimed to address during this work on forest plantations; however, it is not yet concluded.
"year": 2016,
"sha1": "60c88f14a863da75e778965257638ea458bdf7f2",
"oa_license": "CCBYNCSA",
"oa_url": "http://www.teses.usp.br/teses/disponiveis/11/11150/tde-28092016-130547/publico/Julianne_de_Castro_Oliveira_versao_revisada.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "e638ca7e694de1f8789569b25cc68f34ef86674a",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
Cholinesterase inhibitors and memantine are associated with a reduced mortality in nursing home residents with dementia: a longitudinal observational study
Background A large proportion of nursing home (NH) residents suffer from dementia and the effects of conventional anti-dementia drugs on their health are poorly known. We aimed to investigate the associations between exposure to anti-dementia drugs and mortality among NH residents. Methods This retrospective longitudinal observational study involved 229 French NH and the residents admitted to these facilities since 2014 who had a major neurocognitive disorder. From their electronic health records, we obtained their age, sex, level of dependency, Charlson comorbidity index, and Mini Mental Status Examination score at admission. Exposure to anti-dementia drugs was determined from their prescriptions and classified into 4 categories: none, exposure to acetylcholinesterase inhibitors (AChEI) alone, exposure to memantine alone, and exposure to AChEI and memantine. Survival until the end of 2019 was studied in the entire cohort by Cox proportional hazards. To alleviate bias related to the prescription of anti-dementia drugs, we formed propensity-score matched cohorts for each type of anti-dementia drug exposure, and studied survival by the same method. Results We studied 25,358 NH residents with major neurocognitive disorder. Their age at admission was 87.1 ± 7.1 years and 69.8% of them were women. Exposure to anti-dementia drugs occurred in 2,550 residents (10.1%) for AChEI alone, in 2,055 (8.1%) for memantine alone, and in 460 (1.8%) for AChEI plus memantine, whereas 20,293 (80.0%) had no exposure to anti-dementia drugs. Adjusted hazard ratios for mortality were significantly reduced for these three groups exposed to anti-dementia drugs, as compared to the reference group: HR 0.826, 95%CI 0.769 to 0.888 for AChEI; 0.857, 95%CI 0.795 to 0.923 for memantine; 0.742, 95%CI 0.640 to 0.861 for AChEI plus memantine. Results were consistent in the propensity-score matched cohorts. Conclusion The use of conventional anti-dementia drugs is associated with a lower mortality in nursing home residents with dementia and should be widely used in this population. Supplementary Information The online version contains supplementary material available at 10.1186/s13195-024-01481-0.
Background
Major neurocognitive disorders (M-NCDs), also known as dementia, are widespread conditions, affecting around 50 million people worldwide, and Alzheimer's disease is the most common form of dementia, accounting for 60-70% of cases [1]. With advancing age, the most important risk factor, the number of people living with these diseases is set to rise sharply as the world's population ages, and by 2050 the prevalence of M-NCDs is expected to triple. Alzheimer's disease and other M-NCDs are responsible for a global decline in cognitive functions, including memory, and in independence, requiring human assistance for activities of daily living, and they frequently lead to behavioural disorders [1]. All these features are common reasons for admission to nursing homes, and it is estimated that around two-thirds of nursing home residents are affected by neurocognitive diseases. In addition, Alzheimer's disease and other M-NCDs are responsible for increased mortality, and a recent meta-analysis revealed that all-cause mortality was multiplied by 5.9 in patients with M-NCDs compared to age-matched individuals without these diseases [1][2][3].
Only a few pharmacological treatments are currently available for M-NCD, and their indication depends on the subtype. No pharmacological treatment is indicated for vascular dementia or frontotemporal dementia. Acetylcholinesterase inhibitors (AChEI) are indicated in M-NCD due to Alzheimer's disease, Lewy body disease and Parkinson's disease, and memantine is indicated in M-NCD due to Alzheimer's disease. These drugs have been marketed in the USA and Europe for over 20 years, and the value of these molecules is controversial, although their efficacy has been demonstrated by well-conducted randomized trials [4,5]. In fact, their effects on the decline of cognition and independence are modest, and randomized trials did not show any reduction in mortality [4][5][6][7][8]. In addition, certain side-effects that came to light after they were launched on the market have also contributed to their being called into question. More recently, the anti-amyloid antibodies aducanumab and lecanemab have been approved in the USA but not in Europe, and their clinical value is also controversial due to their very high price and moderate efficacy/tolerance profile [9,10]. These molecules have also shown no effect on patient mortality.
Conventional anti-dementia drugs have interesting biological properties that could have favourable effects not only on neurons, but also on the function of other human cells and on the cardiovascular system. AChEIs have vagotonic effects and slow the heart rate [11], while memantine protects cells from excitotoxicity by antagonising NMDA receptors [12]. AChEI and memantine also have antioxidant and anti-inflammatory properties [11,12]. This has stimulated observational studies to determine whether the use of anti-dementia drugs is associated with a reduction in dementia mortality. In fact, although individual randomised trials have not shown such effects, the size of these trials and their relatively short follow-up times have not allowed an in-depth investigation of these possible effects. Very few of these studies have examined the relationship between mortality and the use of anti-dementia drugs in the specific context of nursing homes, despite the fact that a very large number of older adults with dementia live in these facilities. From our point of view, studies carried out in this context are interesting because they concern very old, complex and vulnerable people, who are poorly represented in research studies, and the results obtained can guide our practices not only for nursing home residents, but also for older adults living at home who have a comparable profile.
The effects of conventional anti-dementia drugs on patient mortality have been the subject of studies with varying results [13,14]. In this context, our team carried out a meta-analysis exploring the effect of AChEI on the mortality of people with dementia, which showed that AChEI were associated with a significant reduction in mortality in both randomised trials and observational studies, with a risk reduction of around 15% [15]. Of the 24 studies included in this review, only one had been conducted in nursing homes, and its authors found that the 5423 residents who received donepezil had a significantly lower mortality than 5423 matched residents of the same facilities who did not receive anti-dementia drugs (hazard ratio: 0.90; 95% CI, 0.84-0.96) [14]. Studies that investigated the association of memantine with mortality are rare and no evidence is available for nursing home residents.
The aim of this study was to investigate the associations between mortality and exposure to anti-dementia drugs in nursing home residents with dementia.
Design, participants and data source
This retrospective observational cohort study was conducted on the residents admitted after January 1st, 2014 to the 329 nursing homes of a French group of private nursing homes. These facilities were located in all regions of France, in both urban and rural areas. De-identified data until December 31st, 2019 were obtained from their electronic health records (EHR). The EHR was filled in by residents' general practitioners and by the medical and paramedical staff of the nursing home. We included in the analysis all residents with an explicit diagnosis of Alzheimer's disease, dementia or M-NCD mentioned in the EHR and those who received donepezil, rivastigmine, galantamine or memantine. In addition, we included residents with overt and prolonged cognitive impairment defined by a Mini Mental Status Examination (MMSE) score < 20 on at least two separate occasions. All were presumed to have M-NCD. The flowchart that describes the constitution of the cohort is shown in the supplementary material (Figure S1). We retrieved from the EHR residents' age at admission, gender and level of dependency assessed by the Grille AGGIR, the French national scale used for resource allocation to disabled adults > 60 years in France. The scale is based on the rating of 17 variables describing activities of daily living, and each variable is rated on 3 levels according to the ability of the person to perform the activity him/herself without human assistance. Based on these ratings, an algorithm designed to estimate the amount of human assistance required for activities of daily living allocates the person to one of six groups (groupes iso-ressources or GIR): GIR 1 to GIR 6. The group GIR 1 corresponds to persons with the most severe dependency who require the highest level of assistance, and group GIR 6 to persons with no dependency who require little or no human assistance. We also recorded the first MMSE score notified in the EHR, and calculated the Charlson comorbidity index from the diseases notified in the EHR and the age at admission.
Exposure to anti-dementia drugs and outcomes
Information about anti-dementia drug prescription was obtained from orders directly entered into the EHR by the residents' general practitioner. Exposure to AChEI was defined by any prescription of donepezil, rivastigmine or galantamine, irrespective of its duration. Exposure to memantine or exposure to AChEI plus memantine was defined similarly.
The outcome was mortality, studied from the vital status recorded until December 31st, 2019, and the primary criterion of judgment was the adjusted mortality hazard ratio in the matched cohorts.
Statistics
Four groups were formed according to exposure to anti-dementia drugs: none (neither AChEI nor memantine), AChEI only, memantine only, and both AChEI and memantine. Their characteristics were compared using one-way ANOVA, the chi-squared test, and the Kruskal-Wallis test for the variables that were not normally distributed.
Unadjusted and adjusted mortality hazard ratios and their 95% confidence intervals (95%CI) were calculated using Cox proportional hazards models for these four groups, using the non-exposed group as the reference. For adjusted HRs, the controlled covariates were age, sex, level of dependency, MMSE score and Charlson index (model 1), or age, sex, level of dependency, MMSE score and individual comorbidity variables (model 2). Survival curves were drawn using the Kaplan-Meier method and the log-rank test was used to determine whether survival curves differed statistically. The hazard ratio for the combined therapy (AChEI plus memantine) was compared to those of AChEI alone or memantine alone using the Wald test.
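The analyses were run in Stata (see below), but model 1 can be reproduced for illustration with the Python lifelines package; all column names here are assumptions:

import pandas as pd
from lifelines import CoxPHFitter

def adjusted_hazard_ratios(df: pd.DataFrame) -> pd.DataFrame:
    # Model 1: Cox PH adjusted on age, sex, dependency (GIR), MMSE and
    # Charlson index; exposure entered as three dummies vs. no exposure.
    cols = ["time_months", "died", "achei_only", "memantine_only",
            "achei_plus_memantine", "age", "sex", "gir", "mmse", "charlson"]
    cph = CoxPHFitter()
    cph.fit(df[cols], duration_col="time_months", event_col="died")
    # exp(coef) is the adjusted HR with its 95% confidence interval
    return cph.summary[["exp(coef)", "exp(coef) lower 95%",
                        "exp(coef) upper 95%"]]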
To control for treatment indication bias, we also studied the association between mortality and each anti-dementia drug regimen in three propensity score-matched cohorts. First, residents exposed to AChEI alone were matched with residents not exposed to anti-dementia drugs (matched cohort 1) on their propensity score by the nearest-neighbour method, using two neighbours for one case, within a caliper of 0.005 SD. The variables used to model treatment allocation were age at baseline, sex, first MMSE score, level of dependency and Charlson index values. The quality of the propensity score-matched cohorts was assessed with the standardized mean difference; a standardized mean difference greater than 0.20 was considered a sign of imbalance. Two other matched cohorts were built using the same methods, one for the residents exposed to memantine alone (matched cohort 2) and another for residents exposed to AChEI plus memantine (matched cohort 3). For each matched cohort, we plotted survival curves using the Kaplan-Meier method and used Cox proportional hazards models to calculate mortality hazard ratios and their 95% confidence intervals, adjusted on age, sex, level of dependency, MMSE score and Charlson index (model 1) or on age, sex, level of dependency, MMSE score and individual comorbidity variables (model 2).
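A simplified sketch of the 2:1 nearest-neighbour matching within a caliper (the study used Stata; variable names are illustrative, and matching without replacement is omitted for brevity):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_2to1(X_exposed, X_unexposed, caliper=0.005):
    # Propensity score from age, sex, first MMSE, dependency and Charlson index.
    X = np.vstack([X_exposed, X_unexposed])
    y = np.r_[np.ones(len(X_exposed)), np.zeros(len(X_unexposed))]
    ps = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]
    ps_e, ps_u = ps[:len(X_exposed)], ps[len(X_exposed):]
    # Two nearest unexposed neighbours per exposed resident.
    nn = NearestNeighbors(n_neighbors=2).fit(ps_u.reshape(-1, 1))
    dist, idx = nn.kneighbors(ps_e.reshape(-1, 1))
    # Keep only controls whose propensity score lies within the caliper.
    return [(i, idx[i][dist[i] <= caliper]) for i in range(len(X_exposed))]

Balance is then checked with the standardized mean difference of each matching variable, with 0.20 as the imbalance threshold.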
Calculations were computed using Stata software 16.1 (StataCorp, USA) and the level of significance was P < 0.05.
Entire cohort
Data from the EHR of 45,606 residents were available for the period considered and 25,358 (55.6%) were selected for the study according to the flowchart shown in the supplementary material (Figure S1). Age at admission was 87.1 ± 7.1 years and the cohort comprised 7,647 (30.1%) men and 17,711 (69.8%) women. Mean follow-up was 18.8 months and there was no loss to follow-up. Their characteristics according to exposure to anti-dementia drugs are shown in Table 1. Exposure to anti-dementia drugs occurred in 5065 residents (20.0%), comprising 2,550 (10.1%) for AChEI alone, 2,055 (8.1%) for memantine alone, and 460 (1.8%) for AChEI plus memantine, whereas 20,293 (80.0%) had no exposure to anti-dementia drugs. An anti-dementia drug prescription was present on admission in 87% of them and the median duration of exposure was 7.1 months (IQR: 2.8, 17.5). Exposure of less than 30 days was observed in 664 residents (13.1%). The residents not exposed to anti-dementia drugs were significantly older, comprised a greater proportion of women, and their level of dependency and cognitive impairment were less severe. The MMSE score of the residents exposed to both AChEI and memantine was significantly higher than those of the other groups (Table 1). Severe dependency was significantly more frequent in residents exposed to memantine only or to the association of AChEI plus memantine. Some comorbidities like cardiovascular disease, diabetes and chronic obstructive lung disease were more frequent in residents unexposed to anti-dementia drugs.
As compared to unexposed residents, unadjusted hazard ratios for mortality were significantly reduced in residents exposed to AChEI and in those exposed to both AChEI and memantine, but not in those exposed to memantine alone. The survival plot is shown in Fig. 1. Adjusted hazard ratios for mortality were significantly reduced for the three groups exposed to anti-dementia drugs, as compared to the reference group, with two models that account for comorbidities differently (Table 2). The adjusted hazard ratio for combined therapy (AChEI plus memantine) was not significantly different from that for AChEI alone or memantine alone (p = 0.191 and p = 0.083, respectively). The one-year mortality rate was significantly reduced in residents exposed to AChEI alone and in those exposed to AChEI plus memantine (Supplementary material, Table S2). A sensitivity analysis was performed by excluding the 664 residents whose duration of exposure to anti-dementia drugs was less than 30 days, and the adjusted hazard ratios were consistent with those obtained in the entire cohort (Supplementary material, Table S1).
Propensity-score matched cohorts
Three cohorts, each composed of residents exposed to one type of anti-dementia drug regimen and of residents not exposed to anti-dementia drugs, were formed using propensity score matching on age, sex, level of dependency, MMSE score and Charlson index values. The first cohort was formed of 1933 residents exposed to AChEI alone and 3226 unexposed to anti-dementia drugs, the second of 1600 residents exposed to memantine alone and 2801 unexposed to anti-dementia drugs, and the third of 370 exposed to both AChEI and memantine and 717 unexposed to any anti-dementia drugs. The characteristics of the residents selected for the matched cohorts are presented in Table 3. For the three cohorts, the mean standardized differences for the matching variables were less than 0.20 (Supplementary material, Figure S2), indicating a satisfactory balance. Exposure to anti-dementia drugs was associated with significantly lower mortality in all three matched cohorts, with unadjusted and adjusted hazard ratios fairly close to those observed in the entire cohort (Table 4). To facilitate an overall view of the reduction in mortality associated with exposure to anti-dementia drugs, we have summarised the hazard ratios obtained in the different cohorts in a single table (Supplementary material, Table S3). The survival plots for the three matched cohorts show that mortality is significantly reduced throughout the follow-up period (Fig. 2). The one-year mortality rate in the three cohorts was significantly reduced in residents exposed to anti-dementia drugs (Supplementary material, Table S2). To estimate the time afforded by the reduction in mortality associated with anti-dementia drugs, we calculated the mean differences in survival time between exposure groups for the residents who died during the observation period. The survival time was 3.33 months longer on average with AChEI (first matched cohort), and the corresponding figures were 4.65 and 8.88 months for memantine alone and AChEI plus memantine, respectively (second and third matched cohorts).
Table 2 Unadjusted and adjusted mortality hazard ratios and their 95% confidence intervals for exposure to anti-dementia drugs among nursing home residents with dementia (entire cohort)
Discussion
This observational study found that the use of AChEI, memantine or their combination is associated with a significantly lower all-cause mortality in a large population of nursing home residents. Consistent findings were obtained in the whole cohort and in the propensity score-matched cohorts.
The beneficial association of memantine with mortality in nursing home residents with dementia is a novel finding, as no large-scale study had previously documented such an effect. Lazzeroni observed in a large database that patients treated with memantine had lower mortality than those treated with donepezil, but their study did not compare them with demented patients who had not received anti-dementia drugs [16]. Long-term follow-up of a randomized clinical trial of 75 patients with Lewy body dementia or Parkinson's disease revealed that memantine was associated with better survival at 36 months [17]. Meta-analyses of randomized controlled trials of memantine in patients with Alzheimer's disease have documented beneficial effects of memantine on parameters other than mortality, in particular cognition and functional independence, and observational studies also suggest favorable effects on behavioral disturbances [18][19][20]. Although the effect size of memantine on the clinical consequences of dementia is considered small, all these results are consistent with ours.
Our results also confirm that the use of AChEI is associated with lower mortality in patients with dementia, a point highlighted in a recent meta-analysis of both randomized and observational studies [15,21]. Among these meta-analyzed studies, only one had been conducted in nursing home residents with dementia, and it showed that the use of donepezil was associated with a reduction in mortality, with a hazard ratio of 0.89 (95% CI, 0.83-0.95), close to that observed in our study for AChEI [14]. In our study, we also observed that nursing home residents receiving combined therapy (AChEI plus memantine) had lower mortality than those with AChEI alone or memantine alone, but the differences in hazard ratios did not reach significance. Thus, our results suggest that combination therapy offers no clear benefit in terms of mortality compared to AChEI or memantine therapy alone. Studies that have compared the clinical effects of combined therapy with those of AChEI or memantine alone have come to divergent conclusions, and the superiority of combined therapy is still the subject of controversy [8,13,18,22,23].

Table 3 Characteristics of the residents selected into the three cohorts matched on propensity scores. Cohorts 1, 2 and 3 comprised residents exposed to acetylcholinesterase inhibitors (AChEI) only, memantine only, or AChEI plus memantine, respectively, and residents unexposed to anti-dementia drugs

Table 4 Unadjusted and adjusted mortality hazard ratios and their 95% confidence intervals as a function of dementia drug exposure in the three matched cohorts. For each cohort, the hazard ratio was calculated using residents not exposed to anti-dementia drugs in the same cohort as a reference
One explanation for the reduction in all-cause mortality in residents exposed to AChEI could be linked to their cardiovascular effects. AChEI increase vagal tone and slow the heart rate [11,[24][25][26], and resting heart rate has been shown to be an independent risk factor for cardiovascular and all-cause mortality [27]. These pharmacological properties may explain why AChEI are associated with lower cardiovascular mortality, as slowing the heart rate appears to be a major component of the beneficial effects of several drugs on cardiovascular events [23]. Another possible explanation for our findings would be the metabolic effects of AChEI and memantine. Some studies have suggested that AChEI may have anti-oxidant properties, anti-inflammatory activity and a protective effect on endothelial cells in patients with Alzheimer's disease or metabolic syndrome [28][29][30][31][32][33]. Memantine antagonizes NMDA receptors, which are widely distributed in the human body, and might contribute to protecting cells from excitotoxicity and cellular calcium excess [12]. In this way, it might have beneficial effects outside the nervous system, in particular on inflammation, cardiovascular diseases, cancer or infectious diseases. Those effects might also contribute to the survival benefit.
Another explanation would be a disease-modifying effect of AChEI, slowing the progression of dementia and delaying the decline in independence. A diagnosis of any type of dementia is associated with increased mortality, especially from pneumonia and neurologic causes, and dementia severity is correlated with the risk of dying [34][35][36]. In vitro studies suggest that AChEI may have a neuroprotective effect independent of their cholinergic activity [37,38]. This hypothesis is further supported by the finding, in several studies, that dementia symptoms progressed more rapidly in patients in whom AChEI were discontinued than in patients who remained on treatment [39,40], and that placement in nursing homes was delayed in patients treated with AChEI [41][42][43].
This study has both limitations and strengths. We used data derived from routinely collected clinical records, which may contain errors, under-reporting and missing information, particularly for the recording of comorbidities and for the diagnosis of dementia and its type, both of which depend on the practice of the physicians caring for the residents. In particular, dementia is under-diagnosed or under-reported in such facilities, and when the diagnosis is recorded, the type of dementia is often missing. This is important because mortality rates differ between subtypes of dementia. In particular, vascular dementia, which has a higher mortality rate than Alzheimer's disease [44], is probably more common in unexposed residents than in exposed ones, given that no anti-dementia drugs are indicated in this subtype of M-NCD. As in all observational studies, our results may be influenced by biases, including unobserved confounders. In particular, the decision to prescribe dementia medication was not random, and we observed several differences in resident characteristics as a function of dementia medication exposure. We attempted to minimize this bias by conducting two different adjusted analyses, the results of which were highly consistent. The strengths of our study are the large sample of residents distributed nationwide, the real-life context, the use of several relevant covariates known to influence mortality, the exhaustive follow-up, and the vital status recorded at the end of the observation period. While our study shows that the use of conventional anti-dementia drugs is associated with a reduction in resident mortality, it also highlights the fact that this population is largely undertreated with these drugs. This is not unique to our study, and has also been observed in nursing homes in the USA [45] and the Netherlands [46]. The reasons for this are not clearly known, and the relative frequency of subtypes of M-NCD for which these drugs are not indicated cannot explain it. This undertreatment may lead to a loss of chance for this complex and vulnerable population. At a time when anti-amyloid monoclonal antibodies have been approved in the USA to treat Alzheimer's disease, our study is a reminder that the use of conventional Alzheimer's drugs is important, particularly in patients living in geriatric institutions, who are highly vulnerable and who will not be good candidates for these new treatments.
Fig. 1 Survival plots for residents exposed or not exposed to acetylcholinesterase inhibitors (AChEI), memantine or AChEI plus memantine in the whole cohort
Fig. 2 Survival plots for residents exposed or not exposed to dementia drugs in propensity score-matched cohorts. Panel A shows the survival plot for residents exposed to acetylcholinesterase inhibitors (AChEI) and residents not exposed to dementia drugs (matched cohort 1). Panel B shows the corresponding plot as a function of memantine exposure (matched cohort 2) and panel C as a function of AChEI and memantine exposure (matched cohort 3)
Table 1
Characteristics of nursing home residents with dementia (entire cohort) according to their exposure to cholinesterase inhibitors (AChEI), memantine, and AChEI plus memantine. GIR: groupe iso-ressources, according to the French national tool AGGIR for dependency assessment. MMSE: Mini Mental Status Examination. COPD: chronic obstructive pulmonary disease
[Table 2 columns: Exposure to anti-dementia drugs | Hazard ratio | 95%CI | P]
*Adjusted on age, sex, level of dependency, Mini Mental Status Examination score and Charlson index. **Adjusted on age, sex, level of dependency, Mini Mental Status Examination score, cardiovascular disease, peripheral arterial disease, chronic obstructive pulmonary disease, diabetes and cancer. GIR: groupe iso-ressources, according to the French national tool AGGIR for dependency assessment; MMSE: Mini Mental Status Examination
"year": 2024,
"sha1": "08d84821fa3a10f9e4c9233595db823002acbd37",
"oa_license": "CCBY",
"oa_url": "https://alzres.biomedcentral.com/counter/pdf/10.1186/s13195-024-01481-0",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "45332fe990b414392c65e01b1b3165d45ac41264",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Text2Human: Text-Driven Controllable Human Image Generation
Generating high-quality and diverse human images is an important yet challenging task in vision and graphics. However, existing generative models often fall short under the high diversity of clothing shapes and textures. Furthermore, the generation process is even desired to be intuitively controllable for layman users. In this work, we present a text-driven controllable framework, Text2Human, for high-quality and diverse human generation. We synthesize full-body human images starting from a given human pose with two dedicated steps. 1) With some texts describing the shapes of clothes, the given human pose is first translated to a human parsing map. 2) The final human image is then generated by providing the system with more attributes about the textures of clothes. Specifically, to model the diversity of clothing textures, we build a hierarchical texture-aware codebook that stores multi-scale neural representations for each type of texture. The codebook at the coarse level includes the structural representations of textures, while the codebook at the fine level focuses on the details of textures. To make use of the learned hierarchical codebook to synthesize desired images, a diffusion-based transformer sampler with mixture-of-experts is first employed to sample indices from the coarsest level of the codebook, which are then used to predict the indices of the codebook at finer levels. The predicted indices at different levels are translated to human images by the decoder learned together with the hierarchical codebooks. The use of mixture-of-experts allows the generated image to be conditioned on the fine-grained text input. The prediction of finer-level indices refines the quality of clothing textures. Extensive quantitative and qualitative evaluations demonstrate that our proposed framework can generate more diverse and realistic human images compared to state-of-the-art methods.
Fig. 1. (a) Interactive user interface for Text2Human. (b) Synthesized human images.
INTRODUCTION
Recent years have witnessed the rapid progress of image generation since the emergence of Generative Adversarial Networks (GANs) [Goodfellow et al. 2014]. Nowadays, we can easily generate diverse faces of high fidelity using a pretrained StyleGAN [Karras et al. 2020], which further supports several downstream tasks, such as facial attribute editing [Abdal et al. 2021; Jiang et al. 2021; Patashnik et al. 2021] and face stylization [Pinkney and Adler 2020; Song et al. 2021].
Human full-body images, another type of human-related media, are more diverse, richer, and more fine-grained in content. Furthermore, human image generation [Frühstück et al. 2022; Fu et al. 2022; Grigorev et al. 2021] has wide applications, including human pose transfer [Albahar et al. 2021; Sarkar et al. 2021a], virtual try-on [Cui et al. 2021; Lewis et al. 2021], and animations [Chan et al. 2019; Yoon et al. 2021]. From the perspective of applications and interactions, apart from generating high-fidelity human images, it is even desirable to intuitively control the synthesized human images for layman users. For example, they may want to generate a person wearing a floral T-shirt and jeans without expert software knowledge. Human image generation with explicit textual controls makes it possible for users to create 2D avatars more easily.
Despite the great potential, controllable human body image generation with high fidelity and diversity is less explored due to the following challenges: 1) Compared to faces, human body images are more complex, with multiple factors involved, including the diversity of human poses, the complicated silhouettes of clothing, and sundry textures of clothing; 2) Existing human body image generation methods [Sarkar et al. 2021b; Weng et al. 2020; Yildirim et al. 2019] fail to generate diverse styles of clothes since they tend to generate clothes with simple patterns like pure color, let alone fine-grained controls on the textures of clothes in the generated images.
3) The generation of clothes with textual controls relies on additional fine-grained annotations. However, there is currently a lack of human image generation datasets containing fine-grained labels on clothes shapes and textures [Liu et al. 2016a,b]. To bridge the gap, in this work, we propose the Text2Human framework for text-driven controllable human image generation. As shown in Fig. 1, given a human pose, users can specify the clothes shapes and textures using solely natural language descriptions. Human images are then synthesized in accordance with the textual requests.
Due to the complexity of human body images, it is challenging to handle all involving factors in a single generative model. We decompose the human generation task into two stages. Stage I generates a human parsing mask with diverse clothes shapes based on the given human pose and user-specified texts describing the clothes shapes. Then Stage II enriches the human parsing mask with diverse textures of clothes based on texts describing the clothes textures.
Considering the high diversity of clothes textures, we introduce the concept of a codebook, which is widely used in VQVAE-based methods [Esser et al. 2021a; Van Den Oord et al. 2017], into our framework. The codebook learns discrete neural representations of images. To adaptively characterize textures, we propose a hierarchical VQVAE with texture-aware codebook designs. Specifically, the codebooks are constructed at multiple scales. The codebook at the coarser scale contains more structural information about the textures of clothes, while the codebooks at finer scales include more detailed textures. Due to the different natures of different textures, we also build codebooks separately for each texture.
In order to conditionally generate human images consistent with the texts describing the textures, we need a sampler to select appropriate texture representations (i.e., codebook indices) from the codebook, and then re-arrange them in a reasonable order in the spatial domain. In this manner, with rich texture representations stored in the codebooks, the human generation task is formulated as sampling an intermediate feature map from the learned codebooks. We adopt the diffusion-based transformer [Bond-Taylor et al. 2021; Esser et al. 2021b; Gu et al. 2022] as the sampler. With the texture-aware codebook design, we incorporate a mixture-of-experts into the sampler. The sampler has multiple index prediction expert heads to predict indices for different textures.
With the hierarchical codebooks, we need to sample intermediate feature maps from the coarse level to the fine level, i.e., sampling indices for both the coarse-level and fine-level codebooks is required for image synthesis. Thanks to the implicit relationship between codebooks at different levels learned by our proposed hierarchical VQVAE, the indices of the codebook at the coarse level can provide hints for the sampling of the fine-level features. A similar idea is also adopted in VQVAE2 [Razavi et al. 2019]. However, in VQVAE2, the pixel-wise sampling by auto-regressive models is time-consuming. By comparison, we propose a feed-forward codebook index prediction network, which predicts the desired fine-level codebook indices directly from the coarse-level features. The proposed index prediction network speeds up the sampling process and ensures the generation quality.
To facilitate controllable human generation, we construct a large-scale full-body human image dataset dubbed the DeepFashion-MultiModal dataset, which contains rich clothes shape and texture annotations, human parsing masks with diverse fashion attribute classes, and human poses. Both the textual attribute annotations and the human parsing masks are manually labeled. The human poses are extracted using [Güler et al. 2018]. All images are collected from the high-resolution version of the DeepFashion dataset. These images are further cleaned and selected to ensure that they are full-body and of good quality.
To summarize, our main contributions are as follows: 1) We propose the Text2Human framework for the task of text-driven controllable human generation. Our proposed framework is able to generate photo-realistic human images from natural language descriptions. 2) We build a hierarchical VQVAE with the texture-aware codebook design. We propose a transformer-based sampler with the concept of mixture-of-experts. The features are routed to different expert heads according to the required attributes. The hierarchical design and mixture-of-experts sampler enable the synthesis and control of complicated textures. 3) We propose a feed-forward index prediction network to predict the codebook indices of the fine-level codebook based on the features sampled at the coarse level, which overcomes the limitation of the time-consuming sampling process in classical hierarchical VQVAE methods. 4) We contribute a large-scale and high-quality human image dataset with rich clothes shape and texture annotations as well as human parsing masks to facilitate the task of controllable human synthesis.

Fig. 2. Overview of Text2Human. We decompose the human generation into two stages. Stage I translates the given human pose to the human parsing according to the text describing the clothes shapes. The text for clothes shapes is first transformed to one-hot shape attributes and embedded into a vector $f_{shape}$. The shape vector $f_{shape}$ is then fed into the pose-to-parsing module to spatially modulate the pose features. Stage II generates the human image from the synthesized human parsing by sampling multi-level indices from our learned hierarchical texture-aware codebooks. To sample coarse-level indices, we employ a sampler with mixture-of-experts, where features are routed to different expert heads to predict the indices based on the required textures. At the fine level, we propose a feed-forward network to efficiently predict fine-level indices to refine the generated human image.
RELATED WORK
Generative Models. Generative Adversarial Networks (GANs) have demonstrated powerful capabilities in generating high-fidelity images. Since [Goodfellow et al. 2014] proposed the first generative model of this kind in 2014, different variants of GANs [Brock et al. 2019; Chai et al. 2022; Karras et al. 2019, 2020, 2021] have been proposed. In addition to unconditional generation, conditional GANs [Mirza and Osindero 2014] were proposed to generate images based on conditions like segmentation masks [Isola et al. 2017; Park et al. 2019] and natural language [Surya et al. 2020; Xu et al. 2018]. Our proposed Text2Human is a conditional image generation framework taking human poses and texts as inputs. In parallel to GANs, VAE [Kingma and Welling 2013] is another paradigm for image generation. It embeds input images into a latent distribution and synthesizes images by sampling vectors from the prior distribution. Several VAE-based works [Esser et al. 2018, 2021a; Larsen et al. 2016; Van Den Oord et al. 2017] have been proposed to improve the visual quality of the generated images. Our proposed method shares some similarities with existing VAE-based methods but differs in the texture-aware codebook, the sampler with mixture-of-experts, and the feed-forward index prediction network for hierarchical sampling.
Human Image Manipulation and Synthesis. The goal of pose transfer [Balakrishnan et al. 2018; Liu et al. 2020; Ma et al. 2017, 2018] is to transfer the appearance of the same person from one pose to another. [Albahar et al. 2021] proposed a pose-conditioned StyleGAN framework, in which the details of the source image are warped to the target pose and then used to spatially modulate the features for synthesis. Another work proposed a method for the text-guided pose transfer task. [Men et al. 2020] proposed ADGAN for controllable person image synthesis, where the person image is synthesized by providing a pose and several example images. All of these tasks require a source person image to synthesize the target person. Recently, TryOnGAN [Lewis et al. 2021] and HumanGAN [Sarkar et al. 2021b] were proposed to support human image generation conditioned on the human pose only. TryOnGAN trains a pose-conditioned StyleGAN2 network and can generate human images under the given pose condition. HumanGAN proposes a VAE-based human image generation framework, where human images are generated by sampling from the learned distribution. However, these methods do not offer fine-grained controls over human generation. Our proposed framework allows for controllable human generation given texts describing the desired attributes.
TEXT2HUMAN
Our aim is to generate human images conditioned on texts describing the attributes of clothes (clothes shapes and clothes textures). Given a human pose $P \in \mathbb{R}^{H \times W}$, texts for clothes shapes $T_{shape}$, and texts for clothes textures $T_{texture}$, the output should be the corresponding human image $I \in \mathbb{R}^{H \times W \times 3}$. The whole pipeline of Text2Human is shown in Fig. 2. We decompose the human generation into two stages. Stage I synthesizes a human parsing mask with the given pose and texts for clothes shapes. We transform the text information into attribute embeddings and concatenate them with human pose features to predict the desired human parsing mask. With the human parsing mask obtained from Stage I as the input, the final image is synthesized according to the required clothing textures in Stage II. We set up a hierarchical texture-aware codebook to characterize various types of texture, as illustrated in Fig. 3, where the final image is synthesized using both coarse-level (top-level) and fine-level (bottom-level) codebooks. To sample the codebook indices at the coarse level, a sampler with mixture-of-experts is proposed, where features are routed to different expert heads to predict the desired indices. To speed up the sampling at the fine level, we propose a feed-forward codebook index prediction network, which further refines the quality of the generated images.
Stage I: Pose to Parsing
Given a human pose and texts about clothes shapes, we hope to synthesize the human parsing map $S \in \mathbb{R}^{H \times W}$.
First, the texts are transformed into a set of clothes shape attributes $\{a_1, ..., a_i, ..., a_n\}$, where $a_i \in \{0, 1, ..., C_i\}$ and $C_i$ is the class number of attribute $a_i$. The attributes are then fed into the Attribute Embedding Module to obtain a shape attribute embedding $f_{shape} \in \mathbb{R}^{d}$:

$$f_{shape} = F(E_1(a_1), ..., E_n(a_n)), \quad (1)$$

where $E_i(\cdot)$ is the attribute embedder for $a_i$ and $F(\cdot)$ fuses the embeddings from all attribute embedders.
Together with $P$, the embedding $f_{shape}$ is then fed into the Pose-to-Parsing Module, which is composed of an encoder $G_{enc}$ and a decoder $G_{dec}$. The operation at layer $l$ of $G_{enc}$ is defined as follows:

$$f_l = \mathrm{conv}([f_{l-1}, \mathcal{B}(f_{shape})]), \quad (2)$$

where $\mathcal{B}(\cdot)$ is the spatial broadcast operation so that $f_{shape}$ is broadcast to the same spatial size as $f_{l-1}$, and $f_0 = P$.
The operation of $G_{dec}$ at layer $l$ can be expressed as $f'_l = \mathrm{conv}([f_l, f'_{l-1}])$, where the encoder feature $f_l$ is concatenated with the decoded feature. The final decoded feature $f'_L$ is fed into fully convolutional layers to make the final parsing prediction. We use the cross-entropy loss to train the whole Pose-to-Parsing Module.
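A minimal PyTorch sketch of one such encoder layer is given below; channel sizes, the downsampling convolution and the activation are illustrative, since the paper does not specify them:

import torch
import torch.nn as nn

class ShapeCondEncoderLayer(nn.Module):
    # One layer of G_enc (Eq. 2): broadcast f_shape spatially, concatenate
    # it with the previous feature map, then convolve.
    def __init__(self, in_ch, embed_dim, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch + embed_dim, out_ch, 3, stride=2, padding=1),
            nn.LeakyReLU(0.2),
        )

    def forward(self, feat, f_shape):
        b, _, h, w = feat.shape
        f = f_shape.view(b, -1, 1, 1).expand(-1, -1, h, w)  # B(.) in Eq. (2)
        return self.conv(torch.cat([feat, f], dim=1))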
3.2.1 Preliminaries.
VQVAE. The goal of the Vector-Quantized Variational AutoEncoder (VQVAE) [Van Den Oord et al. 2017] is to learn a discrete codebook that stores discrete neural representations by learning to reconstruct images. VQVAE consists of an encoder $E$, a decoder $D$ and a learnable codebook $\mathcal{Z} = \{z_k \,|\, z_k \in \mathbb{R}^{d}\}_{k=1}^{K}$. We first extract the continuous neural representation $\hat{z}$ by feeding the image $I$ into the encoder, i.e., $\hat{z} = E(I) \in \mathbb{R}^{h \times w \times d}$. Then a quantizer $Q$ is adopted to discretize the continuous $\hat{z}$; the operation is defined as follows:

$$z_q = Q(\hat{z}), \quad \text{where each } \hat{z}_{ij} \text{ is replaced by } \arg\min_{z_k \in \mathcal{Z}} \|\hat{z}_{ij} - z_k\|. \quad (3)$$

The image is then reconstructed using the quantized representation: $\hat{I} = D(z_q)$. The encoder, decoder and codebook are trained end-to-end through the following loss function:

$$\mathcal{L} = \|I - \hat{I}\| + \|\mathrm{sg}[\hat{z}] - z_q\|_2^2 + \beta \|\mathrm{sg}[z_q] - \hat{z}\|_2^2, \quad (4)$$

where $\mathrm{sg}(\cdot)$ denotes the stop-gradient operation.
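The quantization of Eq. (3) and the codebook/commitment terms of Eq. (4) can be sketched in PyTorch as follows (the straight-through estimator and the weight 0.25 follow common VQVAE practice and are assumptions here):

import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook):
    # z_e: (B, C, H, W) encoder output; codebook: (K, C) learnable codes.
    B, C, H, W = z_e.shape
    flat = z_e.permute(0, 2, 3, 1).reshape(-1, C)          # (B*H*W, C)
    idx = torch.cdist(flat, codebook).argmin(dim=1)        # nearest code, Eq. (3)
    z_q = codebook[idx].view(B, H, W, C).permute(0, 3, 1, 2)
    # Straight-through: gradients pass to the encoder as if quantization
    # were the identity map.
    z_q_st = z_e + (z_q - z_e).detach()
    codebook_loss = F.mse_loss(z_q, z_e.detach())          # moves codes to features
    commit_loss = F.mse_loss(z_e, z_q.detach())            # beta term of Eq. (4)
    return z_q_st, idx, codebook_loss + 0.25 * commit_loss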
Diffusion-based Transformer. To sample images from learned codebooks, autoregressive models [Chen et al. 2018; Salimans et al. 2017] are employed to predict the orderings of codebook indices. Autoregressive models predict indices in a fixed unidirectional manner, and the prediction of the incoming index relies only on the already sampled top-left parts. In VQVAE, PixelCNN [Van den Oord et al. 2016] is adopted as the autoregressive model. In the recently proposed VQGAN [Esser et al. 2021a], a transformer [Vaswani et al. 2017] is adopted for its capability to capture long-term dependencies among codebook indices (in transformers, codebook indices are referred to as 'tokens'). Recently, some works [Bond-Taylor et al. 2021; Chang et al. 2022; Esser et al. 2021b; Gu et al. 2022] proposed to use diffusion models to replace the autoregressive model, motivated by two advantages: 1) indices are predicted based on global and bidirectional context, resulting in more coherent sampled images; 2) indices are predicted in parallel, leading to much faster sampling. Specifically, in the diffusion-based transformer, starting from fully-masked indices $c_0$, the final prediction of indices $c_T$ is sampled in $T$ steps by the transformer. The indices at step $t$ are sampled following the distribution

$$c_t \sim p_\theta(c_t \,|\, c_{t-1}), \quad (5)$$

where $\theta$ denotes the parameters of the transformer. At each time step, a subset of the masked indices is randomly replaced with newly sampled ones.
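A hedged sketch of this iterative, parallel sampling loop; the exact unmasking schedule is not specified in the text, so a linear schedule is assumed:

import torch

@torch.no_grad()
def sample_indices(transformer, seg_tokens, tex_tokens, seq_len, T, mask_id):
    # Start from fully masked indices c_0 and reveal tokens over T steps.
    c = torch.full((1, seq_len), mask_id, dtype=torch.long)
    for t in range(T):
        logits = transformer(c, seg_tokens, tex_tokens)      # (1, L, K)
        probs = logits.softmax(dim=-1)
        draw = torch.multinomial(probs.view(-1, probs.shape[-1]), 1).view(1, -1)
        # Randomly replace a growing fraction of still-masked positions.
        reveal = (c == mask_id) & (torch.rand(c.shape) < (t + 1) / T)
        c = torch.where(reveal, draw, c)
    return torch.where(c == mask_id, draw, c)                # fill any remainder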
3.2.2 Hierarchical VQVAE with Texture-Aware Codebook. Considering the complicated nature of clothes textures, representing textures with single-scale features is not enough. For example, as shown in Fig. 9(a), the reconstruction of a plaid shirt with multi-scale features contains more details. Inspired by this, we propose the hierarchical VQVAE with multi-scale codebooks. Specifically, given an input image $I \in \mathbb{R}^{H \times W \times 3}$, we first train an encoder $E_{top}$ to downsample $I$ to obtain its coarse-level feature $\hat{z}_{top} = E_{top}(I)$. We build a top-level codebook $\mathcal{Z}_{top}$ for $\hat{z}_{top}$ with codes $z_k \in \mathbb{R}^{1 \times 1 \times d}$. The quantization of $\hat{z}_{top}$ is the same as Eq. (3). Then the image is reconstructed from the quantized feature $z_{top}$ through the decoder $D$: $\hat{I} = D(z_{top})$. Here we view $D$ as two consecutive parts, $D = D_{bot} \circ D_{top}$. The spatial sizes of the inputs to $D_{bot}$ and $D_{top}$ are $H/8 \times W/8$ and $H/16 \times W/16$, respectively. Once the top-level codebook $\mathcal{Z}_{top}$ is trained, we move on to build the bottom-level codebook $\mathcal{Z}_{bot}$. The image features represented by the codes of $\mathcal{Z}_{top}$ already recover the coarse information; therefore, $\mathcal{Z}_{bot}$ just needs to learn information residual to $\mathcal{Z}_{top}$. We introduce a residual encoder $E_{bot}$ to extract the fine-level feature $\hat{z}_{bot}$, which is quantized into $z_{bot}$ with $\mathcal{Z}_{bot}$. The image is then reconstructed as follows: $\hat{I} = D_{bot}(D_{top}(z_{top}) + z_{bot})$.
During the training of the bottom-level codebook Z_b and the residual encoder E_r, the encoder E and the coarse decoder part G_c are fixed. The network is optimized by Eq. (4) combined with the perceptual loss and discriminator loss.
To make the codes in Z_b contain richer texture information while keeping the well-learned structure information in Z_t, the code shape is set to 2 × 2 × n_z rather than the conventional 1 × 1 × n_z. This is implemented by dividing ẑ_r into non-overlapping patches with a spatial size of 2 × 2. Once the features are divided into patches, the quantization process is the same as Eq. (3).
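The 2 × 2 code shape amounts to reshaping the feature map into non-overlapping patches before the nearest-neighbour lookup; a sketch with illustrative names follows, and the quantize function above can then be applied to the patchified map unchanged.

import numpy as np

def patchify(z, p=2):
    # Split an (h, w, c) map into an (h/p, w/p) grid of p x p patches, each
    # flattened so it can be matched against a 2x2 codebook entry.
    h, w, c = z.shape
    z = z.reshape(h // p, p, w // p, p, c).transpose(0, 2, 1, 3, 4)
    return z.reshape(h // p, w // p, p * p * c)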
Our hierarchical VQVAE shares some similarities with VQVAE2 [Razavi et al. 2019] in the hierarchical design, but differs in the following aspects: 1) codes in our fine-level codebook have a spatial size of 2 × 2, while the codes in the codebooks of VQVAE2 have no spatial extent; 2) our hierarchical design is motivated by representing textures at multiple scales, while VQVAE2 is motivated by learning more powerful priors over the latent codes; 3) VQVAE2 trains the whole network end-to-end, which leads to poor representation ability of coarse-level features, whereas our stage-wise training strategy ensures meaningful representations at all levels.
Apart from multi-level codebooks, we further design a texture-aware codebook. The motivation behind the texture-awareness of the codebook is that textures with different appearances at the original scale may appear similar at downsampled scales, leading to an ambiguity problem if we build a single coarse-level codebook for all textures. Therefore, we build separate codebooks for different texture attributes. We divide the features extracted by the encoders according to their image-level texture attributes and feed them into the corresponding codebooks to obtain the quantized features.
Sampler with Mixture-of-Experts.
To incorporate texture-aware codebooks, we adapt the diffusion-based transformer into a texture-aware one as well. A straightforward idea is to train multiple samplers for different textures. However, this naive idea has two shortcomings: 1) contextual information from the whole image is vital for the sampling of codebook indices, while training a sampler for one single texture makes such information invisible to the network; 2) training multiple samplers is not ideal if we adopt the transformer as the sampler, since multiple transformers are too heavy for modern GPU devices. Therefore, we introduce the idea of mixture-of-experts into the diffusion-based transformer. The inputs to the mixture-of-experts sampler consist of three parts: 1) the codebook indices s, 2) the tokenized human segmentation mask m, and 3) the tokenized texture mask t. The texture mask is obtained by filling the texture attribute labels of clothes into the corresponding regions of the segmentation mask. The multi-head attention MHA(·) of the transformer is computed among all of the tokens: h = MHA([E_s(s), E_m(m), E_t(t)]), where E_s, E_m and E_t are learnable embeddings. The feature h extracted by the multi-head attention is routed to different expert heads. The router routes tokens of specific textures based on the texture attribute information provided by t. Each expert head is in charge of the prediction of tokens for a single texture. The prediction of tokens is formulated as a classification task, where the class number is the size of the codebook. The final codebook indices are composed of the outputs from all expert heads.
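A sketch of the routing step: each token's shared feature is scored only by the expert head for its texture attribute. The names (moe_predict, experts) are hypothetical, not the paper's API.

import numpy as np

def moe_predict(h, texture_ids, experts):
    # h: (n, d) shared features; texture_ids: (n,) attribute id per token;
    # experts: dict mapping attribute id -> (d, K) classifier weights.
    K = next(iter(experts.values())).shape[1]
    logits = np.zeros((h.shape[0], K))
    for attr, w in experts.items():
        sel = texture_ids == attr
        logits[sel] = h[sel] @ w     # each expert scores only its own tokens
    return logits.argmax(axis=1)     # one codebook index per token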
During training, the codebook indices s are the coarse-level codebook indices obtained by the hierarchical VQVAE. When it comes to sampling, s is initialized with masked tokens and is iteratively filled with newly sampled ones until fully filled.
Feed-forward Codebook Index Prediction.
To sample an image from the hierarchical VQVAE, multiple feature maps composed from the hierarchical codebooks need to be fed into the decoders. The traditional paradigm [Razavi et al. 2019] is to sample the features at each scale separately. However, token-wise sampling at larger feature scales is time-consuming. Besides, when sampling at a large feature scale, long-term dependencies are hard to capture, and thus the generated images are of poor quality.
Motivated by this, we propose a feed-forward codebook index prediction network that harnesses the implicit relationship between codebooks at different levels learned by our hierarchical VQVAE. Specifically, the features token-wise sampled at the coarse level are fed into the codebook index prediction network to predict the fine-level codebook indices. The codebook index prediction network is defined as s_r = P(z_t), where P maps the quantized coarse-level feature z_t to the fine-level indices s_r. An encoder-decoder network is adopted for the index prediction network. It should be noted that the codebook index prediction network is texture-aware as well: shared features are extracted by the encoder and decoder but fed into different classifier heads according to the attributes. The use of the codebook index prediction network together with the hierarchical codebooks improves the quality of generated images compared to images generated with only one level of codebook. Thanks to the feed-forward index prediction network, the sampling process at larger scales under the hierarchical VQVAE design can be achieved within one single forward pass, which speeds up sampling compared to the token-wise autoregressive sampling used in [Razavi et al. 2019].
Fig. 4. User Interface with Parsing Palette. To generate the human image, users are required to upload a human pose and texts describing the clothing shapes and textures. Users can modify the generated human parsing by using the parsing palette. For example, they can edit the right pant leg from a short one to a long one. Some holes can be added to the right pant leg to make the results more customized.
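Returning to the index prediction network described above, its per-token classification can be sketched as follows; backbone and heads are hypothetical stand-ins for the trained encoder-decoder and the attribute-specific classifier heads.

import numpy as np

def predict_fine_indices(z_top, backbone, heads, texture_ids):
    # Map the sampled coarse-level feature to fine-level codebook indices
    # in a single forward pass (no token-by-token autoregression).
    feat = backbone(z_top)                         # (n_fine_tokens, d)
    logits = np.stack([feat[i] @ heads[texture_ids[i]]
                       for i in range(feat.shape[0])])
    return logits.argmax(axis=1)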
Text-driven Synthesis
Our framework is text-driven. To transform the texts requested by users into attributes, we have predefined text descriptions for each attribute. We use the pretrained Sentence-BERT model [Reimers and Gurevych 2019] to extract the sentence embeddings of our predefined texts and of the text requested by users, and then calculate their cosine similarities. According to the cosine similarities of the embeddings, we classify the texts into their corresponding attributes.
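A sketch of the matching step on precomputed embeddings; producing the embeddings themselves (e.g., with a Sentence-BERT encoder) is assumed to happen upstream, and all names are illustrative.

import numpy as np

def match_attribute(query_emb, attr_embs, attr_names):
    # Return the predefined attribute whose description embedding has the
    # highest cosine similarity with the user's text embedding.
    q = query_emb / np.linalg.norm(query_emb)
    a = attr_embs / np.linalg.norm(attr_embs, axis=1, keepdims=True)
    return attr_names[int((a @ q).argmax())]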
Interactive User Interface
We present an interactive user interface for our Text2Human as shown in Fig. 1(a). Users can upload a human pose map and then type a text describing the clothing shapes. A human parsing map will be generated accordingly. Then users provide another text describing the clothing textures, and Text2Human generates the corresponding final human image. On the right side of the interface, we provide a parsing palette, which enables users to edit the human parsing. For example, as shown in Fig. 4, users can draw some holes on jeans and make the right pant leg longer using the palette to make the generated images more customized.
DEEPFASHION-MULTIMODAL DATASET
Currently, most human generation methods are developed on the low-resolution version of the DeepFashion dataset, and the datasets lack fine-grained annotations. Therefore, a publicly available and well-annotated high-quality human image dataset is important for research on the human generation task. Motivated by this, we set up a large-scale high-quality human dataset with rich attribute annotations, named the DeepFashion-MultiModal Dataset. In a nutshell, our dataset has the following properties: 1) it contains 11,484 high-quality images at 1024 × 512 resolution; 2) for each image, we manually annotate the human parsing labels with 24 classes; 3) each image is annotated with attributes for both clothes shapes and textures; 4) we provide DensePose for each human image.
Data Source and Processing. The DeepFashion dataset is a large-scale clothes database that contains over 800,000 fashion images, ranging from in-shop images to unconstrained photos of varying quality uploaded by customers on e-commerce websites. Since images from the in-shop clothes retrieval benchmark are mostly of high quality with a pure-color background, we filter the full-body images from this benchmark, giving 11,484 full-body images in total. Similar to the data alignment method used in FFHQ [Karras et al. 2019], we align the full-body images based on their poses.
Annotations. 1) Human Pose Representations: we extract DensePose for each image using the off-the-shelf method [Güler et al. 2018]. 2) Human Parsing Annotations: human parsing serves as an effective intermediary in pose-to-photo synthesis. For each image, we provide human parsing annotations with 24 semantic labels covering body components (face, hair, skin), clothes (top, outer, skirt, dress, pants, rompers) and accessories (headwear, eyeglasses, neckwear, etc.). The human parsing is manually annotated from scratch by annotators using Photoshop. 3) Clothes Shape Annotations: we manually label the clothes shape attributes for each image. The annotations include the lengths of upper and lower clothes, the presence of fashion accessories (e.g., hat, glasses, neckwear), and the shapes of the upper clothes' necklines. The length of upper clothes falls into four classes: sleeveless, short-sleeve, medium-sleeve, and long-sleeve. The categories for lower clothes are three-point shorts, shorts, cropped pants, and trousers. The shapes of necklines are roughly divided into V-shape, square-shape, crew neck, turtleneck, and lapel. The presence of fashion accessories has two states, i.e., presence or absence. When we annotate clothes shapes for one-piece garments (e.g., dresses and rompers), the upper part and the lower part of the garment are treated separately. 4) Clothes Texture Annotations: we manually label the clothes textures along two orthogonal dimensions: clothes colors and clothes fabrics. Clothes colors consist of floral, patterned, stripes, solid color, lattice, color blocks, and hybrid colors. Clothes fabrics are divided into denim, cotton, leather, furry, knitted, tulle, and other materials.
EXPERIMENTS 5.1 Implementation Details
We split the dataset into a training set of 10,335 images and a testing set of 1,149 images. We downsample the images to 512 × 256 resolution. The texture attribute labels are combinations of the clothes color and fabric annotations. The modules in the whole pipeline are trained stage by stage. All of our models are trained on one NVIDIA Tesla V100 GPU. We adopt the Adam optimizer with a learning rate of 1 × 10^-4. For the training of Stage I (i.e., Pose-to-Parsing), we use (human pose, clothes shape label) pairs as inputs and the labeled human parsing masks as ground truths. We use the instance channel of DensePose (three-channel IUV maps in the original) as the human pose. Each shape attribute is represented as a one-hot embedding. We train the Stage I module for 50 epochs with a batch size of 8. For the training of the hierarchical VQVAE in Stage II, we first train the top-level codebook Z_t, the encoder E, and the decoder G for 110 epochs, and then train the bottom-level codebook Z_b and the residual encoder E_r for 60 epochs with the top-level parameters fixed; the batch size is set as 4. For the sampler, the segmentation tokens m are obtained by a human parsing tokenizer, which is trained by reconstructing the human parsing maps for 20 epochs with batch size 4; the texture tokens t are obtained by directly downsampling the texture instance maps to the size of the codebook index maps using nearest-neighbor interpolation. The cross-entropy loss is employed for training. The sampler is trained for 90 epochs with a batch size of 4. For the feed-forward index prediction network, we use the top-level features and bottom-level codebook indices as input and ground-truth pairs; the network is optimized using the cross-entropy loss and trained for 45 epochs with a batch size of 4.
Comparison Methods
Pix2PixHD. [Wang et al. 2018] is a conditional GAN for semantic-map-guided image synthesis. Here, we use the human parsing map and the texture map, obtained by filling texture attribute labels into the human parsing map, as inputs.
SPADE. [Park et al. 2019] is a conditional GAN for semantic map guided synthesis. It is adapted in a similar way to Pix2PixHD.
MISC. [Weng et al. 2020] synthesizes human images based on a human parsing map and some attributes about the clothes.
HumanGAN. [Sarkar et al. 2021b] is a pose-conditioned VAE-based human generation method, which generates diverse human appearances by sampling from a fixed distribution (e.g., a Gaussian distribution).
Taming Transformer. [Esser et al. 2021a] is a VQVAE-based method that also shows an application to conditional human image generation. For a fair comparison, we use human parsing as the input condition.
Evaluation Metrics
FID. For image generation tasks, Fréchet Inception Distance (FID) is a metric evaluating the similarities between generated images and training images. A lower FID indicates a higher quality.
Attribute Prediction Accuracy. We use a pretrained predictor to predict the texture attributes of generated images. The prediction accuracy is reported to measure the realism of the generated texture. We also use the pretrained predictor to calculate the ratios of complicated textures (floral, stripe, lattice) to evaluate the diversity.
User Study. A user study is performed to evaluate the quality of the generated images. Users are presented with 20 groups of results. Each group has five images generated by baselines and our method. A total of 16 users are asked to 1) rank images according to photorealism (rank 5 is the best) and 2) score texture consistency with the given three attribute labels for upper clothes, lower clothes and outer clothes. The full score is 3. If the outer clothing is not required, the score for the outer clothing is 1.
Quantitative Comparisons
We report quantitative results under two different settings: human image generation 1) from a human parsing map, and 2) from a given human pose. Table 1 shows the comparisons with state-of-the-art conditional image generation methods, where a well-annotated human parsing map and clothes texture labels are provided to synthesize the human images. As shown in Table 1, our method achieves the lowest FID, which demonstrates the fidelity and diversity of our generated human images. In addition, the best texture attribute prediction accuracy shows that our proposed Text2Human framework can accurately generate human images conditioned on the provided textures. In Table 2, we show the quantitative comparisons on pose-guided human image synthesis. Since it is non-trivial to add clothes shape and texture controls to HumanGAN and TryOnGAN, under this setting we report the ratio of complicated textures among all generated images; the highest ratio demonstrates that our method can synthesize diverse textures for clothes. The user study results are shown in Fig. 5: our method achieves the highest rank in terms of the photorealism of the generated images, and the images synthesized by our framework are more consistent with the required texture attributes. The user study results are consistent with the other quantitative results. Figure 6 shows visual comparisons of synthesized human images given human parsing maps and clothes textures; our method generates complicated textures with finer details and high-fidelity faces. Figure 7 shows visual comparisons with the state-of-the-art pose-guided TryOnGAN [Lewis et al. 2021] and HumanGAN [Sarkar et al. 2021b]. The compared baselines do not offer any control over clothes shapes and textures, while our method can explicitly control these attributes. We also compare our proposed Text2Human with another VQVAE-based method, Taming Transformer [Esser et al. 2021a]. As shown in Fig. 8, given the same human parsing map, our method generates more plausible human images.
Ablation Study
Hierarchical Design for Texture Reconstruction. As shown in Fig. 9(a), reconstructions using the hierarchical multi-scale design contain more texture details than those using a single-scale codebook.
Texture-Aware Codebook and Mixture-of-Experts Sampler. To evaluate the effectiveness of our texture-aware and mixture-of-experts design, we train a diffusion-based sampler with only one codebook for all textures. As shown in Fig. 9(b), the sampler without mixture-of-experts and the texture-aware codebook cannot generate the requested floral textures, demonstrating that our design makes the sampler better conditioned on the textual inputs. We report attribute prediction accuracies on complicated textures (i.e., floral, stripe, and denim) in Table 3. Without mixture-of-experts, the attribute prediction accuracy drops by 50.00%, 66.67%, and 3.87% on floral, stripe, and denim textures, respectively. There are more denim textures (3,449 images) than floral (325 images) and stripe (361 images) textures in the training set, so it is easier for models to capture the patterns of denim textures even without the mixture-of-experts design. As a result, we observe a smaller performance gap for denim textures compared to floral and stripe textures. This indicates that the mixture-of-experts design is especially effective for generating uncommon textures with fewer training samples.
Feed-Forward Index Prediction Network. To overcome the limitations of the hierarchical sampling paradigm of VQVAE2, we propose a feed-forward index prediction network that speeds up sampling as well as refining the textures. In terms of running time, our feed-forward network predicts the fine-level codebook indices within 0.6 s, while VQVAE2 takes 25 minutes. In terms of quality, we conduct a comparative experiment with VQVAE2. For a fair comparison, we use the 'ground-truth' coarse-level codebook indices, obtained when reconstructing a given human image, as input to predict the fine-level indices with either the autoregressive model of VQVAE2 or our feed-forward network. As shown in Fig. 9(c), our method reconstructs clearer and higher-fidelity clothes textures than VQVAE2. We report the LPIPS distance [Zhang et al. 2018] and the ArcFace distance [Deng et al. 2019] between the reconstructed images and the original images in Table 4, which further verifies the effectiveness of our proposed feed-forward index prediction network in terms of reconstruction performance. Fig. 9(d) further visualizes the refinement performed by our feed-forward network, which effectively refines the synthesized lattice patterns sampled from the coarse-level codebook.
Limitations
In this section, we discuss three common limitations of our proposed Text2Human. 1) Uncommon poses. The performance degrades for human poses that are uncommon in the DeepFashion-MultiModal dataset. Two examples of uncommon poses are shown in Fig. 10(a). The first pose has the two legs crossed, and artifacts appear in the crossed region. The second person stands facing sideways rather than the front; in this case, artifacts appear in the face region, as the model is prone to generating front-facing faces, and thus the generated image looks unnatural. Our framework is data-driven and can benefit from more diverse human datasets in future work. 2) Plaid textures are blurry, as shown in Fig. 10(b). This is attributed to the imbalanced textures in DeepFashion: only 162 out of 10,335 training images have plaid patterns on upper clothes. This is a common problem for all baselines, and our performance is still superior. In future work, the performance could be boosted by adding more data with such complicated patterns; for newly added data, the clothes attribute labels could be provided by the attribute predictor trained on our dataset, and techniques for dealing with imbalanced data could also be employed to mitigate the problem. 3) Potential errors in text embeddings. Translating text descriptions to one-hot embeddings inevitably introduces errors. For example, for the length of sleeves we define only four classes, i.e., sleeveless, short sleeves, medium sleeves, and long sleeves. If the user wants to generate a sweater with sleeves covering the elbow but not reaching the wrist, the synthesized human parsing cannot be perfectly aligned with the text input, as the predefined texts cannot handle sleeves of arbitrary length. In future work, continuous word embeddings could be employed to provide richer and more robust information.
CONCLUSIONS
In this work, we proposed the Text2Human framework for text-driven controllable human generation in two stages: pose-to-parsing and parsing-to-human. The first stage synthesizes the human parsing mask based on the required clothes shapes. In the second stage, we propose a hierarchical VQVAE with texture-aware codebooks to capture rich multi-scale representations for diverse clothes textures, and then propose a sampler with mixture-of-experts to sample the desired human images conditioned on the texts describing the textures. To speed up the sampling process of the hierarchical VQVAE and further refine the images sampled from the coarse level, a feed-forward codebook index prediction network is employed. Our proposed Text2Human is able to generate human images with high diversity and fidelity in clothes textures and shapes. We also contribute a large-scale dataset, named the DeepFashion-MultiModal dataset, for the controllable human image generation task. | 2022-06-01T07:34:17.287Z | 2022-05-31T00:00:00.000 | {
"year": 2022,
"sha1": "73d7ae1db12e9fa22313f612d9ffc5f2f4ad5d71",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "73d7ae1db12e9fa22313f612d9ffc5f2f4ad5d71",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
39178194 | pes2o/s2orc | v3-fos-license | Critical care issues in adult liver transplantation
Over the last decade, liver transplantation has become an operational reality in our part of the world. As a result, clinicians working in an intensive care unit are more likely to be exposed to these patients in the immediate postoperative period, and thus, it is important that they have a working knowledge of the common complications, when they are likely to occur, and how to deal with them. The main focus of this review is to address the variety of critical care issues in liver transplant recipients and to impress upon the need to provide favorable circumstances for the new liver to start functioning and maintain the function of other organs to aid in this process.
Introduction
particularly in the postoperative state. Typically, endotracheal intubation and mechanical ventilation are continued into the postoperative period. As standard practice, all modalities of monitoring and medication are continued into the postoperative period, the degree of which is decided by the progress of the patient. In a very stable, low-risk transplant, all anesthetic and relaxant medications can be discontinued and the patient can be fast-tracked to weaning and extubation. Otherwise, the patient continues to be infused with longer-acting anesthetics, analgesics, and muscle relaxants. In such cases, the process of weaning and the time of extubation depend on the patient's subsequent progress, which in turn is largely guided by the 'kick-starting' of the engrafted liver. Intra-abdominal drains should be inspected for the nature and rate of blood and fluid loss. Biochemical, hematological and microbiological monitoring are implemented periodically, depending on the protocol of individual units. Radiological investigations like chest X-ray, abdominal ultrasound, and hepatic vascular Doppler monitoring are done at least daily while the patient continues to require critical care services. Doppler ultrasonography has high sensitivity and specificity in evaluating the hepatic artery, portal veins, hepatic veins, inferior vena cava, and the bile duct for signs of thrombosis or stenosis in a posttransplant patient.
Though it is the degree of functioning of the transplanted liver that largely determines the function of the other organ systems, their dysfunction may also arise from an independent cause, not least of which is sepsis. Optimum renal function is of paramount importance as a determinant of good outcome. [1]
Function of the Liver Allograft
A smooth ICU course after liver transplant depends on satisfactory graft function, which can be assessed by clinical parameters, such as wakefulness, normal mentation, improvement of muscle power, stable respiratory effort, change in drain fluid from sero-sanguinous to ascites, and improvement in urine output, and by laboratory parameters, including improvement in acidemia, stable platelet counts, a stable and improving INR without the use of fresh-frozen plasma, improving serum lactate, declining transaminases, and normal flow patterns on Doppler.
Serum bilirubin concentration gradually falls to normal levels during the first week. Aspartate aminotransferase (AST) and alanine aminotransferase (ALT) peak during the first three days and then slowly level off if the graft is taking up well. Gamma glutamyl transferase and alkaline phosphatase, which are canalicular enzymes, rise to four to five times normal and then return to normal over the next few days. Synthetic functions normalize after the third day. While all of the above parameters may remain equivocal, a deteriorating clinical condition with multi-organ dysfunction may be the main clue to non-function of the transplanted liver. In such situations, liver biopsy (percutaneous or transjugular) may provide the ultimate answer.
Hyperacute graft rejection is very rare in liver transplantation and occurs due to the presence of preformed antibodies. On the other hand, acute cellular rejection is as common as 15-25%. [2,3] It can present from within a few days to a few years, so in reality the term acute is inaccurate. Here a rise in serum bilirubin is associated with a rise in aminotransferases and canalicular enzymes. Clinical symptomatology can be rather nonspecific, with loss of appetite, pruritus, and fever without tachycardia. This picture is associated with an increased hepatic artery resistive index on Doppler. The diagnosis is made on liver biopsy, and treatment is based on the severity or degree of rejection (Banff score). [4,5] The so-called 'chronic' rejection can also occur at any time and is evidenced by cholestatic features clinically and by advancing arteriopathy and degenerating bile ducts on liver histology, with terminal liver failure eventually ensuing. Chronic rejection is extremely uncommon, accounting for less than 5% [6] of all cases of graft loss, and may occur due to untreated acute rejection, noncompliance with immunosuppression medication, or immunological mechanisms that are not well understood.
Various kinds of anastomotic problems can present in the early postoperative days, with varying incidence. Hepatic artery thrombosis, with an incidence of 4-12%, [7] can present as sudden deterioration in hemodynamics, ARDS, severe coagulopathy, and sudden, marked elevation of aminotransferases, commonly accompanied by liver abscesses due to bile duct strictures. Complications involving the portal vein are seen in 1.7-6% of liver transplant recipients. [8][9][10] Persistent ascites, enteric congestion and bleeding denoting portal hypertension, and later variceal hemorrhage may point to portal vein thrombosis. Doppler ultrasound followed by a traditional angiogram or magnetic resonance angiogram (MRA) will be diagnostic. Appropriate surgical or radiological intervention can be both graft- and life-saving in these conditions. Patency of the biliary tract can be jeopardized either by direct insult to the duct system or by feeder vessel obstruction. Biliary tract complications account for up to 15% of postoperative surgical complications [11] and are more common in partial grafts than in whole liver grafts. Again, surgical and/or radiological intervention is imperative.
Mediators from the liver or intestine may lead to a reperfusion syndrome after the graft is revascularized, which may manifest as hypotension from peripheral vasodilation, bradycardia, hyperkalemia, and pulmonary hypertension.
Cardiovascular system
An important aspect of the pretransplant workup is to establish the suitability of a patient to withstand the severe cardiopulmonary stress that the surgery poses. With the upper age limit of recipients being increasingly liberalized, the possibility of coronary artery disease in the recipient must be kept in mind. Since cirrhotic patients have a modified lifestyle due to chronic and debilitating disease, as well as a hyperdynamic systemic circulation, the classical symptoms of coronary insufficiency are often not present on presentation. This obviously does not imply an absence of the underlying cardiac disorder, which manifests in the intraoperative or early postoperative period, complicating the anesthetic and early ICU management. Some patients, especially those with alcohol-related cirrhosis, may present with reversible dilated cardiomyopathy after liver transplantation. [12] The intensivist must be aware of the possibility of perioperative myocardial infarction causing left ventricular dysfunction. Preoperative dobutamine stress echo (DSE) is a good screening test for occult coronary artery disease because it assesses the adequacy of myocardial oxygen supply. In addition, assessment must be made of valvular function and the presence of intrapulmonary shunting (by contrast echo) and portopulmonary hypertension. A negative DSE predicts a good prognosis, that is, a low probability of perioperative cardiac events. [13] Unrecognized coronary artery disease is associated with a mortality of up to 50% and morbidity of 80%. [14] While high cardiac output is common in cirrhosis, a relatively low cardiac output status may be seen in cirrhotic patients with cardiomyopathy and amyloidosis. These patients will need advanced hemodynamic monitoring such as Swan-Ganz or PiCCO, and inotropic support.
Arterial hypotension is quite common in the peritransplant period. A vasodilated and hyperdynamic state is typical of liver failure, and these changes resolve slowly after liver transplantation. The magnitude of the hepatic reperfusion syndrome may influence the posttransplant cardiopulmonary status, and it may sometimes take days to weeks for these changes to revert to near normal. Failure to normalize with reduction in the level of vasoconstrictor support indicates a poor prognosis. Sepsis may further complicate the picture. Elevated venous pressures will lead to hepatic congestion, which may in turn increase the portal pressure; as a result, graft function may further suffer and lead to bacterial translocation and endotoxemia. To avoid such a sequence of events, hypotension has to be classified as cardiac or vasodilatory. Moderate filling followed by vasoconstriction should treat this. Whereas inotropic support can be provided by dobutamine and adrenaline, vasoconstriction can be achieved with low-dose noradrenaline or vasopressin. Cardiac tamponade has to be excluded in the face of low cardiac output and high filling pressures.
Systemic hypertension is common in the early postoperative period in a patient with a well-functioning graft. In this setting it generally occurs due to a lack of analgesia or sedation. Later in the course of recovery, hypertension occurs due to cyclosporine or tacrolimus. Treatment is usually initiated when the systolic blood pressure is greater than 160 mmHg or the diastolic blood pressure greater than 100 mmHg. Atrial fibrillation may occur due to perioperative fluid shifts, acid-base imbalance and electrolyte abnormalities. Treatment includes etiological management, beta-blockers and calcium antagonists, but amiodarone should be avoided if possible because of its potential hepatotoxicity.
Pulmonary system
It is estimated that 45-69% of patients with cirrhosis have some degree of hypoxemia. [15] Long-standing mechanical factors like ascites, atelectasis, and pleural effusion with restrictive lung changes add to the effects of major upper abdominal surgery, leading to hypoxemia in the posttransplant period. Good pain control, chest physiotherapy, and incentive spirometry, together with a functioning graft, will improve this scenario.
Widespread vasodilatation with vascular shunting in liver disease will have its reflection on the pulmonary system. In a more etiopathogenic sense, hepatopulmonary syndrome (HPS) and porto-pulmonary hypertension (PPH) may continue into the posttransplant period. Introduction of a pulmonary artery catheter is essential for the management of these patients. HPS is hypoxemia in the background of liver disease with intrapulmonary shunting demonstrable on air contrast echocardiogram (bubble study). This syndrome presents with dyspnea and desaturation in the erect posture, platypnea and orthodeoxia. The ventilation-perfusion mismatch resolves after a few days of successful transplantation. [16] A fixed, nonreversible shunt denotes a poor prognosis. Pulmonary hypertension is more likely in cirrhotic patients with worsening porto-pulmonary shunting. This can affect right ventricular function and may have to be corrected with epoprostenol (prostacyclin, PGI2), which is a potent pulmonary and systemic vasodilator. Severe PPH is a contraindication to transplantation. Pulmonary hypertension developing for the first time after transplant is usually due to pulmonary embolism.
Cardiogenic and noncardiogenic pulmonary edema, Acute Respiratory Distress Syndrome (ARDS), and pulmonary infection are not uncommon in this period. In the immediate postoperative period, OLT recipients may develop ARDS as a result of the surgical insult or transfusion-related acute lung injury (TRALI). [17] In case of a suspected infection, broncho-alveolar lavage has to be obtained for quantitative bacteriological and fungal cultures, followed by antibiotics or antifungal agents according to sensitivities. Early application of noninvasive ventilation rather than just supplemental oxygen can reduce the incidence of reintubation, major or fatal complications, and overall mortality. Preventing intubation should be a major aim of the management of respiratory failure in these immunocompromised patients.
Ventilatory strategies that minimize insult to the graft function should be used, as positive-pressure ventilation and high positive end-expiratory pressures may alter the splanchnic blood flow, decrease graft oxygenation, and cause congestion of the inferior vena cava and hepatic vein drainage areas. Posttransplant ventilation is usually for a day or two, depending on various pulmonary and extra-pulmonary determinants. This is a risk-benefit evaluation between the need for good graft oxygenation and the risk of infection. All possible precautions should be employed to avoid ventilator-associated pneumonia.
Renal system
Pretransplant renal dysfunction is an independent predictor of posttransplant morbidity and mortality. [1] Up to 25% of recipients suffer from renal impairment prior to transplantation, and nearly two-thirds of transplant recipients show impaired posttransplant renal function. [18] It has been found that in the posttransplant period there is a 40% decline in the glomerular filtration rate at the end of 6 weeks, after which it stabilizes. [19] The etiological reasons for renal dysfunction in the posttransplant phase include pre-transplant renal dysfunction, which may be due to acute tubular necrosis (ATN), hepato-renal syndrome or other medical problems, tubular damage due to peritransplant hypotension, graft dysfunction, ATN from postoperative sepsis, and drug-induced injury (cyclosporine, tacrolimus, amphotericin, aminoglycosides, etc.). Management of renal dysfunction depends on the etiology. Those with HRS are more likely to require renal replacement therapy (RRT), with about 10% progressing to develop ESRD. [20] Intra-operative management of hypotension, use of veno-venous bypass, [21] and avoidance of nephrotoxic drugs are important reno-protective strategies. Nephrotoxicity is a known complication of calcineurin inhibitors (CNIs) used to prevent rejection. Reducing the dosage, using a CNI-sparing anti-rejection protocol or delaying the introduction of CNIs in those with a high probability of renal dysfunction, and using calcium channel blockers for CNI-related hypertension have been found to be useful strategies for long-term renal protection. Oliguria may be the earliest warning sign of renal dysfunction. 8-10% of transplant recipients require renal replacement therapy in the immediate postoperative period. [22,23] Dialysis, preferably lactate-free continuous renal replacement therapy (CRRT), is required to stabilize these patients. So-called reno-protective agents like dopamine, calcium channel blockers or prostaglandins have not been proven to be of value. Combined liver and kidney transplant is an option reserved for those patients with pre-transplant renal dysfunction due to other concomitant medical illness or intrinsic renal disease.
Gastrointestinal system
Many patients are severely malnourished before transplantation. As most patients have a brisk return of gastrointestinal function, early enteral nutrition is the goal, except in patients with a choledochojejunostomy. Upper gastrointestinal bleeding is usually due to gastritis or stress ulceration; in general, upper gastrointestinal bleeding and good graft function do not coexist. Portal vein thrombosis may result in recurrence of varices and bleeding. Posttransplant pancreatitis is a feared complication of liver transplantation; conservative management is usually preferred to aggressive measures.
Central nervous system
A functioning allograft will generally improve neurological impairment, especially in those patients with pre-existing metabolic encephalopathy. Neurological events do occur, ranging from seizures to stroke to coma, and are often first recognized while the patient is still in the intensive care unit. Clinical series have documented neurological complications in 8.3% to 47% of all patients receiving liver transplantation. [24,25] Alteration of mental status is common, and up to one-third of liver recipients can have some degree of neurologic dysfunction in the perioperative period. [26] Rapid recovery from encephalopathy is expected in the presence of a good graft; compromised graft function may result in recurrence of encephalopathy. Fulminant hepatic failure patients undergoing liver transplant need continuous monitoring of intra-cranial pressure. A focal deficit should lead to suspicion of stroke or embolism. Acute change in mental status and the occurrence of seizures should prompt checking of drug levels, electrolytes, and blood glucose.
Psychosis is another feared complication in transplant recipients. It has a multifactorial etiology and may be due to prolonged ICU stay, the use of steroids, and other immunosuppressants. The fact that most antipsychotics are hepatotoxic is a major impediment to the treatment of this condition. Psychosis resulting in a noncompliant patient can be a major stumbling block to rapid recovery, owing to ineffective delivery of medication, physiotherapy, and mobilization.
Endocrine and metabolic problems
Hyperglycemia is common because of surgical stress, steroid administration and insulin resistance associated with liver failure and there is increasing evidence to deploy tight control regimens. [27] Hypoglycemia may be a sign of inadequate graft function or severe sepsis.
Hypothermia is common in the posttransplant patient and can precipitate metabolic acidosis and accentuate coagulopathy. Mild metabolic acidosis is common in the first few hours after transplantation. Optimum fluid and inotropic management should attenuate this acidosis. Persistent metabolic acidosis in the absence of other causes should raise suspicion of graft dysfunction; slightly delayed acidosis may indicate sepsis. Serial lactate levels are helpful in managing such situations.
Adrenal insufficiency and hypothyroidism are sometimes seen in these patients and have to be corrected after establishing the diagnosis.
Fluid and electrolytes
The recipient is generally kept in a euvolemic or slightly hypovolemic state in the posttransplant period, with minimal intravenous infusions, to optimize graft function and avoid pulmonary edema. If needed, 5% dextrose with 0.45% NS is used unless the serum sodium is less than 130 mEq/L, when 5% dextrose with 0.9% NS can be used. Gelatins are generally preferred to starches in these patients. Packed red cell and albumin transfusions are preferred when volume expansion is required.
Electrolyte imbalance is quite common in these patients. Hyponatremia should be corrected gradually with judicious use of fluids; a target serum sodium rise of <10-12 mEq/L/day is desirable. [28] Hypomagnesemia is common in cirrhotic patients and may be exacerbated in the posttransplant patient by excessive blood loss and medications (CNIs, loop diuretics, and amphotericin B). The recovering graft has a high requirement for phosphate and magnesium, and these should be replaced adequately. Ionized serum calcium levels should be monitored, as total calcium levels depend on the albumin concentration, which may fluctuate widely in the early posttransplant period. Pretransplant hypocalcemia due to malnutrition and vitamin D dysfunction may be exacerbated early in the posttransplant period by citrate chelation (with blood transfusion), gastrointestinal malabsorption, and hepatocyte injury resulting in an intracellular shift of calcium. Hypercalcemia and hypermagnesemia are rare.
Coagulopathy
Coagulopathy results from preexisting portal hypertension, inadequate clotting factor synthesis, hypersplenism, fibrinolysis, hypocalcemia and dilution. The risk of bleeding must be balanced against the risk of hepatic artery or portal vein thrombosis, so overcorrection should be avoided. Hence, monitoring of coagulation becomes mandatory after liver transplantation. Thromboelastography (TEG), a method for evaluating the viscoelastic properties of the blood clot, can be used to complement the standard coagulation parameters in these patients. TEG can be useful in differentiating between bleeding secondary to incomplete surgical hemostasis, platelet dysfunction, or anomalies in coagulation factors and can therefore help in optimizing and minimizing blood component usage by guiding selective blood component therapy. [29] TEG-guided replacement can reduce transfusions and their attendant complications. [30,31] It may also be useful in detecting a hypercoagulable state not reflected by standard coagulation parameters, which may be present after any major surgery, and may thus guide antithrombotic therapy with increased safety. [32,33] TEG has the further advantages of allowing rapid bedside monitoring and may be useful in assessing graft function. [34] Platelet dysfunction due to renal insufficiency can be managed with desmopressin. Replacement of blood products is necessitated in the presence of active bleeding or any planned intervention. Otherwise, maintenance of an INR between 1.5 and 2, a platelet count >50 × 10^9/L, and a fibrinogen level >100 mg/dL is satisfactory.
Infection
The primary cause of death after liver transplantation is infection. [35] Bacterial and fungal infections are common in the early posttransplant period, originating from intravascular lines, the lung, the urinary tract, the surgical wound, and the biliary system. [36] Prophylaxis against gram-negative bacteria is usually deployed depending on local antibiogram patterns. Prolonged surgery, multiple transfusions, malnutrition, hyperglycemia, requirement for dialysis and retransplantation are risk factors for fungal infections. [37] Viral infections are seen much later in this population. A detailed account of posttransplant infections is beyond the purview of this article [Table 1].
Immunosuppressive Therapy
Triple therapy is generally given in most centers, based on a CNI, like tacrolimus or cyclosporin, in conjunction with an antiproliferative agent (mycophenolate mofetil) and a steroid. [41] A dual regimen of steroids and a CNI has been shown to be equally efficacious as triple therapy. [42] The advantage of early use of triple therapy is that it may allow delaying the initiation of the CNI while the posttransplant changes in renal function recover. One must maintain a balance between under-immunosuppression, which may lead to graft rejection, and over-immunosuppression, which may lead to sepsis and malignancy.
Cyclosporin and tacrolimus appear similar in terms of graft and patient survival. [43] However, tacrolimus is associated with fewer episodes of rejection and less need for steroid use. Tacrolimus is equally nephrotoxic and is associated with increased rates of diabetes and neurotoxicity, but has a lower incidence of hypertension and hyperlipidemia. [44,45]
Nutritional Support
Factors such as preoperative malnutrition, stress from surgery, and immunosuppressive therapy enhance the need for nutritional support after transplantation. In the immediate postoperative period, protein catabolism is markedly increased, [46] and hence these patients should receive 1.5-2.0 grams of protein per kilogram of dry weight during this phase. [47] Energy requirements are not significantly elevated, especially in an uncomplicated, nonseptic patient; therefore, calories should be provided at approximately 120-130% of the calculated basal energy expenditure (BEE). [47] Patients should be encouraged to start oral diets as soon as tolerated.
Conclusion
The principles guiding critical care for liver transplant patients, are to provide favorable circumstances for the new liver to start functioning and maintain the function of other organs to aid in this process. | 2018-04-03T05:54:13.612Z | 2009-07-01T00:00:00.000 | {
"year": 2009,
"sha1": "fa1041fceb912b647e18f5b4c8bb0a07935bad7d",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4103/0972-5229.58535",
"oa_status": "BRONZE",
"pdf_src": "Adhoc",
"pdf_hash": "e59eb5dbcfc40cb13bd63461668f87ee353a3cad",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
139903362 | pes2o/s2orc | v3-fos-license | Investigation on the feasibility of coffee husk (endocarp) as efficient filler material for enhancing physical and mechanical properties of styrofoam based particleboard
This research focuses on introducing coffee husk as a viable and efficient filler for enhancing the physical and mechanical properties of Styrofoam-based particleboard. A heat treatment method was adopted to produce the particleboard from a mixture of coffee husk (CH) and Styrofoam (PS); Styrofoam is a material derived from polystyrene. The aims of this research are to find the appropriate weight composition of coffee husk and PS and to characterize the physical and mechanical properties of the produced particleboard. The coffee husk composition varies between 0 and 90 wt%. To manufacture the particleboard, the coffee husk was milled to 20/10 mesh, soaked in 10% NaOH for 2 hours, rinsed with clean water, dried, and weighed according to the composition. The mixture of CH and PS was placed in a mold and hot-pressed. The physical test results show that the density, water absorption, and thickness swelling meet the SNI 03-2105-2006 standard; the mechanical tests show that the MOR meets the standard for 10-50% CH addition, while the MOE does not meet the standard.
Introduction
Central Aceh is the largest Arabica coffee-producing region in Indonesia [1]. The abundant coffee-processing waste (endocarp) is a large untapped resource that has so far simply been burned. The use of coffee husk (endocarp) as a raw material for making particleboard is considered one of the best choices. Particleboard is a board product produced by compressing wood particles and simultaneously binding them with an adhesive [2].
The development of particleboard technology is now beginning to shift from composites reinforced with synthetic fibres to those reinforced with natural fibres [3]. Natural fibre-reinforced particleboard has attracted attention from various industries, such as the railway, shipbuilding, automotive, sports, and civil construction industries, and even household industries. Fibre selection for composites is strongly influenced by several parameters, such as strength and modulus of elasticity, elongation at fracture, thermal stability, fibre-matrix bonding, dynamic behaviour, density, price, processing cost, availability, and ease of recycling [4].
Styrofoam is made from raw polystyrene (PS), an inelastic and brittle plastic. Styrofoam, which was originally fragile, is made more plastic by the addition of a plasticizer, dioctyl phthalate (DOP) [5]. This study examines the extent to which coffee husk and PS can be used as new particleboard materials: a particleboard with coffee husk as the reinforcing fibre and PS as the matrix is evaluated as an engineering material in terms of its physical and mechanical properties. The purpose of this research is to make a particleboard material with a Styrofoam binder, and to obtain a suitable weight composition of husk fibre and Styrofoam that yields a particleboard meeting the JIS A 5908:2003 standard.
Experiment
This section briefly discusses the preparation of the specimens and the procedure of the research, including the provision of CH, the provision of PS, and finally the preparation and testing of the samples.
Specimen Preparation
The materials used in this study are Styrofoam (PS), toluene, coffee husk (endocarp), and clean (distilled) water, while the equipment comprises a 500 ml beaker glass, sieves, a spatula, an analytical balance, a hot plate, moulds, an electronic universal tensile machine (type SC-2DE), and aluminum foil.
Provision of Coffee Husk (CH)
The coffee husk was dried in open air for one week and then soaked in 10% NaOH, after which it was washed with water until its pH was neutral. It was then dried in an oven at 80 °C, milled (20/10 mesh), and sieved, yielding ready-to-use coffee parchment husk. Alkali treatment improves the mechanical bonding [8].
Provision of Styrofoam (PS)
The Styrofoam, obtained from packaging waste, was washed and dried, then cut to a size of 0.5 × 0.5 cm and weighed according to the composition. It was then dissolved in toluene, and 5% MEKPO catalyst was added. The mixture was stirred with a mixer until evenly distributed.
Preparation of Samples
The coffee husk was then mixed with the prepared PS and stirred until homogeneous, and the mixture was placed in the mould. The samples were hot-pressed at a temperature of 170 °C and a pressure of 25 kgf/cm². Once pressing was completed, the hot press was turned off and the samples were removed, conditioned at room temperature for 7 days, and cut to the standard dimensions.
Testing
Physical testing covered density, water absorption, and thickness swelling, while mechanical testing, comprising the modulus of rupture (MOR) and modulus of elasticity (MOE), was performed by means of an Electronic System Universal Testing Machine (type SC-2DE, MFG No. 6079) following ASTM D 3039 [9].
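The test formulas are not reproduced in the paper; for reference, a sketch of the standard definitions used in particleboard testing (the symbols are generic choices, not taken from this paper):

\rho = \frac{m}{V}, \qquad
WA = \frac{m_w - m_d}{m_d} \times 100\%, \qquad
TS = \frac{t_w - t_d}{t_d} \times 100\%,

MOR = \frac{3 P L}{2 b h^2}, \qquad
MOE = \frac{\Delta P \, L^3}{4 b h^3 \, \Delta\delta},

where m and V are the specimen mass and volume, m_d/m_w and t_d/t_w the dry and soaked mass and thickness, P the maximum load in three-point bending, L the span, b and h the specimen width and thickness, and ΔP/Δδ the slope of the load-deflection curve.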
Results and discussion
The effects of the added coffee husk weight-percentage composition on the physical and mechanical properties, namely density, water absorption, thickness swelling, modulus of rupture (MOR), and modulus of elasticity (MOE), are presented in turn below.
Density
The density test results are illustrated in Figure 1, which shows that the presence of coffee husk influences the density value: the lowest density, 0.51 g/cm³, was obtained at 90% CH, while the highest, 0.68 g/cm³, was obtained at 0% CH. The density value is governed by the volume fraction of the sample. Because PS and the coffee bean shell (endocarp) differ in density, the same mass of PS and of coffee husk occupies different volumes; hence reducing PS and adding coffee husk at the same weight fraction increases the volume of the resulting particleboard, and the greater the volume produced, the lower the density.
The density of the particleboard tends to increase with the addition of adhesive; this occurs because of the physical interaction between the adhesive and the filler through the cavities that it fills. These results indicate that the addition of coffee husk (CH) can improve the physical properties of the resulting particleboard, but if the particle content exceeds the binding capacity of the matrix, the resulting composite is damaged. All compositions met the SNI 03-2105-2006 standard [10].
Water Absorption
The water absorption of the resulting samples ranged from 0.01% to 7.6%. Figure 2 illustrates the water absorption as a function of the coffee husk weight percentage. Water absorption is low when the bonding between the filler and the matrix is suitable, owing to the hydrophobic properties of Styrofoam; the particleboard does not readily absorb water from the environment. Water absorption increased slightly over the 10-90% CH range, because the growing particle content reduces the water resistance of the PS mixture and less Styrofoam is available to coat the particles.
Water absorption is highest at 90% CH: coffee husk is a material that tends to absorb water, so increasing the percentage of coffee husk increases the water taken up by the sample during the manufacturing process. Water absorption increases with the coffee husk weight percentage because the husk is a lightweight aggregate with many pores, so the water absorption percentage is greater than that of board without fiber. SNI 03-2105-2006 requires a water absorption value below 14% for all samples; the test results indicate that all of the produced particleboards met the standard. This makes the material very suitable for interior or exterior panels, since the water uptake is very low.
Thickness Swelling
The minimum thickness swelling of 0.4% occurred at 0% CH, and the maximum of 0.04% in sample 5. Compared with SNI 03-2105-2006, which requires a maximum thickness swelling of 12%, the particleboard can be said to have fulfilled the standard for all compositions in the thickness swelling test, as illustrated in Figure 3. Figure 3 shows that the thickness swelling of the particleboard tends to increase with the weight percentage of coffee husk (endocarp). This is because coffee husk absorbs water (it is hydrophilic), so the greater the amount of water absorbed, the greater the swelling of the particleboard. Nevertheless, against the SNI 03-2105-2006 limit of 12%, the thickness swelling of all the particleboards met the standard.
Modulus of Rupture (MOR)
The maximum MOR, 99 kgf/cm², was obtained at 40% CH, and the lowest flexural strength, 90 kgf/cm², at 90% CH. Figure 4 illustrates that adding CH tends to increase the MOR, suggesting that the presence of CH can increase the MOR of the particleboard material. The maximum MOR occurs at 40% CH because the adhesive force of the Styrofoam is strong enough to create good bonding. At 60-90% CH the MOR values begin to decrease; this occurs because the PS can no longer fully bind the CH, the bond between the particles and the PS matrix is easily broken, and shear stresses emerge. The failure is dominated by loose bonding between particles and matrix, often called 'fibre pull-out'. Particleboard with 10-50% CH meets the SNI 03-2105-2006 standard, which requires a minimum MOR of 82 kgf/cm². SEM observation of the fracture surface would likely explain the change in mechanical properties. Figure 5 illustrates that the MOE tends to increase with the CH weight percentage. This is caused by the coffee husk, a natural fibre with good elasticity. The MOE increases up to 40% CH; above 50% CH, however, the MOE declines, because gaps form around the particles, so that when the composite is loaded the stress concentrates at the voids and reduces the strength of the particleboard. All samples failed to meet the SNI requirement of a minimum MOE of 20,400 kgf/cm².
Conclusion
The results obtained from the present work can be summarized as follows. The physical test results, namely density, water absorption and thickness swelling, of the coffee husk (CH) and Styrofoam (PS) particleboard met the SNI 03-2105-2006 standards. Increasing the coffee husk percentage showed that the addition of CH particles tends to increase the MOR, suggesting that the presence of CH strengthens the particleboard material. The maximum MOR occurred in the 40% CH sample, where the adhesive force of the Styrofoam is strong enough to create good bonding. At compositions of 60-90% CH the MOR values begin to decrease because the PS cannot fully bind the CH: the bond between the particles and the PS matrix breaks easily, shear stresses emerge, and failure is dominated by loose bonding of particles and matrix, often called "fibre pull-out". Particleboard with 10-50% CH composition met the SNI 03-2105-2006 standard.
In addition, increases in water content and thickness swelling decrease the MOR and MOE values. Finally, the bending strength of the CH-PS particleboard fulfils the SNI 03-2105-2006 standard at the 10-50% CH compositions, while the 60-90% CH compositions do not meet the standard.
"year": 2018,
"sha1": "e8bb8bcee7af22c5b5c592b769a0321839ff0bc4",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1757-899x/334/1/012080",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e49d80f8ca482c4ee08f41c4a54190e3044b4bcc",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Physics",
"Materials Science"
]
} |
Social Ties, Mobility, and COVID-19 spread in Japan
Why are some communities documenting higher case loads of COVID-19 infections than others? Past studies have linked the resilience of communities against crisis to their social vulnerability and to the capacity of local governments to provide public goods and services like health care. Disaster studies, which frequently examine the effect of social ties and mobility, may help illuminate the current spread of COVID-19. We model the occurrence of new cases from February 17 to May 29 using 4841 prefecture-day observations, paired with daily tallies of aggregate Facebook user movement among neighborhoods. This preliminary study of Japanese prefectures finds that communities with strong bridging and linking social ties start out more susceptible to COVID-19 spread, but their rates quickly decrease over time compared to communities with stronger intra-group ties. These results imply that residents' participation in civil society and trust in officials affect their adoption of new health behaviors like physical distancing, improving their capacity to respond and adapt to crisis. Though bridging and linking communities suffered more early on, they adapted better to new conditions, demonstrating greater resilience to the pandemic. We anticipate this study to be a starting point for broader studies of the effect of social ties and mobility on response to COVID-19 worldwide, verifying what kinds of social networks we should invest in to adapt to this pandemic.
Introduction
Why are some communities documenting higher case loads of COVID-19 infections than others? Since the global pandemic started, scholars have looked at the capacity of health care systems, the spread of residents, and the social vulnerability of communities to crisis, but social ties, a key factor in disaster studies, have remained absent from the conversation. This preliminary study of Japanese prefectures finds that communities with stronger bridging and linking social ties start out more susceptible to COVID-19 spread, but quickly see decreasing rates over time compared to communities with stronger intra-group ties. These results imply that residents' participation in civil society and trust in officials affect their adoption of new health behaviors like physical distancing, improving their capacity to respond and adapt to crisis. This study makes three major contributions to the literature on disaster and pandemic response. First, while past studies have linked social capital and social vulnerability to disaster outcomes (Cutter et al. 2006, Aldrich & Meyer 2015, Fraser et al. 2020), this study applies this to the COVID-19 pandemic. Though social networks are associated with the spread of infection, this only occurs with contact; instead, communities with strong social networks can convince family, friends, and neighbors to adopt vital new behaviors like physical distancing and masks to reduce the spread of the virus.
Second, this study leverages Facebook user mobility data as a key mediating variable to discern the relationships between social ties and COVID-19 case rates, building on a literature on the role of mobility in crises (Yabe 2020, Fraser 2020). While geographic mobility has been linked to the spread of the avian flu and SARS (Bowen & Laroe 2006, Smallman-Raynor & Cliff 2008), as well as COVID-19 (Zhang et al. 2020, Cowling et al. 2020), communities with higher social capital see lower or decreasing associations between mobility and COVID-19 spread, as residents learn how to protect themselves while moving about.
Third, this study highlights that while health care system capacity is vital to reducing the spread of pandemics (Schoenbaum et al. 2011), individual citizens' and communities' participation in public health efforts is vital to ensuring widespread adoption of new health behaviors. This builds on past findings from SARS and Ebola (Tai and Sun, 2007; Funk et al., 2009; Vinck et al. 2019), highlights that bridging and linking social ties are especially key, and applies this to Japan. Even after the Japanese government struggled to respond to the Diamond Princess outbreak and subsequent clusters, this study highlights that residents and their social networks have made a difference in Japan's response to COVID-19.
Literature Review
This study examines why some Japanese prefectures saw more new cases of COVID-19 than others.
Recent scholarship highlights that COVID-19 spreads through contact with aerosolized droplets from persons carrying the virus, facilitated by coughing and sneezing (WHO 2020). On average, it takes 5 days to develop symptoms, with a range of 1-14 days (Lauer et al. 2020). Tracking infection rates has been problematic, as some states (such as the US and Japan) were slow to begin testing cases and have failed to contact trace (Kingston 2020). Further, many people spread the virus asymptomatically (Lavezzo et al. 2020). Based on infections reported already, we can examine variation in infection rates among communities.
First, some communities might see higher rates of infection because of citizens' behaviors and mobility (Zhang et al. 2020, Cowling et al. 2020). Communities have adopted physical distancing at varying rates; many Japanese prefectures and religious organizations did not close down key institutions until early April (Kingston 2020, McLaughlin 2020). For example, the northern prefecture of Hokkaido saw a high share of cases early on, while Aichi Prefecture in the Chubu Region developed a cluster of infections gradually (see Figure 1). Communities where residents still move between neighborhoods frequently might have higher case rates (Bowen & Laroe 2006, Smallman-Raynor & Cliff 2008). Similarly, communities which have already developed cases are more likely to see spread due to exponential rates of infection.
However, some communities see especially high rates of infection and death. In the US, African American neighborhoods with high shares of low-income residents in New York City and the city of Flint, Michigan, have seen disproportionately high infection rates (Mansoor 2020). These communities have greater shares of socially vulnerable populations, such as residents who are elderly, women, single parents, unemployed, in poverty, or racial, religious, or ethnic minorities. These populations tend to see worse outcomes both from initial disasters and long term recovery processes (Cutter et al. 2006, Fussell et al. 2010), because they are financially constrained from seeking help and have faced institutionalized discrimination in the past.
Yet some vulnerable communities manage better outcomes from crisis than others due to the capacity of governments to provide better quality response (Bollyky et al. 2019, Hallerod et al. 2013, Farag et al. 2012). In the case of COVID-19, some communities had better funded governments that purchased necessary materials and had more doctors, nurses, hospitals, and clinics available to serve new waves of patients (Schoenbaum et al. 2011). Meanwhile, others struggled to provide similar levels of care for their populations.
Finally, even vulnerable communities with weak government and health care capacity could respond better to crisis if they have strong social networks to rely on. Disaster scholars find that strong social capital (social ties that residents use for physical, financial, and social support in times of crisis) is a powerful intervention that boosts community resilience (Aldrich & Meyer 2015). Scholars found this after the 1995 Kobe Earthquake, the 1995 Heat Wave in Chicago, the 2011 disaster in Japan, and after Hurricanes Katrina, Sandy, and Harvey in the US (Edgington 2010, Klinenberg 2002, Aldrich 2019, Ye and Aldrich 2019, Aldrich & Crook 2010, Collins et al. 2017, Smiley et al. 2018, Metaxa-Kakavouli et al. 2018). Social capital comes in three forms: bonding, bridging, and linking social ties. Bonding ties connect members of the same social groups, like family members, neighbors, and members of co-ethnic or co-religious groups, and help those groups survive crisis, but can lead to hoarding of resources. Bridging ties connect members of different social groups, like unions, nonprofits, and volunteer organizations, facilitating civic engagement (Putnam 2000), reducing ethnic violence (Varshney 2001), and providing mutual support across different social groups. Finally, linking ties connect residents to local, regional, and national officials, helping them access key public goods they might not otherwise receive (Aldrich 2019, Szreter & Woolcock 2004, Tsai 2007).
In the case of COVID-19, social networks boost the spread of quality information on how to keep community members from contracting the virus. Past studies of epidemics found that information from trusted personal ties is more effective in changing health behaviors than centralized information campaigns (Tai and Sun, 2007; Funk et al., 2009; Vinck et al. 2019). We hypothesize that bonding social ties, like social vulnerability, might backfire, circulating bad information while not providing new, quality information. In contrast, we hypothesize that bridging and linking social ties might facilitate the spread of quality information, since residents who trust their officials and different social groups might trust WHO guidelines on physical distancing more.
Results
This study modeled daily infection rates of prefectures from Japan's Ministry of Health, Labor, and Welfare from February 17 to May 29, compiled by JAG Japan (JAG Japan, 2020). We divided prefecture-day observations into two datasets. First, we modeled why some prefectures encountered their first case, using prefectures with 0 or 1 cases. Second, we modeled why some prefectures found additional cases after their first case, using prefectures with 1 or more cases. We tested the effect of social capital, including bonding, bridging, and linking social capital, drawing from new indices (Fraser 2020) and the cumulative movement of residents among different neighborhoods. We assessed mobility using aggregate level data from Facebook's Data for Good project. Meanwhile, each model controlled for the capacity of health care systems, government finances, and the social vulnerability of communities, alongside further demographic controls. Finally, since social processes might change as communities adapt to the new pandemic, we modeled these infection rates using three time chunks, first looking from February 17 to April 5 (to include a surge in cases at the start of April), then from February 17 to May 1, and then from February 17 to May 29. This helps us confirm how long certain trends persist. Our modeling techniques, including the proxies used in these models, are discussed in depth in the Methods section at the conclusion of this article. This analysis finds three broad trends, described in Methods Appendix Tables 1, 2, and 3. First, the models demonstrate several effects as expected. For example, cumulative inter-neighborhood movement is positively related to increasing case rates, but negatively related to prefectures getting their very first case.
Up until April 5, more inter-neighborhood movement was associated with a higher likelihood of getting a first case of COVID-19 and getting subsequent cases. But by May 29, greater inter-neighborhood movement actually became associated with a lower likelihood of first cases, as cities adapted to the crisis and adopted masks and some social distancing. This is because cumulative movement helps the virus grow, but someone has to catch it first in order for a prefecture to record its first case. Likewise, the likelihood of virus spread increases as time passes, while prefectures that spend more on health and keep a better balanced budget tend to be much less likely to see their first case or subsequent cases of COVID-19.
Second, we find that towns with stronger bridging social capital are more likely to receive their first case and subsequent cases. Meanwhile, towns with strong linking social capital are less likely to receive their first case, but more likely to receive subsequent cases. This is because communities with strong civic participation and frequent meetings of social groups are excellent places for clusters of infection to take root, but communities with stronger linking social capital might be more likely to trust recommendations from local government and health authorities, helping limit the spread of those clusters.
Third, using interaction effects over time, we found that towns with more overall social capital, especially including bridging social capital, tended to see more cases outright, but fewer new cases over time. These effects were strongest from February to May, and were more muted after considering cases from May 1 to May 29. This suggests that social capital had a significant role in shaping resident responses to the virus in its first several months in particular.
As added evidence, we found a similar bivariate trend between the social capital predictors and case rates, shown in the top panel of Figure 2. We analyzed how the daily correlation of prefectural social capital and vulnerability with case rates changes over time. This shows that in aggregate, towns with stronger bridging and linking social capital tend to see lower new case rates of COVID-19, while those with greater bonding social capital and vulnerability tend to see higher case rates.
Then, in the second row of Figure 2, we compared our set of prefectures with 0 or 1 cases and our set with 1 or more cases of COVID-19, using loess regression curves to track the change in these correlations from February to late May. The right panel shows a much clearer (and stronger) relationship between case rates and bridging and linking social capital after a prefecture gets its first case than before. However, those positive correlations with case rates dropped from February until late April, while bonding social capital and vulnerability developed increasingly less negative relationships with case rates over time. A finding of great concern is that bridging and linking capital's declining relationship with case rates only lasted until mid-April, after which it sharply increased again. Since late April, case rates have shown an increasingly positive relationship with all forms of social capital and social vulnerability. This implies that a tipping point was reached in late April, when some well networked communities began interacting again, creating new clusters of infections.
When we examine the cumulative case rates in Figure 3, we see that over time, bridging and linking social capital develop strong negative relationships with cumulative total case rates, even though they initially had positive relationships with early case rates. This seems to suggest that communities with stronger bridging and linking ties are adapting over time, shifting from key sources of spread to key mitigators of spread. Even though some communities with strong social capital saw new cases in May, the cumulative pattern suggests that investing in bridging and linking social ties is a powerful grassroots strategy for adaptation to pandemics.
Finally, to triangulate the effect of social ties on COVID-19 rates, we examined how social capital shapes COVID-19 spread through mobility patterns. Figure 4 depicts the changing association over time between case rates and the total cumulative inter-neighborhood mobility of Facebook users, with a line of best fit depicting the overall trend over time. Each panel displays the relationship between mobility and case rates separately for towns with social capital above vs. below the median. If social capital had no effect, then we would expect the plots with high and low social capital to be nearly identical. However, in several cases, the trend lines are completely reversed, and in others, the correlation differs greatly. This analysis reveals four findings.
First, prefectures with low social capital, including bonding, bridging, and linking social capital, saw much higher positive associations between mobility and infection rates than did prefectures with high social capital. Second, prefectures with high and low social capital both saw the relationship between mobility and case rates decline, indicating residents' adaptation and adoption of new behaviors like physical distancing, staying home, and wearing masks. Third, communities with strong bonding social capital saw a starkly decreasing correlation between mobility and new cases over time, while those with weak bonding social capital saw a starkly increasing association over time. Fourth, over time, communities with stronger bridging social ties saw decreasing relationships between mobility and infection, much more so than those with weak bridging social ties. These findings highlight that communities with stronger social networks are adopting new and different mobility patterns and, in so doing, reducing their risk of contracting and spreading COVID-19. Figure 4's results are purely descriptive, and do not adjust for social vulnerability, health care capacity, or other factors, but present strong, clear trends.
Discussion
In summary, we see preliminary evidence that social ties and mobility patterns are shaping the spread of COVID-19 among Japanese municipalities. While strong bridging and linking ties trend directly with more cases of infection, these same social resources correlate with declining rates of infection over time and fewer cumulative infections. This suggests that communities are leveraging their bridging and linking social ties to adapt to the crisis and to help spread quality information about better health practices. This in turn may be reducing the infection rates of these highly socially active communities.
One challenge of inferring the effect of social ties on infection is that the Japanese government has been widely criticized for testing too few residents over the last three months. Critics might argue that we only observed that bridging and linking ties were related to infection rates because prefectures with stronger social ties tend to have better quality governance, and those prefectures used those networks to identify and test more people. However, this explanation is not appropriate. If communities with stronger bridging and linking ties test more, we would expect the effect of bridging and linking social ties over time to produce a false positive. However, we find the opposite. This lends credence to our hypothesis that, despite limited testing in Japan, bridging and linking social ties are critical to adapting to COVID-19 spread over time.
A second challenge of examining social ties is that communities have changed as COVID-19 unfolded. As residents spent more time with family, commuted less, and companies reduced normally gruelling hours, the monthly suicide rate in Japan dropped precipitously by 20% in April (Blair 2020). As a result, overall models of social behavior during this period eclipse key changing trends over time. However, this study compensated for changing social conditions by modeling three nested time spans, from February 17 to April 5, to May 1, and to May 29. The effect of social networks on reducing COVID-19 spread over time was most pronounced from February 17 to April 5, indicating that communities and local governments should seek to activate these networks as early as possible.
Further, one advantage of this research is that it controls for the tendency of residents to move among different neighborhoods, using aggregate tallies of Facebook user movement. Facebook users are a relatively accurate means of measuring movement, as similar shares of users ages 20 to 59, both male and female, use Facebook. See the Methods appendix for further information on Facebook demographics. We found that communities with greater cumulative mobility were much more likely to get their first infection, but this effect shrunk greatly thereafter, likely as these communities began to adopt new health practices.
In summary, this study finds that social ties are a vital tool for adapting to and reducing COVID-19 spread, drawing on the case of Japanese prefectures from February 17 to May 29. In Japan, more vulnerable communities have seen fewer infections so far, because high earning urban metropolises have been major vectors for spread instead. Though communities with strong bridging and linking social ties may have facilitated early spread, over time they are adapting better and reducing the rate of new infections, much more than communities with strong bonding social capital. By investing in residents' ties with their broader community and with their elected officials, we can improve our capacity to respond not just to disasters but also to pandemics.
Methods
This preliminary study examines why some Japanese prefectures saw more new cases of COVID-19 than others. Using data from the Ministry of Health, Labor, and Welfare, this aggregate-level study analyses how social ties shape the spread of COVID-19, while adjusting for the effects of human mobility, cumulative infections, social vulnerability to crisis, health care capacity, governance capacity, and demographics. Because the process that leads a prefecture to develop its first case of infection is likely quite different from the processes that lead a prefecture to develop its third, fourth, and four-hundredth cases, we modeled these processes separately. Drawing from 4841 prefecture-day observations, we used a logit model to explain why in 3683 cases, prefectures saw either zero or one reported new case of COVID-19. Then, we used a gamma model to explain why in 1632 cases, prefectures saw increasingly positive case rates. In the logit models, the outcome is the count of new cases (0 or 1), controlling for population as a predictor, while in the gamma models, we use the population-controlled case rate. The data stretch from February 17 to May 29. We repeated our analyses across three time frames to account for changing social processes as the pandemic progresses. First, we analyzed cases from February 17 to April 5, to account for the high spread of cases at the start of April. Second, we analyzed cases from February 17 to May 1, to account for the decreasing rate of cases in late April. Third, we analyzed cases from February 17 to May 29, to account for the stagnation of case rates in May. This three-pronged approach helps contextualize when key social processes affect COVID-19 spread the most.
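To make the two-part modelling strategy concrete, here is a minimal sketch in Python with statsmodels. The column names and formula terms are assumptions for illustration (the paper does not publish its code), and the log link for the gamma model is one common choice rather than the authors' documented specification; the robust standard errors mirror the approach described later in the Methods.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical prefecture-day panel; column names are assumptions.
df = pd.read_csv("prefecture_days.csv")

# Logit model on the 0/1-case subset: did a prefecture record a new case?
logit = smf.logit(
    "new_cases ~ bonding + bridging + linking + mobility_lag5 + cases_lag5"
    " + health_spending + budget_balance + days + population",
    data=df[df["new_cases"] <= 1],
).fit(cov_type="HC1")  # heteroskedasticity-robust standard errors

# Gamma GLM on the 1+-case subset: population-adjusted case rate.
gamma = smf.glm(
    "case_rate ~ bonding + bridging + linking + mobility_lag5 + cases_lag5"
    " + health_spending + budget_balance + days",
    data=df[df["new_cases"] >= 1],
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit(cov_type="HC1")

print(logit.summary())
print(gamma.summary())
```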
Key Variables
This analysis employs several key predictors. To model social capital and social vulnerability, we use new indices modeled after the indices by Kyne & Aldrich (2019) and Cutter et al. (2003), aggregated to the prefectural level. As an initial analysis, we model just social capital, while subsequent analyses replace the social capital index with subindices for bonding, bridging, and linking social capital. All indices range from 0 to 1, where 1 denotes the most social capital or vulnerability, and 0 signifies the least. Next, we control for time using the number of days passed in the dataset. Next, to represent mobility, we calculated the total cumulative number of Facebook users who moved between neighborhoods within or between prefectures since the start date of the analysis, lagged by 5 days. This is to account for the fact that it takes on average 5 days for COVID-19 spread to result in symptoms and new cases. This data was provided by Facebook's Data for Good project.
This study never had any contact with individual level Facebook user data, but instead uses aggregated data provided by Facebook. Any user data was collected by Facebook Data for Good according to Facebook's Data Use Policy, then aggregated to the neighborhood level to maintain individuals' privacy, so that researchers never had contact with individual level data. This aggregate level data is regularly provided to humanitarian NGOs and research teams with data sharing agreements, and does not involve any sensitive data or user data. This analysis is an observational study of aggregate-level data, so no Institutional Review Board protocol was necessary.
Finally, we also add as a predictor the cumulative case rate of a prefecture five days prior. We might expect that prefectures with greater population movement or more cases five days prior might see more new cases in the present. Facebook users are a decent approximation of movement in the population; similar shares of users across age groups and genders use Facebook. According to a survey by Japan's Ministry of Internal Affairs and Communications (MIAC) in 2019, 32.8% of Japanese reported using Facebook, including 17% of teens, 47% of users ages 20-29, 49% of users ages 30-39, 37% of users ages 40-49, 29% of users ages 50-59, and 14% of users ages 60-69. Rates of use among men and women were identical (33%). This gives us a highly detailed glimpse of movement within or between prefectures, helping us assess the effect of this movement on spread rates.
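The lagged cumulative predictors described above are straightforward to construct on a sorted prefecture-day panel. A sketch with pandas follows; the file and column names are hypothetical.

```python
import pandas as pd

df = pd.read_csv("prefecture_days.csv", parse_dates=["date"])
df = df.sort_values(["prefecture", "date"])

# Cumulative inter-neighborhood Facebook movement since the start date,
# lagged by 5 days to match the average incubation period.
df["mobility_cum"] = df.groupby("prefecture")["fb_moves"].cumsum()
df["mobility_lag5"] = df.groupby("prefecture")["mobility_cum"].shift(5)

# Cumulative case rate five days prior.
df["cases_lag5"] = df.groupby("prefecture")["case_rate_cum"].shift(5)
```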
Controls
This analysis also applied several control variables. First, the logit models use population as a control variable (while population is already incorporated in the gamma model outcome variable, which is the rate of cases per 1000 persons). Next, to represent overall health conditions, we use the life expectancy of a prefecture. If we were modeling death rates, it would be more important to control for additional health conditions, but since we are just modeling spread rates, we do not. Instead, we control for health care capacity, because communities with better health care capacity might identify, quarantine, and treat affected patients faster. This Health Care Capacity Index is a simple index of my own design that combines the proportions of doctors, nurses, hospitals, and clinics per 1000 residents, transforms each into a z-score, and then averages them together to make a single index. It is better to combine these as a single predictor than to apply them as separate predictors, because communities with more nurses but fewer doctors, for example, could still contain the spread of COVID-19 just as well as communities with more doctors but fewer nurses. Next, we also control for total municipal and prefectural expenditures on health, as well as the health of municipality budgets, represented by the ratio of revenues to expenditures.
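The index construction described above (z-score each per-1000-resident rate, then average) can be sketched in a few lines; the file and column names are assumptions.

```python
import pandas as pd

def zscore(s: pd.Series) -> pd.Series:
    """Standardize a series to mean 0 and standard deviation 1."""
    return (s - s.mean()) / s.std()

pref = pd.read_csv("prefecture_capacity.csv")
cols = ["doctors_per_1000", "nurses_per_1000",
        "hospitals_per_1000", "clinics_per_1000"]

# Average the four z-scores into a single Health Care Capacity Index.
pref["health_capacity_index"] = pd.concat(
    [zscore(pref[c]) for c in cols], axis=1
).mean(axis=1)
```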
Finally, we control for several demographic traits. First, demographic vulnerability has already been represented in this model by the social vulnerability index, which captures overall trends in age, gender, income, education, employment, and health based vulnerability. Even so, we controlled for specific key traits of vulnerability where possible, including the median age, unemployment rate, employment in the secondary sector (manufacturing), and population.
Several demographic traits could not be added to the model, because they were highly correlated with other demographic traits. Since there are only 47 prefectures, each demographic trait only has 47 unique values over all prefecture-day observations; adding these variables led to high multicollinearity in the models, and so they were removed to ensure accurately estimated effects with no multicollinearity problems. For example, in our dataset, the median age, income per capita, and the college educated population are all correlated with a Pearson's r of +/-0.65 or above. Including any of these causes the variance inflation factor to spike upwards of 7, which leads us to question the veracity of such a model. Similarly, health conditions like heart disease and hypertension, and even traits like population and gender, are all strongly correlated with the median age of a prefecture. Appendix Figure 1 shows a correlation matrix for all variables in both models, shaded blue to signify strong positive correlations and red for strong negative correlations. These reflect a substantial degree of correlation among the control variables. To avoid these multicollinearity issues, we employed a social vulnerability index, which already incorporates age, income, occupation, and gender-based vulnerability (Fraser 2020). Similarly, we consider conditions like heart disease and hypertension already controlled for because they are so collinear with age. Since COVID-19 case rates are only available at the prefectural level, this is the highest level of detail available, but future studies may improve on this if municipal level case rates become available.
Models
For each time frame, we generated eight models in total, including four logit models and four gamma models, resulting in 24 models overall (see Methods Appendix Tables 1-3). For each set of logit and gamma models, the first model used social capital as a predictor, while the second model included bonding, bridging, and linking social capital instead. The third and fourth models applied interaction effects with time, testing whether the effects of the social capital indices and social vulnerability indices change over time. Each model in the period between February 17 and April 5 explained at least 67% of the variation in new cases of COVID-19. As the pandemic progressed, this decreased to 40% by May 1 and 25% by May 29, as prefectures developed new social processes and behaviors in response to the pandemic. Based on chi-squared intercept tests, all models fit better than an intercept-only model, with a statistically significant fit (p < 0.001).
Multicollinearity problems were abated by keeping the average variance inflation factor below 3.5 (except for the interaction models, which are naturally collinear). The bridging and linking social capital indices generated the highest VIF scores, at 5.5. This is because bridging and linking social capital are related concepts. While this score is higher than the gold standard of 2.5, it is nowhere near 10, a problematic level of multicollinearity, meaning that it does not affect the validity of the model.
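A VIF check of this kind can be reproduced with statsmodels; the predictor names below are assumptions carried over from the earlier sketches.

```python
import statsmodels.api as sm
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

predictors = ["bonding", "bridging", "linking", "mobility_lag5",
              "health_capacity_index", "vulnerability"]
X = sm.add_constant(df[predictors].dropna())

# One VIF per column; flag anything pushing past the paper's ~3.5 average.
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
    index=X.columns,
)
print(vif.round(2))
```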
Finally, these models showed considerable heteroskedasticity, as shown by the Breusch-Pagan statistics and p-values in Methods Appendix Tables 1, 2, and 3. This is because the same prefectures tended to have similar outcomes across days, and so we used robust standard errors to calculate more conservative estimates of statistical significance. Each model reports standardized coefficients, which describe the log-odds of new cases given an increase of one standard deviation in a predictor. As a result, the sizes of effects can be compared across different variables to show which variable has the largest estimated effect on the outcome.
Declarations
Competing Interests: The authors declare no competing interests.
Ergonomics Concerns (OHS) to Improve Productivity in Brick Industry
A typical trait of our civilization is that, since primitive times, human beings have always tried to reduce their work load with the help of machines or manual devices. It is observed that machines also cannot continue working for very long without human intervention; if they do, there is a risk of failure without maintenance. Work is done by the man-machine system. Productivity is directly associated with the 5 M's, namely man, machine, material, money and management. In this project, emphasis has been given to improving productivity in small scale industries, i.e. brick industries. Ergonomics is taken as a subsystem under which fatigue and safety are analyzed. Wrong postures give rise to different kinds of muscular, ophthalmological and orthopedic problems and cause a reduction in productivity. With the provision of correct body posture and the use of different productivity models, the application of ergonomics reduces worker absenteeism, and productivity can be improved in small scale industries. In this way we can improve the GDP of our country by paying attention to ergonomics (OHS).
Introduction
The word 'ergonomics' is derived from two Greek words: ergos, meaning work, and nomos, meaning law. The term was introduced in 1949 by a group of British scientists who were concerned with the efficient use of complex military equipment during the Second World War. It may be called human engineering; this term is used in countries like the USA, while in European countries it is called ergonomics. Ergonomics is a multidisciplinary area of study, and various disciplines influence human factors. The following are areas in which human engineering, or ergonomics, is studied: i. Anthropometry and biomechanics.
ii. Control of physical work environment.
iii. Design of man-machine systems.
iv. Accidents, fatigue and safety.
v. Workplace design.
Occupational health and safety (OHS) primarily intends to maintain the working ability of the labor force as well as to identify, assess and prevent hazards within the working environment. Ergonomics, on the other hand, combines all of these issues to improve workers' efficiency and well being and maintain industrial production through the design of an improved workplace. OHS and ergonomic applications therefore work together to satisfy the needs of changing local people's attitudes, local work methods and/or traditional ways of doing things. These issues are important for many developing countries (DCs), because the effects of poor health, the lack of safety facilities, and the non-ergonomic conditions existing in various workplaces are a hindrance to the national economy and social progress. Since implementing the full concept of OHS and ergonomics application is a priority, understanding the meaning of the terms related to OHS and ergonomics applications is a major source of workplace improvement. It is therefore important for both foreign and local investors to investigate workplaces, to know how a tool, machinery and production process would match the physical and mental capabilities of the local population. OHS and ergonomics issues have a connection with various components of the regional economy, since the provision of health, hygiene and safety in the workplace contributes to economic growth processes in a number of ways. OHS and ergonomic issues are also related to the production economy and social progress, and thus are important components of gross domestic product (GDP), considered as inputs into the national economy through industrial development. It is therefore important to know what socio-economic and industrial strategies would be most fruitful if OHS and ergonomic applications are to be implemented in practice. This is because the GDP lost to work-related injuries and occupational diseases stemming from a poor work environment is not counted in DCs. In many DCs, physical work is practiced as manual materials handling (MMH) and strenuous tasks, which usually take a toll as injuries, accidents and production loss, because numerous risky and hazardous jobs and strenuous tasks have yet to be semi-automated or transferred to other forms of controlled environment. Hundreds of thousands of workers living in DCs will be at risk if no future attempts are made successfully to improve health and hygiene. For unhygienic workplaces, these risks are real, and there are long term trends in occupational exposure in DCs. The rapid rate of change in working life today also requires several types of flexibility, with consideration of occupational health, industrial hygiene and safety requirements in various workplaces [1][2][3].
New industrial entrepreneurs also need to have the capacity to provide a rational basis for new thinking and solutions for the sustainable development of workplace safety and health. The efficiency of the work force should increase, as workers can devote their attention to their jobs rather than to the tools needed to pursue their job tasks. This devotion can be fostered through formal and informal methods that assist individuals in acquiring knowledge of OHS, as well as an ergonomic way of doing things. It is also believed that the sustainable development of the workplace will be achieved, with long term benefits, if health, safety and ergonomic issues are given priority in the local context (Figure 1).
Literature Review
Managers usually associate ergonomics with occupational health and safety and related legislation, not with business performance. In many companies, these decision makers seem not to be positively motivated to apply ergonomics for reasons of improving health and safety. In order to strengthen the position of ergonomics and ergonomists in the business and management world, we discuss company strategies and business goals to which ergonomics could contribute. Conceptual models are presented and examples are given to illustrate: (1) the present situation, in which ergonomics is not part of the regular planning and control cycles in organizations that ensure business performance; and (2) the desired situation, in which ergonomics is an integrated part of strategy formulation and implementation. In order to realize the desired situation, considerable changes must take place within the ergonomics research, education and practice community by moving from a health ergonomics paradigm to a business ergonomics paradigm, without losing the health and safety goals. This shows that ergonomics does not get proper attention for implementation in large as well as small scale industries, but it is observed that OHS and ergonomics implementation is a must for all kinds of industries for employees' health and safety; ultimately, satisfied and healthy workers increase productivity [4].
Employee participation and commitment from top management are important factors in effective occupational health and safety (OHS) management. However, between top management and employees there are middle managers, who are given little room in the top management/employee dichotomy. In this context, using the shipping industry as a case study, this paper investigates the impact of senior officer leadership on ratings' participation in OHS management. Results suggest that while ratings' precarious employment, coupled with a steep hierarchy of command on board ships, makes upward communication in formal environments practically impossible, it is possible for senior officers to elicit effective participation from ratings by making good use of informal settings, working alongside ratings and engaging with them in social activities. Such leadership efforts bring temporary relief from the constraints on participation and create spaces for ratings to contribute to the management of shipboard OHS. In industries of this kind, OHS and ergonomics implementation should be handled by a separate OHS and ergonomics department [5,6].
Problem Identification
1. A study of the brick industry revealed that the productivity of the industry has decreased compared with the last three years.
2. It was found that the working conditions and the workers have remained the same.
3. Absenteeism has increased and interest in work has declined over the last three years, with increased complaints of body-related problems.
Methodology
Productivity measurement by the PO-P approach consists of the following steps:
PO-P: The model
Under the PO-P approach, the productivity index for the system is built up in stages from the productivity indices of the sub-systems constituting the system. The productivity index of a sub-system is, in turn, built up from the productivity indices of the Key Performance Areas (KPAs) of that sub-system. The value of (PI)u from equation 4 can be substituted into equation 1 to provide PI, the Productivity Index of a system S. A typical industrial organization engaged in the manufacturing and marketing of engineering goods can be considered to operate as a system with the following sub-systems (a weighted-aggregation sketch of this hierarchy is given after the list):
• Production sub-system
• Marketing sub-system
• Financial sub-system
• Technology sub-system
• HRD sub-system
• Materials sub-system
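Equations 1 and 4 were lost in extraction, but the hierarchy they describe amounts to aggregating weighted KPA indices into sub-system indices and then into a single system index. The sketch below assumes a simple linear weighting; the sub-systems shown, their weights and the KPA values are all hypothetical.

```python
# PO-P-style hierarchical aggregation (sketch). Each KPA contributes a
# (weight, productivity index) pair; weights within a level sum to 1.
subsystems = {
    "Production": {"weight": 0.35, "kpas": {"output_rate": (0.6, 0.72),
                                            "scrap_control": (0.4, 0.65)}},
    "HRD":        {"weight": 0.25, "kpas": {"absenteeism": (0.5, 0.58),
                                            "safety":      (0.5, 0.70)}},
    "Technology": {"weight": 0.40, "kpas": {"kiln_efficiency": (1.0, 0.69)}},
}

def subsystem_index(kpas: dict) -> float:
    """Weighted mean of the KPA productivity indices of one sub-system."""
    return sum(w * pi for w, pi in kpas.values())

# System productivity index: weighted mean over sub-system indices.
PI = sum(s["weight"] * subsystem_index(s["kpas"]) for s in subsystems.values())
print(f"System productivity index PI = {PI:.4f}")
```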
Diseases and Disorders
Leukemia
The cause of most human leukemia is unknown. It is a kind of cancer in which abnormal white blood cells multiply in an uncontrolled manner. They interfere with the production of normal white blood cells. Leukemia affects the production of red blood cells.
Bursitis
Bursitis is a disorder that causes pain in the body's joints. It most commonly affects the shoulder and hip joints. It is caused by an inflammation of the bursa, small fluid-filled bags that act as lubricating surfaces for muscles to move over bones. This inflammation usually results from over activity of an arm or leg.
Osteoporosis
Osteoporosis is a disease resulting in the loss of bone tissue. In osteoporosis, the cancellous bone loses calcium, becomes thinner, and may disappear altogether.
Sprains
A sprain is an injury to a ligament or to the tissue that covers a joint. Most sprains result from a sudden wrench that stretches or tears the tissues of the ligaments. A sprain is usually extremely painful. The injured part often swells and turns black and blue.
Fractures
A fracture is a broken bone.
Scurvy
Scurvy is a disease caused by lack of ascorbic acid (vitamin C) in the diet. If a person does not get enough vitamin C, any wound he or she might have heals poorly. The person also bruises easily. The mouth and gums become sore. The gums bleed, and the teeth may become loose. Patients lose their appetite, their joints become sore, and they become restless.
Tendinitis
Tendinitis is an inflammation of a tendon, causing stiffness or pain in the muscles or joints; it is sometimes grouped loosely under the lay term rheumatism.
Arthritis
There are more than 100 diseases of the joints referred to as arthritis. Victims of arthritis suffer pain, stiffness, and swelling in their joints. Osteoarthritis, also called degenerative joint disease, occurs when a joint wears out. Many elderly people have osteoarthritis, and the disease may also occur if a joint has been injured many times. The joints most frequently affected are those of the hands, hips, knees, lower back, and neck.
Scoliosis
Scoliosis is a side-to-side curve of the spine. This condition most often becomes apparent during childhood or adolescence.
Talipes equinovarus
Talipes equinovarus, often called clubfoot is an abnormal condition of the foot, usually present at birth. The foot is bent downward and inward so that the person can walk only on the toes and on the outside of the foot. Sometimes the foot is bent upward and outward so that the person can use only the heel for walking.
Kyphosis
Kyphosis, also called hunchback is a forward bending of the spine. Kyphosis is caused by any condition that deforms the bones of the upper part of the spine so that the person is bent forward. Diseases that cause kyphosis include tuberculosis, syphilis, and rheumatoid arthritis.
Poliomyelitis
Poliomyelitis, also called polio, is a serious infection caused by a virus. A polio virus may attack the nerve cells of the brain and spinal cord, causing paralysis. Some patients show only mild symptoms, such as fever, headache, sore throat, and vomiting. Symptoms may disappear after about a day.
Female diseases
Female workers engaged in clay brick production in a subtropical climate are exposed to dust and a heating microclimate. Scientific and technological progress has a great positive influence on the improvement of work conditions in this industry. Respiratory diseases take first place in the structure of morbidity with transitory disablement. Clinical studies have established a correlation between work conditions and gynecologic morbidity, the occurrence of complicated pregnancy and delivery, and the impaired physical development and health status of newborns and children.
Pneumoconiosis
The term 'pneumoconiosis' refers to a group of lung diseases caused by the inhalation and retention of dust in the lung. This causes a range of granulomatous and fibrotic changes. In modern times, the most commonly occurring variant, apart from asbestosis, is coal workers' pneumoconiosis arising from the inhalation of coal dust. There is generally a long time lag between exposure and onset of the disease -10 years in the case of coal dust and 15-60 years with asbestos -hence, most new cases or deaths from pneumoconiosis reflect the working conditions of the past (Table 1).
Suggestions
• Kiln design should be improved for fuel efficiency; the big problem is the non-uniformity of temperature.
• The workers should work on a shift basis.
• Building several kilns near each other and transferring the heat that is lost during the firing of one kiln to the next one.
• Need to develop a sun-drying process, so it eliminates the kiln.
• "We cannot do anything about the climate, but we can do a lot about the environment." • Brick Control Act.
• It should be mandatory to install a chimney at least 50 feet high, with a filter, in every kiln for the emission of smoke.
• The owners are prohibited from using all kinds of fire wood in kilns.
• Only infertile and fallow land should be allowed for setting up brick fields; however, in almost all districts many brick fields have been set up on arable land.
Results and Discussion
From a survey of the kilns operating in various parts of Chhattisgarh state, I have a firm conviction that it is entirely possible to make these kilns 'clean' and to provide a congenial and hygienic atmosphere for the workers. The way is there; the will is needed. All the improvements suggested above need to be followed strictly. This will not only result in an overall saving of fuel but will also make the workplace clean. Minor investments in these efforts will be more than compensated by the healthy and hygienic environment that has so far eluded the brick kilns.
It is time the brick industry shed its old-fashioned image as an introverted, closed system and came forward as a progressive, modern-looking organization. The industry should voluntarily cooperate with statutory bodies, in the state and national interest, to safeguard nature and the environment in their primitive form so that they remain available to posterity. I hope the state government will rise to the occasion and take up the challenge of providing a 'clean' environment for the workers in their kilns. I am convinced they will succeed in this earnest effort.
The anticipated productivity index increased from 0.6791 to 0.7832: productivity can thus be increased by more than 10% through the application of OHS and ergonomics in small scale industries. The application of OHS and ergonomics is generally ignored by the management of small scale industries.
Conclusion
The existing level of productivity is measured using a productivity measurement technique termed Performance Objectives-Productivity (PO-P). The PO-P approach lays stress on identifying areas with low productivity so as to bring about improvements. Its basic philosophy lies in the belief that the input resources of an organization cannot be viewed in isolation. A methodology has been presented to help in the identification of key performance areas, performance objectives and their weightage. A questionnaire is used to include performance objectives of a qualitative nature. For productivity measurement, three sub-systems, namely 'Technology', 'Workplace' and 'Market', were identified where improvement in productivity was needed. For productivity enhancement in the areas of 'Technology', 'Workplace' and 'Market factors', the study examined each of these sub-systems and came up with suggestions that enhance their productivity significantly.
"year": 2015,
"sha1": "f84ba308ecbc233b7021b09140bb8d27f19e07b3",
"oa_license": "CCBY",
"oa_url": "https://www.omicsonline.org/open-access/ergonomics-concerns-ohs-to-improve-productivity-in-brick-industry-2168-9873-1000156.pdf",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "97b8577a9b8be2d2e6e282b562937daa461cc492",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
Multivariate calibration of energy-dispersive X-ray diffraction data for predicting the composition of pharmaceutical tablets in packaging
Highlights
• Non-destructive screening of unpackaged and packaged tablet formulations.
• High accuracy of quantification using multivariate calibration methods.
• Data pre-treatment prior to modelling and prediction not required.
• Improved prediction accuracy compared to angular dispersive X-ray diffraction and comparable to Raman spectroscopy.

Abstract
A system using energy-dispersive X-ray diffraction (EDXRD) has been developed and tested using multivariate calibration for the quantitative analysis of tablet-form mixtures of common pharmaceutical ingredients. A principal advantage of EDXRD over the more traditional and common angular dispersive X-ray diffraction technique (ADXRD) is the potential of EDXRD to analyse tablets within their packaging, due to the higher energy X-rays used. In the experiment, a series of caffeine, paracetamol and microcrystalline cellulose mixtures were prepared and pressed into tablets. EDXRD profiles were recorded for each sample and a principal component analysis (PCA) was carried out in both unpackaged and packaged scenarios. In both cases the first two principal components explained >98% of the between-sample variance. The PCA projected the sample profiles into two-dimensional principal component space in close accordance with their ternary mixture design, demonstrating the discriminating potential of the EDXRD system. A partial least squares regression (PLSR) model was built with the samples and was validated using leave-one-out cross-validation. Low prediction errors of between 2% and 4% for both unpackaged and packaged tablets were obtained for all three chemical compounds. The prediction capability through packaging demonstrates a truly non-destructive method for quantifying tablet composition and shows good potential for EDXRD to be applied in the field of counterfeit medicine screening and pharmaceutical quality control.
Introduction
EDXRD is a powerful tool for characterizing the chemical composition of crystalline materials. Materials which fall into this category include powder-form illicit drugs and plastic explosives, both of which have been studied using EDXRD [1][2][3]. The advantages of this technique include the use of high-energy photons which are capable of penetrating the surface of materials and characterising the layers beneath. This is a highly attractive capability in security screening contexts and for the determination of medicine quality. In both types of context a low level of disruption is desirable, and EDXRD provides a non-destructive and non-invasive means of testing. A recent study has demonstrated that chemically-relevant features from EDXRD data can be observed for aspirin tablets when they are within blister packaging [4]. A quantitative analysis of unpackaged pharmaceutical formulations using EDXRD and multivariate calibration methods was carried out in a previous study and demonstrated good capability to predict concentrations of the constituent compounds [5]. In the present study we demonstrate this capability again and extend it to modelling and quantifying the chemical composition of the same samples through blister and card packaging simultaneously.
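The multivariate calibration workflow referred to above (PLSR validated by leave-one-out cross-validation, as described in the abstract) can be sketched with scikit-learn as below. The array shapes, file names and the choice of three latent variables are illustrative assumptions, not the authors' actual settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# X: EDXRD profiles (n_samples x n_energy_channels);
# Y: weight fractions of caffeine, paracetamol and cellulose.
X = np.load("edxrd_profiles.npy")
Y = np.load("weight_fractions.npy")

pls = PLSRegression(n_components=3)
Y_pred = cross_val_predict(pls, X, Y, cv=LeaveOneOut())

# Root mean square error of cross-validation, per constituent.
rmsecv = np.sqrt(((Y - Y_pred) ** 2).mean(axis=0))
print("RMSECV (% w/w):", np.round(rmsecv * 100, 2))
```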
There are many examples of Raman spectroscopy being combined with multivariate calibration for the quantitative analysis of pharmaceutical mixtures [6][7][8][9][10]. One such study [10] looked at ternary mixtures of paracetamol, starch and sucrose, covering a range of concentrations. The Raman spectra were acquired through blister packaging to construct a partial least squares (PLS) regression model, resulting in a root mean square error of cross validation (RMSECV) of 1.4%, and the authors observed the potential application to counterfeit medicines detection. Fraser et al. carried out a semi-quantitative analysis of active pharmaceutical ingredients (APIs) in intact tablets of erectile dysfunction medicines, including counterfeit versions; the PLS calibration model in this case was constructed from Raman spectra of tablet 'cores', i.e. with the coating removed. Using selected bands of the spectra and pre-processing, a RMSECV of 7.38% was achieved in the best case [9].

Fig. 1. X-rays incident on two parallel planes at an angle θ, with photon wavelength λ and planar spacing d. The thicker ray line represents the additional path length traversed in a reflection from the lower plane which, for a coherent scattering event, is an integer multiple of the X-ray wavelength, satisfying Bragg's law.
To the authors' knowledge, there have been no previous studies into the non-destructive quantitative analysis of pharmaceutical mixtures through packaging using EDXRD and chemometrics, which form the basis of this study. The following section introduces some principles of X-ray diffraction (XRD) to explain the physical phenomena giving rise to the features observed in the experimental data.
X-ray diffraction
Crystalline materials -such as polycrystalline powders of the chemicals used in pharmaceutical formulations -comprise molecules which are arranged in an ordered three-dimensional structure repeated throughout the crystal. Sets of parallel molecular planes arise from this long range order [18]. These sets of planes, in particular the separation between them, are unique to the material and thus present an opportunity for material identification. It is through XRD that we can achieve this characterisation of materials.
X-rays of the same energy scatter coherently from molecules in adjacent planes when constructive interference occurs. The conditions to be satisfied for the detection of a coherent scattering event are shown in Fig. 1 and are defined by Bragg's law:

nλ = 2d sin θ,     (1)

where λ is the wavelength of the incident X-ray, d is the interplanar spacing and θ is the angle subtended by the X-ray source, the sample and the detector. There are two ways in which Bragg's law can be interpreted for use in XRD experiments. Firstly, in ADXRD, monochromatic X-rays are used (i.e. fixed λ) and diffraction peaks are detected for a range of angles. ADXRD provides high-resolution XRD profiles, but the relatively low energy X-rays used do not pass through thick samples. Secondly, in energy-dispersive XRD (EDXRD), the sample is irradiated with polychromatic X-rays and an energy-resolving detector collects a diffraction spectrum at a fixed angle. The quality of diffraction patterns is limited by the energy resolution of the detector and, more importantly, by the loss of angular resolution due to collimators allowing a range of angles, i.e. deviations from the nominal angle, of X-rays through. This is a necessary compromise in order to collect an adequate number of counts in an acceptable time scale for screening applications, but results in significantly broader, overlapping peaks compared to ADXRD profiles.
It is common to convert the energies of an EDXRD spectrum to units of momentum transfer x, which incorporates the dependence of diffraction on scattering angle and X-ray energy. This is useful for making comparisons between EDXRD systems, and between ADXRD and EDXRD. Bragg's law (1) is rearranged to:

x = sin θ / λ = (E / hc) sin θ,     (2)

using the relationship E = hc/λ between the energy of a photon and its wavelength, where h is Planck's constant and c is the speed of light in vacuo.
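As a check on this conversion, the short sketch below (in Python; the function name and constant are ours, not from the paper) maps photon energies to momentum transfer for a fixed scattering angle. With the 6.3° angle reported later in the paper, it reproduces the stated 12.2-40.3 keV to 0.538-1.78 nm⁻¹ correspondence.

```python
import numpy as np

HC_KEV_NM = 1.2398  # hc in keV*nm, so that E = HC_KEV_NM / lambda

def momentum_transfer(energy_kev, two_theta_deg):
    """Convert photon energy (keV) to momentum transfer x (nm^-1) via
    x = (E / hc) * sin(theta), where 2*theta is the fixed scattering angle."""
    theta = np.radians(two_theta_deg / 2.0)
    return (np.asarray(energy_kev) / HC_KEV_NM) * np.sin(theta)

# The paper's 'short' and 'long' window limits at a 6.3 degree scattering angle:
print(momentum_transfer([12.2, 40.3, 4.02, 56.6], 6.3))
# -> approximately [0.541, 1.786, 0.178, 2.509] nm^-1
```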
The advantages of EDXRD are that the lack of moving parts in the instrumentation can make data collection more rapid and the higher energies of X-rays used can penetrate bulkier samples. EDXRD can therefore be used for non-destructive analysis of materials.
It is assumed that statistically, all possible orientations -and hence planes -of the crystals are represented equally in a powder. However, some crystals have shapes that create a tendency for them to align in a certain way, in which case some crystal planes are over-represented in the resulting diffraction pattern -this is the preferred orientation effect. This effect is often stated as being a limiting factor of the use of ADXRD in the aforementioned studies.
Another relevant physical phenomenon in X-ray screening is that of attenuation. Materials attenuate the beam and reduce the flux of photons which are transmitted through the material. Attenuation is greater for lower-energy photons as well as for thicker materials. Moreover, the molecular composition of the material itself has its own energy-dependent attenuation profile, μ(E). The fraction of X-ray photons at an energy E which will be transmitted through a material of thickness x which has an attenuation coefficient of μ(E) is defined by the Beer-Lambert law:

I = exp(−μ(E) x),     (3)

where I is the relative intensity of the X-ray beam at energy E following the interaction with the material. The effect of attenuation by packaged tablets at lower energies is therefore appreciable.
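The transmission calculation is simple enough to sketch directly; the μ values below are placeholders chosen for illustration, not NIST attenuation data for these compounds.

```python
import numpy as np

def transmitted_fraction(mu_per_cm, thickness_cm):
    """Beer-Lambert law: relative transmitted intensity I = exp(-mu * x)
    for a linear attenuation coefficient mu (cm^-1) and thickness x (cm)."""
    return np.exp(-np.asarray(mu_per_cm) * thickness_cm)

# Hypothetical mu(E) values (cm^-1): attenuation rises steeply at low energy,
# so low-energy channels of a packaged tablet's spectrum lose most counts.
for mu in (10.0, 2.0, 0.5):
    print(mu, transmitted_fraction(mu, 0.25))   # ~0.25 cm tablet thickness
```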
Sample preparation
Paracetamol (Acetaminophen BioXtra, ≥99.0%; Sigma Aldrich), caffeine (ReagentPlus; Sigma Aldrich), and microcrystalline cellulose (average particle size 50 μm; Acros Organics) were the ingredients of the ternary mixtures and were all used as received. The former two are common APIs, and the latter is a common excipient used as a diluent. Microcrystalline cellulose was an appropriate excipient as its XRD profile is representative of other common excipients in terms of peak broadness and momentum transfer range.
The calibration mixture design is shown by the triangles in Fig. 2. Such a design simplex is common to mixture analysis experiments and has been used to enable the system to be compared to other studies [11,13].
Each sample mixture was ground with an agate mortar and pestle for three minutes to mix thoroughly and to reduce particle sizes, with the aim of decreasing preferred orientation effects [13]. More vigorous mixing techniques such as milling were avoided to prevent potential polymorph phase transitions [11,13,15,19]. Sieving was also avoided as the paracetamol powder exhibited a build-up of electrostatic charge when ground, making it difficult to handle; this additional step also risked introducing artefacts resulting from selecting particles of a certain size [16,20]. 400 mg of each mixture was transferred to a 13 mm-diameter die and pressed into tablets using an automated Speca Press. A 1-ton load (equivalent to 67.0 MPa) was applied, with a dwell time of two seconds before the pressure was released. The compacted tablets were then extracted carefully from the die.
EDXRD system
A schematic of the system used in the EDXRD experiment is shown in Fig. 3. The X-ray source was a water-cooled Comet MXR-160 X-ray tube with tungsten target. The source was operated at a peak voltage of 60 kV and 2 mA current.
A high-purity germanium (HPGe) detector (model GLP-36360/13-P, EG&G Ortec) was positioned at approximately 4 cm from the scatter collimator to detect scattering events. The detector was held at a temperature of 77 K and was coupled to a multichannel analyser to produce an energy-space histogram for each sample. Each detected photon was assigned to one of 512 channels.
The nominal scattering angle (2θ) was 6.3°, determined by comparing the peak positions for a caffeine sample spectrum to those from a caffeine reference spectrum. The beam spot size was calculated to be 1.6 mm in diameter at the sample.
Sample scanning
All samples were scanned in triplicate, with a different part or side of the tablet scanned each time. For the "packaged" sample scans, pieces of card, foil and plastic taken from Sainsbury's paracetamol packaging were cut to size and fashioned into a sample holder such that the tablets would have foil and card on one side, and plastic and card on the other.
After initial scans, a preferred orientation effect was evident in all samples containing paracetamol, with some peaks showing large variations in intensity between scans. Rotating the sample was not an option in this experimental setup, nor would it be suitable for the ultimate goal of scanning whole tablets in packaging. Others have overcome this issue by either shaking samples, or by scanning at different points to smooth out discrepancies [21,22]. In this instance, a set of translation stages was added and used to scan all paracetamol-containing tablets in 30 positions for 10 s per step. All samples not containing paracetamol were scanned continuously for 300 s.
Multivariate analysis
The two multivariate analysis methods used in this study were principal component analysis (PCA) and partial least squares regression (PLSR). Both methods are powerful tools in chemical mixture analysis, for X-ray diffraction data and spectroscopic data in general. Such data have high correlation between variables within their profiles, specifically between energy channels in this study. As such, the high dimensional spectral data have a low-dimensional latent structure to describe the variation in the chemistry of the mixture set. This lower dimensionality, often referred to as chemical rank, corresponds closely to the number of compounds comprising the mixtures. These methods transform the data into a few mutually-orthogonal latent variables which between them account for almost all of the variance found in the data set, and allow us to discard uninformative data or noise.
PCA is used in this study as an exploratory tool, enabling us to identify possible groupings or patterns of samples from transforming the EDXRD data alone, and to then compare these groupings to known reference chemistry. By doing this we get an insight into the power of the experimental system to discriminate chemical information of interest.
PLSR is used to build calibration models which relate the known chemical information, such as sample concentrations, to the instrumental response measured by our system. If we define the concentrations of a compound as a response vector y and the corresponding multivariate EDXRD profiles as a matrix of explanatory variables X, then the calibration building stage aims to form a linear regression model between the two:

y = Xβ + ε,

where β is the vector of regression coefficients, which we estimate as β̂ using the method of partial least squares [23,24], and ε is the error not explained by the model. PLS models the covariance between the reference chemistry and the EDXRD profiles of the calibration data.
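The paper builds its models in The Unscrambler; purely as an illustration of the model form above, a minimal Python sketch with synthetic data (the array shapes mirror the 22 samples and 512 channels of this study, but the values are random) could look like:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(22, 512))      # synthetic stand-in for EDXRD profiles
y = rng.uniform(0, 100, size=22)    # nominal concentrations of one compound (%)

pls = PLSRegression(n_components=3) # number of latent variables; tuned by validation
pls.fit(X, y)
beta_hat = pls.coef_                # the estimated regression coefficients
y_fitted = pls.predict(X).ravel()
```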
Typically, a small number of principal latent variables are selected and the regression coefficients between the reference chemistry and the instrumental response are calculated. The model is then used with EDXRD profiles of test samples to predict their concentrations in order to validate the regression model. The method of model validation used in this study is leave-one-sample-out cross-validation. An RMSECV is calculated for each model as the root mean square of the residuals between predicted and reference concentrations over all samples. It is through validation that the appropriateness of data pre-treatment methods, the range of energies, and the number of latent variables in the PLS model can be assessed and an optimal modelling approach determined.
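A sketch of this validation loop, again on synthetic data; `rmsecv` is our name, and the rule of preferring fewer latent variables at comparable error mirrors the selection criterion described later in the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

def rmsecv(X, y, n_components):
    """Root mean square error of leave-one-sample-out cross-validation."""
    model = PLSRegression(n_components=n_components)
    y_pred = cross_val_predict(model, X, y, cv=LeaveOneOut()).ravel()
    return float(np.sqrt(np.mean((y_pred - y) ** 2)))

rng = np.random.default_rng(1)
X = rng.normal(size=(22, 512))
y = rng.uniform(0, 100, size=22)

# Scan candidate numbers of latent variables; with comparable errors,
# the smaller number is preferred to avoid overfitting.
print({k: round(rmsecv(X, y, k), 2) for k in range(1, 8)})
```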
Data pre-treatment
In addition to modelling the raw EDXRD data, the data have been transformed by a range of pre-treatments using common chemometric techniques to determine whether they improve the performance of the PLSR model compared to using only the raw profiles. The transformations used are standard normal variate (SNV) and multiplicative scatter correction (MSC), which have been shown in NIR spectroscopy to correct for multiplicative scattering and other physical effects such as particle size, and in some instances lead to model improvement. A first-order derivative pre-treatment has also been carried out on the EDXRD data, which may correct for potential baseline drift across profiles [25]. In order to determine the effect of X-ray attenuation at low energies on prediction accuracy, both a 'short' and a 'long' spectral region were studied. The former region corresponds to 12.2-40.3 keV, or 0.538-1.78 nm⁻¹; the latter region encompassed the full range of the X-ray tube spectrum, i.e. 4.02-56.6 keV, or 0.178-2.50 nm⁻¹.
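SNV and a first-difference derivative are short transformations to express in code; the sketch below is our own implementation for illustration, not the software actually used in the study.

```python
import numpy as np

def snv(profiles):
    """Standard normal variate: centre each profile (row) to zero mean and
    scale it to unit standard deviation, correcting multiplicative effects."""
    p = np.asarray(profiles, dtype=float)
    return (p - p.mean(axis=1, keepdims=True)) / p.std(axis=1, keepdims=True)

def first_derivative(profiles):
    """First-order difference along the energy axis; in practice a smoothed
    (e.g. Savitzky-Golay) derivative is often preferred to limit noise."""
    return np.diff(np.asarray(profiles, dtype=float), axis=1)
```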
Software
The plotting of EDXRD profiles was carried out using Matlab (R2017b, v.9.3, Mathworks). The Unscrambler (v.9.5, CAMO, Norway) software was used for the multivariate analysis methods PCA and PLSR, pre-treatments and model checking diagnostics.
Unpackaged samples
Each sample was scanned in triplicate and the three profiles per sample were averaged. Plots (A) and (C) in Fig. 4 show triplicate measurements of two samples containing paracetamol, samples 2 and 11. Very prominent preferred orientation effects can be observed between 0.5 and 1.0 nm⁻¹ in momentum transfer space between the triplicate measurements. However, when the translation stages were implemented, the preferred orientation effects of paracetamol were greatly reduced, as shown in plots (B) and (D) of Fig. 4. The 22 averaged profiles for the unpackaged mixtures are plotted in Fig. 5(A). An important feature of EDXRD data, demonstrated in the figure, is the heavy overlapping of peaks. No particular energy range can therefore describe the variation of a particular chemical across the sample set, which motivates the use of multivariate analysis.
The Beer-Lambert equation given in (3) enables the attenuation effect of compounds of specified thicknesses and densities on particular X-ray energies to be calculated. The attenuation effect on X-rays of caffeine, paracetamol and microcrystalline cellulose at the thicknesses of ∼0.25 cm used in this experiment only becomes appreciable below ∼11 keV (0.5 nm⁻¹ in this setup), calculated using tables of mass attenuation coefficients (μ/ρ) obtained from the NIST XCOM database [26]. The majority of the energy window of the EDXRD profiles is therefore not affected by self-attenuation effects.
A principal component analysis was carried out on mean-centred, 'long', averaged profiles of the 22 samples in the training set. Fig. 5(B) shows the scores plot for the first two PCs, which account for 93% and 5.6% of the explained variance across the data, respectively. The two-dimensional principal component projection of the EDXRD data clearly separates the sample profiles in close accordance with their coordinates in the mixture design of Fig. 2. This is an encouraging result given that the aim is to regress the EDXRD data against the reference concentrations to build a regression model.
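Scores and explained-variance figures of this kind can be obtained in a few lines; this is a generic sketch on synthetic data, not the analysis pipeline of the paper (which used The Unscrambler).

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
X = rng.normal(size=(22, 512))          # stand-in for the averaged profiles

pca = PCA(n_components=3)               # mean-centring is applied internally
scores = pca.fit_transform(X)           # sample coordinates in PC space
print(pca.explained_variance_ratio_)    # cf. the reported 93% and 5.6%
```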
Packaged samples
The 22 averaged profiles for the packaged mixtures are plotted in Fig. 5(C). The aluminium and polyvinyl chloride (PVC) which comprise blister packaging [27], plus the card material of the outer packaging, contribute to X-ray attenuation and scattering in the profiles. Fig. 6 demonstrates the effect of packaging for samples 4, 9 and 18 compared to the unpackaged samples. Despite the attenuation effect of the packaging material, the diffraction peak features are still observed. There is a higher intensity for packaged tablets around a momentum transfer of 1.2 nm⁻¹ than for unpackaged samples, which can be attributed to scattering from the packaging material itself.
In the principal component analysis of the packaged samples, the between-sample explained variances for the first three PCs were 90%, 8.1% and 0.74% respectively. The first two PCs have been plotted in Fig. 5(D). Despite the packaging effects causing observable distortion to the diffraction profiles, the scores from the PCA still separate the objects in a manner comparable to the ternary design of the mixture compositions shown in Fig. 2.
Unpackaged
PLSR was used for model calibration using the mean spectra for the 22 samples in the training set as explanatory variables, X, and the nominal concentrations of the mixture components as the response variables y. The pre-treatments described in Section 2.5 were applied to the data and a separate model built for each.
The models were validated using leave-one-out cross-validation, and for each model an RMSECV was calculated. The optimum number of PLS factors was then chosen by selecting the number of latent variables which minimized the error statistic, with lower numbers of latent variables favoured when errors were comparable, to avoid overfitting the data. The results are provided in Table 1.
Diagnostic tools presented in Beebe et al. [28] were used to check for sample leverage and sample outliers for each model. No samples had high leverage and residuals simultaneously for any of the models and therefore no samples were identified as outliers. The application of MSC and SNV as pre-processing methods did not improve the model, as can be seen from the similar or higher RMSECVs. For MSC, the diagnostic plot of spectral value versus mean spectral value did not show any strong tendencies for different slopes or offsets between samples, which indicated that it was probably not needed.
In the absence of any improvement to the model performance from using pre-treatments, the use of raw EDXRD data in a PLSR model was deemed suitable. The predicted concentrations from cross-validation for the raw, 'long' spectra are plotted against reference concentrations in Fig. 7.
The paracetamol concentration predictions were more spread out, as expected from the spectral variations caused by preferred orientation. Accordingly, the RMSECV values show that the largest errors were for paracetamol; the caffeine and cellulose values exhibited smaller errors. It is important to note that the nominal concentration values are likely to differ from the actual concentrations due to errors introduced when measuring the powders and due to possible inhomogeneity in the mixture; in a similar experiment by Moore et al., the cumulative error in preparing such tablets was estimated to be 2-3% [16].
These results are therefore encouraging -in fact, the RMSECV values were better than those quoted in the literature for quantitation of ternary mixtures by ADXRD, and on a par with results from Raman spectroscopy [15].
Packaged
The same modelling used for unpackaged tablets was applied to packaged tablets.
RMSECVs for the PLSR model based on raw, 'long' spectra were in general higher than for the unpackaged case. There were 20% and 24% increases in the RMSECV values for paracetamol and microcrystalline cellulose respectively, but only a 2% increase for caffeine. It is possible that some of the potential change in prediction error for caffeine has been mitigated by the greater effect of attenuation on its first peak, with lower peak amplitudes having smaller errors according to Poisson counting statistics [29].
The results from the modelling of packaged tablets are comparable to those of unpackaged tablets and within the range of uncertainty of the reference chemistry according to Moore et al. [16]. As with the unpackaged models, modelling of the raw EDXRD data was found to be as good or better than when using data pre-treatments and no sample outliers were observed from model diagnostic tests for any model. The predicted concentrations from cross-validation for the raw, 'long' spectra were plotted against reference concentrations in Fig. 8.
Conclusions
A preliminary study using energy-dispersive X-ray diffraction to predict the concentrations of common ingredients in pharmaceutical tablets has been carried out. An EDXRD system has been developed, and multivariate calibration has been used to model the EDXRD profiles when the tablets are unpackaged and packaged. This study shows that similar accuracy is obtained for the mixtures in both the unpackaged and packaged scenarios.
One disadvantage of calibration methods for composition resolution is that they cannot model or account for all possible adulterants or interferents which may be encountered during screening. Future work will therefore explore soft modelling methods, which are more robust to interferents, to characterise the chemistry of samples.
A further limitation of the study is that only three pharmaceutical compounds have been analysed and the formulations were prepared in the laboratory for the purposes of the study. Further work is required to determine the capability of predicting the concentrations of industrially manufactured medicines using the method described. Furthermore, the current trend to include amorphous compounds in pharmaceutical formulations motivates the evaluation of the method using less crystalline compounds. However, a recent study by Moss et al. [30] has demonstrated the potential of EDXRD to discriminate amorphous materials in breast tissue, such as tumour and fatty tissue.
Limitations notwithstanding, the analysis here shows that high accuracy can be achieved using EDXRD to characterise pharmaceutical formulations and the technology could be put to effective use for truly non-destructive counterfeit medicine screening and pharmaceutical quality control.
Contributions
CC set up the EDXRD system, carried out the experiment, performed the analysis and contributed to manuscript drafting. PK drafted the manuscript and consulted on the statistical modelling methodology and analysis. DO'F assisted to set up and calibrate the EDXRD system. RS is the principal investigator for the project. All authors read and approved the final manuscript. | 2018-04-03T04:41:03.571Z | 2018-03-20T00:00:00.000 | {
"year": 2018,
"sha1": "8a3e4e00ecd5c6d5681e16b069e66217ad829081",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jpba.2017.12.036",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "fe992156291ba5197cc44148163d82b3ddfd12d5",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
225595685 | pes2o/s2orc | v3-fos-license | A path analysis of five-factor personality traits, self-efficacy, academic locus of control and academic achievement among online students
This study tests the direct and indirect effects of online learners' personality traits, self-efficacy, and academic locus of control on grade point average (GPA) via path analysis. The participants of the study are 525 online learners from two different universities in Turkey. The results of the study reveal a good fit of the proposed model. Relationships in the research model show that self-efficacy has a positive direct effect and external academic locus of control has a negative direct effect on academic achievement. Conscientiousness, openness, and neuroticism have an indirect effect on GPA, mediated by self-efficacy and external academic locus of control. Results are interpreted with the intent of providing an enhanced understanding of the importance of personality in students' success in online learning.
Introduction
Online learning offers many opportunities such as easy access to education, low cost, flexible learning opportunities, standard learning content, and easy access to experts (Kaya, 2002). Regarding the effectiveness of online learning, the subject is recognized as a comprehensive one (Tzeng, Chiang, & Li, 2007). In this context, Shachar and Neumann (2003) draw attention to students' academic achievement, student satisfaction, students' attitudes, and the evaluation of teaching. These indicators play an important role in the success and quality of online learning practices. Assessment tools for measuring academic achievement, such as course grades, test scores, or grade point averages, provide a more standard and more objective assessment mechanism, whose meaning is commonly accepted across educational institutions.
The most important elements of online learning are the students, and they can affect education processes. Several student-related factors affect online learning processes: dropout (Choi & Kim, 2018), increased responsibility for learning, poor technical competence, and communication and feedback problems are some of these (Bartolic-Zlomislic & Bates, 1999; Leontyeva, 2018). Besides, in online learning environments students' learning responsibilities are greater than in face-to-face learning environments (Kiryakova, 2009), and the limitations of the communication and interaction opportunities in online learning environments emphasize the individual characteristics of students (Moore, 1993). Therefore, considering the individual characteristics, interests, needs, and attitudes of the students is important for an effective online learning process (Bhagat, Wu, & Chang, 2019; Dabbagh, 2007).
The individual characteristics of the students are explained by many variables such as motivation, personality, self-efficacy, self-esteem, self-regulation, anxiety, stress, locus of control, and self-perception. Some individual characteristics frequently change over time. For example, motivation is an important individual characteristic, and many factors can influence it and change it frequently (Viets, Walker, & Miller, 2002). Some individual characteristics develop and are shaped from childhood to adulthood with rare change. Keller (2010) refers to this situation:

…human motivation includes the concept of traits in the form of psychological constructs that define specific personality in regard to various aspects of personality such as the need to achieve, perceptions of control, curiosity, attributions for success or failure, and anxiety. Also, a distinction is made between trait versus state conditions in regard to virtually all motivational concepts. A trait is presumed to refer to a stable predisposition to behave in a certain way. In contrast, states refer to the disposition to demonstrate a given motive or personality characteristic at a given point in time or in specific types of situations. (p. 15)

Personality traits are a feature that individuals form throughout their lives and that usually shows only minor change (Harris, Brett, Johnson, & Deary, 2016). Similarly, self-efficacy (Multon, Brown, & Lent, 1991; Usher & Pajares, 2008) and locus of control (Findley & Cooper, 1983) are also individual characteristics that are shaped throughout the lives of individuals. Knowing how the individual characteristics of students affect academic achievement in online learning environments can help researchers and managers in the field to understand online learners. Therefore, which individual characteristics should be addressed is an important question. It may be helpful if the individual characteristics to be addressed are stable and more consistent. It is possible to make better inferences about the students who will be included in online learning by examining the existing individual and academic characteristics of the students.
The evaluation of some individual characteristics and academic achievement within a theoretical model will contribute to the design and creation of effective online learning environments. Besides, such a model will contribute to the evaluation of the relationships between individual characteristics and academic achievement, revealing the direct and indirect effects between the variables while controlling for the effects of other variables that may affect success in online learning. When we examine the models that predict student achievement in online learning, it is seen that these studies are not based on distance education theories and that the academic achievement variable has not been adequately studied (Aydoğdu & Tanrıkulu, 2013; Freeze, Alshare, Lane, & Wen, 2010; Hassanzadeh, Kanaani, & Elahi, 2012; Lin, 2007; Lin & Chen, 2012; Selim, 2007).
This study aims to develop a model that predicts the academic achievement of students in online learning from characteristics such as personality traits, self-efficacy, and locus of control (see Fig. 1). This research seeks to address the following questions:

1. What are the direct and indirect effects of the personality traits of students (conscientiousness, extraversion, openness, neuroticism, and agreeableness) on academic achievement?
2. What are the effects of the students' self-efficacy and locus of control (external and internal) on academic achievement?
Personality traits
"Personality refers to an individual's characteristic patterns of thought, emotion, and behavior, together with the psychological mechanisms-hidden or not-behind those patterns" (Funder, 2013, p. 5). There are many theories about personality. Each theory tries to explain the personality from a specific point of view. Burger (2006) mainly divides personality theories into six approaches: Biological approach, Psychoanalytic approach, Humanistic approach, Behavioral/Social Learning approach, Cognitive approach, and Trait approach. One of the important models of the trait approach is the five-factor model of personality. The five-factor model of personality is widely preferred in educational research (Göncz, 2017). Therefore, this study is based on this model. The Five-factor model summarizes personality traits in five broad factors (Gosling & Mebta, 2013). Factors are as follows: Conscientiousness, Extraversion, Openness, Agreeableness, and Neuroticism (Table 1).
Table 1. Five-factor model of personality

- Conscientiousness: Highly conscientious individuals are orderly and planned; they act dutifully and take responsibility, and hence they are patient and committed to success.*
- Extraversion: Highly extraverted individuals are assertive and sociable, often self-confident and talkative, and they love to be in the community and in social environments.*
- Openness: Individuals with highly open personalities are generally cultured, generate new and interesting ideas, are creative, and have a high level of imagination and intellectual curiosity. They are open-minded and think independently and untraditionally.*
- Agreeableness: Agreeable individuals are compassionate, respectful, tolerant, confident, and trustworthy; they prefer cooperation, adapt easily, and are helpful.*
- Neuroticism: Emotionally unstable individuals are prone to experiencing negative emotions, such as anxiety, depression, irritation, and vulnerability, in everyday situations, and their mood often changes.*

Note. * Barrick & Mount (1991); Burger (2006); John & Srivastava (1999); Sudak & Zehir (2013)

The personality traits and GPA have been studied frequently in face-to-face learning environments, and many studies have found a positive relationship between personality traits and GPA (Poropat, 2009; Salgado & Táuriz, 2014; Trapmann, Hell, Hirn, & Schuler, 2007; Vedel, 2014). On the other hand, Bahçekapılı and Karaman (2015) stated that studies examining the relationship between the five-factor personality traits and GPA in online learning are limited (see Table 2). The relationship between conscientiousness and GPA has sometimes been positive, with somewhat insignificant scores. As to openness, the relationship was generally positive, but one study found a negative relation. Considering extraversion, the situation is complex: in one study the relationship between extraversion and GPA was positive, in two others it was negative, and in another the correlation was insignificant. For neuroticism and GPA, a negative relationship was observed in two studies and an insignificant relationship in one study. Finally, for agreeableness, two positive and one insignificant relationship have been reported with GPA. Although all of these findings are based on several studies, they do not provide a clear understanding of the relationship between personality traits and GPA.
Table 2. Relationship between five-factor personality traits and academic achievement in online learning

- Orvis, Brusso, Wasserman, & Fisher (2010): no relationship between academic achievement and conscientiousness or openness; positive relationship with extraversion (+).
- Kim & Schniederjans (2004): negative relationship between neuroticism and academic achievement (-); positive relationships between academic achievement and the other personality traits (+).
- Schniederjans & Kim (2005): no relationship between academic achievement and extraversion; negative relationship with neuroticism (-); positive relationships with the other personality traits (+).
- Maki & Maki (2003): negative relationship between academic achievement and extraversion (-); positive relationship with openness (+); no relationship with the other personality traits.

Note. +: positive significant relationship; -: negative significant relationship; none: no significant relationship. Adapted from Bahçekapılı & Karaman (2015).

Self-efficacy

Bandura (1994) defined self-efficacy as people's beliefs about their capabilities to produce the designated level of performance that exercises influence over events that affect their lives. Self-efficacy affects people's feelings, thoughts, motivation, and behavior (Bandura, 1993). Besides, individuals' physical and emotional states, their personal experiences, the experiences emerging from taking others as models, and social approval influence self-efficacy (Bandura, 1977). Schunk (2009) stated that self-efficacy is closely related to learning and success. Individuals with high self-efficacy are more resistant to the challenges they encounter, and they become more successful (Kurbanoğlu, 2004). Individuals with high self-efficacy struggle more in situations where achievement is needed (Schunk, 2009). Studies reveal that there is a relationship between self-efficacy and academic achievement (Pintrich & de Groot, 1990; Zimmerman, 2009). Furthermore, many studies have shown that self-efficacy plays an important role in the achievement of students in online learning environments (Ejubović & Puška, 2019; Ergul, 2004; Joo, Lim, & Kim, 2013; Wang, Shannon, & Ross, 2013).
Locus of control
The locus of control is a concept conceptualized by Rotter (1966); it signifies the extent to which individuals believe their lives are controlled by themselves (internal locus of control) or by external factors (external locus of control). Studies reveal that individuals with a high level of internal locus of control achieve higher academic results than those with an external locus of control; they are more resistant when they face challenges, their self-esteem is higher, and they are more confident in themselves. Hence, their emotional health is better (Yeşilyaprak, 2004).
The studies investigating the relationship between locus of control and academic achievement show that there is no clear understanding of this issue. Some studies reveal a positive relationship between the locus of control and academic achievement (Fulton, Ivanitskaya, Bastian, Erofeev, & Mendez, 2013; Varnhagen & Wright, 2008), some point out a negative relationship (Wang & Newlin, 2000; Yukselturk & Bulut, 2007), and others reveal no significant relationship (Joo et al., 2013; Levy, 2007).
Participants
The participants of the study are 525 students (200 female, 325 male) studying at two different universities in Turkey; their ages vary between 19 and 59 (M = 30.9). The participants in both universities attend a distance education program. The participants attend their classes in a live class environment at a planned time. In the lessons, the instructor teaches live with the help of a whiteboard, presentations, and other materials in the online learning environment. The participants engage vocally in the lesson when required by the instructor, and they may communicate with all participants and the instructor instantly during the lesson by using the chat option. It is possible to access the lessons taught live and the documents related to these courses. Both universities provided technical support to the participants via telephone and e-mail. The participants take the midterm exams online, but the final exam is taken in a classroom as a proctored exam. While the midterm exams account for 20% of the total grade, the final exams account for 80%.
Measures and instruments for data collection
For data collection, three different scales were used, namely the five-factor model of personality scale, the academic locus of control scale, and the general self-efficacy scale. Each tool is explained below.
In this study, as an indicator of the academic achievement of the students, the Grade Point Average (GPA) used. GPA is a number representing the average value of the accumulated final grades earned in courses at the end of the first semester. GPA value ranges from 0 to 4. GPA values are obtained from distance learning centers of the universities.
The five-factor model of personality
This scale was used to measure the main personality variables in the model. The five-factor model of personality scale consists of 44 items measuring personality traits. Benet-Martínez and John (1998) developed this scale under the name "The Big Five Inventory". The scale comprises five factors named "Neuroticism", "Extraversion", "Openness", "Agreeableness", and "Conscientiousness". There are 8 items in the "Neuroticism" and "Extraversion" factors, 9 items in the "Agreeableness" and "Conscientiousness" factors, and 10 items in the "Openness" factor. The scale is presented to the participants using a 5-point Likert-type scale ("1 = I strongly disagree", "2 = I disagree", "3 = Undecided", "4 = I agree" and "5 = I strongly agree").
The scale was adapted to Turkish through an international study in which many different countries from the world participated (Sümer, Lajunen, & Özkan, 2005). It is reported that the Cronbach alpha reliability values of the subscales were at acceptable levels (lowest factor: 0.64, highest factor: 0.77). The validity and reliability of the scale were revealed in a cross-cultural study (Schmitt, Allik, McCrae, & Benet-Martínez, 2007). In this study, Cronbach's alpha reliability values were calculated as follows: lowest factor: 0.56, and highest factor: 0.75.
Academic locus of control
The scale was developed by Akın (2007) and is used to determine the academic locus of control of students. The scale consists of 2 sub-factors, namely "External Locus of Control", consisting of 11 items, and "Internal Locus of Control", consisting of 6 items; hence, the scale consists of 17 items in total. The scale is presented to the participants using a 5-point Likert scale ("1 = Completely contrary", "2 = Fairly contrary", "3 = Undecided", "4 = Fairly appropriate" and "5 = Completely appropriate"). High scores of participants in either sub-factor indicate that the participant possesses the traits of the relevant dimension to a high degree. Akın (2007) found that the internal consistency reliability coefficients were 0.94 for the academic internal locus of control and 0.95 for the academic external locus of control, while the test-retest reliability coefficients were 0.97 for the academic internal locus of control and 0.93 for the academic external locus of control. In this study, the Cronbach alpha reliability values for the subscales were calculated as 0.79 for the external locus of control and 0.71 for the internal locus of control.
General self-efficacy
It is used to assess general self-beliefs of students. The General Self-Efficacy Scale was developed in Germany by Schwarzer and Jerusalem and translated into 28 languages (Schwarzer & Jerusalem, 1995). The scale uses a 4-point Likert-type scale (1 = "Not true", 2 = "Somewhat accurate", 3 = "More accurate" and 4 = "Fully accurate") and consists of 10 items. While the minimum score is 10, the maximum score is 40. High scores indicate that the participant's level of self-efficacy is high. It is indicated that the internal consistency of the scale varies between 0.75 and 0.91 in studies conducted in different countries (Scholz, Gutiérrez Doña, Sud, & Schwarzer, 2002). The scale was translated into Turkish by Yesilay, Schwarzer, and Jerusalem (1997) and the Cronbach's reliability coefficient at the end of the studies conducted in five countries including Turkey was found to be 0.81 (Luszczynska, Gutiérrez-Doña, & Schwarzer, 2005). In this study, the Cronbach alpha reliability value of the scale was calculated as 0.89.
Procedure
The data were collected at the end of the semester from two universities in Turkey. Participation in the study was voluntary, and participants' consent was obtained before inclusion. While the data were obtained from one university via printed forms, online forms were used to collect data from the other university. While the study aimed to reach 180 students in this way, 160 students voluntarily took part in the study.
At the other university, where online forms were used for data collection, the online data collection form was placed in the education management system and the students were asked to fill out the form if they wished to participate in the study. In this process, the aim was to reach 2,000 students; however, approximately 479 students filled out the forms. This response rate is acceptable for online data collection tools according to Sax, Gilmartin, and Bryant (2003).
Data analysis
To validate the hypotheses, the partial least squares structural equation modeling technique (PLS-SEM) was utilized as the method of data analysis, using SPSS AMOS 19. The PLS approach was employed in the study since it is more suitable for prediction-oriented objectives (Dijkstra & Henseler, 2015; Hair Jr, Matthews, Matthews, & Sarstedt, 2017). Before the analysis, the data obtained from the sample were subjected to the following operations: data cleaning, missing data analysis, testing normality, and determining multicollinearity problems.
Evaluation of normality and linearity
Skewness and kurtosis values were examined to determine whether the data showed a normal distribution. It was found that the skewness values of each variable ranged from -0.51 to +0.59 while the kurtosis values ranged from -0.55 to -0.03. This indicates that a normal distribution is achieved. Kline (2011) states that the skewness values between -3 and +3 and the kurtosis values between -10 and +10 can be considered as a normal distribution. Besides, the scatter plot matrix was used to investigate multivariate normality and linearity. Since the scatter plot matrix in the graph shows an elliptical distribution, this is accepted as a sign of multivariate normality and linearity (Çokluk, Şekercioğlu, & Büyüköztürk, 2014).
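Such a check is straightforward to reproduce; the sketch below uses synthetic scores rather than the study's data, the thresholds are those attributed to Kline (2011) in the text, and scipy's kurtosis is the excess kurtosis.

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(0)
scores = rng.normal(loc=3.0, scale=0.5, size=525)  # synthetic scale scores

print(abs(skew(scores)) < 3)        # Kline's criterion for skewness
print(abs(kurtosis(scores)) < 10)   # excess kurtosis criterion
```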
Evaluation of sampling adequacy and multicollinearity problem
The adequacy of the available data is crucial in testing a model in structural equation modeling studies. In this study, data from 525 participants were used in the evaluation of the model. While Kline (2011) states that the sample size should be larger than the number of parameters multiplied by 10, Barrett (2007) argues that a sample size below 200 would constitute a problem. Since the number of samples exceeds 200 (n = 525), the sample size can be considered adequate. Table 3 shows that the correlation coefficients between the variables are less than 0.9, which indicates that there is no multicollinearity problem among the variables of the study (Çokluk et al., 2014).
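The 0.9 screening rule amounts to inspecting the off-diagonal of the correlation matrix; a generic sketch with random placeholder data, not the study's variables:

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(525, 8))        # participants x model variables

corr = np.corrcoef(data, rowvar=False)  # pairwise Pearson correlations
off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
print(np.abs(off_diag).max() < 0.9)     # True -> no multicollinearity flag
```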
Testing the main model
The proposed model was tested using the maximum likelihood method in AMOS 19. The prerequisite for using the maximum likelihood approach is multivariate normality (Kline, 2011, p. 154), and a multivariate normal distribution was confirmed. According to Henseler, Hubona, and Ray (2016), "PLS path models can and should be assessed globally through tests of model fit and approximate measures of model fit". The goodness-of-fit indices obtained at the end of the test are presented in Table 4, together with cutoff criteria from the literature (…, 2010; Bryne, 1994). Given the goodness-of-fit indices for the intended model shown in Table 4, it is possible to state that the intended model fits well.
Testing the hypotheses revealed in the model
After establishing the goodness of fit, the hypotheses in the model are tested, addressing the first research question. Firstly, the direct and indirect effects in the model are presented (Fig. 2). Then, the hypotheses are tested according to the significance level of these effects. The direct, indirect, and total effects of the variables in the model are presented in Table 5.
In light of the data presented in Table 5, the variables included in the model explain 4.4% of the variance in students' GPA scores. The GPA is influenced most strongly by the ELoC, with an effect size of β = -0.156, followed by self-efficacy, with an effect size of β = 0.13. These two variables directly affect the GPA score. On the other hand, the personality traits of conscientiousness, openness, and neuroticism indirectly affect the GPA. While the self-efficacy variable mediates the indirect effects of conscientiousness (β = 0.022), openness (β = 0.037), and neuroticism (β = -0.023) on the GPA, the ELoC variable has a similar mediating role for conscientiousness (β = 0.056) and neuroticism (β = -0.011). The extraversion, agreeableness, and internal locus of control variables have no significant effect on GPA.
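In a path model, an indirect effect is the product of the coefficients along the mediated route, and the total effect is the sum of direct and indirect effects. As a worked check using the values reported above (the implied openness-to-self-efficacy path is an inference from those values, not a figure reported in the paper):

```python
b_selfefficacy_gpa = 0.13   # direct effect of self-efficacy on GPA (Table 5)
indirect_openness = 0.037   # openness -> self-efficacy -> GPA (Table 5)

# Implied openness -> self-efficacy path coefficient (inferred, not reported):
a_openness_selfefficacy = indirect_openness / b_selfefficacy_gpa
print(round(a_openness_selfefficacy, 3))   # ~0.285
```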
Discussion
In this section, the research results are discussed in two parts, covering the variables with and without a significant effect on academic achievement (self-efficacy, academic locus of control, and the five-factor personality traits).
Variables that have a significant effect on academic achievement
According to the results, when self-efficacy increases, so does the academic achievement of students. General self-efficacy is a way of thinking that an individual shapes through experiences over his/her life. Individuals with a high general self-efficacy perception have stronger beliefs in their ability to complete a task successfully (Bandura, 1994). The literature reveals that self-efficacy is a prominent variable related to the motivation and performance of students in online learning environments (Lee, 2015). The belief that one can accomplish a task may positively affect academic achievement in online learning environments, which require different qualifications and traits. The responsibility for learning placed on students in online learning environments is greater than in face-to-face education environments. Besides, as mentioned in some studies (Moos & Azevedo, 2009; Wang et al., 2013), the belief that one can use technology efficiently would positively affect achievement in environments where technology is heavily used. Hence, the studies conducted on this topic in online learning (Joo et al., 2013; Wang & Newlin, 2002) are consistent with this study.
The research findings reveal that as the score of the external academic locus of control increases, the academic achievement of individuals declines. Individuals with a high external academic locus of control attribute their achievement to luck or to external factors other than themselves (Yeşilyaprak, 2004). Hence, as individuals' external academic locus of control increases, they pass the responsibility on to other factors. The individual links failure with the teacher, the system, or the teaching method, rather than accepting their own responsibility. In this respect, the relationship between the external locus of control and success has frequently been emphasized in formal education (Buluş, 2011; Cassidy, 2012) and in online learning (Wang & Newlin, 2000; Yukselturk & Bulut, 2007).
In this study, it was found that conscientiousness and openness have a positive and indirect relationship with achievement, whereas neuroticism has a negative indirect effect on academic achievement. Online learning requires more learning responsibility from students (Bartolic-Zlomislic & Bates, 1999). Also, online learning constitutes a new and different learning experience for individuals who were educated largely face-to-face throughout their lives. When these qualities of online learning environments are taken into consideration, it could be expected that individuals who are orderly, planned, responsible, decisive, and open to new and different ideas would be academically more successful. Besides, it can be expected that individuals who tend to be anxious and stressed in the face of daily events would be academically less successful, since online learning environments are different and unfamiliar in comparison with face-to-face learning environments.
It was observed that while general self-efficacy and the external academic locus of control variables mediate the conscientiousness and neuroticism variables, only general self-efficacy mediates the openness personality trait. It should be noted that these results reveal that personality traits affect academic achievement indirectly and that this effect operates through self-efficacy and the external locus of control. While general self-efficacy has a positive direct effect on academic achievement, conscientiousness and openness have positive, and neuroticism negative, indirect effects on academic achievement through the self-efficacy variable.
The findings of this research reveal that being planned and orderly, and being open to independent thinking and to new ideas, will increase academic success together with the individual's perception of being able to succeed at a task. Thus, these findings differ from studies which reported direct, rather than indirect, effects of personality traits on academic achievement (Maki & Maki, 2003; Schniederjans & Kim, 2005). This supports the view that personality traits may influence academic achievement through some mediating individual traits. A similar study conducted by Tabak, Nguyen, Basuray, and Darrow (2009) shows that the conscientiousness personality trait affects achievement through self-efficacy. The studies in online learning examine the effect of personality traits on academic achievement using correlation or regression analysis techniques (Maki & Maki, 2003; Orvis et al., 2010; Schniederjans & Kim, 2005), whereas the present study is able to identify this indirect effect.
Variables that have an insignificant effect on academic achievement
According to the results of the study, some of the personality traits are not related to academic achievement in the model: extraversion and agreeableness, as well as the internal academic locus of control. Individuals with a high level of extraversion are usually defined as sociable, outgoing, talkative, and active (Barrick & Mount, 1993). These traits can be considered factors which may increase academic achievement in the online learning environment. Indeed, Moore (1993) emphasizes interaction in online learning environments, and this personality trait includes elements that may increase the student's interaction in the online learning environment. However, in this study, no meaningful effect has been found in this direction. The results of studies investigating the effect of the extraversion personality trait on academic achievement vary, with studies showing positive (Orvis et al., 2010), negative (Maki & Maki, 2003), and insignificant effects (Schniederjans & Kim, 2005).
The results are similar to those of Schniederjans and Kim's study (2005). The results might have been influenced by the fact that the students included in the sample encounter an unfamiliar environment during online learning. In a study that reveals a positive relationship between the extraversion trait and academic achievement, Orvis et al. (2010) obtained data from face-to-face students also receiving an online course; those students did not receive all their courses through online learning. In this respect, the study of Orvis et al. (2010) leads to somewhat different results. This may show that if the students know each other and the instructor face to face, there may be more effective interaction than in a fully online course. Schniederjans and Kim (2005) worked only with fully online students and found that the extraversion trait does not have a significant effect on academic achievement, consistent with the present results. Thus, it is possible to assert that the characteristics related to extraversion do not dramatically affect academic achievement for students studying in online learning environments. Conversely, such an effect might be observed in students receiving face-to-face education and following parts of their education or some courses through online learning.
Another personality trait that has no significant effect on academic achievement in the study is agreeableness. Individuals with a high agreeableness personality trait are compassionate, helpful, and reliable, and they prefer cooperation to competition (Burger, 2006). Some elements of this personality trait, such as preferring cooperation to competition, may influence academic achievement. Schniederjans and Kim (2005) showed that this personality trait is related positively to academic achievement, whereas Maki and Maki (2003) point out that there is no significant relationship between academic achievement and agreeableness, in parallel with the findings of this study. However, most of the studies that examine the effect of the agreeableness personality trait on academic achievement reveal a positive relationship (Bidjerano & Dai, 2007; Vedel, Thomsen, & Larsen, 2015). Since the social interaction level of the online programs in this study is low, it is possible to conclude that the agreeableness personality trait is not distinguished in environments where social interaction is at a low level.
The last variable found to have no influence on academic achievement was the internal locus of control. Joo et al. (2013), in a study conducted on 897 online learners to examine the effect of the internal locus of control on academic achievement, likewise found that the internal locus of control does not have a significant influence on academic achievement. In the literature, the external locus of control variable has more often been examined than the internal locus of control (Wang & Newlin, 2000; Yukselturk & Bulut, 2007). This study revealed that the external locus of control, unlike the internal locus of control, has a significant effect on academic achievement. Hence, the study ascertains that the external locus of control, rather than the internal locus of control, contributes better to predicting the academic achievement of online learners.
Conclusions and implications
The personality traits with significant effects on GPA in the main model were identified as conscientiousness, openness, and neuroticism, together with general self-efficacy and the external locus of control. While general self-efficacy and the external academic locus of control directly affect academic achievement, the conscientiousness, openness, and neuroticism personality traits have an indirect effect on academic achievement. On the other hand, it was revealed that the extraversion, agreeableness, and internal academic locus of control variables do not affect academic achievement.
Based on the positive and negative effects of personality traits on academic achievement, student support should focus on the students themselves. Thus, it is necessary to gather more information about students' psychological traits when they register for online learning, to act accordingly in the services provided, and to treat them more sensitively if needed. It is impossible to change the personality traits of students, but some personality-related behaviors associated with success may be facilitated by designing appropriate learning and support activities. For example, since the external locus of control has a negative effect on academic achievement, it is necessary to provide students who seek to study in online learning environments with precise information about their responsibilities and how the system functions. Also, it is essential to explain all the technological and educational criteria expected of them and to ensure that the registered students fulfill these criteria. As for the positive effect of conscientiousness on academic achievement, it is recommended to provide students with the course objectives, detailed learning tasks, and course schedule to help them be more planned and orderly.
Beyond providing information, the structure of learning tasks can also support students' tendencies toward trait-related behaviors. For example, since self-efficacy has a positive effect and neuroticism has an adverse effect on academic achievement, the learning tasks to be assigned in online learning should reinforce students' belief that they can accomplish them.
Limitations
Since the number of questions in the survey tool used in the research was large, and most of the data were obtained through an online form, these factors may have affected the responses given to the survey tool. The data obtained in the study are limited to online learners studying in the 2014-2015 academic year at two different universities in Turkey. The results of the study should be evaluated with the distance education systems of these universities in mind.
"year": 2020,
"sha1": "350f88a6ea551efaad121011fb3aec84c68ffe1c",
"oa_license": "CCBY",
"oa_url": "https://www.kmel-journal.org/ojs/index.php/online-publication/article/download/440/435",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "cc3255eb2c592b716b9601b1ae666a5674d9d86b",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology"
]
} |
243472640 | pes2o/s2orc | v3-fos-license | Meta-analysis of Heavy Metal and Arsenic Ecological-risk Assessment and Sources in Surface Sediments of Lake Wuliangsuhai, China
Heavy metal and arsenic (As) concentrations in the overlying water of Lake WLSH from 2013 to 2017 were used to evaluate the water quality of the lake. Heavy metal and As concentrations in Lake WLSH surface sediment from studies performed between 2009 and 2017 were analyzed, and a meta-analysis of heavy-metal geo-accumulation, potential ecological risk, and toxicity data for Lake WLSH surface sediment was performed so that heavy-metal and As pollution of the surface sediment could be described clearly, objectively, and comprehensively. The following five main conclusions were drawn. (1) The water quality index of the overlying water showed a tendency toward slight pollution of the lake from 2013 to 2017. (2) Pollution by the heavy metals (Cu, Zn, Pb, Cd, Cr) and As in Lake WLSH should be given increased attention. (3) The geo-accumulation indices showed that Cd is the most critical pollutant, and the probabilities of Lake WLSH sediment being slightly polluted and moderately polluted were found to be 72.8% and 11.3%, respectively. (4) Cd is the main contributor (75.2%) to potential ecological risks, and although As is at a low toxicity level, its toxicity-risk contribution is higher than that of the other metals (approximately 31%). (5) Positive matrix factorization (PMF) model results indicated that industrial and agricultural sources are the main suppliers of heavy metals to Lake WLSH sediment, contributing 43.2% and 42.6% of the heavy metals and As, respectively. The summarized results and conclusions can help the local government further understand heavy-metal and As pollution in Lake WLSH and develop corresponding pollution-control measures. This study can also serve as a reference for future research on heavy-metal and As pollution of sediment in Lake WLSH and other lakes.
Introduction
Lakes are indispensable wetlands for the global ecosystem and play important roles in regulating river-water volume and improving the ecological environment (Liu et al. 2020). In recent years, with changes in the regional climate and environment and the intensification of human activities, lake-ecosystem degradation, water eutrophication, and water pollution have become major global problems (Yang et al. 2008; Nazari-Sharabian et al. 2018; Benateau et al. 2019). In developing countries such as China, rapid industrial and agricultural growth, as well as other human activities, have led to rising levels of heavy metals in river and lake sediments (Yan et al. 2018). Accordingly, heavy-metal and As pollution of the aquatic environment has become a research hotspot because of its toxicity, persistence, and bioaccumulation in the environment, as well as its adverse effects on organisms and the entire ecosystem (Lin et al. 2016). Lake sediments, as an important part of the water ecosystem, provide habitats and food sources for benthic organisms and also serve as secondary sources and reservoirs of heavy metals in water (Yi et al. 2011). To protect the ecological security of lakes, it is important to study the content of heavy metals and As in lake sediments and the associated risks. […] Published data on the surface sediments of Lake WLSH were collected to assess heavy-metal and As pollution levels and potential ecological risks. The main objectives of this work were as follows: (1) determine the spatial distribution of heavy metals and As in the surface sediments of Lake WLSH by collecting data from published papers; (2) use the Igeo, the potential ecological RI assessment, and the TU to assess the pollution levels and potential ecological risks of heavy metals and As in the surface sediments; […]

Based on the rating and grading standards, the overall nutrient level in Lake WLSH is mid-eutrophic, and the annual average deposition depth is 9.61 mm (Yu et al. 2012). The inlet and outflow channels around Lake WLSH are shown in Fig. S1 (Supplementary materials). Since 2000, industries around Lake WLSH (paper mills, pharmaceutical factories, smelters, etc.) have developed rapidly in an attempt to develop the local economy, and the lake, which receives industrial, agricultural, and residential wastewater, has gradually become polluted (Zhang 2010).

Data collection

The following databases were used to retrieve published literature: ISI Web of Science for English-language literature, and China National Knowledge Infrastructure and Wan Fang Data for Chinese-language literature (Fig. 1a). The search terms "'Wuliangsuhai' or 'Ulansuhai'" and "metal" were used in the databases, covering studies from 2000 to 2019. To ensure data integrity and continuity, 12 of 172 papers were selected to obtain data on heavy metals in sediment from 2009 to 2017. In this paper, Lake WLSH was divided into entrance, central, and exit zones based on lake hydraulics and inlet channel flows in the selected literature, as described in the Supplementary Materials. The criteria for selecting published literature in this research were as follows: (i) the selected publications should involve investigation of the surface sediments (5-20 cm) of the entire Lake WLSH (i.e., the entrance, central, and exit zones), as shown in Fig. 1b; (ii) the selected literature included sampling information (i.e., sampling date, number of samples, sampling site location, and measured heavy-metal and As concentrations); and (iii) the heavy-metal and As concentrations were determined using the same or similar standards.

[…] Based on the water quality index (WQI), the comprehensive pollution-index method (Tab. 1) treats the heavy metals observed at the same measuring point as a whole in order to study their influence on the environment under conditions of interaction (Bewers 1995; Cheng et al. 2002). The single-element equation is:

Pi = Ci / Si

where Pi represents the pollution index of the ith heavy metal or As, Ci represents the measured concentration of the ith heavy metal or As (μg L-1), and Si represents the evaluation standard for that heavy metal or As (μg L-1). The Chinese surface-water environmental quality standard GB3838-2002 was used, in which the standard limits of Cu, Zn, Pb, Cd, Cr, and As are 1000, 1000, 50, 5, 50, and 50 μg L-1, respectively. The WQI is then obtained by aggregating the Pi over the n heavy metals and As considered, and consists of three grades (Bewers 1995). […] However, this is currently still a relatively general screening method that can provide a guide for lake-sediment pollution management (Allen Burton 2018). The levels of enrichment and toxicity risk of heavy metals and As in the sediments of Lake WLSH were evaluated using the Igeo, the potential ecological RI, and the TU, as follows.

The Igeo is primarily used to assess the degree of heavy-metal and As pollution by weighing the measured heavy-metal and As content against the sediment or soil background content:

Igeo = log2[Cn / (1.5 × Bn)]

where Cn is the concentration of the nth heavy metal or As measured in sediment, Bn is the background value of the nth heavy metal or As, and 1.5 is a correction coefficient for factors such as sedimentary characteristics. The Igeo consists of five grades (Muller 1969).
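As a concrete illustration of the comprehensive pollution-index calculation above, the sketch below computes the single-element indices Pi and an aggregate WQI. The GB3838-2002 standard limits are those quoted in the text; the sample concentrations and the choice of the arithmetic mean as the aggregation rule are assumptions for illustration only.

```python
# Standard limits Si (ug/L) from GB3838-2002, as quoted in the text
LIMITS = {"Cu": 1000, "Zn": 1000, "Pb": 50, "Cd": 5, "Cr": 50, "As": 50}

def pollution_indices(concentrations):
    """Single-element pollution indices, Pi = Ci / Si."""
    return {m: c / LIMITS[m] for m, c in concentrations.items()}

def wqi(concentrations):
    """Aggregate index, taken here as the mean of the Pi (an assumption)."""
    pi = pollution_indices(concentrations)
    return sum(pi.values()) / len(pi)

# Hypothetical overlying-water sample (ug/L), for illustration only
sample = {"Cu": 12.0, "Zn": 40.0, "Pb": 18.0, "Cd": 0.4, "Cr": 6.0, "As": 9.0}
for metal, pi in pollution_indices(sample).items():
    print(f"P_{metal} = {pi:.3f}")
print(f"WQI = {wqi(sample):.3f}")  # values below 1 indicate an unpolluted status
```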
The method developed by Hakanson was used to calculate the potential ecological RI caused by the total pollution of Lake WLSH (Hakanson 1980), as shown in Tab. 2:

Er(i) = Tr(i) × Cs(i) / Cn(i)
RI = Σ Er(i)

where RI is the potential ecological risk index, Cs(i) is the measured concentration of the ith heavy metal or As in sediment (mg kg-1), Cn(i) is the background value of the ith heavy metal or As (mg kg-1), and Tr(i) is the toxic response factor for a given heavy metal or As, i.e., 5, 5, 5, 30, 10, and 5 for Cu, Zn, Pb, Cd, Cr, and As, respectively.

The TU evaluation method can be used to determine the influence of heavy metals and As in sediments on the water environment (Pedersen et al. 1998); the TU of each element is the ratio of its measured concentration to the corresponding probable effect level (PEL), and ΣTU is the sum over all elements.

Paatero first proposed the PMF model in 1994, and the method was approved by the U.S. Environmental Protection Agency for identifying air pollution sources (Paatero 1997). The greatest advantage of the PMF model is that no source profiles are required, and uncertainty is used to weight all the data (Niu et al. 2020). Potential sources of heavy metals and As in Lake WLSH sediments were identified using the PMF 5.0 model, and pollution sources were analyzed using the distributions of the five heavy metals and As in Lake WLSH. The aim of the PMF model is to use the concentrations and source profiles of the species of interest to solve the species mass balance; the calculation equation is as follows (Norris et al. 2014):

xij = Σ(k=1..p) gik × fkj + eij   (i = 1, 2, 3, …, n; j = 1, 2, 3, …, m)

where xij is the concentration of species j in sample i, gik is the contribution of factor k to sample i, fkj is the profile of species j in factor k, i and j index the samples and chemical species, respectively, and eij is the residual for each sample.

Factor contributions and profiles are derived by the PMF model minimizing the objective function Q, a critical parameter for PMF (Norris et al. 2014):

Q = Σi Σj (eij / uij)²

where uij is the uncertainty of species j in sample i. In this study, the concentration of each element in each sample was above the detection limit, and the uncertainty value was calculated according to the following equations (Norris et al. 2014).
Concentrations below the method detection limit (MDL) would be handled using Eq. (9); otherwise Eq. (10) was used:

Unc = (5/6) × MDL   (9)
Unc = √[(Error fraction × concentration)² + (0.5 × MDL)²]   (10)

where Unc is the uncertainty of the concentration and MDL is the method detection limit (Norris et al. 2014).
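To make the sediment-risk indices concrete, here is a minimal sketch computing the Igeo, the single-element ecological risk Er, and the cumulative RI. The toxic response factors are those given in the text; the background values and sediment concentrations are hypothetical placeholders (a real analysis would use the regional background values for the study area).

```python
import math

# Toxic response factors Tr from the text (Cu, Zn, Pb, Cd, Cr, As)
TR = {"Cu": 5, "Zn": 5, "Pb": 5, "Cd": 30, "Cr": 10, "As": 5}
# Hypothetical background values Bn/Cn (mg/kg) -- placeholders for illustration
BACKGROUND = {"Cu": 14.0, "Zn": 48.0, "Pb": 15.0, "Cd": 0.05, "Cr": 36.0, "As": 7.5}

def igeo(metal, c):
    """Geo-accumulation index: Igeo = log2(Cn / (1.5 * Bn))."""
    return math.log2(c / (1.5 * BACKGROUND[metal]))

def er(metal, c):
    """Single-element potential ecological risk: Er = Tr * Cs / Cn."""
    return TR[metal] * c / BACKGROUND[metal]

def ri(concentrations):
    """Potential ecological risk index: RI = sum of Er over all elements."""
    return sum(er(m, c) for m, c in concentrations.items())

# Hypothetical surface-sediment sample (mg/kg), for illustration only
sediment = {"Cu": 32.0, "Zn": 70.0, "Pb": 20.0, "Cd": 0.30, "Cr": 45.0, "As": 35.0}
for m, c in sediment.items():
    print(f"{m}: Igeo = {igeo(m, c):+.2f}, Er = {er(m, c):.1f}")
print(f"RI = {ri(sediment):.1f}")
```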
Results and discussion
Selected studies

Tab. 3 summarizes the heavy-metal and As concentrations in the surface sediments based on the 12 selected papers. Cu, Zn, Pb, Cd, Cr, and As in the surface sediment of Lake WLSH deserve special attention. In the lake surface sediments, Cd and As were relatively high, at 6 and 4.7 times their background values, respectively, and Cu was 2.3 times its background value. Zn, Pb, and Cr were relatively low, ranging from 1.1 to 1.5 times the background values. In terms of the coefficient of variation, Cd was at 82%, Pb and As were at 45%, and the other heavy metals were at 33%-39%. These results show that Cd concentrations vary greatly in space and that the Cd content of Lake WLSH sediments is highly uncertain. Compared with the average concentrations of heavy metals and As in the surface sediments of Lake Taihu (Niu et al. 2020), Cu, Zn, Cd, and Cr in Lake WLSH were similar, but the average concentration of Pb in Lake Taihu was 1.83 times that in Lake WLSH, while the average concentration of As in Lake WLSH was 3.72 times that in Lake Taihu. Pb in lake sediments mainly originates from human activities such as industry and transportation (Yao et al. 2008); compared with Lake Taihu, Lake WLSH is subject to weaker human activity, which explains the higher Pb concentration in the surface sediments of Lake Taihu. Around Lake WLSH, industry and agriculture account for roughly 90% of the economy (Inner Mongolia Autonomous Region Bureau of Statistics 2020). Industrial wastewater (from paper mills, pharmaceutical manufacturers, and metal smelters) and agricultural wastewater (pesticides, fertilizers) are discharged into Lake WLSH through ditches (Zhang 2010; Lv 2018; Lou et al. 2020), which results in As concentrations in this lake being 3.72 times those in Lake Taihu.

Tab. 1 shows that the concentrations of Cd and As in the overlying water belong to the Class I standard, Cu, Zn, and Cr to the Class II standard, and Pb to the Class III standard. Therefore, the overlying water of Lake WLSH was determined to be Class III according to the environmental quality standard for surface water (GB3838-2002 of China). The WQI method treats the heavy metals observed at the same measurement location as a whole and examines the impact of these heavy metals and As on the environment through their interactions (Cheng et al. 2002). Fig. 3 shows that the WQI values of the lake entrance, center, and exit zones were all less than 1 from 2013 to 2017, indicating an unpolluted status. However, the WQI in 2017 was about twice as high as in previous years, the overlying water of Lake WLSH showed a tendency toward slight pollution, and pollution of the lake exit zone increased significantly compared with the other zones. These results indicate that the overlying water of Lake WLSH will become polluted by heavy metals and As if no corresponding treatment measures are taken.

To better reflect the heavy-metal and As pollution in the surface sediments of Lake WLSH, the Igeo, RI, and TU were used to evaluate the reported element-concentration distributions. The Igeo was calculated for the entrance, center, and exit zones of Lake WLSH using Eq. 3 (Muller 1969); the Igeo values for each zone of the lake are shown in Fig. 5. The highest Igeo values for Cd and As in the sediments of Lake WLSH indicated moderate pollution, and those for Cd indicated moderate to heavy pollution in the lake exit zone.
The Er(i) and Er(i)/RI indices were calculated for the entrance, center, and exit zones of Lake WLSH using Eqs. 2 and 3 (Hakanson 1980), as shown in Fig. 6. The Er(i) values of Cd in the three zones of the lake were greater than 160, indicating high risk, while those of the remaining heavy metals and As were less than 40, indicating low risk (Fig. 6a). The high Cd Igeo values also produced high RIs: Cd contributed 75.2% of the potential ecological risk (Fig. 6b), and the potential ecological risk of Cd in the exit zone was slightly higher than in the other two zones. Cd was also the main contributor to the potential ecological risk in the sediments of Lake Taihu. Comparing the heavy-metal and As concentrations in Lake WLSH sediments with the ERL and ERM, the maximum concentration of As was higher than the ERM, the maximum concentrations of Cu and Cd were lower than the ERL, and the maximum concentrations of Zn, Pb, and Cr fell between the ERL and ERM. Compared with the TEL and PEL, the maximum concentrations of Pb, Cr, and As were higher than the PEL, and the maximum concentrations of Cu, Zn, and Cd were between the TEL and the PEL. The toxicity characteristics of heavy metals and As in the sediments of Lake WLSH were calculated with Eq. 6 (Pedersen et al. 1998); the statistical results are shown in Fig. 7. The ΣTU values in the entrance, center, and exit zones were 2.96, 2.78, and 2.75, respectively, indicating a low toxicity level, although the lake entrance zone was the most polluted. The TU values of As were higher than those of the heavy metals (Cu, Zn, Pb, Cd, and Cr) in all three zones of the lake, at 0.99, 0.89, and 0.75, respectively, still indicating a low toxicity level. The TU values of the heavy metals and As were all less than 4, indicating a low toxicity grade (Fig. 7a). As contributed 33.44%, 32.29%, and 27.26% of the ΣTU in the entrance, center, and exit zones of the lake, respectively (Fig. 7b). The main toxicity contributor in the sediments of Lake WLSH was As, with a total toxicity contribution of about 30.98%, whereas in the sediments of Lake Taihu it was Pb, with a total toxicity contribution of about 32% (Niu et al. 2020). Arsenic in sediments is generally present mainly in low-solubility forms, bound primarily to iron oxides and present in the residual phase, and it will be released into the overlying water as sediment conditions (e.g., temperature, pH) change (Nikolaidis et al. 2004; Arain et al. 2009). Therefore, the pollution sources of Lake WLSH need to be effectively identified and appropriate control measures developed. […] Factor 2 explains only 4.2% of the contribution of the different sources to heavy-metal and As concentrations in the sediment of Lake WLSH, and its factor loadings are low (<10%) for all heavy metals and 0 for As.
In geochemical baseline studies, natural sources of heavy metals and As contribute to the background concentrations in local soils and sediments. Anthropogenic sources contributed far more heavy metals and As to lake sediments than natural sources (Niu et al. 2020), and non-anthropogenic sources contributed only slightly to the heavy-metal and As concentrations in the sediment of Lake WLSH. Hence, factor 2 is related to natural sources. […] The heavy metals (Cu, Zn, Pb, Cd, and Cr) and As were of the most concern in the surface sediment of the lake between 2009 and 2017. In terms of cumulative contamination and potential ecological risk, the lake sediment was most heavily contaminated with Cd, which accounted for 75.2% of the potential ecological risk (assessed using the RI). From a toxicity-risk control perspective, although As is at a low toxicity level, its toxicity-risk contribution is higher than that of the other metals (approximately 31%). The PMF model indicated that heavy metals and As in Lake WLSH sediment have mainly been supplied by industrial and agricultural sources, which contributed 43.2% and 42.6%, respectively, of the total heavy-metal and As concentrations. Natural sources and atmospheric deposition contributed 4.2% and 10.0%, respectively. To prevent heavy metals and As in drainage-ditch sediment from being transported into Lake WLSH by human activities such as ecological water replenishment of the lake, wastewater discharges from industrial and agricultural sources also need to be controlled and monitored more effectively than is currently the case. All these results can provide comprehensive and quantitative reference data on heavy-metal and As pollution in Lake WLSH.
Table 1. Statistical description of heavy-metal and As guideline values for overlying water and water quality grade. [Table body not recovered; column headers: Category; overlying water heavy-metal and As concentrations (μg L-1).]

Ethics approval | 2021-11-05T15:07:46.300Z | 2021-11-03T00:00:00.000 | {
"year": 2021,
"sha1": "4730ae4cda2c6b00b28f4b74ca7c43e883f0b33f",
"oa_license": "CCBY",
"oa_url": "https://www.researchsquare.com/article/rs-945105/v1.pdf?c=1635965140000",
"oa_status": "GREEN",
"pdf_src": "Adhoc",
"pdf_hash": "2e0de8c804c0fa5c51223df22dfeb1234940e738",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
199634625 | pes2o/s2orc | v3-fos-license | Adaptive evolution shapes the present-day distribution of the thermal sensitivity of population growth rate
Developing a thorough understanding of how ectotherm physiology adapts to different thermal environments is of crucial importance, especially in the face of global climate change. A key aspect of an organism’s thermal performance curve (TPC)—the relationship between fitness-related trait performance and temperature—is its thermal sensitivity, i.e., the rate at which trait values increase with temperature within its typically experienced thermal range. For a given trait, the distribution of thermal sensitivities across species, often quantified as “activation energy” values, is typically right-skewed. Currently, the mechanisms that generate this distribution are unclear, with considerable debate about the role of thermodynamic constraints versus adaptive evolution. Here, using a phylogenetic comparative approach, we study the evolution of the thermal sensitivity of population growth rate across phytoplankton (Cyanobacteria and eukaryotic microalgae) and prokaryotes (bacteria and archaea), 2 microbial groups that play a major role in the global carbon cycle. We find that thermal sensitivity across these groups is moderately phylogenetically heritable, and that its distribution is shaped by repeated evolutionary convergence throughout its parameter space. More precisely, we detect bursts of adaptive evolution in thermal sensitivity, increasing the amount of overlap among its distributions in different clades. We obtain qualitatively similar results from evolutionary analyses of the thermal sensitivities of 2 physiological rates underlying growth rate: net photosynthesis and respiration of plants. Furthermore, we find that these episodes of evolutionary convergence are consistent with 2 opposing forces: decrease in thermal sensitivity due to environmental fluctuations and increase due to adaptation to stable environments. Overall, our results indicate that adaptation can lead to large and relatively rapid shifts in thermal sensitivity, especially in microbes for which rapid evolution can occur at short timescales. Thus, more attention needs to be paid to elucidating the implications of rapid evolution in organismal thermal sensitivity for ecosystem functioning.
A Phylogeny reconstruction
The final tree produced by RAxML [1] and calibrated to relative time with DPPDiv [2] is shown in Fig A.

Fig A. The phylogeny generated in this study from which subtrees were extracted for comparative analyses. Colours indicate different phyla, whereas circles show the statistical support for each node, conditional on the topological constraints of the Open Tree of Life [3]. The phylogeny is available in NEXUS format at https://doi.org/10.6084/m9.figshare.12816140.v1.
B Dataset of thermal sensitivity estimates
The distributions of E and W op values across the four datasets for species included in the phylogeny are shown in Fig B. Fig C shows the distributions of thermal sensitivity estimates of r max across the six largest phyla of this study. We first used MCMCglmm to estimate the phylogenetic heritabilities of our six TPC parameters by inferring their variance/covariance matrix, corrected for phylogeny. This allowed us to also extract the phenotypic correlation (r phe ) between E and W op and, thus, to understand the relationship between the two thermal sensitivity measures ( Fig D). Furthermore, the phenotypic correlation was broken down to its phylogenetically heritable component (r her ) and its residual component (r res ). The latter should be driven mostly by environmental effects. To understand why the relationship shown in Fig D arises, we numerically examined how W op is affected by changes in E, and the sensitivity of their relationship to changes in B 0 , T pk , and E D (Fig E).
Fig E. The expected relationship between the operational niche width (W op) and E in the Sharpe-Schoolfield model. (a) W op always decreases with E, provided that the other parameters (B 0, T pk, and E D) do not vary substantially and systematically with E. This is illustrated here with three arbitrary fixed combinations of the other three parameters; the curves remain practically the same, irrespective of substantial variation in these other parameters. (b) An example illustrating how W op always decreases with E when the other parameters are fixed (values shown in black). As E increases from 0.3 to 1 eV, W op decreases from 22.25 to 12.50 °C.
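The monotonic decrease of W op with E can also be checked numerically. The sketch below assumes the four-parameter, T_pk-centred Sharpe-Schoolfield form and operationalizes W op, for illustration, as T_pk minus the temperature below T_pk at which the rate falls to half its peak; the parameter values are arbitrary, so the printed widths illustrate the trend rather than reproduce the exact values in the figure.

```python
import numpy as np

k = 8.617e-5  # Boltzmann constant (eV/K)

def sharpe_schoolfield(T, B0, E, E_D, T_pk, T_ref=273.15):
    """Assumed four-parameter Sharpe-Schoolfield TPC (temperatures in K)."""
    rise = B0 * np.exp(-(E / k) * (1.0 / T - 1.0 / T_ref))
    fall = 1.0 + (E / (E_D - E)) * np.exp((E_D / k) * (1.0 / T_pk - 1.0 / T))
    return rise / fall

def operational_niche_width(E, B0=1.0, E_D=3.0, T_pk=308.15):
    """W_op taken here as T_pk minus the half-peak temperature below T_pk."""
    T = np.linspace(T_pk - 60.0, T_pk, 20001)
    B = sharpe_schoolfield(T, B0, E, E_D, T_pk)
    B_pk = B[-1]  # in this parameterisation the peak sits at T_pk
    T_half = T[np.argmin(np.abs(B - B_pk / 2.0))]
    return T_pk - T_half

for E in (0.3, 0.65, 1.0):
    print(f"E = {E:.2f} eV -> W_op ~ {operational_niche_width(E):.2f} K")
```

Running this with any reasonable fixed B0, E_D, and T_pk shows W_op shrinking as E grows, matching panel (a).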
Besides MCMCglmm, we also estimated phylogenetic heritabilities using Rphylopars and BayesTraits and compared the resulting estimates with those of MCMCglmm ( Fig F).
Fig F. Comparison of phylogenetic heritability estimates of MCMCglmm, Rphylopars, and BayesTraits. As MCMCglmm and BayesTraits estimate phylogenetic heritability using a Bayesian approach, the plots show their posterior distributions. Instead, point estimates are shown for Rphylopars. The phylogenetic heritability estimates obtained with Rphylopars are greater than zero for all TPC parameters. While the mean phylogenetic heritability estimates of BayesTraits are generally lower than those of MCMCglmm and Rphylopars, the lower bound of the 95% Highest Posterior Density interval of BayesTraits is always greater than zero (the lowest value is at 2 · 10 −5 for ln(E) among phytoplankton). Furthermore, the distributions of phylogenetic heritabilities of prokaryotes obtained with BayesTraits are much narrower and closer to those of MCMCglmm, compared to those obtained for phytoplankton with the two programs. This is consistent with an increase in the "signal-to-noise" ratio from phytoplankton to prokaryotes, given that the latter dataset is larger (Fig B). In any case, the observed differences in the estimates of MCMCglmm and BayesTraits may arise from i) differences in the priors used by the two methods, ii) accounting (or not) for the uncertainty of each TPC parameter estimate, or from iii) differences in the approaches employed for the estimation of missing TPC parameter values. The raw data underlying this figure are available at https://doi.org/10.6084/m9.figshare.12816140.v1.
To examine how the evolutionary rate of thermal sensitivity varies across the phylogeny, we fitted not only the stable model of trait evolution [4] (Fig 5 in the main text) but also the free model [5] (Fig G) and the Lévy model [6] (Fig H).
Finally, to better understand how species explore the parameter space of E, W op, and T pk (whose phylogenetic heritability is ≈ 1; see Figs 2 and F), we combined our two r max datasets and divided the distributions of E, W op, and T pk into four discrete states (Fig I). Boundaries for these states were selected using the Jenks natural breaks clustering algorithm [7], as implemented in the BAMMtools R package (v. 2.1.6) [8]. To estimate the transition rates among states, we fitted the "all-rates-different" variant of the Mk model [9] with the fitMk function of the phytools R package (v. 0.6-60) [10].

Fig I. Transitions in the discretized parameter space of E, W op, and T pk. The width of the edges represents the natural logarithm of the transition rate between states. Transitions between non-neighbouring states are very common for E (which captures the rise of the TPC), rare for W op (which captures both the rise and the peak of the TPC), and never observed for T pk (which captures the peak of the TPC). It is worth pointing out that T pk also exhibits the lowest transition rates between neighbouring states among the three TPC parameters. These results are consistent with the phylogenetic heritability estimates shown in Fig 2 in the main text. The raw data underlying this figure are available at https://doi.org/10.6084/m9.figshare.12816140.v1.
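For readers who want to reproduce the discretization step outside R, below is a self-contained dynamic-programming implementation of Fisher-Jenks natural breaks. This is not the BAMMtools code, but it optimizes the same criterion (minimal within-class sum of squared deviations), and the simulated input merely stands in for the real E estimates.

```python
import numpy as np

def jenks_breaks(values, n_classes):
    """Fisher-Jenks natural breaks via dynamic programming, O(k * n^2)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    csum = np.concatenate([[0.0], np.cumsum(x)])
    csum2 = np.concatenate([[0.0], np.cumsum(x * x)])

    def ssd(i, j):
        """Within-class sum of squared deviations of x[i..j] (inclusive)."""
        m = j - i + 1
        s = csum[j + 1] - csum[i]
        return (csum2[j + 1] - csum2[i]) - s * s / m

    cost = np.full((n_classes + 1, n), np.inf)
    split = np.zeros((n_classes + 1, n), dtype=int)
    for j in range(n):
        cost[1, j] = ssd(0, j)
    for k in range(2, n_classes + 1):
        for j in range(k - 1, n):
            for i in range(k - 1, j + 1):  # the last class is x[i..j]
                cand = cost[k - 1, i - 1] + ssd(i, j)
                if cand < cost[k, j]:
                    cost[k, j] = cand
                    split[k, j] = i
    # Walk back through the optimal splits to recover the break values
    breaks, j = [x[-1]], n - 1
    for k in range(n_classes, 1, -1):
        i = split[k, j]
        breaks.append(x[i - 1])
        j = i - 1
    breaks.append(x[0])
    return breaks[::-1]

# Simulated right-skewed values standing in for real thermal-sensitivity (E) data
rng = np.random.default_rng(1)
fake_E = rng.lognormal(mean=-0.4, sigma=0.5, size=300)
print(jenks_breaks(fake_E, 4))  # boundaries of four discrete states
```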
C.2 Analyses of the dataset of phytoplankton TPCs after excluding Cyanobacteria
Removing Cyanobacteria from the phytoplankton dataset led to qualitatively identical results in our phylogenetic analyses (Figs J and K).

Fig J. Phylogenetic heritabilities of TPC parameters of eukaryotic phytoplankton. The main differences between these results and those obtained with the entire phytoplankton dataset (Fig 2A in the main text) were that, here, ln(E) and ln(B pk) have slightly higher/lower phylogenetic heritabilities, respectively. The former is expected, as the thermal sensitivity distribution of Cyanobacteria is very similar to that of Dinophyta (Fig 4 in the main text), despite the long evolutionary distance between them; therefore, the exclusion of Cyanobacteria would necessarily increase the phylogenetic heritability of thermal sensitivity. Similarly, ln(B pk) in prokaryotes is more phylogenetically heritable than in phytoplankton (Fig 2 in the main text), explaining the further decrease in its phylogenetic heritability when Cyanobacteria are excluded. The data underlying this figure are available at https://doi.org/10.6084/m9.figshare.12816140.v1.
C.3 Analyses of the net photosynthesis rate and respiration rate TPC datasets
The visualization of the evolution of the thermal sensitivity of net photosynthesis rate and respiration rate revealed patterns similar to those of the thermal sensitivity of r max (Fig 6 in the main text). Thermal sensitivity values do not evolve gradually and tightly around a central value (θ), but explore large parts of the parameter space due to bursts of rapid evolution.
D.2 Fitted models using latitude as a continuous predictor
We rejected models that had one or more non-intercept coefficients with a 95% HPD interval that included zero. We then used the DIC to identify the most appropriate model among those remaining.

[Figure caption] Distribution of minimum generation time estimates for phytoplankton and prokaryotes. Data points were obtained by taking the inverse of all B pk estimates. Horizontal bars represent the phylogenetically corrected median values, i.e., the inverse of the intercept of B pk from the multi-response regression models that we fitted with MCMCglmm (see the "Estimation of phylogenetic heritability for all TPC parameters using MCMCglmm, Rphylopars, and BayesTraits" subsection of the Methods in the main text). The data underlying this figure are available at https://doi.org/10.6084/m9.figshare.12816140.v1.
F List of nucleotide sequences used for phylogeny reconstruction

Table D. Species names and Accession IDs of small subunit rRNA gene sequences that were used in this study.

Table E. Species names and Accession IDs of cbbL/rbcL gene sequences that were used in this study. | 2019-08-16T06:18:24.245Z | 2019-07-23T00:00:00.000 | {
"year": 2020,
"sha1": "8f58ad7e82b24ef47f2e666c55abe19c31f10918",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosbiology/article/file?id=10.1371/journal.pbio.3000894&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1fa3359800eff90288430e5ee03cbe51190a279a",
"s2fieldsofstudy": [
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
28371229 | pes2o/s2orc | v3-fos-license | Predictive models for estimating visceral fat: The contribution from anthropometric parameters
Background Excessive adipose visceral tissue (AVT) represents an independent risk factor for cardiometabolic alterations. The search continues for a highly valid marker for estimating visceral adiposity: a simple, low-cost tool able to screen individuals who are at high risk of being viscerally obese. Objective The aim of this study was to develop a predictive model for estimating AVT volume using anthropometric parameters. Methods A cross-sectional study involving overweight individuals whose AVT was evaluated (using computed tomography, CT), along with the following anthropometric parameters: body mass index (BMI), abdominal circumference (AC), waist-to-hip ratio (WHpR), waist-to-height ratio (WHtR), sagittal diameter (SD), conicity index (CI), neck circumference (NC), neck-to-thigh ratio (NTR), waist-to-thigh ratio (WTR), and body adiposity index (BAI). Results 109 individuals with an average age of 50.3±12.2 were evaluated. The predictive equation developed to estimate AVT in men was AVT = -1647.75 + 2.43(AC) + 594.74(WHpR) + 883.40(CI) (R² adjusted: 64.1%). For women, the model chosen was: AVT = -634.73 + 1.49(Age) + 8.34(SD) + 291.51(CI) + 6.92(NC) (R² adjusted: 40.4%). The predictive ability of the equations developed, relative to the AVT volume determined by CT, was 66.9% for males and 46.2% for females (p<0.001). Conclusions A quick and precise AVT estimate, especially for men, can be obtained using only AC, WHpR, and CI for men, and age, SD, CI, and NC for women. These equations can be used as a clinical and epidemiological tool for overweight individuals.
Introduction
The distribution of anomalous body fat is recognized as an important predictor of cardiovascular risk [1,2]. Abdominal adipose tissue includes subcutaneous and visceral fat deposits that, when in excess, pose particular risks to metabolic and hemodynamic parameters [3]. Robust evidence connects visceral obesity to a proatherogenic state [1-4], highlighting the importance of quantifying it for estimating metabolic risk and stratifying cardiovascular risk in patients in clinical practice.
The possibility of selectively measuring adipose visceral tissue (AVT) and subcutaneous tissue (AST) with due accuracy and reliability has been a notable contribution that has revolutionized the field of body composition [1]. Only imaging scans are able to quantify subcutaneous fat separately from visceral fat [5,6]. Thus, computed tomography (CT) represents the "gold standard" for this type of evaluation [3,6]. However, its use is limited in clinical practice and in evaluating large population groups, due to its high cost and the potential risk of radiation exposure [7]. These limitations have meant that only some clinical studies adopt this diagnostic exam for evaluating visceral obesity levels, and consequently for estimating the predictive value that this type of fat has in determining metabolic and cardiovascular alterations.
The search continues for a highly valid marker for estimating visceral adiposity that is a simple, low-cost tool able to screen individuals who are at high risk of being viscerally obese. The usefulness of anthropometric indicators as "proxies" for indirectly estimating visceral fat depends on the degree to which they correlate with reference methods, i.e., those that provide a direct measure of AVT, since these allow it to be differentiated from subcutaneous abdominal fat [3]. The results regarding the superiority of one parameter over others are still very controversial. While some studies identify abdominal circumference (AC) as a better indirect indicator of intra-abdominal fat and cardiovascular risk than the body mass index (BMI) and waist-to-hip ratio (WHpR) [8-10], others indicate better performance for WHpR [11]. Moreover, other parameters suggested in the literature have not been effectively tested with regard to their predictive ability for AVT, such as the waist-to-height ratio, neck circumference, conicity index, neck-to-thigh ratio, waist-to-thigh ratio, sagittal diameter, sagittal index, and body adiposity index.
Some authors have demonstrated the inappropriateness of anthropometric methods in estimating AVT when used in isolation. However, when these variables are included in a regression model, the precision of estimates can be optimized [2,12]. Thus, the aim of this study was to develop a predictive model for estimating visceral fat volume using anthropometric parameters that can be feasibly used in clinical practice.
Research design and methods
A methodological study was conducted in which outpatients from a public cardiology referral hospital in the Northeast of Brazil were recruited. At this outpatient clinic, the population seen is predominantly composed of individuals with non-infectious chronic illnesses, including systemic arterial hypertension, diabetes mellitus, metabolic syndrome, and dyslipidemia.
The sample was constructed based on voluntary adhesion, comprising overweight individuals of both sexes aged ≥20 years. Excluded were individuals with hepatitis and/or splenomegaly, ascites, or recent abdominal surgery, pregnant women, and women who had given birth up to 6 months before screening, all characteristics that can influence intra-abdominal and/or anthropometric measures. Also considered ineligible were individuals with physical limitations (amputation of a limb) that made obtaining anthropometric measures impossible. Excess weight was established based on a BMI ≥ 25 kg/m² for adults and a BMI ≥ 27 kg/m² for seniors [13].
Considering an α error of 5% and a β error of 20%, with an estimated average correlation between anthropometric variables and AVT of 0.5 (p) and a variability of 0.15 (d), and using the formula n = [(Z α/2 + Z β)² × p × (1 − p)] / d², a minimum sample size of 88 individuals was obtained. To allow for potential losses, 20% was added to the sample, resulting in 110 sample units.
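The sample-size arithmetic can be verified in a few lines; this sketch simply evaluates the quoted formula, with the Z quantiles taken from the standard normal distribution (the exact rounding convention used by the authors is assumed).

```python
from scipy.stats import norm

alpha, beta = 0.05, 0.20
p, d = 0.5, 0.15  # expected correlation and variability, as stated in the text

z_alpha = norm.ppf(1 - alpha / 2)  # ~1.96 (two-sided)
z_beta = norm.ppf(1 - beta)        # ~0.84 (one-sided, for 80% power)

n = (z_alpha + z_beta) ** 2 * p * (1 - p) / d ** 2
print(f"n = {n:.1f}")                       # ~87.2, rounded up to 88
print(f"with 20% added: {88 * 1.2:.0f}")    # ~106; the study recruited 110
```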
Adipose visceral and subcutaneous tissues were evaluated using computed tomography (CT) with a Philips Brilliance 10-slice scanner (VMI Indústria e Comércio Ltda, Lagoa Santa, MG, Brazil). The exam was carried out by a single observer (a medical radiologist) with the patients fully fasted for four hours. The tomographic cross-section was obtained with radiographic parameters of 140 kV and 45 mA, at the level of the lumbar vertebra L4, with a thickness of 10 mm. The total area of abdominal fat and the visceral fat area were outlined manually with a free cursor contouring each region. The entire skin surface was excluded from the marked area. The AVT area was determined taking the internal borders of the abdominal rectus, internal oblique, and lumbar quadrate muscles as limits, excluding the spine, and including retroperitoneal, mesenteric, and omental fat. All fatty areas were expressed in cm². Adipose tissue was identified using density values of −50 to −250 Hounsfield units [14,15].
Weight and height were measured according to the techniques prescribed by Lohman, Roche, and Martorell [16], using electronic scales (Welmy, Santa Bárbara d'Oeste, SP, Brazil) with a 150 kg capacity and 100 g divisions, with an attached stadiometer of 1 mm precision. AC was measured with an inelastic metric tape, with 0.1 cm precision, directly over the skin at the midpoint between the last rib and the iliac crest. The bone landmarks of the last rib and iliac crest were located and palpated by the examiner at the level of the midaxillary line. The measuring tape was placed in a horizontal line around the abdomen at the location mentioned above, and special attention was paid to ensuring that the tape was parallel to the floor [17].
Hip circumference was obtained by measuring the hip region at the area of greatest protuberance [18]. NC was measured with an inelastic metric tape with the individuals standing up erect with their heads positioned in the Frankfurt horizontal plane and looking forward. The metric tape was placed perpendicularly over the neck axis at the mid-point of the cervical spine to the mid anterior of the neck. In men with laryngeal prominence, the NC was measured below the prominence [19].
The SD measurement was carried out with the individuals in a supine position, using an anthropometer to measure the distance between the dorsum in contact with the surface and the highest point of the abdomen, between the last rib and the iliac crest [20]. The thigh measurement was obtained on the right side of the body, at the mid-point between the inguinal fold and the proximal edge of the patella [3].
The BMI was obtained from the equation Weight (kg)/Height (m)², and the WHpR was determined as the ratio of the abdominal (cm) to hip (cm) measurements. The WHtR was evaluated as the ratio between abdominal circumference (cm) and height (cm). For the CI calculation, the waist circumference and height measurements, expressed in meters, and body weight (kg) were used in the following equation [21]: CI = Waist circumference (m) / {0.109 × √[Body weight (kg)/Height (m)]}. The NTR was determined as the ratio between neck circumference (cm) and thigh circumference (cm). The WTR was obtained as the ratio between waist circumference (cm) and thigh circumference (cm) [3,22]. The SI was obtained as the ratio between sagittal diameter and thigh circumference: SD (cm)/Thigh circumference (cm) [23]. The BAI was obtained using the equation [Hip circumference (cm)/Height (m)^1.5] − 18.
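The index definitions above translate directly into code. The sketch below computes each index from raw measurements; the input values describe a hypothetical overweight adult and are for illustration only.

```python
import math

def indices(weight_kg, height_m, waist_cm, hip_cm, neck_cm, thigh_cm, sagittal_cm):
    """Anthropometric indices as defined in the text."""
    waist_m = waist_cm / 100.0
    return {
        "BMI": weight_kg / height_m ** 2,
        "WHpR": waist_cm / hip_cm,
        "WHtR": waist_cm / (height_m * 100.0),
        "CI": waist_m / (0.109 * math.sqrt(weight_kg / height_m)),
        "NTR": neck_cm / thigh_cm,
        "WTR": waist_cm / thigh_cm,
        "SI": sagittal_cm / thigh_cm,
        "BAI": hip_cm / height_m ** 1.5 - 18.0,
    }

# Hypothetical overweight adult, for illustration only
example = indices(weight_kg=88.0, height_m=1.68, waist_cm=102.0,
                  hip_cm=108.0, neck_cm=38.0, thigh_cm=54.0, sagittal_cm=23.0)
for name, value in example.items():
    print(f"{name}: {value:.3f}")
```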
For each anthropometric point evaluated, a double measure was obtained by a trained examiner. When the difference calculated between the measures was greater than 0.1 cm or 0.1kg, a third measurement was carried out. The final measurement considered was the average between the two closest values.
The study protocol was guided by the ethical standards for research involving human beings, set out in National Health Council resolution 466/12, and was submitted for evaluation by the University of Pernambuco (UPE) Committee on Ethics and Research with Human Beings, and approved under protocol number 271.400/2013. The individuals were previously informed of the research objectives, as well as the methods adopted, and with their agreement, they signed an informed consent form.
The data were analyzed with the help of the Statistical Package for Social Sciences-SPSS program, version 13.0 (SPSS Inc., Chicago, IL, USA). The continuous variables were tested with regards to distribution normality using the Kolmogorov Smirnov test, and as they presented a normal distribution they were described in average and standard deviation form. For the description of proportions, an approximation of the binomial distribution to the normal distribution was carried out using a confidence interval of 95%. The Student t test for independent samples was used to compare between averages of the anthropometric parameters and visceral fat between sexes. The proportions were compared using the Pearson Chi Squared test.
In the multivariate analysis a stepwise multiple linear regression was used for age and anthropometric variables as independent variables (or predictors) and AVT was used as a response variable. A backward regression analysis was adopted for the model and the Wald test was used to verify the statistical significance of the model.
The anthropometric parameters that presented a connection with AVT in the univariate analysis were included in the multiple regression and the models in which the variables presented a VIF (variance inflation factor)<10 [24] were considered. The variables with superior VIF were taken from the regression and a new model was constructed without them. Simple linear regression was used to evaluate the explanatory power of the predictive equation for AVT in relation to AVT volume determined by CT. Statistical significance was established when the p value<0.05.
Results
110 patients were recruited, and after eliminating one loss, 109 individuals composed the final study sample. The average age was 50.3(±12.2), varying from 20 to 75. There was a predominance of females (74.3%; CI 95% :65.0-82.2) and the BMI varied from 25kg/m 2 to 45kg/m 2 . No statistically significant difference was verified with relation to age distribution, and prevalence of DM and SAH between sexes. The men presented greater absolute and relative AVT (p<0.001), when compared to the women ( Table 1). The sample's racial composition was 38.5% white, 10.1% black and 51.4% brown.
Higher averages for the anthropometric parameters that reflect body fat distribution were observed in the males (AC, WHpR, SD, CI, NC, WTR, NTR), when compared to the women. However, when the BAI was evaluated, which reflects the percentage of body fat via a mathematical model that uses hip circumference and height measures, a higher value was verified among the women (p<0.001) ( Table 2).
In the multiple regression analysis, five models were generated for males and four for females. The model that included the AC, WHpR, and CI variables was considered the best predictive model for AVT in men: AVT = -1647.75 + 2.43(AC) + 594.74(WHpR) + 883.40(CI) (R² adjusted: 64.1%) (Table 3). For women, the chosen model was: AVT = -634.73 + 1.49(Age) + 8.34(SD) + 291.51(CI) + 6.92(NC) (R² adjusted: 40.4%) (Table 3).
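The two fitted models can be packaged as a single prediction function. The coefficients below are exactly those reported above; the function and parameter names are hypothetical, the example inputs are invented, and the output is the estimated AVT (cm², per the CT protocol described in the Methods).

```python
def estimate_avt(sex, ac_cm=None, whpr=None, ci=None,
                 age=None, sd_cm=None, nc_cm=None):
    """Predicted visceral adipose tissue from the sex-specific models."""
    if sex == "male":
        return -1647.75 + 2.43 * ac_cm + 594.74 * whpr + 883.40 * ci
    if sex == "female":
        return -634.73 + 1.49 * age + 8.34 * sd_cm + 291.51 * ci + 6.92 * nc_cm
    raise ValueError("sex must be 'male' or 'female'")

# Hypothetical inputs, for illustration only
print(estimate_avt("male", ac_cm=102.0, whpr=0.98, ci=1.30))
print(estimate_avt("female", age=52, sd_cm=22.0, ci=1.25, nc_cm=36.0))
```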
A VIF <10 was defined as a criterion for model selection, indicating that there was no collinearity bias. The VIF of the variables included in the regression model for males varied from 1.31 to 1.76, while in females it was 1.11 to 1.32.
The predictive ability of the equations developed was 66.9% for males and 46.2% for females in relation to AVT volume determined by CT (p<0.001), as can be observed in Figs 1 and 2, respectively.
Discussion
In this study simple equations were developed for predicting AVT based on anthropometric measures and indices that are easy to obtain and can be feasibly reproduced in clinical practice and in evaluating large population groups. These equations can be used to estimate AVT area in overweight individuals of both sexes aged between 20 and 75. Considering that AVT constitutes an independent risk factor for cardiometabolic alterations, estimating this abdominal adipose tissue sub-compartment represents an important tool for screening individuals at risk of being viscerally obese.
There are not a large number of equations for predicting AVT area available in the literature. Moreover, the results from studies cannot be rigorously compared, given the different characteristics of the populations investigated. Therefore, generalization of the applicability of a predictive model for estimating bodily compartments should be made with great caution, observing the age, sex, adiposity level, and racial characteristics of the population in which it has been validated.
The greater AVT concentration in men, for the same BMI, age, and subcutaneous adiposity level, reveals a greater predisposition in men to accumulating fat viscerally, a result consistent with previous investigations [20,25,26]. Thus, considering the notable differences in body-fat distribution patterns and in AVT accumulation, the need for different predictive equations for the sexes is evident, or at least for sex to be included as a variable in the model. The predictive equation developed for males presented a higher prediction level (64.1%) than the regression model obtained for females (40.4%), and was relatively similar to previously published results [12,27,28]. Goel et al. [12], evaluating 171 Asians with an average age of 32.2 years and an average BMI of 22.9 kg/m², developed an equation with a predictive ability of 52.9%. Brundavani et al. [27] described a model with 74% prediction in men aged from 40 to 79.
It is important to consider that, although the equation proposed for women was only able to explain 40% of AVT variability, in settings where visceral fat cannot be evaluated using imaging methods, applying such an equation could be an alternative strategy for screening individuals at risk of being viscerally obese.
Statistically, the best predictive equations for AVT in our study involved three variables for men and four for women. The number of variables inserted into a regression model represents an important aspect to be considered when selecting a predictive equation, considering that with each variable added to the model a potential source of error is inserted into the estimate, limiting its applicability in practice. Thus, we recommend the models with the smallest number of variables involved. Adding more variables would make the model more complex without adding any significant increase to the estimate. Other authors have also reported that the inclusion of more than three predictive variables for estimating AVT increased the standard deviation and did not result in an improvement in the model's explanatory power [2,29].
The final predictive model for males included AC, WHpR, and CI. The CI incorporates three important measures: weight, height, and AC, the latter being common to the other parameters. It was demonstrated that this index can be quite sensitive in detecting visceral obesity, especially in men, and can detect alterations in fat distribution, allowing for comparisons between individuals that have different body measures of fat and height [5]. WHpR, in turn, has also been listed as an important predictor of AVT. However, these findings are controversial, with it being observed in some results that this parameter presented a strong correlation with AVT [11] and it being described in others that this indicator can represent subcutaneous fat much more than visceral fat [5].
Some authors have indicated a high correlation between AC and WHpR, so the two predictors are rarely used in the same estimation model, in order to avoid collinearity problems, which would affect the regression estimate. In our study, both variables were included in the predictive model for men, but the VIF of the equations selected for both sexes was lower than 2.0, justifying their retention. A VIF > 10 increases the possibility of collinearity among predictor variables and may decrease the regression model's reliability, which was not observed in our results.
Age, SD, CI, and NC were the parameters inserted into the predictive equation developed for women in this investigation. Age is a very important variable for evaluating body composition, considering the physiological modifications that accompany the ageing process, in which a reduction in fat free mass and an increase in total fat mass are observed, with a notable increase in fat stored in the intra-abdominal and intra-muscular anatomical sites, instead of in the subcutaneous region, as generally occurs in young adults [30]. Therefore, the inclusion of age can indicate that the model is able to predict AVT variations that can occur paripassu with age progression. The insertion of age into the female model reproduces some of the previous results that have aimed to estimate AVT [31,32], with age appearing, in fact, to interfere in determining AVT.
Some evidence indicates that SD is the anthropometric parameter with the greatest power to explain AVT variability [5,20,32]. SD represents abdominal height, constituting a simple measure with good reproducibility and accuracy, based on the fact that in individuals in the dorsal decubitus position, visceral fat accumulation maintains abdominal height in the sagittal direction, while subcutaneous fat is reduced because it spreads to the sides under the force of gravity [20,33].
The relationship between NC and AVT has not yet been extensively evaluated. One investigation carried out by Yang et al [34] indicated that NC was a powerful marker of visceral fat quantity diagnosed by CT. This possible relationship has been attributed to the fact that systemic free fatty acids are mainly determined by fat in the upper part of the body, it thus being suggested that fat deposited in the neck region could play an important role in the pathogenesis of cardiovascular risk factors, especially in obese individuals [35].
The anthropometric parameters included in previously validated predictive models vary and seem to depend on the characteristics of the population for which they were validated. Nagai et al. [36] developed and validated an equation to predict AVT in men with an average age of 44.4 ±18.4 using WHtR and triglyceride serum level as variables (AVT = 857.66 x WHtR + 0.22 x TG - 378.31), presenting high sensitivity and specificity (0.833 and 0.900, respectively). Other equations that have yielded accurate AVT estimates were proposed by Ran et al. [37], using the variables AC and age for males and WHpR, weight, and age for females, and by Liu et al. [38], who developed a model containing BMI and AC to estimate the visceral area in male type 2 diabetic patients.
When the equations were applied to this study sample and the values compared with the reference method (CT), we verified good explanatory power of the predictive models in estimating AVT (r² = 66.9% for males and r² = 46.2% for females). However, it is worth noting that cross-validation would be important for confirming these findings. Some limitations should be considered when interpreting the data presented. One of these is the fact that the participants in the study had a high level of adiposity, which limits application of the equations to individuals with different adiposity levels. The possibility of having an equation available to estimate AVT in overweight individuals is particularly important for the follow-up of these individuals in clinical practice and as a monitoring tool during therapeutic interventions.
The main inconvenience in using predictive equations relates to the fact that they are validated in specific groups, therefore limiting their use in different populations, ethnicities, age groups, and adiposity levels. It is important for these equations to be validated for future use as AVT predictors and their applicability compared with preexisting equations.
Another aspect that should be considered is that the Brazilian population has specific racial characteristics, marked by great miscegenation between black and white races, and caution should be used in employing the equation for populations of other ethnicities. Thus, generalized use of the equation for populations of other races should be preceded by validation in different groups.
Conclusions
This study showed that a quick and precise AVT estimate, especially for men, can be obtained using only AC, WHpR, and CI for men, and age, SD, CI, and NC for women. These equations can be used as a clinical and epidemiological evaluation tool for overweight individuals, allowing AVT volume to be quantified based on anthropometric measures.
Validation of the predictive models developed in this study is recommended in other population groups so that the possibility of their use can be broadened. | 2018-04-03T01:11:52.269Z | 2017-07-24T00:00:00.000 | {
"year": 2017,
"sha1": "3f7c80137809057d5f59aa9329fd512769a43565",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0178958&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3f7c80137809057d5f59aa9329fd512769a43565",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
259650338 | pes2o/s2orc | v3-fos-license | Post COVID dyspnoea and scope of homoeopathy
The effects of the ongoing COVID-19 pandemic have rippled through the whole health care system as well as various aspects of human existence. Although the fact that the number of recovered persons outnumbers the number of deaths is promising, the post-COVID status of people who have recovered from the infection, in terms of their physical, mental, and sociocultural well-being, is a menace to the world's future. Relatively little is known about the post-recovery period of COVID-19 and its long-term outcome, since it has been studied over only a short period and ideas about it are still evolving. This is an attempt to find out the scope and limitations of homoeopathy in post-COVID dyspnoea, from available literature and data from different research articles.
Introduction
A degree of breathlessness is common after acute COVID-19. Studies have shown that survivors of COVID-19 acute respiratory distress syndrome are at risk of long-term impairment of lung function [1]. Some patients have persistent or recurring symptoms, or a variety of other symptoms, as an after-effect of the infection, including fatigue, exertional dyspnoea, cough, loss of taste or smell, headache and body ache, and confusion. Currently there is no universally accepted definition of the post-COVID period. A more clinically oriented and practical one is the following: post-acute COVID-19 ("long COVID") seems to be a multisystem disease with manifestations extending beyond 3 weeks from the onset of first symptoms, while chronic COVID-19 extends beyond 12 weeks [1]. Even patients with milder forms of COVID-19 can have persistent symptoms. Persistent viraemia due to a weak or absent antibody response, relapse or reinfection, inflammation and dysregulated immune responses, and mental factors such as post-traumatic stress disorder may all contribute. Long-term respiratory, musculoskeletal, and neuropsychiatric sequelae have been described for other coronaviruses (SARS and MERS), and these have pathophysiologic parallels with post-acute COVID-19 [2].
Literature review
In a multistate telephone survey of symptomatic adults who had a positive outpatient test result for SARS-CoV-2 infection, conducted in a multistate health care systems network in the United States in March-June 2020, 35% had not returned to their usual state of health when interviewed 2-3 weeks after testing. Shortness of breath was among the most common symptoms that failed to resolve after the infection; the most common were cough and fatigue [1]. A cross-sectional observational study conducted at Mount Sinai Hospital, entitled "Post-acute COVID-19 syndrome negatively impacts health and wellbeing despite less severe acute infection", found that dyspnoea was a predominant symptom during acute COVID-19 infection and persisted in 60% of cases after recovery [4]. A study conducted at a post-acute care clinic in Rome, Italy, assessed persistence of symptoms in patients who had been hospitalized for COVID-19 and who met World Health Organization (WHO) criteria for discontinuation of quarantine, including two negative test results for SARS-CoV-2 taken 24 hours apart. It found that 87.4% of patients who had recovered from COVID-19 reported persistence of at least one symptom, particularly fatigue and dyspnoea [5]. In an observational cohort study, 78 of 100 patients who had recovered from COVID-19 had abnormal findings on cardiovascular MRI (a median of 71 days after diagnosis), and 36 of those reported dyspnoea and unusual fatigue [6]. Studies have also described membrane formation, alveolar septal fibrous proliferation, and pulmonary consolidation in discharged survivors of COVID-19. Impairment of diffusion capacity is the most common abnormality of lung function, followed by restrictive ventilatory defects; both are associated with the severity of the disease [9].

Coronavirus targets alveolar epithelial cells. Cellular changes occurring with ageing, such as genomic instability, mitochondrial dysfunction, and epigenetic modification, might reduce these cells' ability to respond effectively to viral encounter, triggering pathways that promote both dysregulated repair and fibrosis. Since inflammation can lead to fibrosis in several forms of interstitial lung disease, treatment often targets inflammation [7]. COVID-19 is an inflammatory and hypercoagulable state, with an increased risk of thromboembolic events [1], so in post-COVID dyspnoea, pulmonary fibrosis and pulmonary embolism have to be considered [2]. Perhaps 20% of patients admitted with COVID-19 have clinically significant cardiac involvement; occult involvement may be even commoner. Cardiopulmonary complications include myocarditis, pericarditis, myocardial infarction, dysrhythmias, and pulmonary embolus; they may present several weeks after acute COVID-19. Left ventricular systolic dysfunction and heart failure can occur after COVID-19. They are commoner in patients with pre-existing cardiovascular disease, but have also been described in young, previously active patients. Various pathophysiological mechanisms have been proposed, including viral infiltration, inflammation and microthrombi, and down-regulation of ACE-2 receptors [1]. Emerging data from different studies show that adult patients with acute COVID-19 infection can develop a hyperinflammatory syndrome [8].
Approach towards a patient with post COVID dyspnoea
Patients with post-COVID dyspnoea should have a routine blood examination to rule out anaemia. Multisystem inflammatory syndrome has to be ruled out in these patients, and they should be tested for markers of inflammation and coagulopathy (CRP, serum ferritin, interleukin-6, D-dimer).
A chest X-ray/HRCT is needed to rule out pulmonary fibrosis. Patients should also have an echocardiogram to rule out cardiomyopathy. Patients who had asymptomatic COVID-19 infection should have an antibody test to confirm them as post-COVID cases. Severe breathlessness, which is rare in patients who were not hospitalised, may require urgent referral. Breathlessness tends to improve with breathing exercises. Pulse oximeters may be extremely useful for assessing and monitoring respiratory symptoms after COVID-19. Self-monitoring of oxygen saturation over three to five days may be useful in the assessment and reassurance of patients with persistent dyspnoea in the post-acute phase, especially those in whom baseline saturations are normal and no other cause for dyspnoea is found on thorough evaluation [1]. Rehabilitation treatment plans should be individualized according to the patient's needs, taking their comorbidities into consideration. Eating a healthy diet, engaging in physical exercise and getting good sleep will improve outlook and feelings of well-being.
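As a rough illustration of the self-monitoring idea above, the following Python sketch summarises a few days of patient-recorded SpO2 readings. The 94% flag threshold is an assumption chosen for the example, not a cutoff taken from this article; clinical thresholds must come from local guidance.

# Illustrative sketch: summarising 3-5 days of home pulse-oximetry readings.
# The flag threshold of 94% is an ASSUMPTION for illustration only.

def summarise_spo2(readings, flag_below=94):
    # readings: list of daily SpO2 percentages recorded by the patient
    low = [r for r in readings if r < flag_below]
    return {
        "days": len(readings),
        "min": min(readings),
        "mean": round(sum(readings) / len(readings), 1),
        "flagged_low": low,  # values worth discussing with a clinician
    }

print(summarise_spo2([97, 96, 95, 93, 96]))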
Scope of homoeopathy
Since persistent inflammation can affect different organs, an individualised approach based on the principles of Homoeopathy may bring about essential changes and control inflammation. Based on the site of pathology, organ-specific medicines, especially medicines having action on the cardiopulmonary system, can be considered while treating post-COVID dyspnoea. Some of the important medicines mentioned in the Homoeopathic Materia Medica for chronic cardiorespiratory diseases which can be considered for post-COVID dyspnoea include the following [10,11,12].
Antimonium arsenicosum: Catarrhal pneumonia associated with influenza.
Coca: Palpitation, dyspnoea, anxiety, sleeplessness.
Ammonium carbonicum: Fat persons with weak heart, wheezing, feeling tired. Much oppression in breathing, worse after any effort.
Lachesis: Sensation of suffocation and strangulation on lying down. Feels he must take a deep breath.
Carbo vegetabilis: Persons who have never fully recovered from the acute exhausting effects of some previous illness.
Kalium carbonicum: Weakness, very sensitive chest with stitching pains. Palpitation and burning in the heart region.
Antimonium tartaricum: Oedema and impending paralysis of lungs. Great rattling of mucus but very little is expectorated.
Calcarea arsenica: Dyspnoea with feeble heart. Slightest emotion causing palpitation.
Though Homoeopathic literature mentions several medicines for the management of chronic cardiorespiratory problems, including pulmonary fibrosis, post-COVID dyspnoea should be managed based on the severity of the problem. Cases requiring immediate attention should be referred to higher centres. In sub-acute and chronic cases, an integrated approach along with standard therapy can be considered to minimize the progression and severity of the disease, and also to improve the quality of life of patients. | 2021-04-16T19:17:24.266Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a64d18450ff8ee12adcc1410a7f2d0799fa9db98",
"oa_license": null,
"oa_url": "https://www.homoeopathicjournal.com/articles/302/5-1-2-332.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a64d18450ff8ee12adcc1410a7f2d0799fa9db98",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": []
} |
150405835 | pes2o/s2orc | v3-fos-license | At the Still Point : The Heart of Conversion
Though religion and performance are often considered together in ritual and liturgy, they may join in other contexts as well. This paper explores the “still point” described in the poet T.S. Eliot’s Four Quartets as playing a role not only in poetry and dance, but equally in moments of religious conversion. Three such moments are explored, framed by theoretical considerations of dance, conversion, and attentiveness to the “here” and “now” in both. These points of space and time are the objects of an intense focus that creates a center to the experience and thus the possibility of the conversionary turn.
In Ecumenica: Journal of Theater and Performance, Caridad Svich glosses the concept of presence.
[I]n the art of art-making, in between the church and the brothel which is sometimes called theatre, we know indeed there are those who came before us . . .
Here. I stand. Look. Are you watching?
Here. I speak. Listen. Are you listening?
There are many kinds of seeing and listening in theatre. Sometimes in silence, we hear the world. 1 As this brief passage hints, theater is nothing so simple as spectacle; nor, for that matter, is church. Part of the complexity is presence itself, Svich says: "It is the hardest thing in the world sometimes: to be present. To truly be here, now. And not yesterday or tomorrow." 2 In the same issue of Ecumenica, we read about the term religion: "religion and theatre have kept close company because both are composed of disciplines by which people become what they weren't before." 3 To become what one was not, or to return again to what one might once have been, is a kind of conversion. The English word comes to us from the Latin verb convertere, "to turn around; to transform." 4 The convert is turned around toward something that might be new and unexpected, or lost and rediscovered; it might be hearing or glimpsing again those who were there before.
Let me say a bit about my approach, because it is not anthropological, nor anywhere within the social sciences, though these disciplines are the original ground of performance studies and still predominate there. There are some points of intersection: for instance, in recent decades studies of ritual as performance have been more concerned with the process and progress of the rite than with its script or fixed order, and movement, of course, is necessary to any sense of turning. 5 The still point of now and here, though, seems at once to make that movement possible and to halt it. Victor Turner emphasizes, in transitional or conversionary rites, the "anti-structural" nature of passage between states, so that rather than reinforcing cultural roles and practices, a rite may suspend a culture's past and future in a moment of potentiality. 6 Possibility is essential to the conversionary point as well; it is itself a making possible. What it makes possible, however, is not likely to be broadly cultural.

Approaching from the experience of the performer (the convert, the dancer, the poet), I am concerned instead with a paradox, with the point that both centers a circle and remains other to its motion. My approach is in some respects phenomenological, but it necessarily exceeds the well-bounded senses of subject and object that more traditionally ground phenomenology. It is theological, in a long tradition of theological approaches that find presence and absence together. Just as Svich considers presence by looking at and listening to absent sights and voices, so too I want to find the still point of motion, in a here that is both pinpointed and unfindable, in a now that was to come and will have gone.

I want to draw several lines of thought together and trace their turning around the mysterious point that is now, and here, and attended to so entirely that it can be conversionary. I want to ask, in the midst of this turning about, when it might be now, where it might be here.
In one of the most precise and influential early theories of time, Aristotle links time to motion. It is, he says, "something that belongs to movement," though it is not movement itself. Because we mark motion by the before and after of a body's placement at a location, he decides that time must be the "number of motion in respect of before and after." 7 Now marks a still point, and then now, again, marks another, and between them, time moves, or has moved. Aristotle writes,

it is only when we have perceived 'before' and 'after' in a motion that we say that time has elapsed. Now we mark them by judging that A and B are different, and that some third thing is intermediate to them. When we think of the extremes as different from the middle and the mind pronounces that the "nows" are two, one before and one after, it is then that we say that there is time, and this that we say is time. For what is bounded by the "now" is thought to be time. 8

The now that is necessary seems itself a still point, a zero dimension at the edge of the temporal line. And a zero point, too, in the array of lines that move through space: "The distinction of 'before' and 'after,'" Aristotle notes, "holds primarily, then, in place; and there in virtue of relative position." 9 Here, it is now.
At this point, the attention of our senses is concentrated. Daniel Heller-Roazen argues that the "principle at the root of the unity of the senses" in Aristotle's De Anima demands the unified moment of the now: "It is not possible [for the various senses to discern] at various times," Aristotle declares, concluding, "It is thus an undivided [principle] [that discerns] in an undivided time." 10 As Heller-Roazen glosses this quite difficult passage, "This is the principle that all sensation occurs in time and, more precisely, at one time in particular-namely, 'now.'" Further, as a point, "now . . . can be defined by its undivided presence alone." 11 Here and now are the points at which we listen and we look, points where we seek and sense presence. These points limit an expanse, a duration, but they collect, too, at a center. Alexander of Aphrodisias, in his own consideration of Aristotle's De Anima, uses the image of a circle to describe the way that different senses coincide: "For the straight (lines) drawn from the circumference of a circle to the centre all have the centre of the circle as their terminus, a single point; and this point, being one, is also in a way many, when it is taken as the terminus of each of the lines drawn from it." 12

It is not only Aristotle's logical physics that leads us to this point. Poetically, T.S. Eliot places the now-point not at the immeasurable division of a line, but like Alexander at the center of a dancing circle. In a famous passage from Burnt Norton, the first of his Four Quartets, Eliot writes,

At the still point of the turning world. Neither flesh nor fleshless;
Neither from nor towards; at the still point, there the dance is,
But neither arrest nor movement. And do not call it fixity,
Where past and future are gathered. Neither movement from nor towards,
Neither ascent nor decline. Except for the point, the still point,
There would be no dance, and there is only the dance. 13

Dance theorist André Lepecki describes the dancer's attention to this point as a kind of attending and perceiving that may underlie cognition, but is not itself cognitive 14 (in this, again, resembling Aristotle's unifyingly attentive sense of pinpoint presence). 15 Dance shares some of the elusiveness of conversion, both of them demanding an attention that one must leave in order to describe it, a now vexingly made present when it cannot be measured, a here that marks an edge and a center but takes up no space itself. So the still point that is now is necessary to the movement of time, bounding and centering though unmarked within it. It is time's opposite, but it is only through time that we can recognize its necessary existence, its presence here. Eliot acknowledges the elusiveness of the point:

Words strain,
Crack and sometimes break, under the burden,
Under the tension, slip, slide, perish,
Will not stay still. 16

How can motile words say a still point, after all?

5 See (Turner 1986, esp. 76f). Catherine Bell even points out that in some disciplines, work in performance studies shifts from an engagement with the action that follows or fails to follow a script, to a focus on "the very activity of the agent or artist [as] the most critical dimension and not the completion of the action." However, in religious studies that use the idea of performance the emphasis remains on "the execution of a preexisting script . . . or the explicitly unscripted dimensions of an activity in process." (Bell 1998, pp. 205-6). 6 (Turner 1982). 7 Aristotle (1930, p. 4.11). Reprinted and accessed online at the Internet Classics Archive, http://classics.mit.edu/Aristotle/physics.4.iv.html. 8 Ibid. 9 Ibid. 10 (Heller-Roazen 2007, p. 51). Citing Aristotle, De Anima, 3.2.426b24-29. Heller-Roazen's translation.
Now is the time that cannot be placed, or can only be a place without any extension, bounding the before and after of a moving body, a converting soul. Here is the place that cannot be other than now. A body in motion can only be placed when it is not moving, unless the stillness is within the movement itself. I can only say, there we have been: but I cannot say where.

And I cannot say, how long, for that is to place it in time. 17

11 (Heller-Roazen 2007, p. 52). 12 Alexander of Aphrodisias, "Explanation of a Passage from the Third [Book of the] De Anima, in which Aristotle Shows that There Is Something with which We Sense Everything Simultaneously," in (Alexander of Aphrodisias 1992-1994, p. 60). Cited in (Heller-Roazen 2007, p. 48). 13 (Eliot 1968, "Burnt Norton," §2, pp. 15-16). 14 (Heller-Roazen 2007, p. 50). 16 (Eliot 1968, "Burnt Norton," §5, p. 19). 17 Ibid., §2, p. 16.

I would like to mark the movement of the present discussion with two points. If they measured time between them, they would be before and after the rest of the text, where the ensouled body stands still. If the time measured were a circle and not a line, the two points would be one ("And all is always now," writes Eliot.) Within these framing moments, I want to mark a center, a here where dance and conversion might turn, in time and in space; the beginning and the end will draw themselves together, turning time around an eternity of now. Between, I shall describe three turns, three conversions that return to a point made new, as any now must be: here, where one sees, where one listens.
Let me draw dance and conversion more closely together, closer to the same point. Michel Foucault links conversion to asceticism. This is asceticism in the pre-Christian sense, though, "a matter of attending to oneself, for oneself: one should be, for oneself and throughout one's existence, one's own object," rather than of intentional deprivation or minimalism. Foucault continues, "Hence the idea of conversion to oneself (ad se convertere), the idea of an existential impulse by which one turns in upon oneself (eis heauton epistrephein). . . . [T]he impulse by which the soul turns to itself is an impulse by which one's gaze is drawn 'aloft', toward the divine element, toward the essences and the supracelestial world where they are visible." 18 Conversion is a revolutionary transformation, turning up, in, upon, and around. And attention itself, that religio-theatrical being-present, is at its heart.

Like ascetic practices, conversionary disciplines can be quite bodily. Pulling a disciplinary trio together, Susan Jones says of Eliot, "Presumably the traditions of ballet appealed to Eliot because its training required the subjection of the body to a rigorous physical discipline of the sort he equated with the spiritual discipline of religious acceptance." In dance, he saw an art that "offered, in its religious origins, a liturgical component . . ., a giving up of the entire body to the practice of the form . . .." 19 Philosopher and psychologist William James similarly declares that whether religious conversion is voluntary and deliberate or quite sudden, it always involves some element of self-surrender, a giving up, or a giving over. Often, the giving-over comes as a sudden joyful stillness in the midst of tumult. 20 Sometimes in silence the convert hears the Word. The dancer hears the music that moves the body. Are you listening?
Eliot is himself a convert, from permissively undogmatic Unitarianism to high church, nearly Catholic Anglicanism. As one might guess, then, he is a lover of religion and ritual as well as of dance. In fact, he seems to turn to them together. Daniel Albright finds that when Eliot's poetry turns to Christian themes, it turns as well to a "poetics of dancing," in which "movement grows precise and patterned, choreographed." 21 This formal precision is itself a movement drawing together motion and rest:

Only by the form, the pattern
Can words or music reach
The stillness, as a Chinese jar still
Moves perpetually in its stillness. 22

There is only the dance, but the dance is on many scales and levels at once, from the singular to the universal body:

The dance along the artery
The circulation of the lymph
Are figured in the drift of stars. 23

The formal pattern expands, contracts, and repeats. The turning returns. Eliot is a deeply philosophical convert; the source of this turning is Aristotle's unmoved mover, his highest divinity, itself unmoving,

Only the cause and end of movement
Timeless . . . 24

The play of sound and spectacle on the one hand, silence and stillness on the other, shows up with particular vividness in conversionary moments; not every conversion, to be sure. Birgitte Bøgh distinguishes several types of conversions: intellectual, experimental, affectional, revivalist, and the rare coercive conversion (brainwashing), all different from one another and from "mystical conversion," which "is characterized by a sudden and dramatic burst of insight, induced by visions, voices, or other paranormal experiences, high emotional content and an observable change in the subsequent behavior of the convert." 25 In this last form, stillness plays a particularly clear role, one that will weave through the conversionary examples that follow. Each stillness is movement too, each progression of time also a return. There is a point or moment of that stillness that at once eludes and demands expression, and that is essential to the turning about that defines conversion. The attended-to now is the stillness in the dancing movement, from what has been to what could become. From zero dimensions, it opens onto infinity. With Eliot's help, we can watch the conversionary turn, and listen for it, and find the dance in its intensity of attention to the still point at its heart. I offer here three rather different moments of conversion. Eliot began his quartets in England of the 1930s, walking through the autumnal rose garden of a ruined estate called Burnt Norton. 26 We begin our trio in the ancient Near East, on a mountain called Horeb.

18 (Foucault 1998, vol. 1, p. 96). 19 (Jones 2009, p. 37). Jones cites (Eliot 1928, p. xv). Jones notes that the "Dialogue" also refers to the "drama of the Mass," and that a character in it says that "the ballet is a liturgy of very wide adaptability" (p. xvi). 20 (James 1982, Lectures IX and X). 21 (Albright 1997, p. 285). 22 (Eliot 1968, "Burnt Norton," §5, p. 19).
The Still Small Voice
In the first book of Kings, the prophet Elijah has considerable success in his fight against the worshippers of Baal. He displays the superiority of Yahweh in a dramatic contest on Mt. Carmel (18.16-39), and ends a lengthy drought in the region (17.1-7, 18.1, 18.41-45). But after all of his successes, he faces a singular failure before the stubbornness of the Queen, Jezebel, who refuses to abandon her allegiance to the gods Baal and Asherah, and in fact threatens Elijah with death. In chapter 19, Elijah has escaped, but he is worn down and weary. With divine help, he has made it to a cave on Mt. Horeb, and he is just going to hang out there for a while, without much clue as to where, or how, he might go on (19:1-9).

As Elijah waits, the Lord informs him that he is "about to pass by," as if the moment must be marked before the motion can begin. A great deal of drama ensues: a wind so strong that it shatters the rocks, an earthquake that shakes the mountain, a raging fire. But contrary to our expectations, and perhaps Elijah's too, the Lord is not in any of these (19:11-12). "And after the fire came a gentle whisper" (19:12), which leads Elijah to the mouth of the cave, where the Lord asks him what he's doing there. This seems oddly undramatic after all of the displays of natural power. Unless we assume that the God of the Hebrew Bible has lost his theatrical flair, something interesting is going on here.

Other translations render the moment after the drama a bit differently: "the sound of a gentle blowing" (NASB), "a whistling of gentle air" (DR), more remarkably, "a sound of sheer silence" (NRSV), and, in the King James version perhaps most familiar to Anglophone ears, "a still small voice." Though "whisper" is the most idiomatic translation, the poetry of the King James may be the most literal, with the Hebrew word demamah implying both stillness and silence, and the related dumiyyah suggesting as well a silence that waits, not anxiously, but with attention. 27 One hardly needs to be attentive to notice gale force winds, stone-shattering earthquakes, raging fires. The voice of God, though, demands attention by its quiet, draws attention to a point just here, and speaks in the stillness of the surrounding world to the edge of a cave, at the threshold of inside and out.

This quiet is not undramatic after all, but it is certainly strange. It seems to partake of both silence and sound at once: a still voice, a silent sound, or the gentlest whispering breeze right at the edge of no sound at all. Elliot Wolfson emphasizes the paradox: "We can thus speak of the voice being heard through silence. There is no conflict between Elijah's qol demamah and Eliphaz the Temanite's demamah wa-qol eshma, 'I heard silence and a voice' (Job 4:16), for both relate to the voice without that gives expression to the silence within." 28 Lepecki writes of the "vibratile stillness" of dance as something comparable to the "sonorous silence" such as might be found in John Cage's music. 29

Not the stillness of the violin, while the note lasts,
Not that only, but the coexistence,
Or say that the end precedes the beginning,
. . . And all is always now. 30

Now on Mt. Horeb gathers the silence into the divine voice, the motion of the earth itself into the gentle, attentive stillness. After the paradoxical sound, God tells Elijah what is to come, and Elijah returns to the world, returns to his tasks as God's prophet (19.15ff).
Return, writes Thomas Finn, is an early Jewish sense of conversion, the Hebrew root shub conveying a sense of turning and moving similar to that of the Latin vertere. It implies a mutual return of God and his people toward one another. 31 Though return was readily understood, a real sense of "conversion" postdates Elijah and his work. Conversion proper becomes a possibility for his people only after they have spent time in exile and are then at a time of geographic return. This return combines with the intermarriages that had shifted Jewish identity a bit from ethnic toward cultic. Thus, in some communities a gentile who accepted monotheism, circumcision, and integration into the community could turn with the returned, could become a Jew, and thus a convert in a familiar sense (if not quite a sense fully equal with those already in the community). 32 Not all conversions turn from without; Finn notes that conversions occurred within Judaism as well, not from devotion to another god, but into deeper and often more ascetic forms of devotion, such as that of the Essenes or Johannites. 33 Even Elijah, after his effort to return Yahweh's people to their God, must be returned to his engagement on the Lord's behalf, drawn out of his despair and hiding not by spectacular display but by the sudden stillness that bounds it, the now on either side of motion, the now that was at the heart of the movement all along, the ground and the center of both nature's and Elijah's activities. His conversion returns him to himself and to his god. Finn calls the pair of conversionary types "conversion to Judaism and conversion to 'true' Judaism." 34 Elijah's turn is more like the latter, as he does not move outside of his faith, though he might stumble within it. Hearing the silence in the voice turns him back to the work of his god, to the world, to himself.

27 Strong's Hebrew Concordance, accessed at https://biblehub.com/str/hebrew/1827.htm and https://biblehub.com/str/hebrew/1747.htm. 28 (Wolfson 2005, p. 556, n. 189). 29 (Lepecki 2000, p. 346). 30 (Eliot 1968, "Burnt Norton," §5, p. 19). 31 (Finn 1997, pp. 20-22). 32 Ibid., pp. 91-99. 33 Ibid., pp. 92-107.
From Eden to Milan
Whether a conversion marked a turn from another devotion to Judaism or from a more casual to more stringent Jewish devotion, Finn writes, ritual was key. The conversion of a gentile to Judaism required circumcision. The requirement for a conversion within Judaism differed. "For [the Essenes at] Qumran, [conversion entailed] the Pentecostal rite of immersion . . .; for conversion to the Johannites, baptism." 35 The latter, the baptismal rite of the Johannite sect, becomes notable for its persistence into the part of Judaism that becomes Christianity. The desert ascetic John baptized a great many people into lives of intense devotion; some simply lived devoutly thereafter, and others became John's followers. 36 Finn writes, "In the synoptic gospels, Jesus steps onto the Palestinian stage as a disciple of John, receives John's baptism of repentance . . ., and becomes a co-worker, adopting John's mission and way of life." 37 The followers of Jesus appropriate the rite, though they shift its emphasis subtly from repentance to redemption. 38 Ancient baptism came in numerous forms and was understood in quite a range of ways. "But the link," Finn explains, "was the end of the age, the eschaton, especially repentance, forgiveness of sins, and the imminence of the last day." Purification through water is, he notes, "a symbol of eschatological expectations." 39 This eschatological baptism, for the early Christians, "symbolized the kingdom and their entry into it by rebirth." 40 We may think of baptism, including the adult baptism of converts, purely as a beginning, a first entry into a community or the possibility of a life redeemed from sin. Yet baptism here is an expectation of the last things.

In my beginning is my end. 41

And those last things return to the first. The "last things" expected by eschatology are death, judgement, and life again in a world to come, death and life circling into contact at their boundary points. Like a birth out of death, a resurrection, the baptismal time returns to begin again.

In my end is my beginning. 42

For early Christians, this sense of the end as a beginning is connected to the surprising bodily resurrection of Christ from the dead, a reversal of the order in which death follows life, once each, without variation. Perhaps accordingly, Easter was an especially popular baptismal date: like the older mysteries with which it shares its annual scheduling, Easter marks the vernal return of life. In the end, a beginning: a turning around, a newness built into the never finished cycle of the seasons.

Soon the most famous Christian conversions come to be marked less by the baptismal rite than by their own recurrence. The Egyptian hermit Anthony spends his life in the desert, resisting any temptation that comes, temptations that would lure him to his old life, such that he always renews the moment of renunciation by which he first devoted himself to his Christian god. In the 5th century, inspired by Anthony among many others, Augustine of Hippo records his own complicated and recurrent conversion in his work Confessions.

34 Ibid., p. 107. 35 Ibid., p. 107. 36 Ibid., p. 137. 37 Ibid., p. 137. 38 Ibid., p. 145. 39 Ibid., p. 140. "As a symbol of eschatological expectations, purification through water is rooted in the visions of Isaiah (1:16-17) and Ezekiel (36:25-28), and it is abundantly evident in works like the Dead Sea Scrolls and especially First Enoch . . ." Ibid., §5, p. 23.
As Augustine tells it, he had been intellectually persuaded by Christianity thanks to the efforts of Simplicianus and of Ambrose of Milan, two Christians who showed him unexpected depths in these scriptures that he had once thought simplistic. He had been inspired by stories of other converts, such as Anthony and Marius Victorinus. But despite the conviction of his intellect, he could not get his will to turn, could not make himself direct his desires toward God, a refusal he found profoundly exasperating. 43 As an intellectual, which he certainly was, he might have wanted to understand what he was doing before he could do it. However, as Karl Morrison writes, "for those who experience it, conversion is not so much understood as felt. The experience is aesthetic, which is also to say affective, and often instantaneous (as seeing a face at the window). Some measure of understanding may come later, but that is understanding of what memory retained of an event, or of a concatenation of events, that is over." 44 Once more we encounter the problem of cognition in relation to an intense but elusive perception. Augustine will come to understand both the problem and the experience, but only after the perception, which he cannot reach by knowing.
You are not here to instruct yourself, or inform curiosity,
Or carry report. 45

Augustine sets the stage for his own conversion with considerable literary flair. He is in Milan. He is in despair. And he is dramatic about it. "Then in the middle of that grand struggle in my inner house," he writes, "which I had vehemently stirred up with my soul in the intimate chamber of my heart, distressed not only in mind but in appearance, I turned to Alypius and cried out . . .." 46 His sensible friend Alypius regards him cautiously as Augustine heads from the house into the enclosed garden. The garden itself is a place of return. Augustine is certain that fallen humanity cannot get before its original sin, but he will put himself as close as possible to the place of paradise, where God has been heard, hoping to undo some measure of his self-inflicted exile.

You are here to kneel where prayer has been valid. 47

Augustine tells us that in the garden he paced, "tore my hair, . . . struck my forehead, . . . intertwined my fingers and clasped my knee . . .." 48 Alypius has followed him quietly, but as there arose in him "a vast storm bearing a massive downpour of tears," Augustine walks away, since "solitude seemed to me more suitable for the business of weeping." 49 But the lord was not in the storm. Augustine's emotional distress is downright exemplary, even before the torrents of tears; he is "deeply disturbed in spirit, angry with indignation and distress . . .." 50 He cannot master the perfect patience that would require him to surrender himself. In his impatience, in his "sickness and torture" he wants to be beyond his conversion, to place it before with himself after: "Inwardly I said to myself: Let it be now, let it be now." 51 Yet now is not to be commanded. The pinpoint moment offers no extension to grasp.

I said to my soul, be still, and wait without hope
For hope would be hope of the wrong thing; wait without love
For love would be love of the wrong thing; there is yet faith
But the faith and the love and the hope are all in the waiting.
Wait without thought, for you are not ready for thought
So the darkness shall be the light, and the stillness the dancing. 52

Here in the garden, Augustine seeks to master and overcome his lingering worldly, especially sexual, desires. But the still voice, opening infinity in the immeasurable point, calls him not to self-mastery, but to self-abandonment. James reviews the stories of several sudden conversions (sudden even if they have been long in the making), and notes how often the convert-to-be begins in extreme emotional agitation, an agitation, we might realize as we read, that tends to be remarkably self-absorbed, and can evidently be overcome not by mastery but by self-surrender. He describes the 18th century case of Henry Alline, who begins his account, "As I was about sunset wandering in the fields lamenting my miserable lost and undone condition, and almost ready to sink under my burden, I thought I was in such a miserable case as never any man was before." Like Augustine, Alline finds himself quieted and converted by a peculiar speaking: "the following impressions came into my mind like a powerful but small still voice." 53

In the midst of all this drama, a still small voice. An unseen, unidentified, even ungendered child "from a nearby house" sings repeatedly, "pick up and read." 54 A word-lover like Augustine is not about to ignore such an instruction. The nearest book is a collection of Paul's letters, and Augustine, opening it to and reading the first line he sees, declares, "At once, with the last words of the sentence, it was as if a light of relief from all anxiety flooded into my heart. All shadows of doubt were dispelled." 55 James notes a similar joyous stillness in nearly every case he reviews; "The stillness was very marvelous, and I felt supremely happy," says one of his subjects. 56

Sudden in a shaft of sunlight
Even while the dust moves
There rises the hidden laughter
Of children in the foliage
Quick now, here, now, always- 57

Not the drama, but the stillness, turns Augustine, suddenly. The always of eternity irrupts into the frantic quick movement of his body in time, and the still moment recenters him. Again the stillness somehow bounds the span of duration; passage is marked by presence, at the center that is the circumference, at the limit that is the heart. The voices of the hidden children turn him back to the vibrance of now. Augustine, born to a fiercely Christian mother, returns to her God. "I was being turned around [convertebar]. And I was glad, my God, that your one Church, the body of your only Son in which Christ's name was put on me as an infant, did not hold infantile follies." 58 While Eliot suspects that only a saint can apprehend "the point of intersection of the timeless / with time," he also sees, or hears, the two crossing over and over. 59 A children's song slows Augustine's frantic movement. From their voices he is moved to Paul's, and within Paul's words he can read divine silence, and Augustine's chaotic pacing, weeping, and tearing all come to a still point.

52 (Eliot 1968, "East Coker," §3, p. 28). 53 (James 1982, Lecture X, p. 217). 54 Augustine, Confessions, 8.12.29. 55 Ibid., 8.12.29. The reading is Rom. 13:13-14. 56 (James 1982, p. 169). 57 (Eliot 1968, "Burnt Norton," §5, p. 20). 58 Augustine, Confessions, 6.5.5. 59 (Eliot 1968, "The Dry Salvages," §5, p. 44).
The powerful stillness, like the newness of life, is likewise a matter of the flesh. Lepecki asks, "What force is produced and condensed at the there of the body that dances in stillness?" 60 This "production" is not the creation of an agent, of a self. Dancers, and their teachers, will often speak of the need to get the self out of the way in order to focus on the dance instead; this is Jones's "giving up of the entire body to the practice of the form." Lepecki says of that condensed stillness that it is "similar in all respects to the one Eliot writes about in his poem: never fixed, never locatable, but always dancing . . . as the subject surrenders itself to the world." 61 Augustine must quiet himself to listen to the children and to Paul, both voices wise in words he might once have found infantile.

As Finn points out, "Augustine speaks about the garden experience as his conversion-'you converted me to yourself' (convertisti enim me ad te)-and has been taken at his word ever since." 62 Any discussion of Augustine's conversion will focus, as I have, on the turmoil and stillness in the garden, or will explain why it does not. Finn rightly continues, "But the Confessions is the autobiography of a conversion in progress. It lays out the full story of his conversion-from birth to baptism-as he came to understand and see it unfold a decade later." 63 In early Christianity, we recall, baptism was the point of a new beginning that could come to be only after the end. Augustine was baptized the year after his experience in the Milanese garden, at the Easter Vigil in 387. His son Adeodatus and his friend Alypius, who found his own scriptural passage just minutes after Augustine's conversionary reading, were baptized with him. After baptism, Finn writes, "the task of the trio is to sustain the momentum of their conversions throughout their lives." 64 Only if they find the motionless center can they remain in motion. The still point cannot be held still for good, but it can be found, over and over, at the center of the conversionary turn, where now has met itself again, always (as if) for the first time.
Again, a Garden
In 2009, in Glasgow, in another garden, a talkative artist turns to silence. His is a more gradual conversion, a slower turn, and one that vividly draws together the religious and theatrical. Performance artist Adrian Howells spent three years as a Creative Fellow at the University of Glasgow. In his time there, he was shadowed by the scholar and writer Deirdre Heddon, who later co-authored It's All Allowed: The Performances of Adrian Howells. Howells found himself troubled by the reign of social media and "reality" television, with their endless confessional chatter and their proliferation of personal revelations that still seemed, somehow, to leave their speakers and readers at least as isolated as they were before.
His early performances took an open improvisational format, in staged settings such as a laundromat, 67 a hotel, 68 a hair salon, 69 or a bedroom. 70 Here he used his own confessional speech and imagery to draw out his audience, hoping that they might find some comfort or "lightening" in the exchange.
My words echo
Thus, in your mind. 71

In this early work, speech calls to speech. Howells speaks and displays confessionally to make space for his audience to speak back. Gradually, however, Howells' approach shifts. Heddon writes that the series of performances that she traces over her years with Howells "reveal the entirely unexpected shift from a form of performance that uses talking at its heart as a prompt for and signal of 'intimacy,' to the use of silence as a way to structure other types of intimacy and 'confession.'" 72 Like his earlier, more talkative works, Howells' quieter performance pieces are usually one-on-one with an audience/participant. In "Held," Howells began in conversation with the participant, and ended with the two of them spooning on a bed, with the options of a pillow between them or not, and of talking or silence. He writes of one participant, "In the final stage of Held, 'spooning' her on a bed in silence, I felt every sinew and muscle of her body relax and let go over the course of that half hour. In the context of what had taken place in the previous two stages this felt very much like a bodily confession and, for me, a different way of listening." 73 As Heddon notes, this listening, like the earlier and more obvious confessions, is likewise a mode of "risk-taking" and "transformation," a mode of self-surrender.
With this move toward greater self-surrender, the religious aspect of Howells' work becomes more explicit too. Howells' next performance was called Foot Washing for the Sole, and it re-created the Holy Thursday foot washing service outside of a dogmatically Christian context. 74 In fact, the performance was inspired by "a foot-washing service at St. Columba's church in Glasgow," where Howells found himself "struck by the intimacy of this act, an intimacy often structured between strangers and framed as an act of 'giving.'" Beyond the generosity of the act, he notes, "Reading the account of Jesus washing the feet of the disciples at the Last Supper in St. John's Gospel, I was struck by the words he uttered as he performed this task: 'What I do for you now, go and do for one another.' Foot washing was framed as generative." 75 The gift must be returned, but the economy is far from a simple exchange; rather, this conversionary generosity allows the giver to retain by giving, to create and continue a cycle. Howells began his interchanges with a brief conversation, but the foot washing and the massage with scented oils were to be carried out in silence.

67 Heddon and Howells, "From Talking to Silence," pp. 2-3. Howells writes, "For Adrienne's Dirty Laundry Experience (2003), for example, performed at the Arches in Glasgow as part of the annual queer festival, Glasgay!, I transformed a bland basement room of a theatre space into a Laundromat-cum-living room, complete with plumbed-in washing machine, installed tumble dryer, and washing lines; all the other paraphernalia that you would usually find in a Laundromat were added. Audience-participants were invited to bring me their dirty laundry to wash and for the time it took the wash cycle to run its course, I would get them to share their metaphorical dirty laundry over a cup of tea and a biscuit. . . . This was a photographic representation of my own dirty laundry of the past forty years." 68 Ibid., p. 3: "Adrienne: The Great Depression (2004) was performed at the Great Eastern Hotel in London. For a week, I inhabited one of their rooms and lived with(in) several self-devised rules. . . . My time with the audience-participant was committed to talking openly and honestly about the suffering of my depression and confiding very private details of attempted suicide, self-loathing, pain, and despair." 69 Ibid., p. 3: "Salon Adrienne (2005), meanwhile, presented in a hairdressing salon in Glasgow, was lighter in tone but nevertheless prompted both me and the participants to engage with the inevitability of aging, using the mirrored surfaces of the salon as a space for literal and metaphorical reflection." 70 Ibid., pp. 4-5: "Held, staged in three different spaces of an apartment that year, tested different degrees of physical intimacy with individual spectators, each gesture reflecting the room in which we were situated. We held hands across the kitchen table, talking about hand-holding, about what it means culturally, and personal memories of hand-holding. In the living room, we sat side-by-side on a sofa and talked about music and memories for fifteen minutes. In the bedroom, I spooned the audience-participant, on a bed, for half-an-hour. Given a choice, participants opted for silence rather than talking." 71 (Eliot 1968, "Burnt Norton," §1, p. 13). 72 Ibid., p. 2. 73 Ibid., p. 5. 74 Ibid., pp. 7-8. 75 Ibid., p. 7.
It is in the silence too that we find "The Garden of Adrian." Previewing the event for the Guardian, Lyn Gardner writes, "Howells, whose work has often combined a strong sense of intimacy, confession and ritual, offers another one-to-one performance which explores the possibility of achieving absolution or a sense of ease in our secular culture. Drawing on the idea of the Stations of the Cross, the piece takes the form of a sensory journey undertaken by the audience in Howells's company through seven installations." 76 Howells notes explicitly that he is "Again taking a cue from religious frames" in this garden setting, full of live plants and installations by other artists, "built inside a theatre which had, appropriately, originally been a church."

I said to my soul, be still, and let the dark come upon you
Which shall be the darkness of God. As, in a theatre,
The lights are extinguished, for the scene to be changed . . . . 77

The Stations of the Cross, leading to death, are returned to the garden, paradigmatic of life. In this setting, Howells explores silence, "recognizing . . . that silence is not just an absence of sound, but might be a space carved out from the contemporary culture of particularly cacophonous noise (a noise to which mass-mediated confession contributes)." The garden is meant, instead, to provide "time, space, and stillness." 78

Heddon considers her own experience as an audience participant in this still space. "In The Garden of Adrian, the final performance of Adrian's three-year research fellowship, the 'babble' of confession has been exchanged for silent contemplation. My physical and literal journey through the internal garden is matched by an internal reflection that carries me on a journey through childhood reminiscences, remembering tastes, smells, sounds, and textures." 79 The sense of expansiveness ("time, space, and stillness") seems to make room for a remarkable range of highly sensory, almost re-lived memories. The theatrical garden expands, accommodating not only earlier memories from this life, but tangential possibilities too: "We tarry long enough in this space for me to encounter other versions of my self on this journey, slowing down, allowing details to rush in, and then staying with them so that the information they carry is revealed to me . . .." 80

Other echoes
Inhabit the garden. Shall we follow? 81

Simultaneously with this expansiveness, all of that life, all of those possibilities, contract themselves into this still-religious space, here. As Michael Spencer says of Eliot's gardens in the Quartets, "The garden at all times is a concentration of reality . . .." 82 The movement of chatter and revelation is contracted into an epiphanic moment, an open point.
Even without either Yahweh or Christ, Howells' performance space has a distinctly religious feel, that of a space in "which people become what they weren't before." As a space of possibility where other movements (times, spaces, motions, words) gather, concentrated, it offers the conversionary potential of a transformation. It is a theater, a space for other selves, other places, other movements to occur. It is equally church and brothel, where touch between bodies undoes the divisions of extension.
In the theater, Howells and his audience move through the garden as if through the Stations of the Cross, not toward the death that ends the Good Friday service of the Stations and leads into the darkened vigil of Holy Saturday, but not, either, toward the confidence that comes only after the Easter vigil is over-rather, toward the same possibility of a rebirth that demands the loss, the surrender, of the self.
There Is only the Dance
In one cave and two gardens, on a mountain and in a courtyard and a church, we find a conversionary devotion in stillness. While tumult and talk are necessary to each mode of quiet, we might still be unsure that these are not just opposites that follow one another, rather than marking crossing points of temporal and eternal, bounded and bursting, coexistence. From the three vignettes, then, let us return once more to the four quartets, for a movement less finale than reprise.
As Spencer points out, the garden is a place of both movement and stillness, and it is a place that comes back repeatedly in the Four Quartets. 83 This is vivid from the outset, where we begin in a space that must have been still for a long time, with "the passage we did not take," "the door we never opened," and undisturbed "dust on a bowl of rose leaves." 84 Yet in the second stanza of "Burnt Norton," we already begin to hear "other echoes," those voices that came before us; we already begin to move:

Quick, said the bird, find them, find them,
Round the corner. Through the first gate,
Into our first world . . . 85

When we begin to turn, rounding corners, we are on our way back to the beginning, to the first world. Eventually, together with the garden's roses, "we moved, and they, in a formal pattern . . .," 86 dancing. Before this first segment of the first Quartet ends, the dry concrete pool will both fill with water, from which a lotus rises, and then it will show itself empty and still again, or perhaps at once.
And as Spencer likewise observes, the Quartets, having begun in a garden, end in one as well. 87 In the first of the four poems, "Burnt Norton," the children hide in the leaves, calling out without revealing themselves. 88 In the final verse of the final poem, "Little Gidding," "When the last of earth left to discover / Is that which was the beginning," the children who in "Burnt Norton" hid in the leaves, "excitedly, containing laughter," return as "the children in the apple-tree . . . heard, half-heard, in the stillness." 89 Voices, laughter, and song are never quite heard, but always concealed, contained, uncertain. Though the forbidden fruit may have been a grape, a fig, or wheat on a stalk that "rose like the cedars of Lebanon," Western tradition tends to place an apple tree in Eden, as that which was in the beginning. 90 Throughout the Quartets, ends and beginnings interlie one another. "Opposites" are not reconciled into indifference, but hold the still point at the center of movement, where "the fire and the rose are one," destruction and blooming at the same point. 91 Here Eliot quite rejects ordinary physics, as Spencer argues; here "matter becomes expressive, flesh becomes a verb." 92

In the expressive matter of the garden, at the edge of the safe mountain cave, in the quiet re-consecration of the theater, the verb of flesh is dancing. According to Jones, Eliot seems to have had "a good sense of the phenomenological experience of dance practice," not least the awareness that the lines of the dancer's body depend upon "an internal point of origin that forms the focal point and stimulus of all movement and line," "a strongly felt inner point" from which movement begins and travels outward. 93 This is literal and experiential: the point "may be located in ballet at a level midpoint in the trunk and in contemporary dance forms lower in the abdomen." 94 In both his poetry and his written remarks on the dancer and choreographer Léonide Massine, Eliot shows an unusual empathy with "the effort and motivations of mind and body that frequently gather when the dancer is apparently at rest . . .." 95 Yet this precisely locatable center of the moving body is more than itself, because the stillness is other than a muscular holding at the heart of each motion. This "stillness which denies fixation," to use Lepecki's phrase, is "unlocatable . . . both in space and especially in time." 96 He calls this quality of dancing stillness a "vibratile intensity," noting that even when the body really does appear, in performance, to be still, it is still in a way that is very different from the customary thoughtlessness of our everyday halting or movement. 97 It is attentive, and so it draws attention. It is unlocatable, because it is what defines location.

83 Ibid., p. 37. 84 (Eliot 1968, "Burnt Norton," §1, p. 13). 85 Ibid., §1, p. 14. 86 Ibid., §1, p. 14. 87 (Spencer 2005, p. 36). 88 (Eliot 1968, "Burnt Norton," §5, p. 20). 89 (Eliot 1968, "Burnt Norton," §1, p. 14; "Little Gidding," §5, p. 59). 90 For the claim that the forbidden fruit was the grape, see Rabbi Meir, Babylonian Talmud, Tractate Sanhedrin, 70a: "The tree from which Adam the first man ate was a grapevine." In The William Davidson Talmud, at https://www.sefaria.org/Sanhedrin.70a.22?lang=bi&with=all&lang2=en. In the same tractate, Rabbi Nehamaya says that the fruit of the Tree of Knowledge must have been a fig, "because it was with the matter with which they sinned that they were rehabilitated." For the claim that the "fruit" was wheat, see Rabbi Yehuda in the same text, as well as Rabbi Zeira in Genesis Rabbah 15, Sefaria Community Translation, at https://www.sefaria.org/Bereishit_Rabbah.15?lang=bi.
Jones describes this attention at work in a famous passage from Sleeping Beauty's Rose Adagio, in which the now-awake princess is given a rose by each of her suitors, and with each of them holds a pose en pointe. 98 Her body is alive with movement in stillness. In fact, as Jones points out, part of the genius of choreographer Marius Petipa is that he "incorporates the 'still points' of the Rose Adagio . . . not just by choreographing a series of attitude balances, where the ballerina is aided by the support of four consorts, but by inserting such moments into Princess Aurora's first entrance." This entrance dance precedes the spell and the sleep that give the work its name; the Rose Adagio occurs after Aurora's awakening. In the earlier dance, "her fleeting movement along a diagonal from upstage left to downstage right is punctuated by a développé devant en relevé, arrested at the moment of its greatest height and registering stillness at the moment of an intake of the breath." The développé devant en relevé is the unfolding of one leg in front of the body while the dancer is balanced on the point of her other foot; the leg does not just lift, but lifts bent, and then extends, and just for a breath, the body holds this position. Jones continues, "These moments, if performed with integrity by the ballerina, are not simply 'pauses' emphasizing the 'fixity of the pose' but are both of the dance itself and are the dance . . .." 99 Aurora, awakened, returns to life. In her movement thereafter, each moment of vibratile stillness, each here held, both resonates with the suspended moments of her first entrance and is marked by the garden bloom of the rose that each suitor hands her. (The rose returns her, even, to herself; in the Grimm brothers' version the princess is called Briar Rose.) On Spencer's reading of Eliot, "Movement is natural to the garden . . . because the garden is . . . a concentrated statement of the character of existence." This concentration, as I have tried to suggest, can emerge in other spaces of return as well. Spencer continues, "Movement, Heraclitus-like, is the constant of the universe, movement which Eliot also speaks of as the dance." 100

91 (Eliot 1968, "Little Gidding," §5, p. 59). 92 (Spencer 2005, pp. 36, 35). Spencer cites (Denis 1920, p. 34). 93 (Jones 2009, pp. 38, 39). 94 Ibid., p. 39. 95 Ibid., p. 38: "Writing in the Criterion in 1923 that as an actor Massine was 'the most completely unhuman, impersonal, abstract,' and as such 'belongs to the future stage,' Eliot draws on his poetics in describing the dancer's rare quality: 'The difference between the conventional gesture of the ordinary stage, which is supposed to express emotion, and the abstract gesture of Massine, which symbolises emotion, is enormous.' (Eliot 1923, pp. 305-6). Eliot's application of the word 'abstract' to describe Massine's gestures is significant here. He intimated that the dancer's (offstage) personality is subsumed, that the dancer is the medium of choreographic invention in the same way that Eliot regarded the poet as medium, that is, a conduit of verbal expression distinct from his subjective personality and feeling." 96 (Lepecki 2000, p. 334). 97 (Lepecki 2000, pp. 354, 338-40). 98 Jones also notes that Eliot's interest in roses and in dance intersect again in "Little Gidding," §3, p. 56, where "Eliot's reference to the 'spectre of the Rose' deliberately conjures a vision of the romantic essence of Fokine's ballet of a previous generation." (Jones 2009, p. 33). 99 (Jones 2009, p. 35).
Spencer does not invoke Heraclitus at random. Two lines from this enigmatic pre-Socratic philosopher are set at the head of the Quartets. The first tells us that the logos is common to all, though few realize it: the pattern, the logical or formative principle, is the same at every level.101 The second declares, "The way upward and the way downward are the same."102 To climb the mountain in tumult, to descend it in silence, are the same, but with the sameness of movement: the same not because the path itself holds still, but because the movement always moves. The garden and the mountain and the theater are spaces of stillness and storm, movement and rest, because just for a moment the dry pool fills with sunlit water, and the dance opens out all the way to the turning stars. They are places of attention. In those moments, recognizing the pattern that moves within the core of the body and among the stars, more intimate than the inmost self,103 here and now come together to center the turning world, the turning body, the turning soul. Elijah warily waits out the world's drama until the stillness speaks. Augustine listens to the small voice of a child and is brought to stillness himself. Howells' quiet attention draws a series of selves from more spaces and times than the self ever knew. Sometimes the conversion transforms; sometimes it deepens. Sometimes we cannot quite tell these apart.
The inhalation of the développé in Aurora's opening variation, the held position of the Rose Adagio, are, says Jones, "moment[s] full of potential, where the possibility of movement fills the stillness...."104 In the conversionary moment, the possibility of the divine fills and stills the everyday. It is only having felt, heard, seen the stillness that the convert can likewise find the stillness in the movement, the sacred in the everyday, not transcending but within the chaos, white noise, torrents of tears, earthquakes, and storms. Just here, just now. | 2019-04-12T05:42:20.260Z | 2019-04-04T00:00:00.000 | {
"year": 2019,
"sha1": "327166ba24e79e2c7c114248e85551837b70291a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2077-1444/10/4/249/pdf?version=1554374236",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "327166ba24e79e2c7c114248e85551837b70291a",
"s2fieldsofstudy": [
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
22917387 | pes2o/s2orc | v3-fos-license | New precut sphincterotomy for endoscopic retrograde cholangiopancreatography in difficult biliary duct cannulation.
AIM
To retrospectively investigate the effect and safety of various new type precut sphincterotomy techniques (VNTPST) in endoscopic retrograde cholangiopancreatography (ERCP) with difficult biliary duct cannulation (DBC).
METHODS
A plough-like pull-type sphincterotome (PLPTS), an improved short nose sphincterotome, or an improved needle knife was applied. VNTPST was carried out in 30 of 280 patients whose biliary tract could not be exposed well or in whom deep cannulation was difficult to perform during ERCP with traditional methods. Patients were followed up for short-term complications, and the therapeutic effect of VNTPST was observed and compared with that of traditional endoscopic sphincterotomy (EST).
RESULTS
A total of 280 patients underwent ERCP, of whom 3 failed the operation because of pathological features in the stomach or duodenum; 247 successfully underwent traditional ERCP (89.1%, 247/277) and 30 failed (10.8%, 30/277). The VNTPST succeeded in 24 (80%, 24/30) of the 30 cases. The success rate of deep biliary duct cannulation thus increased by 8.6% (24/277), and the total cannulation success rate following precut was 97.7%. There was a significant difference between the two groups (97.7% vs 89.1%, χ2 = 17.1, P < 0.01). The incidence of complications was 9.3% (26/277) in the traditional ERCP group and 13.3% (4/30) in the VNTPST group. The guidewire tip broke in the pancreatic duct of one patient (PDGP) without pancreatitis; slight or moderate postoperative bleeding occurred in 2 patients, and 1 patient had bleeding during the operation (PDWN). There were no differences between the VNTPST group and the traditional ERCP (TRERCP) group (13.3% vs 9.3%, χ2 = 0.478, P > 0.05).
CONCLUSION
The VNTPST procedures, including Deng's precut, are highly effective methods for gaining biliary access during ERCP with DBC. With skillful technique, they can increase the success rate of deep biliary duct cannulation and decrease complications. VNTPST, and especially Deng's precut, is as effective and safe as EST. These techniques can be performed well in hospitals without special equipment.
INTRODUCTION
McCune first reported ERCP in 1968 [1]. In China, ERCP was introduced at Peking Union Hospital in 1974. Dr. Ming-Zhang Chen initially reported endoscopic sphincterotomy (EST), but it had a low success rate. With advances in surgical techniques and equipment, the success rate of ERCP in China reached 96.1% in the 1990s [2], which is similar to international reports [3,4]. Due to various factors, the catheter cannot always be inserted into the bile duct successfully, leading to failure of TERCP.
Precut was initially reported by Siegel [5]; the success rate of ERCP rises when precut succeeds. The reported success rate of precut is 77%-91% [6,7]. The application of precut has been limited [8][9][10][11] because it is associated with more complications, and few reports on its use are available.
We retrospectively analyzed the clinical data obtained from 280 cases and evaluated the effect and safety of VNTPST in diagnostic and therapeutic endoscopic retrograde cholangiopancreatography (DTERCP) with DBC.
Patients
Two hundred and eighty patients underwent ERCP in our hospital from April 2004 to September 2006; 247 succeeded with TRERCP, 30 (13 males, 17 females, aged 26-88 years, with a mean age of 66 years) failed, and 24 of these succeeded with the precut technique. Among the 30 cases, 7 were diagnosed as Oddi's sphincter stenosis (OSS) combined with choledocholithiasis, 2 as impacted choledocholithiasis, 8 as simple constrictive papillitis (one combined with juxtapapillary diverticula), 3 as pancreatic head carcinoma, 2 as duodenal papilla carcinoma, 1 as papillary adenomatoid hyperplasia, 3 as ampullary carcinoma and distal bile duct cancer (DBDC), 2 as common bile duct inflammatory strictures, and 2 as upper and middle bile duct carcinoma combined with inflammatory papilla stenosis. A GF240 electric duodenal endoscope and the plough-like pull-type sphincterotome (PLPTS) were from Olympus Company; a modified short nose sphincterotome or needle knife was also used. An Olympus high-frequency electrosurgical unit and several kinds of guide wire were used in all procedures.
Procedures
Preoperative preparation was the same as for standard ERCP. Patients underwent standard ERCP first, followed by VNTPST when deep cannulation proved difficult.
Modified pancreatic sphincter precutting (MPSP):
Before precut, the exact direction of incision was determined and some movements were made in the correct direction as previously described (Figure 1 A-C) [12]. The incision was made at the 10-12 o'clock position from the papillary orifice. The pure cutting current was set at index 3.5-4.5, and the precut was performed by exerting pressure with the wire at the roof of the PLPTS. The spin and PLPTS were lifted, and the precut length was about 0.5-0.8 cm. The incision was extended in the submucosa until biliary effusion was detected. The precut was successful when the catheter or guide wire could be inserted into the common bile duct (CBD). General EST was then performed over the guide wire. A needle-knife precut was used if biliary cannulation failed. MPSP was more effective for a type "Y" pancreaticobiliary ductal junction or type "Ⅴ" without papillary stenosis. The pancreatic sphincter could be entered with the help of a guide wire or scalpel. PSP was not suitable for type "Ⅱ". No bleeding or perforation occurred. MPSP was safe and less dangerous, but complications such as pancreatitis occurred, so electrical coagulation and blended current should be limited.

Precut down with needle (PDWN):
When biliary cannulation was unsuccessful due to papillary stenosis, ampullary edema or abnormal ampullary anatomy (Figure 2 A-D), PDWN improved by the PLPTS was applied. To expose the needle (3-5 mm in length), the power source was set at index 3.5-4.5 with the direction adjusted exactly. The incision was made in the 11 o'clock direction from the papillary orifice. The electric current index depended on the ampullary size and the degree of edema. The incision was extended in layers from the papillary duodenal mucosa to the bile duct sphincter or until bile seepage, then a catheter assisted by an ultra-slippery guide wire was inserted into the bile duct. PDWN was performed repeatedly if biliary cannulation failed. When the bile duct sphincter was exposed or the incision was very deep, and the bile duct orifice could not be found or a catheter could not be inserted, this suggested that the anatomy of the common bile duct was abnormal and the precut should be stopped; otherwise biliary duct or duodenal perforation might occur. Since perforation may occur during subsequent EST because the incision orifice moves upward after precut, the EST incision should not be very long.
Precut up with needle (PUWN): PUWN was suitable for a mini papilla with a small orifice (Figure 3 A-B). The pure cutting current was used, with the papilla adjusted to the left side of the visual field, the wire (3 mm) exposed, and the needle anchored at the orifice. The bridge (elevator) was lifted while the endoscope was rotated anticlockwise and the current was applied for the incision; PUWN was thus performed. The following steps were the same as for PDWN. This kind of precut might cause pancreatitis, and the depth of incision was variable and hard to control. PUWN was not performed when the papilla orifice was not detectable.
Short nose knife precut (SNKP): SNKP could be performed when the papilla orifice was variable. A short nose knife was inserted into the orifice in the 11 o'clock direction with pure cutting current until a seepage of bile was detected, then a standard catheter was inserted into the CBD. The depth of incision with SNKP was easy to control and did not result in perforation, but it was less convenient than a needle knife for duodenal ampulla calculus incarceration.
Pancreatic duct guideline precut (PDGP): A short nose knife or needle knife was inserted into the papilla orifice while keeping the guideline in the pancreatic duct. The direction and depth could be controlled by the pancreatic duct guide wire, but this was not convenient for a small-aperture endoscope.
Mucosal bridge precut (Deng's precut):
When the PLPTS could be inserted into the bile duct only with difficulty, the assistant strained the knife tightly to bend the sphincterotome, with the knife action adjusted in line with the bile duct axis, inserted the PLPTS tip into the papillary orifice with the hard guideline piercing through the ampullary duodenal mucosa, and cut the mucosal bridge on the duodenal side of the ampulla until a seepage of bile was detectable, then inserted the guideline or catheter into the duct (Figure 4 A-D). The procedure could be followed by NKF if the attempt failed. This method could also be applied in patients with inflammatory papilla stenosis or combined upward papillary diverticula.
Up-removal orifice technique (UROT):
This technique was applied when bile duct cannulation could not be achieved through the papilla orifice or the bending endoscope could not lift the orifice (Figure 5 A-C). The assistant strained the PLPTS tightly and tried to access the orifice with the PLPTS tip, then raised the papilla mucosa and cut it to move the orifice upward. Once the bile duct axis was exposed after UROT, normal standard cannulation was easily performed.
Bending or rotating endoscope technique (BERET):
When the papilla could not be lifted due to pathological changes or postoperative adhesions in the duodenum, which might cause standard cannulation to fail, deep cannulation could be achieved by bending or rotating the endoscope (Figure 6 A and B).
Pancreatic duct guideline cannulation (PDGC):
When the knife was repeatedly inserted into the pancreatic duct, the guide wire could be kept in the pancreatic duct, and the knife or catheter was then inserted into the bile duct through the endoscope's biopsy channel (Figure 7 A and B).
Comprehensive technology (CT):
When the incision could not be made by a single method, other precut procedures, such as NKF, could be performed in combination until cannulation was achieved (Figure 8 A and B).
Statistical analysis
Data were expressed as percentages and processed with the chi-square test. P < 0.05 was considered statistically significant. Among the successful cases, 2 succeeded with MPSP, 3 with PDWN, 2 with PUWN, 1 with SNKP, 2 with PDGP, 2 with Deng's precut (one with an inflammatory papilla combined with upward papillary diverticula), 3 with UROT, 2 with BERET, and 1 with PDGC. Comprehensive technology achieved success in 6 cases and failed in 6 cases. Among the unsuccessful cases, 1 was diagnosed as OSS combined with choledocholithiasis, 1 as pancreatic head carcinoma, 2 as duodenal papilla carcinoma, 1 as distal bile duct carcinoma, and 1 as lower segmental stenosis of the common bile duct (over 1.5 cm in length).
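The chi-square comparisons quoted in this paper can be reproduced from the 2×2 count tables. The following is a minimal sketch assuming SciPy's `chi2_contingency` with the Yates continuity correction disabled, since the uncorrected statistic matches the reported values; the counts are taken from the Results.

```python
# Hedged sketch: chi-square tests on the reported 2x2 tables.
# correction=False (no Yates correction) is assumed because it reproduces
# the chi-square values quoted in the paper.
from scipy.stats import chi2_contingency

# Cannulation success: overall after precut (271/277) vs traditional (247/277).
success = [[271, 6],     # successes, failures after adding precut
           [247, 30]]    # successes, failures with traditional ERCP
chi2, p, _, _ = chi2_contingency(success, correction=False)
print(f"chi2 = {chi2:.1f}, p = {p:.2g}")        # ~17.1, P < 0.01

# Complications: VNTPST (4/30) vs traditional ERCP (26/277).
complications = [[4, 26],
                 [26, 251]]
chi2, p, _, _ = chi2_contingency(complications, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.2f}")        # ~0.478, P > 0.05
```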
RESULTS
The incidence of complications in traditional ERCP was 9.3% (26/277). The final diagnoses included suppurative cholangitis in 2 cases (1 cured with conservative treatment, 1 requiring surgical therapy), acute severe pancreatitis in 2 cases, mild acute pancreatitis in 8 cases, bleeding during sphincterotomy in 4 cases, massive gastrointestinal hemorrhage in 1 case, moderate bleeding after sphincterotomy in 2 cases, transient abdominal pain and jaundice in 2 cases, cholangitis in 2 cases, transient cerebropathia in 2 cases, and fever in 1 case. There was no endoscopy-related mortality.
The complication rate in the VNTPST group was 13.3% (4/30). The tip of the soft wire broke in the pancreatic duct of 1 case, and no pancreatitis occurred after PDGP. Mild to moderate bleeding occurred in 2 cases and resolved after the procedure (PDWN). One patient with bleeding during the operation (PDWN) was treated with norepinephrine rinse, submucosal injection of epinephrine and electric coagulation; no other severe complications were found. There were no differences between the VNTPST group and the conventional ERCP group (13.3% vs 9.3%, χ2 = 0.478, P > 0.05). These findings suggest that VNTPST is a useful approach when standard techniques fail.
DISCUSSION
The standard cannulation technique of ERCP can achieve satisfactory results; in particular, the use of an ultra-slippery guide wire increases the success rate of bile duct cannulation and lowers the incidence of complications. In our study, 247 cases in the TRERCP group were successfully treated, a success rate of 89.1% (247/277). Complications occurred in 26 cases, an incidence of 9.3% (26/277), which is consistent with the reported data both in China and in other countries [13][14][15]. Although the success rate of standard ERCP is currently high, the failure rate of selective bile duct cannulation is 5%-10%. Anatomic and physiological factors, such as a short common cholangiopancreatic duct, duodenal diverticulum, small ampullary orifice, or an abnormal cervical portion of the ampulla, can make selective biliary cannulation difficult. Pathologic conditions, such as Oddi's sphincter stenosis, duodenal inflammation, ampullary and papillary neoplasms, and impacted calculi, may also result in cannulation failure. However, VNTPST plays a salvage role in solving such cannulation difficulties. In our study, the success rate was 80% (24/30) and the incidence of complications was 13.3% in the VNTPST group, in line with the reported PSP data both in China and in other countries [13,[15][16][17], indicating that as long as precuts are skillfully performed, VNTPST is safe and effective and plays an important role in increasing the ERCP success rate.
The precondition of PSP is that the tip of the scalpel can be inserted into the papilla orifice. In general, it is applied when pancreatic duct cannulation occurs repeatedly and TRERCP fails. The technique is suitable for a type "Y" or type "Ⅴ" pancreaticobiliary ductal junction, especially type "Y". The advantage of PSP is that the direction and depth are easy to control, but complications such as severe pancreatitis may occur because of pancreatic duct edema after PSP [17]. In our study, the pure cutting current was applied for the MPSP precut, and the incision was made in the 10-11 o'clock direction, thus avoiding edema and trauma at the pancreatic duct orifice and pancreatitis after MPSP. MPSP cannot be applied to a small papilla orifice, since the PLPTS cannot be inserted into the pancreatic duct. For type "Ⅴ" in particular, success is sometimes not easy to achieve, and PDWN or PUWN should be performed. For impacted stones in the duodenal ampulla or a duodenal ampullary mass, it is not as convenient as the needle knife.
The needle knife is the major tool for precut [17]. In this study, the needle knife was used in 19 cases (19/30, 63.3%) with PDWN, PUWN, PDGP or comprehensive technology, with a success rate of 68.4% (13/19). During the procedure, the tip of the needle should be kept in the middle of the visual area. If cannulation of the CBD through the opening is difficult, cannulation can be achieved with the needle knife technique in most cases.
In one case, due to the carelessness of the nurses, the soft wire was not replaced by a hard one, and the tip of the soft wire broke off in the pancreatic duct (PDGP group), but there were no postoperative complications. Bleeding occurred in 1 case when small blood vessels were cut by the needle knife, but it was stopped and no severe complication occurred. Our data indicate that needle-knife precut is as successful and safe as MPSP.
In the study by Katsinelos et al, 68 cases underwent needle knife precut; bleeding occurred in 5 cases (7%) and acute pancreatitis in 3 cases (4%), all treated with conservative therapy [18]. The complication rate in our study was 13.3%, in line with the reported data [12], and there were no severe complications or deaths, indicating that the procedure is highly successful and quite safe. Compared with TRERCP, post-sphincterotomy hemorrhage occurred more often after needle knife precut, but it was minor and could be treated with norepinephrine rinse or epinephrine injection. One patient had hemorrhage in our study, and no adverse effect was found after treatment. Abdominal pain should be closely observed after precut to rule out perforation, for which abdominal imaging is necessary. Once perforation occurs, titanium clips, gastrointestinal decompression and fasting should be applied.
Compared with the needle knife, the depth of incision with SNKP is easy to control and may not result in perforation, but SNKP is less convenient for duodenal ampulla calculus incarceration. With the direction controlled by the pancreatic duct guidewire, PDGP is good for a protruding papilla but not for a small-aperture endoscope [19]. When the papilla orifice cannot be turned up because of duodenal adherence or malformation after abdominal surgery, UROT can move the incision up. When the papilla orifice is difficult to turn up during standard ERCP, BERET can be performed. When the guide wire is repeatedly inserted into the pancreatic duct, PDGC can avoid repeated catheter insertion. Once bile duct cannulation is achieved, standard ERCP follows. To improve the ERCP success rate when a single method cannot complete deep cannulation, comprehensive technology combining various precuts can be used to obtain cannulation.
The reasons for the failure of cannulation across all kinds of precut were as follows: the bile duct was not found in 1 case of OSS combined with choledocholithiasis; the bile duct was distorted by a neoplasm in 1 case of pancreatic head carcinoma; duodenal papilla carcinoma was diagnosed in 2 cases, in which precut proved unnecessary; and there was 1 case of distal bile duct cancer and 1 case of distal stricture of the CBD.
Based on our study, the indication for the operation should be strictly controlled [20]. Precutting should be avoided for purely diagnostic purposes, because other methods, such as MRCP and endosonography, can provide the diagnostic information. Precut should also be avoided if the distal stricture of the CBD is longer than 1.5 cm. When cannulation of the common bile duct is not possible after precut, ERCP should be repeated 5-7 d later, when the edema caused by the precut has subsided and cannulation can be achieved.
In conclusion, the needle knife technique can be performed when the orifice cannot be exposed because of an inflamed papilla or ampullary edema, for which PSP is not suitable. The exposed length of the needle knife is determined according to the size and shape of the papilla. The insulating sheath should be fixed by the nurse; otherwise intraperitoneal perforation may occur. Capillary hemorrhage during needle-knife precut can be stopped by electric coagulation with the needle knife. Perforation may occur during subsequent EST after precut in patients with cholelithiasis; therefore, the incision in subsequent EST should not be too big. The precut should be extended slowly, step by step, in layers from the ampullary mucosa to the submucosa, and ended when a seepage of bile is detected or the bile duct sphincter tissue is seen. Both PUWN and PDWN can be used in patients with duodenal ampulla calculus incarceration, whereas PDWN should be performed carefully in patients with a small papilla. When the distal stenosis of the CBD is too long, the needle knife should be used cautiously. When the bile duct orifice deviates to the right, the bile duct opening may be found on the right side of the precut. PUWN is suitable for a small papilla with a small orifice.
Background
McCune first reported endoscopic retrograde cholangiopancreatography (ERCP) in 1968, with a success rate of 25% [1]. It was applied clinically as early as 1972 [21], and therapeutic ERCP (TERCP) was then developed. Kawai and Nagai in Japan successfully used endoscopic nasobiliary drainage (ENBD) for patients with acute obstructive suppurative cholangitis (AOSC) in 1978. In China, ERCP was introduced at Peking Union Hospital in 1974. Dr. Ming-Zhang Chen initially reported endoscopic sphincterotomy (EST), but it was not widely used because of its complex technique and low success rate. With advances in surgical techniques and equipment, the success rate of ERCP in China was 84.0% in the 1970s, reached 93.5% in the 1980s and 96.1% in the 1990s [2], similar to international reports [3,4]. TERCP was first applied in the early 1980s and has become an important method for treating biliary duct and pancreatic disorders, achieving satisfactory curative effects in China [8]. The success rate of the various diagnostic ERCP (DERCP) techniques depends on EST, which is associated with selective and deep cannulation of the bile duct. Because of anatomical and physiological factors and pathological variation, the guide wire, catheter and scalpel cannot always be inserted into the bile duct successfully, leading to failure of TERCP. Various new type precut sphincterotomy techniques (VNTPST) are therefore a key step in ERCP.
Research frontiers
Precut was initially reported by Siegel [5]; the success rate of ERCP rises when precut succeeds. The so-called "precut" can be explained as follows: when biliary cannulation is difficult to perform, the papilla and bile duct sphincters are partially cut in advance so that the catheter can be inserted into the bile duct, after which the procedure can be completed. It is therefore different from general EST. The reported success rate of precut is 77%-91% [6,7]. The application of precut has been limited [8][9][10][11] because more complications occur. Since the needle-shaped scalpel is difficult to use safely during precut papillotomy, few reports on its use are available.
Innovations and breakthroughs
We first designed and reported the mucosal bridge precut (Deng's precut), which is especially suitable for patients with an inflamed papilla combined with upward papillary diverticula. The technique can correct the cannulation angle and make cannulation easy. It is safer than the needle knife. We suggest that this procedure be used widely because of its high success rate and safety.
Applications
The study suggests that when efforts using standard techniques have failed in ERCP and biliary access is required, Deng's precut, a combined endoscopic "precut" technique, is available. In experienced hands, the success rate of ERCP can be increased remarkably. Precuts appear to be as safe and effective as standard EST and can be widely performed in large and middle-sized hospitals.
Peer review
This is a well written paper. In the study, the authors retrospectively investigated the effect and safety of VNTPST in ERCP for difficult biliary duct cannulation (DBC). VNTPST, including Deng's precut, is highly effective for gaining biliary access during ERCP for DBC. It can increase the success rate of deep cannulation and decrease complications. | 2018-04-03T00:45:36.821Z | 2007-08-28T00:00:00.000 | {
"year": 2007,
"sha1": "80b1e892c5b12dfcfeeb7a7264609787d3dd5585",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3748/wjg.v13.i32.4385",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "546df1edf0ddc708812502dc10b977091aa544da",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220490157 | pes2o/s2orc | v3-fos-license | Spatiotemporal gait characteristics and ankle kinematics of backward walking in people with chronic ankle instability
Backward walking offers a unique challenge to balance and ambulation. This study investigated the characteristics of spatiotemporal gait factors and ankle kinematics during backward walking in people with chronic ankle instability. Sixteen subjects with chronic ankle instability and 16 able-bodied controls walked on a treadmill at their self-selected speed under backward and forward walking conditions. Gait speed, cadence, double limb support percentage, stride time variability, and three-dimensional ankle kinematics were compared between groups and conditions. During backward walking, both groups had significantly slower gait speed, lower cadence, and greater stride time variability. In addition, under the backward walking condition, subjects in both groups demonstrated significant sagittal and frontal kinematic alterations, such as greater dorsiflexion and inversion following initial contact (0–27.7% and 0–25.0% of the gait cycle, respectively, p < 0.001). However, there were no significant differences between groups in any of the measured outcomes. This indicates that subjects with chronic ankle instability adapt to self-selected speed backward walking similarly to healthy controls. Assessments with more challenging tasks, such as backward walking with a dual task and backward walking at fast speed, may be more appropriate for testing gait impairments related to chronic ankle instability.
This indicates that higher attentional demands during a functional task may further limit the ability of this population to control movement. Backward walking (BW) is an activity with additional complexity compared to regular forward walking (FW). Although some studies have shown that BW is a mirror image of FW, given that initial contact during BW is made by the toe instead of the heel 17,18, BW differs from FW in many aspects.
Step length is shorter and gait speed is slightly slower during BW 19. Kinematic and kinetic studies have demonstrated a greater range of ankle dorsiflexion, reduced plantar flexion, and a more even plantar pressure distribution 20,21. BW requires greater muscle activity and has higher metabolic costs 18,19,22. Moreover, BW involves increased activation of the sensorimotor control system due to altered or absent visual feedback 18,19,23. BW may be quite novel even for healthy individuals. Kurz et al. 24 investigated gait variability and cortical activation in healthy adults during FW and BW and reported an increase in sensorimotor cortical activation, measured by functional near-infrared spectroscopy, and greater stride-time variability during BW. It has also been reported that BW exercise in untrained healthy adults causes neural adaptations 25. Overall, BW offers a unique challenge to balance and movement. Thus, BW is used increasingly in rehabilitation programs to promote balance control 17.
Due to the nature of a relatively untrained and challenging gait task, assessing characteristics of BW among people with CAI, as compared to healthy controls, may provide additional information regarding sensorimotor control in this population. Furthermore, the ability to walk backward may be a useful measure of mobility, as well as balance training strategy. Thus, it is important to quantify the performance of BW in people with CAI.
Therefore, this study investigated the characteristics of spatiotemporal gait factors and ankle kinematics during BW among people with CAI. For this purpose, we compared changes between FW and BW among individuals with and without CAI. We expected that only BW would differ between these populations. Specifically, we hypothesized that during BW, individuals with CAI would have slower gait speed and increased ankle inversion, as compared to healthy controls.
Participants.
The sample size for this study was determined based on a power analysis calculation that was conducted using G*Power version 3.1 26 . To detect a true and meaningful difference of 0.1 m/s in BW gait speed, with standard deviation of ± 0.1, power of 80%, and a 95% confidence level, a sample of 16 subjects in each group was needed. Consequently, 16 subjects with CAI and 16 healthy controls participated in the study. The enrollment criteria for the CAI group were based on previously established standards to identify individuals with CAI 2,27 . Participants with CAI were included if they met the following criteria: (i) history of at least one significant ankle sprain that occurred at least 12 months prior to the study and was diagnosed by a physician or a physical therapist based on clinical examination 28 , (ii) history of at least two episodes of 'giving way' (regular occurrence of uncontrolled and unpredictable episodes of excessive inversion of the rear foot) and feelings of ankle joint instability, (iii) the most recent injury occurred more than 6 weeks prior to study enrollment, (iv) answering "yes" to at least five yes/no questions of the Ankle Instability Instrument developed by Docherty et al. 29 . This should include the first question: "Have you ever sprained your ankle?" and at least four other questions related to the severity of ankle symptoms, and (v) able to bear full weight on the injured lower extremity with no more than mild discomfort. The control group included healthy participants with no history of ankle sprain. Exclusion criteria for all groups were a history of ankle fracture, other pathological conditions or surgical procedures in the lower extremity and vestibular or neurological disorders. Participants were recruited from a university setting and provided written informed consent prior to participating in the study. Ariel University Institutional Review Board approved the study.
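As a cross-check, the same two-sample sample-size calculation can be run in Python. This is a minimal sketch assuming statsmodels' `TTestIndPower`; the authors used G*Power 3.1, so small rounding differences are expected, and the effect size simply follows from the stated difference (0.1 m/s) and SD (0.1 m/s).

```python
# Hedged sketch of the sample-size calculation described above.
# Effect size: Cohen's d = (0.1 m/s difference) / (0.1 m/s SD) = 1.0.
from statsmodels.stats.power import TTestIndPower

d = 0.1 / 0.1                                   # Cohen's d
n_per_group = TTestIndPower().solve_power(
    effect_size=d, power=0.80, alpha=0.05)      # two-sided by default
print(f"required n per group ~ {n_per_group:.1f}")   # ~16-17 subjects per group
```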
Procedure. The study was conducted during one visit to the Neuromuscular and Human Performance Laboratory at Ariel University. Gait was evaluated under both FW and BW conditions while participants walked on a treadmill (VO2 Challenger, Taiwan). Participants were given standard instructions to walk at their comfortable, self-selected pace. Before data collection, subjects were given an opportunity to habituate to walking on the treadmill. They walked barefoot and wore tight black sports pants and t-shirts. To capture gait data, markers were placed directly on the skin using double-sided tape. A total of 15 reflective markers were placed on each side of the body at the iliac crest, anterior superior iliac spine, posterior superior iliac spine, greater trochanter, lateral and medial femoral condyles, tibial tuberosity, medial malleolus, lateral malleolus, heel, first metatarsal head, first metatarsal base, fifth metatarsal head, fifth metatarsal base, and second and third metatarsal base. In addition, cluster markers were placed at mid-thigh and mid-calf. A six-camera motion capture system (Qualisys, Göteborg, Sweden) sampling at 250 Hz was used to obtain three-dimensional ankle kinematics and spatiotemporal data. Data were exported to Visual 3-D software (C-motion, Inc., Kingston, ON, Canada) and processed through a 6-degree-of-freedom anthropometric model. Ankle angles during walking were calculated using the Cardan rotation sequence 30. To normalize the gait cycle, gait events were identified automatically, as suggested by Zeni et al. 31 and De Asha et al. 32.
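Ankle angles were obtained in Visual 3-D with a Cardan rotation sequence; the snippet below sketches the same decomposition with SciPy. The 'XYZ' order (sagittal, frontal, transverse) and the shank-to-foot relative rotation are illustrative assumptions, since the exact sequence used by the software is not stated beyond "Cardan".

```python
# Hedged sketch: extracting Cardan (Euler) angles from a joint rotation
# matrix, as done for the ankle in Visual 3-D. The relative rotation
# R_joint = R_shank^T @ R_foot and the 'XYZ' sequence are illustrative
# assumptions; only the decomposition step mirrors the paper.
import numpy as np
from scipy.spatial.transform import Rotation as R

def cardan_angles(R_shank: np.ndarray, R_foot: np.ndarray) -> np.ndarray:
    """Return (sagittal, frontal, transverse) angles in degrees."""
    R_joint = R_shank.T @ R_foot          # foot expressed in the shank frame
    return R.from_matrix(R_joint).as_euler("XYZ", degrees=True)

# Example: 10 deg of dorsiflexion about the shank's medio-lateral (X) axis.
R_shank = np.eye(3)
R_foot = R.from_euler("X", 10, degrees=True).as_matrix()
print(cardan_angles(R_shank, R_foot))     # ~[10., 0., 0.]
```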
Under each walking condition, 17 consecutive strides were recorded for each participant. The first and last strides were then omitted, and the remaining 15 strides were analyzed. In the CAI group, the tested limb was the involved limb. The limb used for analysis in the control group was matched to the CAI group by side (right or left). The spatiotemporal outcomes examined were gait speed (m/sec), cadence (steps/min), the percent of the gait cycle spent in double limb support (%DLS), and stride time variability (100 × [standard deviation of stride time/mean stride time]). Outcomes of ankle kinematics included the average and 95% confidence interval (CI) of the sagittal and frontal ankle angles throughout the gait cycle.

Statistical analysis. Descriptive statistics included means and standard deviations (SD). Normal distribution of continuous data was verified using the Shapiro-Wilk test. Simple chi-square and t-tests were used to compare baseline characteristics between the CAI and control groups. A two-way linear mixed model was performed for each spatiotemporal gait outcome with the factors of group (CAI, healthy control) and walking condition (FW, BW). The interaction effect was evaluated to determine whether groups differed in their adaptation from forward to backward walking. To analyze the kinematic parameters, mean sagittal and frontal ankle angles were plotted throughout the gait cycle with their corresponding 95% CI, as previously described 5,14. A significant difference was defined where non-overlapping CIs were found. In addition, a two-way repeated measures ANOVA using Statistical Parametric Mapping (SPM) was used to analyze the effects of group, condition and their interaction (group × condition) on the kinematic data. Significance was set at P < 0.05. The analysis was conducted using IBM SPSS, v24.0 (SPSS, Armonk, NY: IBM Corp) and the SPM1D v.0.4 package for Python 3.7 33.
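A minimal sketch of the stride-time-variability outcome and the mixed model described above, assuming a long-format table with hypothetical columns `speed`, `group`, `condition` and `subject`; only the variability formula and the use of the SPM1D package come directly from the text.

```python
# Hedged sketch of the spatiotemporal outcome and statistical models above.
# The DataFrame layout (columns 'speed', 'group', 'condition', 'subject')
# is an illustrative assumption.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def stride_time_variability(stride_times: np.ndarray) -> float:
    """100 x (SD of stride time / mean stride time), as defined in the text."""
    return 100.0 * stride_times.std(ddof=1) / stride_times.mean()

def fit_mixed_model(df: pd.DataFrame):
    """Two-way linear mixed model: group x condition, random subject intercept."""
    model = smf.mixedlm("speed ~ group * condition", df, groups=df["subject"])
    return model.fit()

# For the 0-100% kinematic waveforms, the paper used spm1d's two-way
# repeated-measures ANOVA; one plausible call (per the spm1d docs) is:
#   import spm1d
#   FF = spm1d.stats.anova2onerm(Y, A=group, B=condition, SUBJ=subject)
#   results = [F.inference(alpha=0.05) for F in FF]
```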
Results
Subject characteristics. Subject characteristics are summarized in Table 1. There were no differences in baseline characteristics (age, height, weight and sex) between groups. The average time since last sprain in the CAI group was 20.5 (18.18) weeks and the average Ankle Instability Instrument score was 6.00 (1.15).
Spatiotemporal gait outcomes.
The changes in spatiotemporal characteristics from FW to BW in both groups for all four spatiotemporal gait outcomes, and the results of the linear mixed model, are presented in Table 2 and Fig. 1a-d. The analysis showed a significant effect of condition for gait speed, cadence, and stride time variability. During BW, both groups had slower gait speed (p < 0.001), lower cadence (p < 0.001), and higher stride time variability (p < 0.001). However, there were no significant group or interaction effects for these parameters, indicating no difference between groups in their adaptation to BW. In addition, no between-condition difference was evident in %DLS, nor was there a between-group difference in either condition (FW or BW).
Ankle kinematics. Figure 2 presents mean sagittal and frontal plane ankle kinematics with their corresponding 95% CI under both walking conditions. As depicted in Fig. 2, the overlap of ankle kinematics CIs between the CAI group and healthy controls was consistent throughout the gait cycle, indicating no significant difference between groups. Figure 3 presents the between-condition (i.e., FW vs. BW) comparison of ankle kinematics, demonstrating no consistent CI overlap, indicating significant differences. The SPM analysis indicated significant between-condition differences in both the sagittal and frontal planes. In the sagittal plane, BW demonstrated greater dorsiflexion at 0-27.7% of the gait cycle (p < 0.001), greater plantarflexion at 34.5-58.9% of the gait cycle (p < 0.001), and greater dorsiflexion at 62.4-100.0% of the gait cycle (p < 0.001). In the frontal plane, BW demonstrated greater inversion at 0-25.0% of the gait cycle (p < 0.001), greater eversion at 46.0-62.0% of the gait cycle (p < 0.001), and greater inversion at 67.8-100.0% of the gait cycle (p < 0.001). No significant group × condition interaction was found.
Discussion
The current study found that under the BW condition, the spatiotemporal and ankle kinematic gait characteristics of subjects with CAI and of healthy controls differed significantly from FW. For example, during BW, gait speed was reduced, whereas stride time variability and ankle dorsiflexion were increased. Yet, while major differences were found between BW and FW, there were no differences between groups. This indicates that during BW, subjects with CAI adjust their spatiotemporal and ankle kinematic characteristics similarly to healthy controls. To the best of our knowledge, this study is the first to document these results.
While several studies have clearly demonstrated substantial differences in movement analysis between subjects with and without CAI 5,6,13, our findings are consistent with previous reports that did not find differences between these two groups. Two recent reviews indicated that postural stability assessments using a stable surface with the eyes open may not always discriminate between individuals with CAI and healthy controls 4,34. De Noronha and colleagues 35 reported no differences in proprioception or motor control between a CAI and a control group. A systematic review with meta-analysis that examined the ability of functional performance tests to differentiate between individuals with CAI and healthy controls concluded that clinical implementation of these tests should be limited, owing to inconsistent results 36.
Furthermore, one of the most commonly reported characteristics differentiating patients with CAI from healthy participants is greater inversion of the foot relative to the tibia during walking 6. Yet conflicting results have been reported in other studies, in which CAI subjects were not found to have more inversion 5,37 or even showed greater rearfoot eversion 38. Similarly, the present study did not observe increased ankle inversion in the CAI group in FW or BW, as compared to controls.
Several explanations may be suggested for the inconsistencies observed between studies. It is possible that some discrepancies were due to the heterogeneity of the CAI population. Hertel and Corbett 39 recently presented an updated model of CAI. According to this model, there is a list of impairments that people with CAI as a group are likely to demonstrate; however, each individual may present certain clinical and performance outcomes that are affected by personal and environmental factors. It seems that the inconsistencies between studies may be partially explained by this model. Specifically, all the CAI participants in the current study met established standards for CAI 27. However, according to the Ankle Instability Instrument, only 3 of 16 participants reported feeling unstable while walking on a flat surface. Previous studies have shown that only very complex walking situations, such as walking with a cognitive dual task, may differentiate CAI subjects from controls 8,16. Thus, it is possible that while BW on a treadmill required some level of adaptation from both groups, it was not challenging enough to discriminate between their gait performance. Assessments with more challenging tasks, such as BW with a dual task and BW at fast speed, may be more appropriate for testing gait impairments related to CAI.
Another aspect that may have affected the results is related to the data collection procedure. In the present study, data were collected while subjects were barefoot, as this state detects frontal plane kinematics more accurately. Systematic reviews have indicated significant differences in kinematics, kinetics and muscle activity between barefoot and shod walking and running 40,41. Likewise, previous research with CAI participants has shown that gait outcomes vary when data are collected during barefoot walking [42][43][44] or with shoes 45,46. For example, Herb et al. 45 evaluated gait kinematics while the subjects wore shoes and reported differences in shank-rearfoot coupling between CAI and control groups across the gait cycle. The authors attributed the results to altered sensorimotor function in the CAI group due to their ankle pathology. In contrast, our results did not demonstrate differences in movement analysis between subjects with and without CAI, even under a task that requires greater sensorimotor activation, such as BW. A possible explanation for this difference may be related to the uniqueness of barefoot walking. Barefoot walking promotes higher plantar loading, resulting in enhanced afferent proprioceptive feedback, which is desirable for the control of gait and kinematic adjustments. Furthermore, BW relies more on proprioception than on visual feedback. Thus, the augmented feedback provided by barefoot walking may increase the ability of the sensorimotor system to organize movement patterns. Although BW did not distinguish between groups, it affected the spatiotemporal and kinematic variables in both groups compared to FW. This finding is in agreement with previous studies that reported changes in spatiotemporal and kinematic characteristics in young adults during BW, as compared to FW 19,20,23,47. Consistent with previous research, BW was characterized by slower gait velocity, reduced cadence, increased ankle dorsiflexion and decreased plantar flexion 20. The present study also documented increased stride time variability, which may indicate less stability during BW 48.
Until recently, BW was considered to be a simple reversal of FW. It was hypothesized that a single spinal mechanism controls both FW and BW 49,50. However, current evidence suggests that BW utilizes additional elements, presumably supraspinal, in addition to a common spinal drive 18,22. The significant adaptations during BW in both groups, and particularly the increased stride time variability, may support the notion that the control of BW requires more central nervous system resources than does FW.
Another interesting finding of the current study relates to sagittal ankle kinematics. Subjects with CAI have been reported to have decreased peak ankle dorsiflexion during FW compared to healthy controls 5. In the current study, peak ankle dorsiflexion during FW was similar to findings of previous studies with CAI 5,43, with no difference between groups, but greater dorsiflexion (+4.31°) was observed under the BW condition. Emerging research suggests that BW can improve locomotion in patients with neurological lesions, as well as in patients with musculoskeletal disorders. A recently published study reported the effectiveness of BW as a rehabilitation technique for patients after anterior cruciate ligament reconstruction 51. Based on our results, clinicians may consider training subjects with CAI under varied BW conditions in order to enhance their sensorimotor control of ambulation. Furthermore, the increased ankle dorsiflexion during BW may suggest that this condition can be utilized to gain greater ankle dorsiflexion. To the best of our knowledge, there are no published data documenting the effectiveness of BW training for patients with CAI. Thus, future research should be performed to confirm the effectiveness of this intervention.
In the present study, gait was evaluated while the subjects walked on a treadmill. When walking over-ground, a constant speed is not usually sustained for a long period of time 52 . In contrast, during treadmill walking, the speed is fixed. The constant speed and rhythm during treadmill walking may influence the spatiotemporal and kinematics variables. However, while changes in spatiotemporal variables and hip and knee kinematics were demonstrated, ankle kinematics seem to be similar under forward self-selected walking over ground and on a treadmill [53][54][55] . This may support the ecological validity of the findings regarding ankle kinematics during FW in the current study.
As far as we know, there are no studies that have compared BW on a treadmill to BW over ground. Visual information is important for maintaining equilibrium and stability during locomotion. While walking backward, visual information is limited and the subject cannot observe potential obstacles. Furthermore, during over-ground locomotion the subject moves with respect to the surroundings, while during treadmill walking the opposite occurs, and the surroundings move with respect to the subject. This may add complexity to visual perception during BW treadmill walking. Thus, future studies comparing BW over ground and on a treadmill are warranted.
This study had several limitations. It was originally powered to identify differences between the groups for gait speed and did not account for the additional spatiotemporal and kinematic variables. Additional limitations are that we did not separate CAI subjects according to mechanical and functional instability, and the subjective report of perceived instability during BW was not tested. Updated models of CAI indicate differences between mechanical and functional instability among individuals with CAI and stress the importance of evaluating self-reported perceived instability 39. Thus, further investigations with a larger cohort should be undertaken to confirm the study results and to assess relevant subgroups of patients with CAI.
Conclusions
Participants with CAI and healthy controls demonstrated significant changes in spatiotemporal and ankle kinematic gait characteristics between the BW and FW conditions. However, there were no significant between-group differences in either condition, indicating that subjects with CAI adjust their spatiotemporal and ankle kinematic characteristics during BW similarly to healthy controls. Clinicians should consider this information with caution when assessing and designing training programs for individuals with CAI, given the heterogeneity of this population.
Data availability
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. | 2020-07-13T14:20:34.224Z | 2020-07-13T00:00:00.000 | {
"year": 2020,
"sha1": "2125a14f69eeef6e36e0f3d850aa2a2679df34a1",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-68385-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2125a14f69eeef6e36e0f3d850aa2a2679df34a1",
"s2fieldsofstudy": [
"Medicine",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
37207210 | pes2o/s2orc | v3-fos-license | Learning Disorder Secondary to Epilepsy: A Case Report
Introduction: Epilepsy is a syndrome characterized by the presence of seizures that can affect the cognitive performance of the individual. Neuropsychology has studied the idiopathic epilepsies to understand whether the behavioral and cognitive impairments are associated with electrical discharges in the brain, and not with an injury itself. Objective: To identify the cognitive impairments of a child with epilepsy and a diagnosis of learning disorder. Method: The sample consists of one child diagnosed with epilepsy, nine years old, from Maceió-AL. The methodology is a qualitative and descriptive case report, for which neuropsychological tests were applied. Results: The test results show cognitive deficits, with impaired attention and memory and slowness of reasoning. Conclusion: Despite these results, it cannot be said in this case that epilepsy was the only factor that triggered the learning disorder, because the child had related comorbidities.
Introduction
The International League Against Epilepsy (ILAE) in 2005 defined epilepsy as a disorder of the brain characterized by a predisposition to generate epileptic seizures [1].
Epilepsy is usually seen as a traumatic condition. Even today, with more information and knowledge about the disease, it remains a condition that can impair patients' behavioural abilities [2]. It is a disease that affects cognitive function, compromising learning [3].
The constant difficulties faced by the patient with epilepsy during the learning process, together with the social and family environment, make them more likely to suffer from anxiety [4]. Considering these aspects, early diagnosis and treatment, providing less loss of quality of life, is the primary goal of the health team, school and family. A deep understanding of the neurological and educational mechanisms involved in this pathology has been the challenge of researchers from different scientific areas.
From the data presented, it is important to learn about disorders associated with epilepsy. Therefore, this research aims to determine which cognitive impairments a child diagnosed with epilepsy and learning disorders may have. Although many children with epilepsy have few difficulties in social and cognitive development, the literature shows that epilepsy is associated with increased risk for a variety of behavioral and learning problems. Children with epilepsy commonly show a discrepancy between academic performance and intellectual ability [5]. The most frequent complaints are memory difficulties, slowing of reasoning and lack of attention. In turn, patients with epilepsy remain an important source of knowledge for neuropsychology; i.e., studies with epileptic patients have contributed to the knowledge of the workings of the brain and cognitive function.
Materials and Methods
This study was executed according to Resolution 466/12, after approval by the Institutional Ethics Committee, under protocol number 418,738, on 16/10/2013.
The sample consisted of a child diagnosed with epilepsy referred by a neurologist.
The patient is a nine-year-old male attending the third grade at a public school in the state of Alagoas. This is a case report using a qualitative and descriptive method, which allows an in-depth, systematic study of a particular case.
Study Procedures
The child's guardian was invited to participate in the study. At this point, the informed consent document was explained and presented, along with the commitment regarding the results and the handling of the materials and/or data collected. The study was also explained to the child in age-appropriate language, and the dates of the sessions were set.
The materials used were the psychological tests WISC III, Raven's Coloured Progressive Matrices and the Rey Complex Figure, together with a psychological anamnesis, a neurological evaluation, and educational and behavioral school performance reports.
The Wechsler Intelligence Scale for Children, or WISC III, is a clinical tool for individual application that assesses the intellectual capacity of children aged 6 to 17 years [6].
Raven's Coloured Progressive Matrices can be described as a test of "observation and clarity of thought" and aims to assess general intelligence [7].
The Rey Complex Figure is one of the 10 most commonly used neuropsychological tests in the world, owing to the variety of cognitive processes it is designed to measure, such as constructional praxis, planning, strategies for problem solving, perception, motor function and visual memory [8].
Data Analysis Method: comparative; the data obtained in the case study were compared with the data found in the literature.
Finally, we evaluated the results obtained in the tests and then returned the child to the guardian.
Case Description
LHFS, nine years old, born on 02/02/2004, is a male in the third year of elementary school, native of Alagoas, Brazil. According to the mother's reports, the pregnancy was incidentally diagnosed during a CT scan (with iodinated contrast media). It was a high-risk pregnancy because the mother was undergoing chemotherapy to treat cancer of the ovary and right uterine horn. The possibility of interrupting the pregnancy was raised, but after contact with the doctor at the Cancer Institute of Rio de Janeiro it was decided to maintain it, although the pregnant woman was very anxious to terminate the pregnancy. She reported periods of negative emotions, conflicts and difficulties in the relationship with her husband. It was a term pregnancy, with caesarean delivery and no fetal distress.
The child had psychomotor development adequate for his age, with neck control at three months. He sat without support at seven months, crawled at eight months, walked at 12 months, and spoke his first words between 10 and 12 months. Among common childhood diseases, he had chickenpox. At 3 years old he had an epileptic seizure. He suffered three falls with fractures of the upper limbs and had two hospitalizations for three surgeries (phimosis, umbilical hernia and fracture). The child's mother reports poor school performance and heteroaggressiveness as the main complaints. He can read a little, cannot write, and does not know money values, days of the week, or the time. He began to show aggression, neuropsychomotor agitation and seizures at three years of age. Guided by the school, the parent sought a neurological assessment because the child was assaulting other children. An electroencephalogram (EEG) was performed, epileptogenic activity was detected predominantly in the temporo-parietal-occipital areas, and treatment with carbamazepine and pericyazine was initiated.
The mother states that a maternal uncle has epilepsy and also reports the occurrence of migraine (maternal aunt, mother and grandmother). The patient was evaluated with liver and kidney function tests, blood count and computerized electroencephalography (EEG).
He was diagnosed with temporal lobe epilepsy, secondary to external causes (disease during pregnancy and iatrogenic medication during pregnancy), i.e., ICD-10: G40.3 (epilepsy) and F71 (moderate mental retardation). He was treated with carbamazepine 200 mg and Gardenal
Results
During the neuropsychological assessment, he agreed to perform all activities. In the HTP test he showed no interest; he finished every drawing quickly, saying that he could not add any detail, and did what was requested rapidly. This test was used for the purpose of establishing rapport. The results obtained with the Raven Coloured Progressive Matrices and the WISC III verbal scale, including verbal IQ, are shown in Table 1 and Table 2. The scores for the verbal comprehension index remained in the average range; the perceptual organization and processing speed indexes were classified as borderline, and the resistance-to-distraction index as intellectually deficient.
The Rey Complex Figure test (Figure 1) was used according to the age of the patient. In the figure reproduction step, immediate memory was observed. He started drawing the details and then copied the adjacent elements, resulting in a distorted copy; he was classified as Type IV due to the juxtaposition of details. He made the copy in 8 minutes and obtained a percentile of 10, classifying him as below one standard deviation. When prompted to draw from memory (evocation), the patient initially resisted, saying that he did not know how, but was encouraged to draw what he remembered. He made a quick drawing in 2 minutes with few elements, obtaining a percentile of 10, clearly insufficient.
The patient showed multiple cognitive impairments affecting several cognitive functions. However, despite this finding, it was not possible to confirm that epilepsy is the sole cause of the learning disorders, since multiple comorbidities were observed in the case studied. Therefore, it can be said that the learning disorder has multifactorial causes.
Discussion
According to the analysis, the patient showed significant deficits in the attentional system, visuo-constructive perception, organizational ability, and memory, as well as slowed reasoning. His intellectual capacity was lower than expected for children of his age and education. Diniz (2010) states that alterations in attention, executive functions, and general intelligence level are described in patients with epilepsy. The neuropsychological evaluation added important data for treatment planning. However, one cannot say that the learning disorder is due exclusively to epilepsy, as it may be associated with the comorbidities of the case.
Final Considerations
This research does not exhaust all the possibilities for evaluating learning disorders secondary to epilepsy, but it contributes to new fields of study. An individual study (case study), when minutely detailed, can guide studies of greater magnitude and breadth.
In this context, the neuropsychological assessment is crucial for tracing the neuropsychological profile of the patient, and a good evaluation should start with the interview.
Thus, after data collection, neuropsychological tests are applied. The essential purpose of a neuropsychological battery is the overall assessment of cognitive functions, specifying the dysfunctions of attention, memory, language, and executive functions that are the basis for the development of intellectual abilities. In this way, the neuropsychological assessment contributes to a multidisciplinary understanding of the case studied in this research.
Thus, the learning disorder secondary to epilepsy was evaluated taking into account the results of the tests applied to the patient, the medical history obtained from the mother, and supporting reports from the school/teacher. These were the pillars for recognizing that the patient in this study had learning disorders, while making it impossible to state that his diagnosis of epilepsy alone was responsible for all the comorbidities.
It is recommended that the professionals involved with patients with learning disabilities observe the results obtained in the neuropsychological assessment, with a view to detecting needs for changes in the therapeutic project. This allows improvement of the treatment, finding strategies that improve habilitation/rehabilitation techniques and more appropriate therapies, thereby improving psychomotor development and social interaction so that the individual is better socialized in his world. | 2017-08-30T10:59:25.407Z | 2016-10-12T00:00:00.000 | {
"year": 2016,
"sha1": "20b422d7cd651155a2612838b4e701e81e6324aa",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=71954",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "20b422d7cd651155a2612838b4e701e81e6324aa",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
265019146 | pes2o/s2orc | v3-fos-license | The AGN fraction in high-redshift protocluster candidates selected by Planck and Herschel
A complete understanding of the mass assembly history of structures in the universe requires the study of the growth of galaxies and their supermassive black holes (SMBHs) as a function of their local environment over cosmic time. In this context, it is important to quantify the effects that the early stages of galaxy cluster development have on the growth of SMBHs. We used a sample of Herschel/SPIRE sources in $\sim$ 228 red and compact Planck-selected protocluster (PC) candidates to estimate the active galactic nuclei (AGN) fraction from a large sample of galaxies within these candidates. We estimate the AGN fraction using the mid-infrared (mid-IR) photometry provided by the WISE/AllWISE data of $\sim650$ counterparts at high redshifts. We created an AllWISE mid-IR colour-colour selection using a clustering machine learning algorithm and two {\it WISE} colour cuts in the 3.4 $\mu m$ (W1), 4.6 $\mu m$ (W2) and 12 $\mu m$ (W3) passbands to classify sources as AGN. We also compare the AGN fraction in PCs with that in the field to better understand the influence of the environment on galaxy development. We found an AGN fraction of $f_{AGN} = 0.113 \pm 0.03$ in PC candidates and an AGN fraction of $f_{AGN} = 0.095 \pm 0.013$ in the field. We also selected a subsample of 'red' SPIRE sources with a higher overdensity significance, obtaining $f_{AGN} = 0.186 \pm 0.044$, versus $f_{AGN} = 0.037 \pm 0.010$ for 'non-red' sources, consistent with higher AGN fractions in denser environments. We conclude that our results point towards a higher AGN fraction in PCs, in line with other studies.
INTRODUCTION
Galaxies in the universe are not randomly distributed in space; instead, they can be isolated (i.e. field galaxies) or in gravitationally-bound structures, such as groups or galaxy clusters (e.g. Oort 1983; Waldrop 1983).
This raises important questions about the differences between galaxies as a function of their environment and how their evolutionary paths vary over cosmic time, from the first density fluctuations to today's local structures. To answer these questions, it is necessary to study protoclusters (PCs) of galaxies, the progenitor structures of today's massive clusters, during the epoch of their formation (Muldrew et al. 2015; Overzier 2016; Muldrew et al. 2018). Consequently, there is a need to identify and characterise PCs at high redshift, at z ∼ 2 − 3 ('cosmic noon'), which corresponds to the time in cosmic history when the peak of the star formation rate (SFR) density of the Universe occurs (Madau & Dickinson 2014; Förster Schreiber & Wuyts 2020).
In order to provide a complete picture of galaxy evolution as large-scale structures assemble and develop, we must understand the simultaneous growth of galaxies and their supermassive black holes (SMBHs). The active growth of a SMBH in a galaxy is typically signalled during its most vigorous phases of mass accretion (active galactic nucleus; AGN). In this regard, there is evidence of a peak at z ∼ 2 − 3 for high-luminosity AGN (Hasinger et al. 2005; Fanidakis et al. 2012), cosmic BH accretion (e.g. Croom et al. 2009; Delvecchio et al. 2014) and the space density of quasars (e.g. Brown et al. 2006; Richards et al. 2006).
AGN activity in different environments has also been explored. Results include lower average AGN fractions in clusters at redshift z < 0.5 compared to the field (Mishra & Dai 2020), no dependence of optical AGN activity on environment in blue galaxies (Miraghaei 2020), higher AGN fractions for massive galaxies than for lower-mass galaxies (Pimbblet et al. 2013), similar AGN fractions in clusters and the field at 0.5 < z < 0.9 (Klesman & Sarajedini 2012), and an increase of AGN fractions with redshift (Eastman et al. 2007). Nevertheless, it is still unclear whether or not the local environment of galaxies plays a significant role in the growth of galaxies and their SMBHs. To clarify this, a statistical study of the environments hosting AGN activity is required; in particular, it is important to determine the occurrence of AGN in PCs at different and higher redshifts.
Studies of high-redshift PCs concluded that they exhibit higher fractions of AGN and star-forming galaxies compared to the field, as opposed to overdensities at lower redshifts, which have lower fractions than the field (e.g. Overzier 2016, a review). For instance, AGN fractions measured in PCs range between 2 and 20 times higher than in the field (Lehmer et al. 2009, 2013; Digby-North et al. 2010; Krishnan et al. 2017). Polletta et al. (2021) found similar results for the AGN fraction (f_AGN = 13% ± 6%) in a PC at z = 2.16. Recently, Macuga et al. (2019) found a PC at z = 2.53 with an AGN fraction ∼2 times lower than in the field, indicating a lack of clarity regarding AGN activity in PCs. Further, all of these studies showing a larger AGN fraction are based on X-ray selected AGN (see Casey et al. 2014 for a review). This type of selection is biassed against highly obscured sources (Hickox & Alexander 2018; Hatcher et al. 2021, and references therein). Therefore, to provide a complete picture, other methods must be used to select AGN.
Comparing all of these studies is difficult, since they all use different methods for selecting AGN or measuring the AGN contribution, different sensitivity limits, and different definitions of non-AGN host galaxies (see Padovani et al. 2017 for a review). Whether differences in AGN fractions are due to redshift evolution, observational biases of PCs selected at different halo masses or evolutionary states, variations in the general and systematic properties of PCs depending on how they are selected, or just individual PC-to-PC variations remains a crucial open question.
Large samples of PC candidates have been built using large photometric surveys that have mapped significant areas of the sky, and some effort has been made to characterise these kinds of environments (e.g. Chiang et al. 2013; Umehata et al. 2015; Lee et al. 2016; Shimakawa et al. 2018; Miller et al. 2019). Performing a larger census of galaxies, especially AGN, within PCs is crucial to understanding the physical processes involved and determining whether the environment of a forming galaxy cluster at high redshift can trigger or drive the growth of SMBHs in its member galaxy population.
The main goal of this study is to measure the AGN fraction in a large sample of PC candidates. We use the sample of 228 Planck-selected PC candidates found in Planck Collaboration et al. 2015 (hereafter Planck XXVII), which itself is drawn from a more general sample of the Planck list of high-redshift source candidates (PHZ, Planck Collaboration et al. 2016). This sample was followed up by Herschel/SPIRE, and it is biassed towards highly star-forming regions. We combined the Planck XXVII catalogue with data from the AllWISE data release of the Wide-field Infrared Survey Explorer (WISE; Wright et al. 2010), which has mapped the whole sky. Using WISE sources allows us to use a mid-IR method that selects both obscured and unobscured AGN (Stern et al. 2012). For this, we built a classifier that includes both a clustering machine learning algorithm and W1-W2-W3 colour cuts. With the classification of our sources, we were able to estimate AGN fractions for both PC members and field galaxies.
This paper is organised as follows: in Section 2 we describe our Planck XXVII sample and its WISE counterparts, along with the control sample needed to construct our classifier; in Section 3 we present how we classify AGN sources with our classifier, together with estimates of the method's uncertainty; in Section 4 we present the measured AGN fractions and our comparison to previous results in the literature; in Section 5 we discuss our results; and in Section 6 we summarise our results and present our conclusions.
DATA & CATALOGUE
In this work, we use a catalogue of Planck colour-selected sources from Planck Collaboration et al. 2015 (Planck XXVII), which corresponds to a catalogue of high-redshift protocluster candidates. This sample has follow-up observations with Herschel/SPIRE, and the sources detected at > 3σ in the Herschel/SPIRE 350 μm band will be referred to as "SPIRE sources". This is what we consider our main sample, and it is described in Section 2.1.1. To have higher resolution photometry than Herschel/SPIRE, we derive the AGN fraction using their WISE counterparts; this sample is described in Section 2.1.2. Also, to create a classification scheme that selects AGN, we compiled a control sample that includes catalogues of AGN (see Section 2.2.1) and non-AGN sources (see Section 2.2.2).
Planck XXVII
Our main sample consists of the Herschel/SPIRE follow-up observations of 228 Planck sources from Planck Collaboration et al. (2015). These fields, selected as cold sources of the cosmic infrared background (CIB) and from the Planck Catalogue of Compact Sources (PCCS), were chosen for follow-up because their rest-frame far-infrared colours show a peak within the 353-857 GHz frequency range, allowing the selection of ultra-luminous infrared galaxies.
This sample is dominated by dusty far-infrared galaxies with high star formation rates, suggesting the signatures of highly star-forming protoclusters at high redshift, some line-of-sight projections (Negrello et al. 2017), and strongly-lensed sources. Therefore, it is important to note that this study targets a specific population of galaxies in protoclusters, i.e. their most star-forming population.
Particularly for this study, we have discarded the Herschel/SPIRE sources that are considered lensed (Cañameras et al. 2015, Dole H., private communication). After removing the lensed sources, we are left with 193 Planck sources.
Although this catalogue offers a good opportunity to study a large number of star-forming galaxies in protoclusters, it does not provide secure redshift measurements, nor does it have enough multiwavelength observations to derive redshift estimates such as photometric redshifts. However, we do have an idea of the redshift range of these sources.
First, since these sources are considered 'cold' sources of the cosmic infrared background (CIB), we know that they are at redshifts z > 1, because the CIB is considered a proxy of intense star formation at those redshifts (Planck Collaboration et al. 2015, 2014, and references therein).
More specifically, Planck observations show that these sources have spectral energy distributions (SEDs) that peak between 353 and 857 GHz, which corresponds to redshifted infrared galaxies at z ∼ 2 − 4 (Planck Collaboration et al. 2015).
Also, Planck Collaboration et al. (2015) followed the approach of Amblard et al. (2010) and found a suggested redshift range of z ∼ 1.5 − 3 from their Herschel colours and modified-blackbody SEDs, with the redshift distribution of the SPIRE sources peaking at z = 2 or 1.3 for dust temperatures of T_d = 35 K or 25 K, respectively.
Planck Collaboration et al. (2015) separated the SPIRE sources into two regions, the 'in' region and the 'out' region. The 'in' region is defined as the 50% Planck intensity contour at 545 GHz, the map with the best signal-to-noise ratio (SNR), and has an approximate radius of ∼5 arcmin. Planck Collaboration et al. (2015) performed a statistical analysis of the number counts in these regions and compared them with two control samples, the HerMES 'level 5' Lockman-SWIRE field (Oliver et al. 2010) and the Herschel Lens Survey (HLS) cluster fields at z < 1 of Egami et al. (2010). The analysis shows that 'in' regions exhibit a chromatic excess consistent with a population of high-redshift (z = 2 − 4) lensed candidates, that 'in' regions at both 350 and 500 μm have higher counts than the Lockman field and the z < 1 HLS cluster fields, that 'in' regions have an excess of SPIRE sources, and that 'out' regions have number counts consistent with the Lockman field and the HLS cluster fields, with densities similar to blind surveys. Thus, the analysis suggests that the 'in' and 'out' regions would be a good method for selecting PC member candidates and field sources, respectively. For instance, this approach is used by Lammers et al. (2022).
In Figure 1 we show the WISE image of one of the PC candidates as an example. We show the W2 band image for the field PLCK_HZ_G086.1plus61.6, along with the 'in' and 'out' sources (pink and cyan circles, respectively). The Herschel 500 μm emission is also shown as yellow contours, illustrating the difference in resolution between Herschel and WISE. We also show the contour at 50% of the peak flux of the Planck image at 545 GHz, which separates the 'in' and 'out' regions.
We have thus decided to take advantage of the AGN diagnostic power provided by WISE to assess the presence of AGN activity in the Planck XXVII PC sample, by using the WISE counterparts of our SPIRE sources. Moreover, since we expect PC members to be bright and red sub-mm sources, we reduce contamination from non-members by only selecting WISE sources associated with Herschel sources.
The SPIRE sources were cross-matched with the AllWISE data release (Wright et al. 2010; Mainzer et al. 2011), using the public database from the NASA/IPAC Infrared Science Archive (IRSA).
The match was done with the SPIRE 250 μm band and considered only the closest source as a counterpart (avoiding multiple counterparts) within a radius of 9′′, which is half the resolution of the SPIRE 250 μm band. This was a conservative choice to limit wrong associations. We also required that the WISE sources were photometrically unaffected by contamination or artefacts (cc_flags='0000') and that they were point-like (ext_flg=0), as expected for high-redshift sources. We use w#mpro Vega magnitudes (where # is the observing band 1, 2, 3 or 4), which is the appropriate magnitude for non-extended sources.
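As a rough illustration, a nearest-neighbour cross-match of this kind can be done with astropy as sketched below; the table and column names (`spire`, `wise`, `ra`, `dec`) are hypothetical placeholders, not the actual catalogue schema.

```python
# Nearest-neighbour cross-match within 9 arcsec (a sketch; table and column
# names are hypothetical).
import astropy.units as u
from astropy.coordinates import SkyCoord

spire_coords = SkyCoord(ra=spire['ra'] * u.deg, dec=spire['dec'] * u.deg)
wise_coords = SkyCoord(ra=wise['ra'] * u.deg, dec=wise['dec'] * u.deg)

# For each SPIRE source, find the single closest AllWISE source...
idx, sep2d, _ = spire_coords.match_to_catalog_sky(wise_coords)

# ...and keep only matches closer than the 9 arcsec search radius
good = sep2d < 9 * u.arcsec
wise_counterparts = wise[idx[good]]
```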
After the cross-match, it was necessary to choose an SNR threshold for the WISE bands to obtain a trustworthy sample of counterparts. To decide on an SNR threshold, we derived the completeness level of the sample at different SNR values. For the completeness estimates, we searched for all sources in AllWISE within a radius of 20 arcmin from the Planck field centres, which is a few times larger than a typical Planck 'in' region, obtaining more than 1 million sources. Then, we compared the mean fluxes of the sample under different SNR limits for each band with the AllWISE catalogue completeness; details on how this completeness is computed are found in the AllWISE documentation. After this comparison, we decided on a threshold of SNR ≥ 7, which corresponds to a completeness level of at least ∼45% for our AllWISE counterpart sample. We argue that a higher completeness level is not required, considering that our sources from the Planck XXVII catalogue are secure SPIRE detections. This SNR threshold is also consistent with a confident point source detection (Lonsdale et al. 2015).
Finally, from our 6904 SPIRE sources, we obtained 646 AllWISE counterparts. Out of the total AllWISE sample, 150 are considered PC members (or 'in' sources) and 496 are considered field galaxies (or 'out' sources). A WISE W1-W2 vs. W2-W3 colour-colour diagram of our AllWISE sources is shown in Figure 2, where we show both the PC member ('in') and field galaxy ('out') sources.
Control Sample
To train the AGN classifier, a control sample was compiled from a combination of catalogues of AGN and non-AGN sources with available WISE colours. Considering the suggested redshift range of the SPIRE sources (discussed in Section 2.1.1) and the redshift range of the confirmed structures in the sample, sources were selected within 1 ≤ z ≤ 3.
If a catalogue includes the WISE photometry, the magnitudes and colours were retrieved from the catalogue itself. Otherwise, a cross-match with AllWISE was done, following the same procedure as for the SPIRE sources, but using a search radius of 6′′, which corresponds to half the best angular resolution of the WISE bands.
AGN sources
For the AGN subsample, we selected AGN sources from the Million Quasars (Milliquas) catalogue, version 7.2 (Flesch 2021), and the mid-IR AGN selected from AllWISE data (Secrest et al. 2015). The Flesch (2021) catalogue corresponds to a compilation of ∼800,000 quasars up to 30 April 2021 and is the updated version of the Flesch (2015) catalogue. It includes different types of sources, and we only selected secure quasar objects.
Non-AGN galaxies
Non-AGN sources were selected from the catalogue of star-forming galaxies at z ∼ 1.6 in the FMOS-COSMOS survey from Kashino et al. (2019) and the catalogues from the Cosmic Assembly Near-IR Deep Extragalactic Legacy Survey (CANDELS; Grogin et al. 2011; Koekemoer et al. 2011). In particular, we used the GOODS-S CANDELS and UDS CANDELS stellar mass catalogues from Santini et al. (2015), the CANDELS-EGS stellar mass catalogue from Stefanon et al. (2017) and the CANDELS-COSMOS multiwavelength catalogue from Nayyeri et al. (2017).
The catalogue from Kashino et al. (2019) contains 5,484 objects observed over the COSMOS field, with ∼30% of them lying within 1 ≤ z ≤ 3. AGN sources were discarded using catalogues of X-ray sources (Kashino et al. 2019). Only 516 sources remained with AllWISE photometry after cross-matching.
The CANDELS catalogues (Stefanon et al. 2017; Nayyeri et al. 2017; Santini et al. 2015) were chosen because they include an AGN flag, which allows the selection of non-AGN sources. This flag comes from SED fitting of multi-wavelength observations.
Balanced control sample
The resulting sample contains a total of ∼21,000 AGN sources and 636 non-AGN sources. The number of AGN sources is much greater than the number of non-AGN sources, but a statistically balanced sample is necessary to train the classifier and avoid biases.
Therefore, we reduced our AGN sample to match the number of non-AGN sources by randomly selecting 636 AGN sources. This leaves 636 AGN and 636 non-AGN sources, a total of 1272 sources, summarised in Table 1.
The colour-colour diagram for our final control sample can be seen in the right panel of Figure 2, where we distinguish between the AGN and non-AGN samples. At first glance, no strong separation between AGN and non-AGN can be seen. However, the AGN distribution tends to be redder in the W1-W2 colour and bluer in the W2-W3 colour when compared to non-AGN galaxies.
AGN CLASSIFICATION
We design a colour-colour selection criterion (i.e. a classifier) to sort the galaxies among our SPIRE sources into AGN or non-AGN. This is achieved by finding a way of separating both types of galaxies in the W1-W2-W3 colour-colour space. The classifier uses two main criteria. First, a K-means clustering machine learning algorithm (MacQueen 1967; Lloyd 1982) is applied to the WISE colour-colour diagram of known AGN and non-AGN sources (i.e. the control sample). After that, a mid-IR/WISE colour selection criterion is applied.
In this case, we use two colour cuts in W1-W2 and W2-W3 (see the next subsection). The colour cut at W1-W2 > 0.94 is higher than the values used in other studies (e.g. 0.8 in Stern et al. 2012 and 0.5 in Blecha et al. 2018). This type of classifier is based on similar studies that were able to distinguish different types of galaxies, mostly at lower redshifts, in a colour-colour diagram of WISE W1-W2 and W2-W3 colours (Lake et al. 2012; Mingo et al. 2016; Jarrett et al. 2017).
After classifying the WISE counterparts of our SPIRE sources, we estimate the AGN fractions for both the PC members ('in' sources) and field galaxies ('out' sources). The AGN fraction uncertainty is estimated via a Monte Carlo approach.
Building the classifier
Before training our classifier, we subdivide our control sample into a training set, corresponding to 85% of the full sample, and a test set, corresponding to the remaining 15%. This results in 1,081 galaxies for the training set and 191 galaxies for the test set. These percentages were chosen to have enough sources to train the classifier while retaining enough sources to evaluate its accuracy.
The first part of the classifier uses the k-means algorithm from the Python package Scikit-learn (Pedregosa et al. 2011). K-means is an unsupervised, machine-learning, clustering algorithm. It subdivides the sample into clusters so that the within-cluster sum of squared distances in the W2-W3 vs. W1-W2 colour-colour space is minimised.
The K-means module uses the parameter K to decide how many clusters the algorithm will divide the data into. Since we want to distinguish between AGN and non-AGN, we set K = 2, thus dividing the data into two clusters.
Once the algorithm finishes assigning every data point in the training set to a given cluster, each point gets flagged with either 1 or 0, meaning that the source was selected as an AGN or a non-AGN, respectively. The separation is given by W1-W2 = 1.53(W2-W3) − 4.80. Since running the k-means algorithm alone does not cleanly divide our sample, we added two colour cuts to the classifier. The colour cuts were defined as the mean minus 3σ of the W1-W2 AGN distribution, and as the mean plus 3σ of the W2-W3 AGN distribution from our control sample. This corresponds to colour cuts at W1-W2 > 0.94 and W2-W3 < 4.04 (see Figure 6). In summary, we consider a source to be an AGN if it satisfies all of the following: W1-W2 > 0.94, W2-W3 < 4.04, and W1-W2 > 1.53(W2-W3) − 4.80.
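A minimal sketch of this two-stage selection is shown below, assuming scikit-learn and two hypothetical arrays: `train_colours`, with rows of [W2-W3, W1-W2] colours, and `train_is_agn`, the known control-sample labels.

```python
# Two-stage classifier sketch: k-means clustering in the colour-colour plane
# plus the two colour cuts (array names are hypothetical).
import numpy as np
from sklearn.cluster import KMeans

kmeans = KMeans(n_clusters=2, random_state=0).fit(train_colours)

# Identify which of the two clusters corresponds to AGN by majority vote
# over the known training labels.
agn_cluster = np.argmax([train_is_agn[kmeans.labels_ == c].mean() for c in (0, 1)])

def classify(colours):
    """Return True for sources selected as AGN (column 0: W2-W3, column 1: W1-W2)."""
    in_agn_cluster = kmeans.predict(colours) == agn_cluster
    colour_cuts = (colours[:, 1] > 0.94) & (colours[:, 0] < 4.04)
    return in_agn_cluster & colour_cuts
```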
Testing
We estimate the completeness, reliability and accuracy of the classifier using the test sample of 191 sources. We first verify the classification considering only the k-means clustering. This results in 97 true positives and 68 true negatives. Defining the completeness as the number of true positives divided by the sum of true positives and false negatives, we get a completeness of 98%. For the reliability, measured as the number of true positives divided by the sum of true positives and false positives, we get 80%. Lastly, for the accuracy, measured as the sum of true positives and true negatives divided by the total number of sources, we get 86%.
We then tested these parameters using the k-means algorithm combined with the colour-cut criterion. This resulted in 83 true negatives and 96 true positives, meaning that adding the colour cuts improves the accuracy of our classifier to 94%, with a 97% completeness and a 91% reliability. In Figure 4 we present the confusion matrices summarising these values.
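For reference, these three quality metrics follow directly from the test-set confusion matrix, as in the sketch below; `y_true` and `y_pred` are hypothetical boolean arrays of true and predicted AGN labels.

```python
# Completeness, reliability and accuracy from the confusion matrix (a sketch).
from sklearn.metrics import confusion_matrix

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
completeness = tp / (tp + fn)   # fraction of true AGN that are recovered
reliability = tp / (tp + fp)    # fraction of selected AGN that are true AGN
accuracy = (tp + tn) / (tp + tn + fp + fn)
```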
Monte Carlo simulation
To estimate the uncertainty of our method we performed a 10,000-step Monte Carlo simulation. Each step simulates colours in the W1-W2 vs. W2-W3 space, to which we apply our classifier and then measure an AGN fraction. To construct the simulation, we interpolate the W1-W2 and W2-W3 colour distributions of the SPIRE sources. The SPIRE distributions in each colour and their interpolations are shown in Figure 3.
Each step simulates the data using random points generated from the aforementioned distributions. The W1-W2-W3 colours of one of the artificially generated distributions are shown in Figure 5. Each simulated data point is then designated AGN or non-AGN using our classifier, and finally the AGN fractions are measured. The Monte Carlo simulation returns an approximately normal distribution of AGN fractions, whose standard deviation is taken as the corresponding uncertainty.
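One possible implementation of this resampling scheme is sketched below, using `scipy.stats.rv_histogram` as a stand-in for the interpolated empirical colour distributions; the observed-colour arrays `w12` and `w23`, and the `classify` function from the earlier sketch, are assumptions.

```python
# Monte Carlo uncertainty estimate sketch: resample colours from empirical
# distributions, classify each realisation, and take the spread of fractions.
import numpy as np
from scipy.stats import rv_histogram

dist_w12 = rv_histogram(np.histogram(w12, bins=30))  # W1-W2 distribution
dist_w23 = rv_histogram(np.histogram(w23, bins=30))  # W2-W3 distribution

fractions = []
for _ in range(10_000):
    # draw each colour independently, as in the text
    sim = np.column_stack([dist_w23.rvs(size=len(w12)),
                           dist_w12.rvs(size=len(w12))])
    fractions.append(classify(sim).mean())

f_agn, sigma_f = np.mean(fractions), np.std(fractions)  # fraction and 1-sigma error
```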
Classification of the SPIRE sources
We ran the classifier on our SPIRE sources with the following outcome. Out of the full catalogue of 646 sources, we found that 64 were selected as AGN, while 582 were selected as non-AGN sources. In particular, 17 AGN correspond to members of PC candidates and 47 to sources outside the PC candidates, i.e. field galaxies. For the non-AGN, 133 are PC members and 449 are non-PC members. These numbers are summarised in Table 2. The final classifier is represented in Figure 6, where we show the W1-W2 vs. W2-W3 colour-colour diagram of our SPIRE sources. The different symbols distinguish the sources that the classifier selects as AGN or non-AGN, and filled and empty symbols differentiate PC member galaxies from field galaxies, respectively. We also over-plot the (training) control sample as blue and red contours for AGN and non-AGN objects, respectively, to show the distribution of the galaxies used to train our k-means method.
AGN fractions
After classifying the sources in our SPIRE sample, we measured the AGN fraction in both the PC candidates and the field. The resulting AGN fraction for PCs is f_AGN = 0.113 ± 0.030, or 11 ± 3%. For the field, we found an AGN fraction of f_AGN = 0.095 ± 0.013, or 10 ± 1%.
The uncertainties on each AGN fraction come from the Monte Carlo simulations. The Monte Carlo histograms are shown in the top panels of Figure A1 of Appendix A. We note that the AGN fractions measured in the Monte Carlo simulations are quite similar to the actual AGN fractions, a good indication that our simulated data are a faithful representation of the observations. For a better understanding of our results, we also measured the AGN fraction for 'red' SPIRE sources. The 'red' sources come from the selection of the reddest Herschel sources by Planck Collaboration et al. (2015), defined as S_350/S_250 > 0.7 and S_500/S_350 > 0.6, based on source density distributions. This sample of red SPIRE sources has a higher overdensity significance than the full SPIRE sample (Planck Collaboration et al. 2015, see their Figures 6 and 7), suggesting this as another way of selecting PC members. Therefore, in this case, we measure AGN fractions for PC members and non-members considering the red SPIRE sources as the PC member candidates and the non-red sources as field galaxies.
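The red-subsample selection and the corresponding fractions reduce to a few lines; in this sketch the flux-density columns `S250`, `S350`, `S500` and the boolean array `is_agn` are hypothetical names.

```python
import numpy as np

# 'Red' SPIRE sources per the Planck Collaboration et al. (2015) colour cuts
red = (spire['S350'] / spire['S250'] > 0.7) & (spire['S500'] / spire['S350'] > 0.6)

f_agn_red = np.mean(is_agn[red])        # AGN fraction among red sources
f_agn_nonred = np.mean(is_agn[~red])    # AGN fraction among non-red sources
```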
The AGN fraction of red SPIRE sources is f_AGN,red = 0.186 ± 0.044, or 19 ± 4%. For the non-red sources, the AGN fraction is f_AGN,non-red = 0.037 ± 0.010, or 4 ± 1%. The Monte Carlo histograms showing the estimated uncertainties are in the middle panels of Figure A1 of Appendix A. At first glance, if we consider the PC members as 'in' sources and field galaxies as 'out' sources, we find a higher AGN fraction in PC candidates, but the difference is not statistically significant given the uncertainties of our estimates. However, if we consider the PC members as the 'red' sources and field galaxies as the 'non-red' sources, we find a clear increase of the AGN fraction in the PC candidates with respect to the field, by at least a factor of 3 (at 1σ uncertainty).
To put these AGN fractions in context, we also measured the AGN fraction of the HerMES 'level 5' Lockman-SWIRE field (Oliver et al. 2010), which has a similar depth to our SPIRE sources (Planck Collaboration et al. 2015). We find f_AGN,HerMES = 0.075 ± 0.008, or 8 ± 1%. The Monte Carlo histogram showing the estimated uncertainty is in the bottom panel of Figure A1 of Appendix A. This AGN fraction is lower than the f_AGN measured for the PC candidates. Figures 7 and 8 summarise these fractions.
Since we do not have the exact redshift for each source and only work with a suggested redshift range of 1 < z < 3, we plot the fractions as horizontal bars extending through that redshift range. To compare our results, we added AGN fractions from Macuga et al. (2019, and references therein) at a redshift of z = 2.53. The figure also includes measurements for different PCs from Lehmer et al. (2009), Digby-North et al. (2010), Lehmer et al. (2013), Polletta et al. (2021) and Krishnan et al. (2017), at redshifts of z = 3.09, 2.3, 2.23, 2.16 and 1.6, respectively. We found values of f_AGN in PCs similar to those in Krishnan et al. (2017) and Lehmer et al. (2013), while the others appear lower. It is important to keep in mind that these studies measured the AGN fraction within a single PC each, instead of within a large set of PC members as in this study. It is also important to mention that these studies are based on different AGN selection approaches than this work, so direct comparison is difficult. Nevertheless, they still mostly find a larger number of AGN in PCs than in the field.
AGN selection
We expect that training our AGN classifier with a richer data set would return a higher classification accuracy and better statistical results, since here we were limited by a relatively small sample of star-forming galaxies at high redshift with WISE photometry. According to Stern et al. (2012) and references therein, one could adopt a different colour cut in the range 0.7 ≤ W1-W2 ≤ 0.8, 'trading' completeness (bluer colour cut) for reliability (redder colour cut); our W1-W2 colour cut is higher than this range (W1-W2 > 0.94).
Keeping this in mind, plus the fact that our classifier shows a 94% accuracy (see Section 3.2 and Figure 4), we compared our classification method with the one presented in Assef et al. (2018), which also classifies AGN based on colour conditions. In particular, we compared the number of SPIRE sources selected as AGN following our criteria versus those of Assef et al. (2018) at their 90% and 75% reliability levels (R90 and R75). This was done by comparing the true and false positives obtained for R90 (R75); our method has an extra 6% of false positives, slightly surpassing the 90% reliability of Assef's method by 1% and reaching a completeness of 97%. In Figure 9 we show a comparison between Assef et al. (2018) and our AGN selection criteria, and in Table 3 we summarise this comparison. We conclude that our method and the method of Assef et al. (2018) are both useful and reliable ways to classify AGN. However, since the goal of this study is to measure AGN fractions, i.e. both the number of AGN and non-AGN matter, we argue that the most important characteristic of the classifier is its completeness. In this case, our method is more appropriate because our completeness is 97%, while at the same time we reach 91% reliability and 94% accuracy, in contrast to the 17% completeness of Assef et al. (2018) for R90 or the 28% completeness for R75.
Figure 8. AGN fractions (and 1σ significance) for the red SPIRE sources (blue), non-red SPIRE sources (red), SPIRE sources inside PCs (light blue), outside PCs (dark red) and the HerMES field (grey). Literature values from Polletta et al. (2021) and Macuga et al. (2019, and references therein) are added as references for cluster/PC (filled stars) and field (empty stars) galaxies. The arbitrary y-axis was chosen to better distinguish the differences in the AGN fractions, taking into account their significance. See also Figure 7.
AGN fraction and its implications in protoclusters
When considering the 'in' and 'out' sources as PC members and field sources, respectively, the AGN fraction that we find in PCs is not significantly higher than the fraction measured in the field, unlike what other studies have found. An important caveat is that only a few PCs in our sample are confirmed (see Section 2.1.1). Therefore, some sources contaminating our sample may be line-of-sight alignments, as suggested by Negrello et al. (2017), instead of members of the overdensities. We tested whether measuring the AGN fraction in a subsample with a higher overdensity significance (i.e. the subsample of red sources) resulted in a higher AGN fraction, and found a fraction higher than the field by at least a factor of 3 at 1σ uncertainty. This could suggest that by selecting this redder sample we were in fact cleaning our sample and removing line-of-sight alignments, which would be consistent with a higher AGN fraction in PCs.
Another possible explanation for not finding a highly significant AGN fraction excess in PCs is that, as described in Section 2.1.1, we are using a sample of PC candidates containing their most star-forming and dustiest members, instead of the full PC population, and the AGN population might not overlap with these.
Finally, the most likely explanation is that many PC members are too faint to be detected by WISE. Further, several AGN PC members may be detected only in the W1 and W2 bands, and not in W3. In that case, we would only be probing the brightest AGN in the structures, which are rare. To test this statement, we looked into how many members of the protocluster PHz G237.01+42.50 (G237) at z = 2.16 are detected by WISE. This PC has 31 spectroscopically confirmed members (Polletta et al. 2021). Using a cross-match radius of 6.5′′ (the W3 band resolution), we found 5 WISE counterparts out of the 31 sources. None of these counterparts are detectable in the W3 band, i.e. they have, on average, an SNR < 1 in W3. In other words, ∼16% of the members were detected in the W1 and W2 bands. Similarly, we consider the protocluster MAGAZ3NE J095924+022537 at z = 3.37 (McConachie et al. 2022). Out of 22 spectroscopically confirmed members, we found 7 sources within 10′′; none were detected in the W3 band. Thus, ∼31% of the members were detected only in the W1 and W2 bands.
Following this analysis, a diagnostic based only on W1 and W2 may be considered for future work. We find that using W3 became a disadvantage in our method, and other colours should perhaps be tested to find a better separation between AGN and star-forming galaxies without biassing the sample towards the most star-forming sources. Alternatively, a stacking analysis of the W3 signal could be done to reveal sources that are too faint to be detected individually. Our analysis could also point to the fact that the small difference we found between the AGN fractions in the field and in PCs may be meaningful even if it is not statistically significant. Thus, even though we did not find a highly significant difference, we think our results still hint at higher AGN activity in PCs.
One of the main limitations of this study is that we are using photometrically selected PC candidates instead of spectroscopically confirmed structures, due to the paucity of confirmed PCs available. Having a large data set of spectroscopically confirmed overdensities at high redshift would make it possible to better understand the relationship between AGN fractions − and, therefore, the growth history of SMBHs in galaxies − and the evolutionary state of early dense environments.
Nevertheless, WISE-selected AGN appear to be good indicators of overdensities (Jones 2017), as do other AGN selections in general (e.g. Noirot et al. 2016, 2018). In addition, follow-up observations with Spitzer/IRAC for some of these PC candidates (Martinache et al. 2018) continue to support the idea that these sources, or at least a good fraction of them, are true members of PC overdensities.
CONCLUSIONS
We estimated the AGN fraction in ∼228 protocluster candidates selected by Planck XXVII and followed up by Herschel (Planck Collaboration et al. 2015), a representative sample of high-redshift PC candidate members. This sample provides the photometry for 7099 sources and allows us to measure the AGN fraction of galaxies inside the overdensities and compare it with that of field galaxies. We used the WISE counterparts of these sources, since WISE provides higher-resolution photometry and the possibility of probing the stellar emission. This resulted in a catalogue of 646 counterparts.
To select the AGN in our sample, we constructed a classifier based on a mid-IR AllWISE colour-colour selection criterion. This is achieved by combining the colour cuts W1-W2 > 0.94 and W2-W3 < 4.04, which correspond to the mean minus 3σ of the W1-W2 and the mean plus 3σ of the W2-W3 AGN distributions of a control sample made up of AGN and non-AGN catalogues, with a k-means clustering algorithm that separates the control sample along the relation W1-W2 = 1.53(W2-W3) − 4.80. Our control sample includes known AGN and non-AGN galaxies that were used to train our classifier.
For a further study of the AGN fraction in PCs, we also measured the AGN fraction in a 'redder' (S_350/S_250 > 0.7 and S_500/S_350 > 0.6) subsample of our SPIRE sources, which has a higher overdensity significance. In this case we consider the red sources as PC members and the non-red sources as field galaxies. We found an AGN fraction of f_AGN,red = 0.186 ± 0.044, or 19% ± 4%, and f_AGN,non-red = 0.037 ± 0.010, or 4% ± 1%. Moreover, to assess our AGN fraction for the field sample, we also measured the AGN fraction in the Lockman-SWIRE field from HerMES, finding f_AGN,HerMES = 0.075 ± 0.008, or 8% ± 1%.
In terms of AGN activity in PCs, we found that our AGN fraction is not significantly higher in PCs compared to the field when considering the 'in' and 'out' sources as PC and field galaxies, respectively. For the field, both our 'out' sample and the HerMES sample have similar AGN fractions, suggesting that we have a representative field sample. However, we think that our results hint towards higher SMBH activity in overdensities, especially since we found a larger difference in the AGN fraction between the red and non-red samples, which scale with the overdensity significance of the sample.
Our main conclusion is that it is complicated to assess the AGN and SMBH activity in overdensities, particularly at these high redshifts. We believe that a combined and complete multiwavelength study is necessary to better understand the role of the environment in the evolution of galaxies and their SMBHs. We expect that new observations from the James Webb Space Telescope will improve this kind of study by delivering deeper and higher-resolution data for galaxies and large-scale structures in the redshift interval considered in this work.
project of the Jet Propulsion Laboratory/California Institute of Technology. WISE and NEOWISE are funded by the National Aeronautics and Space Administration.
Figure 1. A 20 × 16 arcmin² WISE observation at 4.6 μm (W2 band) of one of our PC candidates, PLCK_HZ_G086.1plus61.6, shown as an example. The yellow contours show emission levels at 2σ and 3σ of the Herschel/SPIRE observation at 500 μm for the same field. The red contour corresponds to 50% of the peak flux of the respective Planck image at 545 GHz, which separates the 'in' and 'out' regions. WISE 'in' and 'out' sources are enclosed by magenta and cyan circles, respectively. The sources enclosed by a blue star were classified as AGN according to our method (see Section 3).
Figure 2. W1-W2 vs. W2-W3 colour-colour diagram of the WISE counterparts of the SPIRE sources. Purple and cyan dots show the sources that are 'in' and 'out' of the Planck 50% intensity region, respectively. The 'in' sources display the same spread in colours as the sources in the 'out' region. The over-plotted contours show the colour distribution of our control sample at the 1σ, 1.5σ and 2σ levels: blue contours show the distribution of AGN sources, while red contours show the non-AGN sources. Our control sample is used to train and test our classifier (see Section 3) and includes the sources described in Table 1 (see Section 2.2). Most of the SPIRE sources have colours in the same range as the control sample. The AGN distribution tends to be redder in the W1-W2 colour and bluer in the W2-W3 colour when compared to non-AGN galaxies.
Figure 3. Colour distributions of the WISE counterparts of the SPIRE sources for the W2-W3 (left panel) and W1-W2 (right panel) colours. Purple and cyan represent sources flagged as 'in' and 'out', respectively. The green and blue dashed curves show the interpolations of the 'in' and 'out' distributions, which we used to generate the simulated data for the Monte Carlo simulation (used to estimate the uncertainty of the classifier).
Figure 4. Confusion matrices of our classification test with the 191-source test sample. Each matrix shows the number of true positives (bottom right), false negatives (bottom left), false positives (top right) and true negatives (top left). Left panel: confusion matrix for the classifier without the colour cuts at W1-W2 > 0.94 and W2-W3 < 4.04; the accuracy of the classification is 86%, with 98% completeness and 80% reliability. Right panel: confusion matrix for the classifier with the colour cuts added; the accuracy increases to 94%, with 97% completeness and 91% reliability.
Figure 5. [Caption missing in the extracted text; per Section 3.3, this figure shows the W1-W2-W3 colours of one of the artificially generated distributions used in the Monte Carlo simulation.]
Figure 6. Classification result for the SPIRE sources (green) in the W1-W2 vs. W2-W3 colour-colour diagram. For a source to be considered an AGN, three conditions must be met. The source must be located: (1) above the black horizontal line, which corresponds to the 3σ threshold for AGN in the W1-W2 colour; (2) to the left of the black vertical line, which is the 3σ threshold for AGN in the W2-W3 colour; and (3) over the red background area of the colour-colour diagram, which corresponds to the AGN classification given by the k-means separation. Filled (empty) stars represent the sources inside (outside) the PCs that were classified as AGN. Filled (empty) circles are the sources inside (outside) the PCs classified as non-AGN. The blue and red contours show the 1σ, 1.5σ and 2σ levels of the AGN and SF/non-AGN sources in the training data set of our control sample, respectively, showing that our main sample and control sample cover a similar colour range. The upper panel shows the histogram of the W2-W3 colour and the 3σ threshold (solid black line) for the SPIRE sources (green), and for the AGN (blue) and non-AGN (red) sources of the control sample. Similarly, the right-hand panel shows the histograms of the W1-W2 colour, including the 3σ threshold (solid black line). The coloured dashed lines show the fitted Gaussian model for each distribution.
Figure 9. Comparison of AGN selection criteria between this work and Assef et al. (2018). The left (right) panel shows the colour-colour distribution of our control (SPIRE) sample (grey dots). The dashed black line shows our AGN selection criterion, while up- and down-pointing triangles correspond to sources selected as AGN following Assef et al. (2018) for R90 and R75, respectively. The AGN-selected sources are colour-coded by W2 magnitude. The majority of our AGN-selected data (109% from the control sample, 84% from the full SPIRE sample, and 95% from the red SPIRE sources) were also selected as AGN following the criteria of Assef et al. (2018), principally for R90.
Table 1. Summary of our final control sample. For each catalogue, we show the type of galaxy selected, the number of sources and the corresponding reference.
Table 2. Classification result of the SPIRE WISE counterparts.
Figure 7. AGN fractions, f_AGN, for the red (blue), non-red (red), 'in' (cyan) and 'out' (dark red) SPIRE sources, and the HerMES field (grey), versus redshift. For easier visualisation, we show the 1σ significance of the AGN fractions as boxes at arbitrary redshift positions. Literature values from Polletta et al. (2021) (black star) and Macuga et al. (2019, and references therein; black circle, triangle, square and diamond) are added as references for cluster/PC (filled black markers) and field (empty grey markers) galaxies. The figure shows that the AGN fraction is, in general, greater in PCs than in the field. See also Figure 8.
Table 3. Ratio of AGN classifications following this work and Assef et al. (2018), and the true number of AGN. General comparison between the AGN classification from this work and that from Assef et al. (2018). | 2023-11-06T06:41:04.523Z | 2023-11-03T00:00:00.000 | {
"year": 2023,
"sha1": "86162f7bf14b8f2885f91bb3facb0763278ab703",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/mnras/advance-article-pdf/doi/10.1093/mnras/stad3404/52818976/stad3404.pdf",
"oa_status": "HYBRID",
"pdf_src": "ArXiv",
"pdf_hash": "86162f7bf14b8f2885f91bb3facb0763278ab703",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
211563137 | pes2o/s2orc | v3-fos-license | Transcranial photobiomodulation with near-infrared light from childhood to elderliness: simulation of dosimetry
Abstract. Significance: Major depressive disorder (MDD) affects over 40 million U.S. adults in their lifetime. Transcranial photobiomodulation (t-PBM) has been shown to be effective in treating MDD, but the current treatment dosage does not account for head and brain anatomical changes due to aging. Aim: We study effective t-PBM dosage and its variations across age groups using state-of-the-art Monte Carlo simulations and age-dependent brain atlases ranging between 5 and 85 years of age. Approach: Age-dependent brain models are derived from 18 MRI brain atlases. Two extracranial source positions, F3–F4 and Fp1–Fpz–Fp2 in the EEG 10–20 system, are simulated at five selected wavelengths and energy depositions at two MDD-relevant cortical regions—dorsolateral prefrontal cortex (dlPFC) and ventromedial prefrontal cortex (vmPFC)—are quantified. Results: An overall decrease of energy deposition was found with increasing age. A strong negative correlation between the thickness of extracerebral tissues (ECT) and energy deposition was observed, suggesting that increasing ECT thickness over age is primarily responsible for reduced energy delivery. The F3–F4 position appears to be more efficient in reaching dlPFC compared to treating vmPFC via the Fp1–Fpz–Fp2 position. Conclusions: Quantitative simulations revealed age-dependent light delivery across the lifespan of human brains, suggesting the need for personalized and age-adaptive t-PBM treatment planning.
Introduction
(2) burdensome side effects of antidepressant medication. 7 Furthermore, many patients prefer to self-manage, which contributes to the low treatment rates. 8 Therefore, new, effective, safe, and easy-to-administer treatment methods are needed to battle MDD. Photobiomodulation (PBM) is a near-infrared (NIR) light-based therapy technique and has shown therapeutic effectiveness for various neuropsychiatric disorders, including MDD. [9][10][11] The transcranial PBM (t-PBM) technique delivers NIR light through the scalp and skull. [12][13][14] Due to the penetration depth of NIR light in human tissues, clinically effective light dosages can be delivered to the disease-responsible brain regions without damaging superficial tissues. Although the molecular mechanisms of PBM remain a topic of active research, some studies report that treatment effects may derive from the excitation of a mitochondrial chromophore, cytochrome c oxidase, at NIR wavelengths, 14 stimulating the mitochondrial respiratory chain and increasing adenosine triphosphate production. 10,11 The concurrent production of reactive oxygen species may trigger cytoprotective and antioxidation pathways within the cell, with effects potentially lasting on the scale of days to weeks. 15 A wide range of studies, in both animal models and humans, have shown that PBM causes minimal or no adverse effects while producing therapeutic effects. 10,16,17 Although MDD has a broad age of onset, 2,4 most previously published studies have focused on t-PBM treatments using only middle-aged adult brain models. 18 However, personalization of treatments is key to increasing success rates and tolerability; therefore, our interest is in developing precise PBM treatment strategies adapted to individual patients. One of the main factors impacting t-PBM light dosage is the thickness of extracerebral tissues (ECTs), including both skull and scalp. [19][20][21] Therefore, a quantitative analysis of how brain development and senescence impact the effective dosage in a t-PBM treatment can provide valuable guidelines for clinicians to optimize their procedures and maximize treatment efficacy and tolerability.
To capture the variations of anatomical features among age groups, we have to first create anatomically appropriate brain/full-head models, including skin/skull/brain three-dimensional shapes and thicknesses. Fortunately, a number of recent studies have published comprehensive magnetic resonance imaging (MRI) atlases outlining the development of human brains from infants to elders. [22][23][24][25] In addition, several groups, including our own, have developed sophisticated brain segmentation and meshing pipelines to convert neuroanatomical scans into high-quality multilayered brain models. These resources make it possible to quantitatively investigate how the development and senescence of the human brain influences light penetration at different stages of life.
In addition, advanced photon transport models must be used to accurately account for the complex light-tissue interactions during t-PBM procedures. In this study, we applied the Monte Carlo (MC) method, a stochastic solver for the radiative transfer equation (RTE), which is widely considered the gold standard for light modeling in complex tissues. 26 While alternative models, such as the diffusion equation (DE), are dramatically faster and applicable to many types of human tissues, 27,28 for brain tissues DE is known to produce erroneous solutions due to the presence of low-scattering media, such as air cavities and cerebrospinal fluid (CSF). 29,30 The MC method solves the RTE rigorously by simulating large numbers of photons following a set of known probability models derived from physics. 31 The only major limitation is that MC methods are computationally expensive. To improve computational efficiency, we applied our widely disseminated hardware-accelerated MC modeling platform, Monte Carlo eXtreme (MCX). 32 This tool can shorten the simulation runtime by several hundred times compared to conventional CPU-based simulations. 32,33
The rest of the paper is organized as follows. In Sec. 2, we detail the preprocessing steps to create four-layer head segmentations from the neurodevelopmental MRI brain atlas library. 22 We also report the steps to obtain brain parcellations and the placement of light source positions. In Sec. 3, the simulated energy depositions for 2 t-PBM source placements and 5 selected wavelengths on 18 selected brain/head atlases, ranging between 5 and 89 years of age, are reported. In Sec. 4, we highlight the findings regarding the efficiency of different wavelengths, the energy deposition, and the exposure duration across the wide span of age groups. In addition, we also correlate our findings with the anatomical changes associated with brain development and senescence.
Creating Multilayer Head Models from MRI Brain Atlas Library
Brain segmentations are created by processing the Neurodevelopmental MRI database. [22][23][24] We select a total of 18 age groups, ranging from 5 through 89 years of age. Specifically, the average atlases for the age groups 5, 10, 14, 18, 20 to 24, 25 to 29, 30 to 34, 35 to 39, 40 to 44, 45 to 49, 50 to 54, 55 to 59, 60 to 64, 65 to 69, 70 to 74, 75 to 79, 80 to 84, and 85 to 89 are used in our study. For each atlas, a four-layer full-head segmentation is created, including the white matter (WM), gray matter (GM), CSF, and ECT. An additional air cavity segmentation is created to properly model light propagation inside the nasal and pharyngeal cavities. As a result, a total of five tissue labels are considered. Three samples of segmented brain volumes at 5, 40 to 44, and 85 to 89 years of age are shown in Figs. 1(a)-1(c), respectively. The brain atlas probabilistic tissue segmentations (PTSs) provided in the Neurodevelopmental MRI database were derived from averaging subjects in each age group. The GM/WM volumes directly calculated from the atlas PTS volumes using a simple threshold show discrepancies compared to the GM/WM volumes estimated from the original group-based data published by the same authors. 23,24 We believe that this discrepancy results from the averaging and nonlinear effects of the atlas creation process.
To correct for this discrepancy, an adaptive threshold (T ∈ [0, 1]) is applied to the PTSs of WM, GM, and CSF of all atlases. As shown in Fig. 1(d), before this correction, a uniform threshold T = 0.5 of the atlas segmentation (dashed lines) appears to underestimate the GM volume and overestimate the WM volume along age compared to the previously reported averaged volumes of the population 23,24 from which the atlases are derived. To reduce this artifact, we dynamically estimate a threshold for GM/WM to match the tissue volumes to the population-derived estimations. The corrected GM/WM volumes (solid lines) over age are shown in Fig. 1(d). In comparison, such discrepancies for the CSF layer are relatively small compared to previous studies. 23,24,34,35 For simplicity, a uniform threshold T_CSF = 0.5 is applied to the T1-weighted (T1w) CSF PTS.
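The adaptive-threshold correction can be cast as a one-dimensional root search, as in the sketch below; the inputs (`gm_pts`, a PTS volume with values in [0, 1]; `target_gm_ml`, the population-derived target volume; `voxel_ml`, the volume of one voxel) are hypothetical names, and the approach is one possible implementation under these assumptions.

```python
# Find the threshold T that makes the binarized PTS match a target volume.
import numpy as np
from scipy.optimize import brentq

def fit_threshold(pts, target_volume_ml, voxel_volume_ml):
    """Return T such that the voxel count above T matches the target volume."""
    def volume_mismatch(t):
        return np.count_nonzero(pts > t) * voxel_volume_ml - target_volume_ml
    # The mismatch decreases monotonically with t, so the root bracketed in
    # (0, 1) can be located by bisection.
    return brentq(volume_mismatch, 1e-6, 1 - 1e-6)

t_gm = fit_threshold(gm_pts, target_gm_ml, voxel_ml)
gm_mask = gm_pts > t_gm
```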
To obtain the exterior surface of the ECT layer, i.e., the scalp surface, we use FMRIB Software Library (FSL) and the "betsurf" add-on to process the T1w MRI images provided by the atlas database. 23,36 It is worth noting that the FSL pipeline for segmenting ECT tissues has been validated in adult brains/heads. For young-age children, only limited studies have been reported. [37][38][39] Therefore, our results for young children are intended for qualitative assessment only.
Segmentations of Air Cavities in the Brain Atlases
It is important to note that the presence of air cavities in the brain atlases has a significant impact on light dosimetry due to the low absorption of air. However, most neuroanatomical analysis tools do not have the ability to automatically extract these cavities. Here, we use a combination of manual segmentation and clustering analysis to extract various head cavities.
The frontal sinus and sphenoid sinus are manually segmented using the T1w MRI images and ITK-SNAP. 40 To segment the nasal and pharyngeal cavities, we apply the k-means algorithm to the raw T1w MRI images. 41 A total of three clusters are segmented for the ECT layer below the Fpz point, and the cluster with the lowest intensity is treated as air cavities. The eyes and the spine also appear as low-intensity regions in T1w images. A manual flood-filling operation is performed to identify these regions. Examples of the created multilayer brain anatomies are shown in Fig. 2(a).
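A hedged sketch of the k-means step is shown below, using scikit-learn on voxel intensities within a toy "below-Fpz" region. The cluster count of three and the lowest-intensity selection follow the description above; the volume and plane cutoff are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
t1w = rng.random((48, 48, 48)) * 1000   # stand-in T1w intensity volume
ect_below_fpz = np.zeros(t1w.shape, bool)
ect_below_fpz[:, :, :24] = True         # toy mask for the ECT below Fpz

intensities = t1w[ect_below_fpz].reshape(-1, 1)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(intensities)
darkest = int(np.argmin(km.cluster_centers_.ravel()))

air = np.zeros(t1w.shape, bool)
air[ect_below_fpz] = km.labels_ == darkest   # lowest-intensity cluster = air
print("candidate air-cavity voxels:", int(air.sum()))
```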
Brain Parcellations and Target Regions
Similar to our previous study, 18 our primary interest is to develop effective PBM treatment approaches for emotion regulation and depression. Thus, our main focuses in this dosimetry study are the dorsolateral prefrontal cortex (dlPFC) and ventromedial prefrontal cortex (vmPFC) regions [Fig. 2(b)], both involved in the emotion regulation circuitry. However, our approach is general and can be used to characterize all brain functional regions. We used the MarsAtlas parcellation in our previous study. 18,42 That parcellation's definition of the vmPFC region includes the frontal pole (Brodmann Area 10), whereas several other studies' definitions do not. [43][44][45][46] This may potentially cause a discrepancy in calculations of energy deposition in the vmPFC. In this study, we consider both definitions by selecting and merging subregions from the Desikan-Killiany-Tourville (DKT) parcellation 47 that best cover the vmPFC region under either definition. The brain DKT parcellation is obtained using the "recon-all" workflow provided in FreeSurfer v6.0 48,49 [see Figs. 2(c) and 2(d)]. The FreeSurfer brain parcellation workflow has been validated in previous publications for ages greater than 3 years. 45,50 In this study, the dlPFC is considered to cover two regions in the DKT parcellation-the caudal middle frontal gyrus and the rostral middle frontal gyrus [labels 1 and 2 in Fig. 2(c)]-according to Ehrlich et al. 51 The superior frontal gyrus is excluded because it is not relevant to t-PBM. As mentioned earlier, a general consensus on the boundary of the vmPFC has not been reached. In previous literature, the vmPFC is defined either as the combination of (1) the medial orbitofrontal cortex and lateral orbitofrontal cortex 45,46 [labels 8 and 9 in Fig. 2(d)] or (2) the frontal pole, medial orbitofrontal cortex, and lateral orbitofrontal cortex [labels 6, 8, and 9 in Fig. 2(d)], referred to as the extended vmPFC 52,53 hereinafter. Furthermore, due to the slight mismatch between the FreeSurfer parcellations and the GM in our four-layer segmentation, the final parcellations are defined as the intersection between the two models.
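The ROI assembly described above can be sketched as simple label-set operations; the numeric codes follow the Fig. 2 numbering quoted in the text, while the toy volumes stand in for the real DKT and GM maps.

```python
import numpy as np

DLPFC = [1, 2]          # caudal + rostral middle frontal gyrus
VMPFC = [8, 9]          # medial + lateral orbitofrontal cortex
VMPFC_EXT = [6, 8, 9]   # + frontal pole ("extended vmPFC")

def roi_mask(dkt, labels, gm_mask):
    """ROI = union of the listed DKT labels, intersected with the GM mask."""
    return np.isin(dkt, labels) & gm_mask

rng = np.random.default_rng(4)
dkt = rng.integers(0, 12, size=(32, 32, 32))   # toy DKT label volume
gm = rng.random((32, 32, 32)) > 0.5            # toy four-layer GM mask
print("dlPFC voxels:", int(roi_mask(dkt, DLPFC, gm).sum()))
print("ext. vmPFC voxels:", int(roi_mask(dkt, VMPFC_EXT, gm).sum()))
```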
Light Source Positions
We focus on characterizing the t-PBM treatment strategy. A pair of transcranial source configurations 18 are investigated: (1) F3-F4: two light-emitting diode (LED) array sources are placed above and centered, respectively, at the F3 and F4 (10-20 EEG positions); (2) Fp1-Fpz-Fp2: one LED array is placed on the forehead and between the Fp1-Fpz-Fp2 positions (referred to as the Fpz position hereinafter). The definitions of these source positions and orientations are similar to the study by Cassano et al., 18 with the exception that the F3-F4 source arrays are rotated ∼90 deg, as shown in Fig. 3, to better align with the dlPFC region and maximize light delivery. The simulated LED arrays follow the dimensions and parameters of an available PBM source (Omnilux New-U, Photomedex, Horsham, Pennsylvania).
Simulation Settings
In our study, a total of 15 simulations-combinations of 3 source positions and 5 wavelengths (670, 810, 850, 980, and 1064 nm)-are performed for each segmented brain atlas using our graphics processing unit (GPU)-accelerated photon transport simulator (MCX). 32,33 The optical properties of each layer (WM, GM, and CSF) are identical to those from our earlier study, 18 with the exception that the optical properties of the ECT layer (Table 1) are derived using the weighted average of the properties for fat, muscle, skin tissue, and skull; the weights are derived from the volume fractions of the respective tissue types in the Colin27 atlas. 54,55 For simplicity, we assume the optical properties of each tissue type are independent of age. In all simulations, a total of 10 9 photons are launched. To reduce the effect of shot noise in regions distal to the light source, an adaptive nonlocal mean filter is applied. 56 All atlases share the same resolution of 1 × 1 × 1 mm 3 isotropic voxels. Normalized average energy deposition (J∕cm 3 ), as described in the study by Cassano et al., 18 is then computed for each atlas and compared across different age groups. The only exception is that peak fluence (99th percentile of the target) is adopted for computing the exposure duration for one treatment session.
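For orientation, one cell of this 3 x 5 simulation grid might look as follows using pmcx, the Python binding of MCX. The configuration keys follow pmcx's documented convention but should be checked against the installed version; the volume, source placement, and optical-property rows are placeholders rather than the paper's Table 1 values, and a CUDA-capable GPU is required.

```python
import numpy as np
import pmcx

vol = np.ones((60, 60, 60), dtype=np.uint8)   # stand-in labeled head volume

res = pmcx.run(
    nphoton=int(1e6),                         # 1e9 in the actual study
    vol=vol,
    unitinmm=1.0,                             # 1 mm isotropic voxels
    tstart=0, tend=5e-9, tstep=5e-9,
    srcpos=[30, 30, 0], srcdir=[0, 0, 1],     # toy source placement
    # one [mua(1/mm), mus(1/mm), g, n] row per label; row 0 = background
    prop=[[0, 0, 1, 1],
          [0.02, 9.0, 0.89, 1.37]],           # placeholder tissue properties
)
fluence = res['flux'].squeeze()               # per-voxel fluence map
print("fluence volume shape:", fluence.shape)
```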
Extracerebral Tissues Thickness Estimation
To further understand the age dependency of the t-PBM light dosage, we also compute the thickness of the ECT layer. In Fig. 4, three ECT regions for the F3 (green), F4 (red), and Fpz (blue) positions are created for computing the ECT thickness. The average ECT thicknesses are estimated by (1) using the "Iso2Mesh" toolbox 57 to create meshes for the inner and outer surfaces of the ECT layer for each atlas, (2) projecting the illuminated areas from the outer surface inward along the normal direction to generate a truncated volume, and (3) computing the average thickness by dividing the enclosed volume by the illuminated area.

Table 1 The estimated optical properties of the ECT layer at five wavelengths. μ a -absorption coefficient (mm −1 ), μ s -scattering coefficient (mm −1 ), g-anisotropy, and n-refractive index.
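A worked toy example of the step-(3) arithmetic follows; the meshing in steps (1)-(2) is done with Iso2Mesh in the paper and is not reproduced here, and the numbers are illustrative.

```python
def average_thickness(enclosed_volume_mm3, illuminated_area_mm2):
    """Step (3): mean ECT thickness = truncated volume / illuminated area."""
    return enclosed_volume_mm3 / illuminated_area_mm2

# e.g., a 2000 mm^2 illuminated patch enclosing a 24000 mm^3 truncated volume
print(f"mean ECT thickness: {average_thickness(24000.0, 2000.0):.1f} mm")
```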
Photon Dosimetry Assessment
Sample sagittal energy deposition maps are shown in Figs. 5(a)-5(c) to demonstrate qualitatively the changes of light distributions over age. The selected plots are generated using an Fpz source at 810 nm for three different age groups-5, 40 to 44, and 85 to 89 years. In Fig. 6, we summarize the age-dependent average energy depositions in the dlPFC and vmPFC using the F3-F4 and Fpz positions, respectively, for five selected wavelengths. Similar to our previous findings, 18 an 810-nm illumination appears to provide the highest energy deposition across a wide range of ages, followed by 1064 and 850 nm. In addition, on this log-scaled plot, a strong linear correlation is found between these wavelengths. For the extended vmPFC (not included in Fig. 6), the same conclusions can be drawn. For simplicity, we only report results at 810 nm hereinafter. In Fig. 7, we show the top five brain parcellations that received the highest average energy deposition. The parcellations shown in the legend are roughly ranked in descending order of energy deposition, although such orders may vary slightly across the age groups. The numbers in parentheses correspond to the numbering shown in Figs. 2(c) and 2(d).
In Fig. 8(a), we show the average energy deposition for dlPFC and vmPFC using both source positions. In Fig. 8(b), we show the exposure duration per session over age. Exposure durations (t in minutes) are estimated to ensure that the peak fluence (99th percentile of the target 18 ) at dlPFC and vmPFC reaches an optimal fluence of 3 J∕cm 2 per session for the F3-F4 and Fpz positions, respectively. It is noteworthy that, in our earlier study, 18 a treatment session was considered effective when the upper quartile of the target reached a fluence of 3 J∕cm 2 , whereas the 99th percentile of the target is used in this paper. The revision is made to minimize the light fluence on the skin and reduce the risk of overexposure. Furthermore, the engagement by t-PBM of the most superficial cortex, for any given brain area, is considered sufficient to modulate the emotion regulation circuitry. The expression for t (in seconds) can be written as

t = (ϕ e × V × μ a )∕(k × E s × A), (1)

where ϕ e (J∕cm 2 ) is the effective fluence achieved at the target region of interest (ROI), V (cm 3 ) is the volume with energy deposition >99th percentile inside the ROI, μ a (cm −1 ) is the ROI absorption coefficient, k ∈ [0, 1] is the fraction of the source energy delivered to the volume with energy deposition >99th percentile inside the ROI, E s (W∕cm 2 ) is the skin irradiance of the source, and A (cm 2 ) is the size of the illumination area. The skin irradiance is set at 300 mW∕cm 2 .

In Fig. 9(a), we plot the ECT thicknesses over age groups for the F3, F4, and Fpz positions. The correlations between the ECT thickness and the average energy deposition (in log 10 scale) in the target region using the 810-nm illumination are demonstrated in Fig. 9(b). The target region is dlPFC for the F3/F4 placement and vmPFC/extended vmPFC for the Fpz source placement. In our results, due to its low thickness and low absorption, the CSF layer shows minor effects on energy deposition across age compared to the ECT layer. Thus, the CSF layer is not considered in Fig. 9.
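A small numeric sketch of Eq. (1) follows; all inputs are illustrative values, not the study's simulated quantities.

```python
def exposure_time_s(phi_e, V_cm3, mua_cm1, k, E_s, A_cm2):
    """Eq. (1): seconds for the >99th-percentile region to reach phi_e."""
    return (phi_e * V_cm3 * mua_cm1) / (k * E_s * A_cm2)

t = exposure_time_s(phi_e=3.0,     # target fluence, J/cm^2
                    V_cm3=0.5,     # >99th-percentile volume in the ROI
                    mua_cm1=0.2,   # ROI absorption coefficient
                    k=0.001,       # fraction of source energy reaching V
                    E_s=0.3,       # skin irradiance, W/cm^2 (300 mW/cm^2)
                    A_cm2=20.0)    # illumination area
print(f"t = {t:.0f} s ({t / 60:.1f} min)")
```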
Discussions
From the sample energy deposition maps in Fig. 5, we can visually observe that light penetration into the brain decreases as age increases. The increasing brain size and ECT layer thickness, both of which impede energy delivery to the desired brain tissues, can also be observed in Fig. 5.
In Fig. 6, all wavelengths show an overall decrease in energy deposition with increasing age for both dlPFC and vmPFC. The linear correlation coefficients between 810 nm and the other tested wavelengths are found to be >0.99 for both ROIs, suggesting that the age-related variation depends only weakly on wavelength. Comparing between wavelengths, the 810-nm wavelength delivers the highest energy deposition; the 850- and 1064-nm wavelengths deliver more energy than the 670- and 980-nm wavelengths in most cases. These findings generally agree with previously published simulation-based studies; 18,58 however, we note that there is a wide range of brain optical properties in the literature, due to diverse measurement techniques and experimental settings. Different choices of literature values could potentially lead to different rankings in wavelength efficiency. 58,59

Figure 7(a) shows that the top five regions in energy deposition are mostly located in the frontal lobe of the brain, which also includes the dlPFC. The rostral middle frontal gyrus and the caudal middle frontal gyrus that compose the dlPFC receive ∼10-fold more energy deposition than the remaining three regions. This is mainly a result of the shorter distances between these two regions and the source compared to other parcellations. In Fig. 7(b), the energy deposition in the frontal pole region is over 10-fold higher than that of other parcellations. Furthermore, the first four parcellations are not part of the vmPFC. This is because the vmPFC is located at the bottom of the cerebral hemispheres, as shown in Fig. 2(b), while the Fpz source position delivers light directly toward the frontal pole. Across all age groups, the F3-F4 position delivers 0.23% to 2.9% of the total energy into the dlPFC, whereas the Fpz position only delivers 0.0066% to 0.09% of the total energy into the vmPFC. Based on this result, the Fpz position appears to be less effective in delivering light to the target region in comparison with the F3-F4 position. In both Figs. 7(a) and 7(b), we can observe a decrease in energy deposition as age increases, similar to the overall trend shown in Fig. 6.

Fig. 9 (a) The relationship between age and thickness of the ECT. (b) The correlation between thickness of the ECT and average energy deposition (in log 10 scale) in the dlPFC and vmPFC. For the F3 and F4 positions, the target region is dlPFC, whereas for the Fpz position, the target region is vmPFC/extended vmPFC (ext. vmPFC). The size of the markers represents the age group.
In Fig. 8(a), a decrease in energy deposition can again be found for all source-region pairs as age increases. However, the energy deposition at the vmPFC shows a stronger decrease in younger age groups than that at the dlPFC. For example, the energy deposition decreases by 86.97% and 67.98% from 5 to 20 years of age for Fpz-vmPFC and (F3-F4)-dlPFC, respectively. In addition, during adulthood, the vmPFC has a slower decay rate in energy deposition with age than the dlPFC. With the frontal pole included, the extended vmPFC has an average fourfold increase in energy deposition across age groups compared to the vmPFC. Therefore, the actual energy deposition at the vmPFC may vary depending on the definition of the target region. Furthermore, the energy deposition of Fpz-dlPFC is generally higher than that of Fpz-vmPFC as well as Fpz-(extended vmPFC) across age, which is caused by the location of the vmPFC described previously. This suggests that an effective t-PBM treatment targeting the vmPFC/extended vmPFC with an Fpz source also delivers a sufficient dosage to the dlPFC, but not vice versa.
The plots in Fig. 8(b) show that the desired treatment duration increases with age for both (F3-F4)-dlPFC and Fpz-vmPFC treatments. For (F3-F4)-dlPFC with a skin irradiance of 300 mW∕cm 2 , the effective fluence can be achieved in <2 min across all ages. However, the exposure duration is much longer for Fpz-vmPFC and is ∼10 to 20 min after 20 years of age. This is due to insufficient energy deposition at the vmPFC, as discussed earlier. The addition of the frontal pole in the extended vmPFC reduces the exposure duration by 71% to 88% compared to the vmPFC, since the Fpz position mostly concentrates energy at the frontal pole. Caution should be exercised when applying these findings to clinical research or practice. It is in fact unusual to expose the skin for up to 20 min to a high irradiance of 300 mW∕cm 2 . Our data also suggest the potential for overexposure of the most superficial brain areas, such as the frontal poles, when the light source is positioned near Fpz. Furthermore, there may be other treatment strategies with different skin irradiances. From Eq. (1), we can see that, given a fixed illumination area, the factors that determine the treatment exposure duration are the effective fluence and the irradiance. In addition, the exposure duration is inversely proportional to the skin irradiance. Therefore, we can adjust the exposure duration by increasing or decreasing the skin irradiance.
In Fig. 9(a), we notice that for all of the F3, F4, and Fpz positions, the thickness of the ECT layer increases as age increases. The ECT regions under the F3 and F4 positions have very similar thickness values across age due to symmetry. In comparison, the ECT thickness for the Fpz position is generally larger than that of the other two, which coincides with the findings reported previously. 60 In Fig. 9(b), the plots of the ECT thickness against the log-scaled energy deposition show a rough linear relationship between the two parameters. Applying linear regressions, we obtained four linear models for (1) Fpz-vmPFC: y = −0.0376x − 4.6061 (R 2 = 0.2437), (2) Fpz-(extended vmPFC): y = −0.0794x − 3.4733 (R 2 = 0.5130), (3) F3-dlPFC: y = −0.0833x − 2.7749 (R 2 = 0.9208), and (4) F4-dlPFC: y = −0.0906x − 2.6388 (R 2 = 0.9589). For both the F3 and F4 positions, the ECT thicknesses show a strong negative linear correlation with the log-scale average energy deposition at the dlPFC. The plots in Fig. 9(b) show discernible deviations from a linear fit at younger ages, possibly due to the boundary effect in smaller-sized head models. Only a weak linear correlation is found for the Fpz position. It is our belief that the relatively weak correlation at the Fpz source is a result of the larger separation between the target region (vmPFC) and the source. Nonetheless, an overall decreasing trend is evident for the vmPFC energy deposition from the Fpz-centered source. For Fpz sources targeting the extended vmPFC, the result presents a stronger linear correlation compared to Fpz-vmPFC. The frontal pole merged within the extended vmPFC is closer to the Fpz position and shows a strong linear correlation (R 2 = 0.8660), which raises the overall correlation for the Fpz-(extended vmPFC). These results indicate that the anatomical development of the head, and especially the increase of the ECT thickness, is largely responsible for the decreased energy deposition in the brain as a result of growth and aging. Furthermore, shorter distances between the source position and the target region result in greater correlation between the energy deposition and the ECT thickness.
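Such a regression can be reproduced in a few lines with scipy; the data points below are synthetic stand-ins for the per-atlas (thickness, log10 energy) pairs.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(5)
thickness_mm = np.array([9.0, 10.5, 11.2, 12.8, 13.5, 14.9, 16.0])
log10_energy = -2.7 - 0.09 * thickness_mm + 0.05 * rng.standard_normal(7)

fit = linregress(thickness_mm, log10_energy)
print(f"y = {fit.slope:.4f}x {fit.intercept:+.4f}, R^2 = {fit.rvalue ** 2:.4f}")
```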
In addition, we have repeated our simulations using refined brain anatomical models created by further separating the ECT layer into scalp and skull layers using FSL. Our simulations using such atlases show results (not included) very similar to our findings above. This suggests that the overall ECT tissue thickness plays a more important role in t-PBM treatment than the individual scalp and skull thicknesses.
Conclusions and Summary
In this report, we systematically investigated light dosimetry in t-PBM treatment across a wide span of ages. To ensure accurate quantification, we used anatomically appropriate brain atlases and state-of-the-art MC simulations. For modeling the brain anatomy, we have developed a robust workflow to create multilayered head segmentations from the publicly available Neurodevelopmental MRI atlas library. For each brain segmentation, we have also generated corresponding brain parcellations to facilitate quantitative analysis. A number of conclusions have been drawn from our simulation results, and many of them are consistent with our previous study. 18 First, the energy deposition decreases across the lifespan regardless of the source position and parcellation. However, the decline is faster before adulthood, and this is more noticeable for the vmPFC. Second, wavelength selection shows a negligible impact on the trend of energy deposition decay over age, although the 810-nm wavelength consistently gives the highest energy deposition compared to the four other commonly used wavelengths. Although this result is generally consistent with previous simulation-based studies, 13,61 we would like to highlight that our results are dependent on the choices of brain optical properties. As discussed in the study by Cassano et al., 18 there is no widely accepted set of optical properties for brain tissues. Using values from different sources may lead to different preferences in wavelength. 59 Third, a negative linear correlation is observed between the thickness of the ECT layer and the log-scale average energy deposition in the two selected brain ROIs, suggesting that the brain anatomical changes are largely responsible for the observed age-dependent variations.
Furthermore, the Fpz source position has a longer separation distance from the vmPFC region than the F3-F4 source has from the dlPFC, suggesting that a higher source intensity (or, alternatively, a longer exposure time) is required when targeting the vmPFC region compared to the dlPFC region. In addition, the frontal pole [label 6 in Fig. 2(d)] shows stronger energy deposition compared to the remaining parts of the vmPFC. Thus, the exposure duration for the extended vmPFC (vmPFC + frontal pole) is shortened. In addition, we provide a simple approach to estimate the treatment duration using our simulated results. From this relationship, the exposure duration generally increases as age increases. These quantitative assessments are expected to provide clinicians with guidance in designing personalized PBM treatment plans and ultimately improve the outcomes of the procedures.
We would like to mention that there are several known limitations in this study. First, the simulations are performed using averaged brain atlases derived from public datasets. We acknowledge that using subject-specific brain segmentations may lead to more accurate estimations. In fact, our reported processing methods can be directly applied to subject-specific anatomical scans if such data become available, making them suitable for personalized t-PBM treatment planning. Second, the brain optical properties are assumed to be static across age groups. Although optical property measurements of age-dependent brain tissue are generally lacking, earlier studies on breast tissues have observed little or no correlation with age. 62,63 A correlation between dermis absorption and age was reported, 64,65 but the magnitude of such variation is relatively small. Our analyses can be easily extended to include age-dependent optical properties when such data become available in the future. Third, this study is specifically focused on treating MDD by performing t-PBM to target the vmPFC and dlPFC brain regions. If modeling other t-PBM source forms and brain ROIs becomes necessary, we can apply the same multilayered brain segmentation and parcellation models to quickly recreate the results for the new target regions. Finally, we largely rely on dedicated brain segmentation tools, namely FSL and FreeSurfer, to create the brain anatomical models used in this study. The choice of neuroanatomical analysis tools may lead to variations in brain segmentations and simulation results. In the next steps, we may extend this study to use more accurate mesh-based MC 66,67 simulations and subject-specific scans.
Disclosures
Dr. Cassano's salary was supported by the Harvard Psychiatry Department (Dupont Warren Fellowship and Livingston Award), the Brain and Behavior Research Foundation (NARSAD Young Investigator Award), and an unrestricted grant from Photothera Inc. Drug donation was received from Teva. Travel reimbursement was received from Pharmacia and Upjohn. He has received consultation fees from Janssen Research and Development. He has also filed several patents related to the use of NIR light in psychiatry. PhotoMedex, Inc. supplied four devices for a clinical study. He has recently received unrestricted funding from LiteCure, Inc., to conduct a study on t-PBM for the treatment of MDD and to conduct a study on healthy subjects. He cofounded a company (Niraxx Light Therapeutics) focused on the development of new modalities of treatment based on NIR light; he is a consultant for the same company. He also received funding from Cerebral Sciences to conduct a study on t-PBM for generalized anxiety disorder. No conflicts of interest, financial or otherwise, are declared by the other authors. | 2020-02-13T09:24:36.140Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "0d94b2111e0f9ac6654b2cbfb39740a1cbffc968",
"oa_license": "CCBY",
"oa_url": "https://www.spiedigitallibrary.org/journals/neurophotonics/volume-7/issue-1/015009/Transcranial-photobiomodulation-with-near-infrared-light-from-childhood-to-elderliness/10.1117/1.NPh.7.1.015009.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "180c4c734b1bce5f73ad8030a5868730a0bf0460",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Engineering",
"Biology"
]
} |
246334389 | pes2o/s2orc | v3-fos-license | Wearable Physical Activity and Sleep Tracker Based Healthy Lifestyle Intervention in Early Intervention Psychosis (EIP) Service: Patient Experiences
Background: Physical activity, sleep, mental health, physical health, wellbeing, quality of life, cognition, and functioning in people who experience psychosis are interconnected factors. People experiencing psychosis are more likely to have low levels of physical activity, high levels of sedation, and sleep problems. Intervention: An eight-week intervention, including the provision of a Fitbit and its software apps, sleep hygiene and physical activity guidance information, as well as three discussion and feedback sessions with a clinician. Participants: Out of a sample of 31 using an early intervention psychosis (EIP) service who took part in the intervention, fifteen participants consented to be interviewed-9 (60%) males and 6 (40%) females, age range: 19 - 51 years, average age: 29 years. Method: In-depth interviews investigating patient experience of the intervention and its impact on sleep, exercise, and wellbeing were undertaken. Thematic analysis was applied to analyse the qualitative data and content analysis was used to analyse questions with a yes/no response. Results: Most of the participants actively used the Fitbit and its software apps to gain information and feedback, and set goals to make changes to their lifestyle and daily routines to improve quality of sleep, level of physical activity, and exercise. Conclusion: The intervention was reported to be beneficial, and it is relatively easy and low cost to implement and therefore could be offered by all EIP services. Furthermore, there is potential value for application in services for other psychiatric disorders, where there is often a need to promote healthy lifestyle, physical activity, and effective sleep.
Introduction
First episode or early psychosis is when a person experiences a combination of clinical symptoms, divided into "positive symptoms" (added experiences), including hallucinations (perception in the absence of any stimulus) and delusions (fixed or falsely held beliefs), and "negative symptoms" (experience losses), including emotional apathy, lack of motivation, poverty of speech, cognitive deficits, social withdrawal and self-neglect [1]. The weighted average incidence of psychosis in England is 31.7 per 100,000, and the prevalence of psychotic disorders across all ages is 0.07% [2] [3]. A range of common mental health problems (including anxiety and depression) and coexisting substance misuse may also be present [4].
Experience of first episode psychosis occurs most commonly between the late teens and late twenties [2]. In the UK, support and treatment is provided by an early intervention in psychosis (EIP) service as part of the National Health Service (NHS), which is free at the point of need [3]. People experiencing first episode psychosis commence a National Institute for Health and Care Excellence (NICE) recommended package of care and treatment within two weeks of referral [3].
Compared to the general population, levels of physical activity and exercise are lower and levels of sedation are higher in people who experience psychosis [5] [6]. These low levels of physical activity are linked to more depression symptoms, lower wellbeing, greater hopelessness, insomnia, lower quality of life, and physical health diseases, such as cardiovascular disease (CVD), stroke, hypertension, osteoarthritis, diabetes, and chronic obstructive pulmonary disease (COPD) [7] [8]. The National Clinical Audit of Psychosis (NCAP; EIP spotlight 2018/19) identified that 46% of patients required intervention for weight gain or obesity [9]. Lack of physical activity and poor quality of sleep are contributory factors to the reduced life expectancy of people who experience severe mental illness such as psychosis and schizophrenia, with a weighted average of 14.5 years of potential life lost [10].
A person's sense of wellbeing can be enhanced by regular physical activity; in addition, physical activity and physical exercise are preventive factors against at least 25 chronic medical conditions [7]. For people with experience of psychosis, engaging in physical exercise is associated with improved quality of life, cognition, functioning, physical health and reduced psychotic symptomatology [11]. To benefit physical and mental health, the WHO recommends that adults do at least 150 - 300 minutes of moderate-intensity or at least 75 - 150 minutes of vigorous-intensity aerobic physical activity, or an equivalent combination, per week [12].
Daily night-time sleep duration outside of the recommended 7 to 9 hours increases risk for mortality, diabetes, cardiovascular disease, stroke, coronary heart disease, and obesity [13] [14]. Rates of sleep disorders in people who experience psychosis are high (rates of 80% reported) and are linked to lifestyle factors and psychosis symptoms [15] [16]. Sleep hygiene advice and support for sleep problems in psychosis may improve quality of sleep [15] [16]. Wearing and using the information provided by a wearable tracker such as a Fitbit can be helpful to increase physical activity, self-awareness of activity, motivation to engage in physical activity, and goal-setting/goal-achievement [17].
This study conducted in-depth interviews with patients in an EIP service who were taking part in an eight-week intervention that incorporated a Fitbit, exercise and sleep hygiene advice, as well as three engagement, feedback and discussion sessions with a member of clinical staff. The aim of this study was to understand people's experiences of sleep and physical activity, as well as the impact of the intervention on these experiences within the context of the management of psychotic symptoms. The study was undertaken to inform sleep, physical activity, and lifestyle interventions in psychosis services.
Methodology
The ontological and epistemological assumptions underpinning the methodology were constructivist and interpretivist. The analysis was approached from the perspective of understanding participants' subjective experience, and aligned with constructivism, which acknowledges that human reality is constructed through individuals' interactions and interpretations of the world and others [18]. Meanings therefore emerge from constructions (or reconstructions) of individuals' experiences of an empirical reality [19]. The resultant knowledge from the qualitative data is constructed by the interviewee and interviewer [20]. In interpretivism, social reality is interpreted through the meanings participants produce and reproduce [21], and interpretive research attempts to understand phenomena through accessing the meanings participants assign to them [22]. The methodology employed placed the onus on the participant's view of the situation [23].
Design
Semi structured interviews were conducted with 15 participants. Interview questions were informed by the research literature and with input of a person with lived experience of psychosis. Semi-structured interviews enabled the participants to share their experiences. All the data were analysed together.
Ethical Approval
Ethical approval was gained from United Kingdom's (UK) Health Research Authority (HRA); Research Ethics Committee (REC) reference: 21/EM/0047. All participants provided informed consent.
Recruitment and Participants
Participants were recruited from an NHS early intervention psychosis service. Forty participants were recruited as part of a mixed methodological project, and 31 undertook the intervention. Fifteen participants from the full sample were recruited for interview. For inclusion in the study, participants needed to be aged 18 to 65, a patient of the early psychosis service, based in the community and able to understand written and oral English. Any participants who for a medical reason could not wear a watch-like device on their wrist, or who did not have the capacity to consent, were excluded.
Sampling was non-probabilistic, accessible, and purposive to achieve a representative sample of 15 adults in terms of age, gender and ethnicity. The characteristics of the 15 participants are presented in Table 1.
Intervention
The eight-week intervention incorporated: a free to keep Fitbit (including instructions and set up), exercise and sleep hygiene advice sheets, as well as three patient engagement sessions with EIP clinical staff. Each engagement session offered support and encouragement; and facilitated a discussion regarding the use of the Fitbit, the application of exercise and sleep hygiene advice, as well as relevant feedback.
Procedure
Informed written consent was obtained from all the participants. Two researchers working separately carried out one-to-one semi-structured interviews over the phone. A series of closed questions were initially asked regarding the functionality and usability of the Fitbit (e.g., did you have any problems wearing the Fitbit?); these responses were subjected to content analysis. The remainder of the interview was semi-structured, with questions relating to sleep, exercise, and well-being, as well as to participants' experiences of using the Fitbit. One participant (P12) spoke limited English and a translator was used to relay answers to the interviewer. All interviews were recorded; the length of the interviews ranged from 19.57 to 56.03 minutes (M = 37.24, SD = 9.42). Interviews were transcribed verbatim.
Analysis
Reflexive thematic analysis (TA) was used to analyse the data [24] [25]. Reflexive TA is a systematic process of developing, analysing and interpreting patterns within a qualitative data set to establish a set of themes [24]. Thematic network analysis (TNA), a type of reflexive TA, was utilised to develop basic, organising, and global themes [26]. The analytical process undertaken was aligned to six steps: 1) dataset familiarisation; 2) data coding; 3) initial theme generation; 4) theme development and review; 5) theme refining, defining and naming; and 6) writing up [25]. NVivo software v.12 was used to assist analysis. Initial codes were generated by two researchers (one recruited as an advisor with lived experience of psychosis) working independently. Themes were further categorised into subthemes by three researchers who collaborated to refine and relabel the finalised themes. To further strengthen the interpretation of the analysis, the research team discussed and reviewed the findings. To ensure trustworthiness of the data, guidance was followed to promote credibility, transferability, dependability, and confirmability [27]. Group analysis was undertaken, i.e., the data were not split by gender.
Content Analysis
Content analysis of responses regarding usability of the Fitbit, as well as sleep and exercise habits prior to using the Fitbit, was undertaken. The numerical results and an illustrative quote are presented in Table 2. Figure 1 presents a theme map of the use of the wearable physical activity and sleep tracker (Fitbit) and the interlinks between exercise, sleep and wellbeing.
Use of the Fitbit
This organising theme collates the different ways in which the Fitbit was used in relation to participants' sleep and physical exercise. It comprises five sub-themes:
2) To motivate and encourage to exercise
Using the Fitbit motivated and encouraged participants to engage in, do more, and/or sustain exercise. P3: I think it kind of makes you more motivated…It spurs you on to do that… keep on walking or keep on swimming. It encouraged and motivated me to make physical activity, more regular. It's just made me more motivated.
The Fitbit gave a reminder to be physically active, and this encouraged physical activity. P12: He was just checking how many steps he did. And of course, when he did some kind of steps, he was always motivated a little bit to make more and all of that. And also, even when he didn't get enough steps, the watch just made him alert to do another 200 step you know.
The Fitbit modified behaviour in relation to exercise, acting as a motivating factor on occasions when participants felt like they did not want to exercise. P7: I think it helped me to keep, because one of my goals was to lose a bit of weight, and to maintain that weight loss. So, it helped me main, it helped me to, to keep track of the 10,000 steps, helped me to on that trajectory, really. So, it encouraged me in some ways to do more and achieve those goals.
Fitbit feedback and monitoring provided a sense of achievement; further motivating individuals to set and accomplish goals.
P4: I've got this achievement in the App and, that I think that can sort of boost your mental health a little bit, that you know, you are doing stuff, you are being active. It helps my mental health because it's about the achievement and what I'm doing.
P10: I guess it's just kind of, it's interesting to be able to look at your stats and judge on that base and try and reach goals based on that. I would say it motivates me. I am trying to achieve something.
3) Cognisant of physical activity
Participants used the Fitbit to gain awareness of and insight into their physical activity levels, as well as to keep a track record of accomplishments.
P2: I was more conscious of my activity, my physical activity. More aware of it, more aware, the how much, doing, like so much physical activity.
P6: When you do the steps and you get, and then this, for some reason, sends a message to the app. And it says you achieve your goals. And you know, you've done this, you've done that. And it's quite, it's quite good to know that.
P5: But the steps, I didn't really-like it made me aware of how many steps I take and want to take more. I was more aware of it and wanting to do it because it would remind me.
Another positive outcome of being cognisant of physical activity was that it provided participants with a deeper comprehension of their physical health, the positive impact of physical activity, and fostered confidence for self-management and control.
P7: The amount of information shared a lot of the information about your activity levels means that you're constantly aware of your health. I am much more concerned about my health. Where I am health wise… it's made me feel more in control of my health as well.
4) Cognisant of sleep and sleep patterns
This theme acknowledges the reassurance and awareness provided via Fitbit's sleep data insights.
P8: I do see it as useful being able to track your sleep as well. I find it intriguing more than anything… and monitoring my sleep out of curiosity.
P10: It was just sheer curiosity and awareness about my sleep. P1: It's been really good to have the Fitbit recording the sleep because you just get to know a little bit more about your patterns and stuff like that… it's not something that you'd normally be able to easily get data on.
P7: I think it's helped in terms of tracking how many hours of sleep I'm getting because when I look at the app, it tells me how many hours and it tells me what the, what the divided is, what the split is between REM, light sleep. Yeah, so REM, light sleep and deep sleep.
P14: There is the element of understanding how you have slept or being able to see how you've slept. Again, it's reassurance. I'm less agitated so is it this reassurance that helps me not be so agitated. it tells your mind that you had a good sleep so you're gonna have a good day as well.
5) Reminder, cue and prompt in order to get enough sleep
The Fitbit was a tool for the participants to achieve better sleep habits and to endeavour to get enough sleep each night. One Fitbit function used to achieve this was to set a reminder or prompt to go to bed at a reasonable time, thus facilitating an effective number of sleep hours.
P7: There's a prompt at 9.07. It tells me I need to get ready for bed. At 9.07 on the dot. So that's when it tells me to go to bed. Most of the time I follow the instruction.
P14: It acts as a reminder and happens and can mean that I go to bed earlier and try and get more sleep.
P15: It does give me notifications at 10, to go to sleep, which I do, I do follow sometimes.
Several participants used the Fitbit to assess quantity of sleep, and then adapted sleep behaviour accordingly to ensure sufficient sleep was achieved the following night.
P3: When I can track and see how long I've actually slept, sometimes if I, if I have not had enough sleep that I know I need to go to bed a bit earlier the next day.
Both the ability to monitor sleep habits and enhanced awareness resulted in participants endeavouring to go to sleep at a reasonable time and get enough sleep.
P5: It was more awareness and like because I knew it was monitoring my sleep so my sleep schedule I would try to go to sleep at reasonable times and stuff like that and get a good fair amount of sleep.
Discussion
Participants from an early intervention psychosis (EIP) service were interviewed and provided information on the links between exercise, sleep, and wellbeing from their perspectives. They provided valuable insights about their experiences of an eight-week intervention incorporating a Fitbit; sleep hygiene and exercise advice; as well as three engagement, feedback, and discussion sessions with clinical staff. Consistent with prior psychosis research, most of the sample experienced sleep problems [15] [16]. Within the intervention, most participants actively used the Fitbit and its app to improve their quality of sleep, level of physical activity and exercise.
Based on feedback from Fitbit data and software app information, participants adopted healthier lifestyle behaviours and daily routines to facilitate more effective sleep. These improvements in behaviour and self-management are connected to health and wellbeing benefits. They may lead to sustained improvement in sleep quality, which can benefit mental and physical health in the long term [13] [14]. Furthermore, participants reported increased physical activity and exercise due to the use of the Fitbit and its software apps. It was found that Fitbit/app data and information reflecting objective feedback increased physical activity motivation and awareness. Due to links with physical and mental health, it is especially important in a psychosis population to undertake sufficient physical activity and exercise. The Fitbit facilitated a more active lifestyle promoting physical activity in daily/weekly routines; in the long term, increased physical activity levels can potentially reduce disability-adjusted life years (DALYs) and improve life expectancy [7] [10] [11].
Limitations
A possible bias may be present, as interviews were only conducted with participants who agreed to be interviewed following the intervention. Thus, it is possible more participants with a positive experience of the intervention and those in better mental health (e.g., experiencing fewer psychotic symptoms) agreed to interview. In addition, there was a relatively small sample size, limiting generalisability. However, the sample size was deemed appropriate for an in-depth interview study, as saturation often occurs at around 12 - 15 participants in relatively homogeneous groups [28]. The present sample was a reasonably homogeneous group, although participants had various specific diagnoses related to psychosis and some had additional mental illness diagnoses; this is often the case in EIP service patients.
Conclusion
Any form of regular physical activity of sufficient intensity and duration can prevent many chronic medical conditions and is associated with improved cognition, functioning, and mental and physical health in people who experience symptoms of psychosis [7] [11] [12]. Effective night time sleep duration and quality can reduce risk for mortality, diabetes, cardiovascular disease, stroke, coronary heart disease, and obesity [13] [14]. People with experience of psychosis often have ineffective sleep and insufficient physical activity [5] [6] [15] [16], thus it is important for mental health services to offer interventions which can improve sleep duration and quality and levels of physical activity. EIP service patients benefit from the project's relatively simple and low-cost intervention; therefore, it is recommended for introduction to all EIP services. In addition, the intervention has the potential to be introduced to benefit a range of other mental health services supporting people with various psychiatric disorders.

Acknowledgements

This work was funded by UK Research and Innovation and their support is gratefully acknowledged (Grant reference: ES/S004459/1). Any views expressed here are those of the project investigators and do not necessarily represent the views of the Closing the Gap network or UKRI.
Declaration of Interests
No other authors have any conflicts of interest to declare. | 2022-01-28T16:19:59.672Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "b340bfe0b0af1729d8bccbfb181700c621428298",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=114889",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "5aaac2690d191a31240dc9c66e7d3d890c999bc5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
5005909 | pes2o/s2orc | v3-fos-license | African American elders’ psychological-social-spiritual cultural experiences across serious illness: an integrative literature review through a palliative care lens
Disparities in palliative care for seriously ill African American elders exist because of gaps in knowledge around culturally sensitive psychological, social, and spiritual care. The purpose of this integrative literature review is to summarize the research examining African American elders’ psychological, social, and spiritual illness experiences. Of 108 articles, 60 quantitative, 42 qualitative, and 6 mixed methods studies were reviewed. Negative and positive psychological, social, and spiritual experiences were noted. These experiences impacted both the African American elders’ quality of life and satisfaction with care. Due to the gaps noted around psychological, social, and spiritual healing and suffering for African American elders, palliative care science should continue exploration of seriously ill African American elders’ psychological, social, and spiritual care needs.
Introduction
As the population of African American (AA) elders increases, there is a need to focus on delivery of culturally congruent care (1). In 2010, there were 38.9 million AA elders, and by the year 2050, AA older adults are projected to account for more than 21.5% of the US population, an increase from 10% in 1990s (2). Yet, according to the Agency for Healthcare Research and Quality Health Disparities Report (3), AA elders are less likely than Whites to receive the right amount of support during the time of serious illness. Disparities in seriously ill AA elder care exist because of gaps in knowledge around culturally sensitive physiological, psychological, social, and spiritual palliative care practices (4)(5)(6)(7). To facilitate psychological, social, and spiritual healing for the seriously ill AA elder, palliative care practices must be informed by the perspectives of the seriously ill AA elder. Defined for this study, palliative care's role is "to anticipate, prevent and relieve suffering; to support the best possible quality of life for patients and their families, regardless of the stage of the disease", not just care provided at end-of-life [(8) p. 9]. Serious illness is defined conceptually as "a persistent or recurring condition that adversely affects one's daily functioning or will predictably reduce life expectancy" [(8) p. 8].
A review of the current research into psychological, social, and spiritual experiences of seriously ill AA elders can provide insight into creating culturally sensitive approaches for improving quality of life and overall satisfaction with the healthcare received. Research in this area is growing; however, research examining psychological, social, and spiritual healing experiences remains limited in scope, quantity, and location. Through a culturally congruent framework (1), the integration of psychological, social, and spiritual experiences provides holistic, patient-centered care that "identif[ies], respects and address[es] differences in patient values, preferences and expressed needs" [(9) pg.1]. However, a knowledge gap remains in this area, particularly through a culturally focused framework. A view that encompasses the multidimensional concepts of psychological, social, and spiritual healing must evaluate both culture-specific and culture-universal factors to provide culturally congruent care that is beneficial to the people being served (1). Nurses contribute to the healthcare experiences of AA elders through interactive "transpersonal caring moments" [(10) p. 12]. When inadequate care is given, AA elders have experienced insufficient symptom control, difficult interactions with their healthcare providers, lack of spiritual and psychosocial support, and the possibility of dying without access to high quality care (11)(12)(13)(14)(15)(16).
Purpose
The purpose of this culturally focused integrative literature review is to summarize the current research examining AA elders' psychological, social, and spiritual experiences during serious illness. The following questions guided this review: What cultural experiences contributed to psychological, social, and spiritual healing for AA elders living with serious illness? What cultural experiences contributed to psychological, social, and spiritual suffering for AA elders living with serious illness? The insights obtained from this literature review can contribute to a framework for guiding future empirical research around the cultural phenomenon of psychological, social, and spiritual healing in seriously ill AA elders, thus guiding culturally sensitive approaches to interventions for patient-centered palliative care.
Key definitions
For this review, the following definitions were used to conceptualize the following terms: sociocultural, serious illness, healing, and suffering. Sociocultural was broadly defined as "the interaction between people and the culture in which they live" (17). Serious illness was limited and operationalized in this review to the top four leading causes of death in African Americans: heart disease, cancer, stroke, and diabetes mellitus (18). Healing was defined as generating a "sense of wholeness as a person" [(19) p. 657] despite one's illness. Healing has also been regarded as a subjective and multidimensional concept (19)(20)(21)(22)(23)(24)(25)(26)(27)(28)(29)(30). For this review, healing in the setting of serious illness was defined as a "life transforming, positive, subjective change"-psychological, social, and spiritual healing-that occurs when one experiences a serious illness [(31) p. 1]. Suffering, on the other hand, was defined as a negative psychological, social, and spiritual experience (32).
Methods
Using Whittemore's (33) method for integrative literature review, an organized and rigorous approach to the literature review process was followed via five steps: problem identification, literature search, data evaluation, data analysis, and presentation of findings (33,34). Through this process, existing evidence from both qualitative and quantitative methodologies was synthesized.
A computer-assisted literature search was conducted during July 2013-September 2013. The following electronic databases were searched: PubMed, CINAHL, EBSCO, and Web of Science. Many different combinations of search terms were used. Initially, zero articles were found when searching the term "psychological-social-spiritual healing." Twenty-four articles were found using the terms "psychological healing", "social healing", and "spiritual healing". Of the 24 found, 4 met the inclusion criteria and were retained for this review.
Because of the scarcity of the literature, concepts related to psychological, social, and spiritual healing were searched with the assistance of a reference librarian. Broader search terms were used in an attempt to capture the psychological, social, and spiritual healing/suffering phenomenon of seriously ill AA elders. The broader terms searched were: healing, psychological healing, social healing, spiritual healing, spirituality, faith, wisdom, meaning-focused coping, coping, recovery, subjective well-being, thriving, resilience, and optimism. Each of these terms was joined with the term "African American". Boolean operators were applied to define relationships between keywords, like African Americans (and) Blacks. These searches were delimited by the following: samples with an average participant age of 60 or older; discussion of psychological, social and/or spiritual dimensions of AA elders; serious illnesses of cancer, heart disease, stroke or diabetes mellitus; published within the last twenty years; and peer-reviewed primary research reports. Theoretical, commentary and review articles were excluded; however, some of these articles' reference lists were used as secondary sources of primary studies for comparison to the database searches.
Search results
The initial multiple searches, using the above search terms, identified 316 publications. The primary author screened the titles, abstracts, and key words of these 316 publications. Due to duplicates and/or not meeting inclusion/exclusion criteria, 151 articles were removed, leaving 165 publications. The remaining articles were read in their entirety for continued screening with the inclusion/ exclusion criteria, leaving 108 articles for this integrative review. The 57 articles removed after this second screening were excluded for several reasons: articles were literature review only; articles only discussed methodological implications of recruitment of AA elders; articles did not include samples with average age of 60 or older, and/or the sample did not include serious illnesses as defined above. From the final 108 publications, the research design, aim/ purpose, sample and main findings were extracted into a data matrix. The 108 studies remaining were reviewed for quality and findings (see PRISMA flow diagram, Figure 1).
Evaluation of the literature
The sample consisted of 60 quantitative, 42 qualitative, and 6 mixed methods studies. The samples of the quantitative studies ranged from n=17 to n=98,528. Of these, 53 were survey studies. The remaining 7 of the 60 quantitative studies incorporated several other types of methods. Of the 42 qualitative studies, the sample size ranged from n=6 to n=167. Of these, 4 used focus groups and the remainder used interviews for data collection. There were a variety of methodological designs, yet not all of the studies explicitly stated a design. Of the 6 mixed methods studies, the sample size ranged from n=30 to n=200. These articles used surveys and interviews. The details of the quantitative, qualitative, and mixed-methods studies are shown in Tables 1 and 2.
Despite the variety of research methodological approaches, many limitations relevant to the current review were noted. In the quantitative articles, 13 samples were made up of only African Americans, whereas 47 included multiple ethnicities. For example, in the largest study (n=98,528), a retrospective chart review of Medicare heart failure patients, only 8.5% of the sample was AA (12). Of the quantitative studies, one study sampled African Americans only as part of the "National Survey of American Life" (35,36).
As with the quantitative studies, some of the qualitative studies did not use exclusively AA samples (n=22). However, 20 of the qualitative studies exclusively sampled AA elders. The largest qualitative sample (n=167), one of 3 large narrative analysis studies, included only AAs (37).
Also, there was a lack of conceptual clarity around psychological, social, and spiritual concepts. Only 23 of the 108 publications specifically reported a conceptual framework, which is necessary for providing conceptual clarity. In the survey research, there was no consistency in the surveys/instruments or measures employed to assess psychological, social, and spiritual dimensions. For example, the spiritual domain was defined in a variety of ways: spirituality, religiosity, and/or religious practice. Although there was a lack of conceptual clarity of the spiritual domain throughout all the studies, the measurement of the spirituality domain occurred at a much higher frequency than measurements for the psychological or social domains.
In fact, in the initial literature searches, "spiritual healing and African American," yielded the largest number of publications (n=29) compared to "social healing and African American" (n=9), and "psychological healing and African American" (n=9).
In the quantitative survey articles, the authors reported difficulty with item non-response, recall bias with self-reported measures, and potential selection bias on the part of participants who returned mailed surveys. A large number of the survey articles were cross-sectional, and longitudinal studies were frequently recommended by the authors to capture the multi-dimensional psychological, social, and spiritual experiences of serious illness. Most of the 53 survey research studies incorporated only cross-sectional analyses, while only one incorporated a longitudinal approach. Within the survey research, the authors discussed the difficulty of capturing the wide variety of cultural dimensions of AA elders' psychological, social, and spiritual aspects due to the use of instruments that were not developed within the AA culture. In the survey articles, the authors recommended that future research include qualitative approaches to allow for a more descriptive approach to gain knowledge about culturally focused qualities of the psychological, social, and spiritual dimensions.
A variety of qualitative methodological designs were used; however, not all of them explicitly stated a design/method. Within the qualitative approaches, specific information such as clinical information, severity of disease, comorbid illnesses, or functional status was frequently under-reported. For the mixed methods studies, the authors reported choosing this approach to triangulate the findings of the surveys and interviews. All six used surveys and interviews for data collection. In the largest study (n=200), 200 surveys and 80 ethnographic interviews were conducted. Again, this study's sample was not made up of only AA individuals, but also included European American, Korean American, and Mexican American individuals (38).
Finally, many studies used only one geographical location or one healthcare institution, limiting the ability to generalize findings across different settings. All studies were completed in the United States except for one in Britain (39).
Psychological experiences
As detailed in Table 3, individual psychological experiences found in these studies included depression, fear, anxiety, worry, psychological distress/stress, and sadness. Despite the multitude of negative experiences found, some positive psychological experiences were noted when cognitive reframing of illness occurred. This reframing was described by terms such as optimism, wishful thinking, positive reappraisals, outlook and coping, resilience, and well-adjusted adaptations to one's illness. The review findings indicate that positive psychological outcomes do occur for seriously ill AA elders if negative experiences are decreased. When negative experiences decrease, opportunities may emerge for psychological, social, and spiritual healing for the seriously ill AA elder. However, multiple components of seriously ill AA elders' psychological experiences are still highly understudied, with conflicting evidence of what and how AA elders' healing/suffering are impacted (see Table 3).
Social experiences
Social support was shown to impact seriously ill AAs' experiences (see Table 4). Despite research that has shown the benefits of social support, not all AA elders reported a positive role of social support. Negative experiences occurred for some, such as social isolation, decreased intimacy with others, and negative social support from family. Therefore, gaining more knowledge from the perspectives of seriously ill AA elders is necessary to determine how these social interactions provide opportunities for healing (see Table 4).
Spiritual experiences
Significant differences were found among definitions of spirituality, religion, and religious practices across publications due to the complex nature of the term spirituality. The incorporation of a broad view of spirituality was important to fully describe healing/suffering for the seriously ill AA elder. For purposes of this integrative review, the source articles defined spiritual healing in the following ways: existential and/or religious practices, psychological and/or sociocultural constructs of spirituality, and with the following terms: spirituality, religion, religiosity, or religious practices. Table 5 depicts the most common definitions. Spirituality has been shown to play important roles for AA elders dealing with serious illness (see Table 6). When experiences were positive, spirituality provided healing for seriously ill AA elders, whether this occurred through existential, psychologically constructed, or sociocultural religious practices. Differences were noted in the roles spirituality played in the lives of seriously ill AA elders based on geographic location, gender, or illness. Spirituality was strongly linked to the quality of life of seriously ill AA elders. However, spirituality defined as religious practice did not always show a positive effect on the well-being of the AA elder. There remains a lack of conceptual clarity regarding what spirituality is and how spirituality affects suffering/healing for seriously ill AA elders (see Table 6).
Psychological, social, and spiritual healing/suffering
AA elders' definitions of "health" incorporated mind, body, and spirit (87), and poor subjective health reports predicted lower levels of personal efficacy and spiritual wellbeing (88). The ability to self-manage their illness was connected to their relationship with God. Higher spirituality and a sense of control were shown to be significantly associated with decreased depressive symptoms in AA elders (89). If AA elders experienced stressful life events, this seemed to predict lower subjective health ratings, decreased self-esteem, and a lower sense of spiritual wellbeing (88). The use of religious practice to promote mental health among AA elders is well documented (79,84,86,90). Cognitive reframing, religious practice, and the ability to express emotions increased psychological healing and, in some instances, physical function (45). AA elders were shown to have resiliency and tenacity despite the seriousness of their illnesses (91). Independence gave meaning to life. A strong faith that God was in control guided them through their illnesses (37). Socially, if the AA elder was in a happy marriage, positive effects were also noted on their spiritual wellbeing (88). AA elders' coping strategies across many illnesses included engaging in life through exercising, seeking information, relying on God, changing dietary patterns, medicating, self-monitoring, and self-advocating (92).
In the studies noted, negative experiences occurred across all three psychological, social, and spiritual dimensions.
The negative psychological experiences reported included depression, fear, anxiety, uncertainty, distress, sadness, and fatalism. Negative social experiences stemmed from the following contributors: decreased social support from family, friends, or healthcare providers; concerns about burdening others; isolation; low socioeconomic resources; limited access to care; and overt racism and discrimination within healthcare interactions. When insensitivities to AA elders' cultural beliefs/values were reported, a concurrent mistrust of the provider was also reported (60). Within the spiritual dimension, negative experiences were not as prevalent. However, a few articles suggested that not all extrinsic religious interactions contributed positive healing effects.
Similarly, positive experiences were reported across all three dimensions. In the psychological dimension, positive experiences included: optimism, resilience, positive coping, and positive outlooks. When cognitive reframing was present, healing could occur. In addition, when individuals had the ability to express their emotions, a social interaction occurred that could also allow for psychological healing. Within the social dimension, quality of life for the seriously ill AA elder is highly linked to positive social support among family and providers, suggesting that positive interactions could lead to less suffering.
For seriously ill AA elders, much overlap occurred in the interactions among culturally relevant psychological, social, and spiritual experiences. Seriously ill AA elders' psychological and social quality of life were related to their spiritual healing, but a fuller understanding of their cultural values, preferences, and spiritual beliefs is still needed. When discussing healing/suffering, it is important to note that all three dimensions (psychological, social, and spiritual) play important roles in the AA elder's overall healing.
AA elders' psychological, social, and spiritual healing within serious illness of cancer
Beliefs based in religiosity were seen in all studies of cancer survivors, but the ways in which religion was expressed in relation to their cancer were culturally determined (39). One study demonstrated that breast, prostate, and colorectal AA cancer survivors initially showed poorer physical and mental health quality of life ratings; after adjustment for socio-demographic, clinical, or psychosocial factors, only the lower mental health quality of life ratings remained (93). In another study of cancer survivors, patients reported needing help with overcoming fears, finding hope, finding meaning in life, finding spiritual resources, finding peace, finding meaning in their death and dying, and hoping for someone to talk with about these issues (94). Of these patients, 41% of the AA elders reported needing help with spiritual/existential issues (94). Specifically, in breast cancer, AA women reported positive changes in their faith after diagnosis (95). Finally, in a study of AA lung and colorectal cancer patients, religious behaviors were positively associated with mental health and vitality and negatively associated with depressive symptoms (85).
Breast cancer survivors reported many psychosocial concerns. Other important issues for AA breast cancer survivors included body appearance, social support, health activism, menopause, and learning to live with a chronic illness (96). Breast cancer survivors who had higher coping capacities experienced less psychological distress, higher spiritual wellbeing, and less catastrophizing about their illnesses (97). Coping strategies of breast cancer survivors incorporated all of the following dimensions: relying on prayer; avoiding negative people; developing a positive attitude; having a will to live; and receiving support from family, friends, and support groups (53). Belief in divine control was positively associated across all ethnic groups with not only the positive reframing of illness but also active coping and planning (98).
In AA prostate cancer patients, faith helped patients overcome the fear resulting from initial perceptions of their cancer diagnoses. Faith was placed in God, healthcare providers, self, and family, and these men came to see their prostate cancer as a "new beginning that was achieved through purposeful acceptance or resignation" [(99) p. 470]. This faith was their source of empowerment, and with this empowerment, they became more proactive in their self-care (99). Beliefs based in religiosity were seen in all cancer survivors, but the ways in which religion was understood and expressed in relation to their cancer were culturally determined (39).
In AA cancer survivors, spiritual transformation came through the recognition of personal mortality (80) and through redemption stories that related positive transformations of initially negative perspectives regarding survivorship (100). These transformations occurred through upholding existing beliefs in God, knowing this God as a directing force, and understanding one's personal strengths (100). The sense of a directing force from God also created a desire to be of service to others (100). Skeath et al. (31) also noted a life transformative experience within a multi-ethnic group of cancer survivors, which impacted all dimensions of their lives. For individuals with serious illness, this positive subjective change impacted the ability to decrease psychological, social, and spiritual suffering, even after a cancer diagnosis (31).
AA elders' psychological, social, and spiritual healing within cardiac-related serious illnesses: heart failure or stroke
For cardiac illnesses, there was significantly less literature. In contrast to cancer survivors, in AA patients with heart failure, spiritual wellbeing was negatively associated with psychological wellbeing (101). For instance, patients reported feeling less meaning and peace and more depression and anxiety in their lives (101). Yet, these same patients reported greater faith, showing a different relationship between quality of life and faith than that experienced by cancer survivors (101). However, as noted in cancer illnesses, some AA elders were able to maintain a strong sense of self even after the life disruptions caused by heart failure by using the culturally relevant coping strategies of resiliency, spirituality, and self-care (102). In stroke patients, acceptance of illness came as a normal part of aging (103). Patients' age, other comorbidities, and knowledge about strokes further impacted their overall levels of acceptance (103).
Conclusions
The quantitative literature contained a large proportion of cross-sectional surveys measuring the multidimensional concepts discussed above; however, the studies did not always include a large portion of AA elders. Of most concern is the dearth of literature incorporating all phenomena of psychological, social, and spiritual healing. Despite the lack of conceptual clarity between spirituality and religiosity, the spiritual dimensions have been shown to play an important role in healing for seriously ill AA elders, whether this occurred through intrinsic or extrinsic mechanisms. Because of these complex relationships among the psychological, social, and spiritual dimensions, the literature conveys conflicting evidence of what results in suffering for the seriously ill AA elder.
To decrease distrust among AA elders with serious illness, healthcare practice should incorporate physiological, psychological, social, spiritual, and cultural domains to provide patient-centered care of the seriously ill (3,8,104,105). These domains are all part of the National Quality Framework for Palliative Care: Clinical Practice Guidelines for Palliative Care (8). Within this framework, approaches to palliative care interventions in AA elders with serious illness integrate cultural beliefs and values (106)(107)(108)(109).
Even with attempts to incorporate psychosocial and cultural concepts into healthcare curricula, inequalities remain (9). "The 21st century brings heightened awareness of how beliefs, values, religion, language and other cultural and socioeconomic factors influence health and help seeking behaviors" [(9) p. 1]. The next generation of healthcare providers, trained through a holistic paradigm (10), will choose to incorporate culture, complexity, and relationship-based, patient-centered care to co-create a caring and healing environment for AA elders with serious illness (110).
Overall, to facilitate psychological, social, and spiritual healing for the seriously ill AA elder, palliative care practices must be informed by the perspectives of the seriously ill AA elder. When psychological, social, and spiritual dimensions are not incorporated in healthcare delivery, healing can be obstructed and suffering can occur. This integrative review was the first to appraise the state of the science on psychological, social, and spiritual healing in AA elders. The findings identified limitations of the literature and suggested the continued need for healthcare to adopt culturally competent, patient-centered palliative care. Further research on psychological, social, and spiritual healing is vital to address these limitations and to support culturally focused, patient-centered palliative care. | 2018-04-03T03:34:41.602Z | 2017-04-17T00:00:00.000 | {
"year": 2017,
"sha1": "46b6220c2559a7df70abeff6b7d51c0692199667",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.21037/apm.2017.03.09",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "afe963611c017542ad63b642f461f0e4be1621bc",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
202710672 | pes2o/s2orc | v3-fos-license | Detection and Identification of Allergens from Canadian Mustard Varieties of Sinapis alba and Brassica juncea
Currently, information on the allergen profiles of different mustard varieties is rather scarce. Therefore, the objective of this study was to assess the protein profiles and immunoglobulin E (IgE)-binding patterns of selected Canadian mustard varieties. Optimization of a non-denaturing protein extraction from the seeds of selected mustard varieties was first undertaken, and the various extracts were quantitatively and qualitatively analyzed by means of protein recovery determination and protein profiling. The IgE-binding patterns of selected mustard seed extracts were assessed by immunoblotting using sera from mustard sensitized and allergic individuals. In addition to the known mustard allergens—Sin a 2 (11S globulins), Sin a 1, and Bra j 1 (2S albumins)—the presence of other new IgE-binding protein bands was revealed from both Sinapis alba and Brassica juncea varieties. Mass spectrometry (MS) analysis of the in-gel digested IgE-reactive bands identified the unknown ones as being oleosin, β-glucosidase, enolase, and glutathione S-transferase proteins. A bioinformatic comparison of the amino acid sequences of the new IgE-binding mustard proteins with those of known allergens revealed a number of strong homologies that are highly relevant for potential allergic cross-reactivity. Moreover, it was found that Sin a 1, Bra j 1, and cruciferin polypeptides exhibited a stronger IgE reactivity under non-reducing conditions in comparison to reducing conditions, demonstrating the recognition of conformational epitopes. These results further support the utilization of non-denaturing extraction and analysis conditions, as denaturing conditions may lead to failure in the detection of important immunoreactive epitopes.
Introduction
Mustard is one of the priority food allergens regulated by Canada, the European Union, and the Gulf Cooperation Council (GCC), including the countries of Saudi Arabia, United Arab Emirates (UAE), Kuwait, Bahrain, Oman, Qatar, and Yemen. The inclusion of mustard on the regulatory allergen lists of these countries was based on the view that mustard allergy poses a serious problem because of its widespread use and high allergenic potency [1,2]. There are few data available on the prevalence rates of mustard allergy, but prevalence seems to vary around the world and appears to be higher in Europe, accounting for 1-7% of food allergies based on estimated prevalence in France [3,4]. Mustard allergy is also well documented in a number of published clinical studies reporting severe systemic reactions, including anaphylaxis following exposure to very small amounts of mustard [5][6][7][8].
The international mustard market is led by Canada, which is the world's second largest producer and leading exporter of mustard seed, holding a 57% share of the market [9]. Canada produces food
Optimization of Mustard Seed Protein Extraction
Extractions were conducted at various pH values in order to evaluate the protein solubilization and the extraction efficiency on mustard proteins from the defatted flours. A 3 × 7 full factorial experimental design (21 buffer-variety combinations) was used to study the effects of three different extraction buffers [phosphate-buffered saline (0.01 M, pH 7.4), borate-buffered saline (0.1 M, pH 8.45), and carbonate buffer (0.05 M, pH 9.6)] on the protein recovery of the seven mustard varieties. Minitab Statistical Software (version 16) (Minitab Inc., State College, PA, USA) was used to design the experiments.
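To make the structure of the design concrete, the following minimal sketch enumerates the buffer-variety combinations in Python. The variety names are the seven listed later in the Results; folding in the duplicate extractions described below is an assumption of this illustration, not part of the published protocol.

```python
# Illustrative enumeration of the full factorial design (not from the paper).
from itertools import product

buffers = ["phosphate (pH 7.4)", "borate (pH 8.45)", "carbonate (pH 9.6)"]
varieties = ["AC Pennant", "Andante", "Duchess", "AC Vulcan",
             "Dahinda", "Centennial Brown", "Cutlass"]

combinations = list(product(buffers, varieties))   # 3 x 7 = 21 combinations
# Assumed: each combination extracted in duplicate, as stated in the Methods.
runs = [(b, v, rep) for b, v in combinations for rep in (1, 2)]

print(len(combinations), "buffer-variety combinations,", len(runs), "extraction runs")
```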
Each extraction buffer was used to extract 0.5 g of defatted flour from each mustard variety at a protein/buffer ratio of 1:250 (w/v). All extractions were conducted in 50 mL centrifuge tubes under constant shaking at 45 rpm using a LabRoller Rotator (Labnet International, Woodbridge, NJ, USA) at room temperature for 1 h. The crude extracts were transferred into 70 mL centrifuge bottles and centrifuged in a Beckman JA-18 fixed angle rotor in a Beckman J2-21 centrifuge (Beckman Instruments, Brea, CA, USA) at 16,000× g for 30 min at 4 °C. The supernatant was passed through filter paper (Whatman filter paper No. 4, Whatman International Ltd., Maidstone, UK) and further filtered through 0.45 µm filters. The pH of the extracts was measured at the beginning and the end of the extraction time to verify its stability. The protein concentration of the mustard extracts was determined using the Bradford protein assay [25]. Clarified extracts were transferred into 2 mL cryogenic vials and stored at −80 °C until use.
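The Bradford determination reduces to fitting a standard curve and interpolating sample absorbances. The sketch below shows that calculation in Python; the BSA standard concentrations and absorbance readings are invented for illustration and are not data from this study.

```python
# Hypothetical Bradford quantification: fit a linear standard curve from BSA
# standards, then interpolate sample absorbances. Values are illustrative.
import numpy as np

bsa_ug_ml = np.array([0, 125, 250, 500, 750, 1000])         # assumed standards
a595      = np.array([0.00, 0.11, 0.22, 0.44, 0.63, 0.82])  # assumed readings

slope, intercept = np.polyfit(bsa_ug_ml, a595, 1)           # linear fit

def protein_conc(absorbance: float) -> float:
    """Interpolate protein concentration (ug/mL) from an A595 reading."""
    return (absorbance - intercept) / slope

print(f"{protein_conc(0.35):.0f} ug/mL")
```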
All protein extraction experiments were performed in duplicate, and the present results are the average values of four determinations (two experimental × two analytical replicates). Analysis of variance (ANOVA) was carried out using XLSTAT version 2012.4.01 to compare data obtained from different samples. Tukey multiple comparison was used to discriminate among the means of the variables when necessary. Differences at p ≤ 0.05 were considered significant.
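For readers who prefer an open-source route, the analysis described above (one-way ANOVA followed by Tukey's multiple comparison at p ≤ 0.05) can be reproduced along the following lines. This is a minimal sketch with placeholder recovery values, not the XLSTAT analysis actually performed.

```python
# Hypothetical one-way ANOVA + Tukey HSD on protein recovery by buffer.
# The recovery percentages below are placeholders, not the study's data.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

recovery = {                       # % protein recovery, four replicates each
    "phosphate": [20.1, 19.8, 20.5, 20.0],
    "borate":    [21.7, 22.0, 21.4, 21.9],
    "carbonate": [25.9, 26.6, 26.2, 26.4],
}

f_stat, p_value = f_oneway(*recovery.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.2g}")

if p_value <= 0.05:                # discriminate means only if ANOVA is significant
    values = np.concatenate(list(recovery.values()))
    groups = np.repeat(list(recovery.keys()), [len(v) for v in recovery.values()])
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```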
Protein Electrophoresis
Mustard seed extracts normalized to equal amounts of protein (10 µg) were subjected to SDS-PAGE under reducing and non-reducing conditions using pre-cast Any kD TGX gels (Bio-Rad Laboratories, Hercules, CA, USA) according to Laemmli [26]. The soluble extracts were mixed with an equal volume of Laemmli sample buffer with 5% (v/v) of β-mercaptoethanol (β-ME) and boiled for 5 min. Alternatively, electrophoresis was performed under non-reducing conditions by omitting the addition of β-ME. The gels were run at a constant voltage of 150 V for 90 min using TGS buffer (25 mM Tris, 192 mM glycine, and 0.1% SDS) in a Criterion cell (Bio-Rad). A molecular weight standard (Precision Plus Protein Standard) was included on each gel. After electrophoresis, gels were stained with Coomassie Brilliant Blue. Images were acquired by scanning stained gels using an Image Scanner III (GE Healthcare, Salt Lake City, UT, USA) operated by LabScan 6.0 software (GE Healthcare). For image and densitometry analysis, the Image Quant TL 7.0 software (GE Healthcare) was used.
Immunoblotting
Immunoblotting was carried out with human sera obtained from two different sources. Two sera, named P1 and P2, were from mustard sensitized and self-declared allergic donors and were purchased from Plasma Lab International (Everett, WA, USA). A third serum, named P3, was obtained from a clinically confirmed mustard allergic patient of the Sainte-Justine University Hospital Center (Montreal, QC, Canada). The three sera (P1, P2, and P3) showed a level of sensitization of Class III to mustard, with specific IgE antibody levels equal to 5.76, 3.76, and 6.3 kilo units of antibody per liter (kUA/L), respectively, following measurement with the Pharmacia ImmunoCAP ® system. Control sera were obtained through Plasma Lab from patients with an allergic history to dust mites but without food allergy. The study was approved by the Sainte-Justine's Hospital Ethics Committee and the Human Research Ethics Committee of Agriculture and Agri-Food Canada.
For western blots, 2 µg of carbonate buffer protein extract from each mustard variety were separated by SDS-PAGE (performed as described above); the separated proteins were then transferred onto a polyvinylidene fluoride (PVDF) membrane using a Mini Trans-Blot electrophoretic transfer cell (Bio-Rad) at 100 V for 1 h at 4 °C according to Towbin [27]. The blotted membranes were subsequently blocked in 5% (w/v) skim milk powder in phosphate-buffered saline with 0.1% Tween-20 (PBS-T) for 1 h at room temperature. Membranes were then incubated overnight at 4 °C with 1:2 (v/v) dilutions of the three sera. IgE was detected by using a horseradish peroxidase (HRP) conjugated mouse anti-human antibody (clone B3102E8, Southern Biotech, Birmingham, AL, USA). Immunoreactive bands were visualized using amplified Opti-4CN reagents (Bio-Rad) following the manufacturer's recommendations. The immunoblots were scanned and analyzed as previously mentioned for the SDS-PAGE gels.
Indirect ELISA
High-binding 96-well microtiter plates (Costar™, Corning, Tewksbury, MA, USA) were coated with 0.25 µg/well of protein extracts from each variety of mustard in carbonate-bicarbonate buffer (pH 9.6) and incubated overnight at 4 °C. Plates were blocked with 5% bovine serum albumin (BSA) in PBS-T for 2 h at room temperature, followed by washing and incubation with control and mustard sensitive sera samples serially diluted in 1% BSA in PBS-T (dilution buffer) for another 2 h. For IgE detection, plates were washed and incubated for 1 h in a 1:1000 (v/v) dilution of mouse anti-human IgE-HRP (clone B3102E8, Southern Biotech, AL, USA) prepared in dilution buffer. Bound peroxidase activity was determined with 3,3',5,5'-tetramethylbenzidine (TMB) (Sigma-Aldrich, St Louis, MO, USA); the reaction was stopped by the addition of 1 N sulfuric acid, and absorbance was measured at 450 nm. ELISA measurements were performed in duplicate.
Identification of Protein Bands as Allergens by LC-MS/MS
The protein in-gel digestion and the mass spectrometry experiments were performed by the Proteomics platform of the Eastern Quebec Genomics Center, Quebec, Canada. Detailed experimental parameters for the tryptic digestion, the mass spectrometry conditions, and the data analysis were previously reported by Rioux et al. [28]. Scaffold (Scaffold_3_00_07, Proteome Software Inc., Portland, OR, USA) was used to validate MS/MS-based peptide and protein identifications. Peptide identifications were accepted if they could be established at greater than 95.0% probability, as specified by the Peptide Prophet algorithm [29]. Protein identifications were accepted if they could be established at greater than 95.0% probability and contained at least two identified peptides. Protein probabilities were assigned by the Protein Prophet algorithm [30].
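The acceptance criteria above amount to a simple filter over the search-engine output. A minimal sketch of that filter is shown below; the record layout (accession, probabilities, peptide list) is an assumption for illustration and does not correspond to Scaffold's actual file format.

```python
# Hypothetical filter mirroring the stated acceptance criteria; the data
# structures are illustrative, not Scaffold's export format.
from dataclasses import dataclass
from typing import List

@dataclass
class Peptide:
    sequence: str
    probability: float  # Peptide Prophet probability (0-1)

@dataclass
class ProteinID:
    accession: str
    probability: float  # Protein Prophet probability (0-1)
    peptides: List[Peptide]

def accept(protein: ProteinID) -> bool:
    """Keep proteins identified at >95% probability with >=2 confident peptides."""
    confident = [p for p in protein.peptides if p.probability > 0.95]
    return protein.probability > 0.95 and len(confident) >= 2
```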
Protein Sequence Comparisons with Known and Putative Allergens
Each MS identified IgE-binding mustard protein sequence was compared to all proven and putative protein allergen sequences included in the Food Allergy Research and Resource Program (FARRP) AllergenOnline.org database version 19 (updated on 10 February 2019) [31]. This version contains a comprehensive list of 2129 protein (amino acid) sequence entries that are categorized into 853 taxonomic-protein groups of unique proven or putative allergens (food, airway, venom/salivary, and contact) from 384 species. All database entries are linked to sequences in the National Center for Biotechnology Information (NCBI) database of the National Institutes of Health (NIH). Sequence comparison was performed using the FASTA algorithm version 36 with a sliding window of 80 amino acid segments of each protein to find identities greater than 35%, as recommended by the CODEX Alimentarius guidelines [32]. The scoring matrix used on the AllergenOnline website is BLOSUM 50 [33]. E-values and percent identities [(identical residues/alignment length of 80 or more amino acids) × 100%] were evaluated to consider potential cross-reactivity.
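The sliding-window screen is straightforward to prototype. The sketch below implements a deliberately simplified version in Python: it computes ungapped percent identity between 80-mers, whereas the actual FASTA searches are gapped local alignments scored with BLOSUM50, so this illustrates only the windowing logic and the 35% CODEX threshold, not the real method.

```python
# Simplified, ungapped 80-mer identity screen (illustrative; not optimized,
# and not equivalent to a BLOSUM50-scored FASTA local alignment).
def window_identity(query: str, allergen: str, window: int = 80) -> float:
    """Best percent identity of any `window`-mer of `query` aligned,
    without gaps, against any equal-length segment of `allergen`."""
    best = 0.0
    for i in range(len(query) - window + 1):
        segment = query[i:i + window]
        for j in range(len(allergen) - window + 1):
            target = allergen[j:j + window]
            matches = sum(a == b for a, b in zip(segment, target))
            best = max(best, 100.0 * matches / window)
    return best  # stays 0.0 if either sequence is shorter than the window

def possibly_cross_reactive(query: str, allergen: str) -> bool:
    return window_identity(query, allergen) > 35.0   # CODEX criterion
```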
Protein Content and Extractability from Different Mustard Varieties
The total seed protein content varied significantly (p < 0.0001) among the different mustard varieties, ranging from 36.07-37.92% for the two varieties of Sinapis alba (AC Pennant and Andante) and 31.38-37.22% for the five varieties of Brassica juncea (Duchess, AC Vulcan, Dahinda, Centennial Brown, and Cutlass), accounting for about 7% difference across varieties (Figure 1A). These values were generally higher than the mean protein values reported for Canadian mustard by the Canadian Grain Commission [11].
Regardless of the variety, carbonate buffer was significantly (p < 0.0001) more efficient than phosphate and borate buffers in solubilizing mustard proteins (Figure 1B). This result is in agreement with a previous study on Brassicaceae oilseeds that showed that solubility varied between the species and varieties studied, while the highest value of N solubility was observed at pH 10 [34]. Similar results were also observed for peanut and tree-nut proteins, where carbonate was found to be the most efficient extracting buffer [35]. An alkaline medium allows more protein to be solubilized, and this effect seems to be the result of electrostatic repulsion. The most abundant protein fraction in crucifers is cruciferin [36], and since its isoelectric point (IP) is 7.25 [37], this should explain why the electrostatic repulsion reaches a higher value in the carbonate buffer, thus giving better protein solubility. Protein extractability also showed some significant variation according to mustard variety. As presented in Figure 1C, the Brassica juncea variety Dahinda exhibited the highest protein recovery (26.4%, p < 0.0001). Dahinda is a Brassica juncea canola quality variety, an oilseed that was developed with a low glucosinolate content but with oil equivalent to conventional canola species [38]. The breeding of this variety has also resulted in a high cruciferin content [39]. The reported low surface hydrophobicity of cruciferin at a basic pH of 10 [40] might explain the higher solubility observed for this variety. At the opposite end, the Sinapis alba variety Andante showed the poorest extractability (19.7%, p < 0.0001), despite its high protein content (38%).
Protein Electrophoretic Profiles of Different Mustard Varieties as a Function of Extraction Buffer
Polypeptide composition of the mustard varieties as a function of extraction buffer was resolved by gel electrophoresis under both non-reducing (Figure 2A) and reducing (Figure 2B) conditions. These figures revealed important qualitative differences in the protein profiles among the different mustard types/classes. In the presence of mercaptoethanol, the polypeptide profiles of the Brassica and the Sinapis varieties were similar to those obtained by Aluko et al. [41]. No major differences were found in the extracts' electrophoretic profiles between the varieties of the same mustard type for the same extraction buffer used (Figure 2A,B). A similar observation was made by Wanasundara et al. [34] for the solubility profile of cruciferin and napin between pH 2 and 10. However, differences in protein profiles were observed between the different buffer extracts for the same variety, suggesting that the buffer type affected mustard protein extraction not only quantitatively but also qualitatively. From the densitometry analysis of the protein electrophoretic profiles (Table 1), it was found that phosphate buffer tended to enhance napin protein extractability, particularly for the B. juncea varieties. Since napins have a high degree of polymorphism [36], it is possible that some isoforms show different solubility according to pH. In the case of Brassica napus, it was shown that napin was soluble at acidic, neutral, and basic pH, but that only a few isoforms were soluble at a pH of 8.5 [40]. In addition, the polypeptide profiles of the borate and carbonate buffer extracts revealed the presence of protein bands at around 15 and 55 kDa, which were absent in the phosphate buffer extract. Moreover, an additional band of about 17 kDa, which was previously identified as an oleosin [42], was observed in the borate and carbonate buffer extracts under non-reducing conditions for the B. juncea varieties.
Based on these results, carbonate was identified as the most favorable extracting buffer because of its higher protein extraction capacity and more complete electrophoretic profile. Consequently, carbonate protein extracts were retained for the remainder of the study.
Indirect ELISA for Serum IgE Response to Mustard Varieties
Indirect ELISA was performed with each individual serum (P1, P2, and P3) to compare the IgE-binding levels of the different S. alba and B. juncea varieties. As can be seen in Figure 3, all mustard varieties exhibited the highest binding to IgEs from serum P1, followed by serum P3. IgE-binding to serum P2 showed the lowest intensity. Differences in IgE-binding intensity among mustard sensitive/allergic individuals are common and were reported in previous studies [5,7,8]. Although not statistically significant, the two sera P1 and P3 appeared to be less immunoreactive to the S. alba varieties in comparison to the B. juncea varieties. Dust mite sensitized control sera did not bind to any mustard varieties used in this study (data not shown).
IgE-Binding Profiles of Select Canadian Mustard Varieties
In order to evaluate the varietal effect on the IgE-binding profiles of mustard proteins, immunoblotting with carbonate buffer extracts from the seven different mustard varieties was performed using sera from three mustard sensitive/allergic persons individually (Figure 4). IgE-immunoblotting was conducted for both non-reduced and reduced mustard proteins. The results revealed important differences in protein profile, abundance, and IgE-binding intensity between the S. alba and the B. juncea mustard types as well as in the IgE-reactivity profiles between the three sera. According to Menendez-Arias et al. [13], Sin a 1 (2S albumin) is the major allergen of S. alba seeds, and Bra j 1 is that of B. juncea seeds [23]. Under non-reduced conditions (Figure 4B), the 2S albumins showed intense IgE-binding in the case of sera P1 and P3. However, under reduced electrophoretic conditions (Figure 4A), these two sera only weakly bound the two napin bands. This observation suggests the recognition of conformational epitopes. A previous study [14] identified two epitopes of Sin a 1, one conformational and one linear. According to Monsalve et al. [24], it is the linear epitope that is considered to be the antigenic determinant. Based on this epitope, an anti-epitope antibody for the quantification of Sin a 1 by a non-competitive enzyme linked immunosorbent assay was developed [43]. This same study also showed that the napin protein fraction of yellow mustard contained proteins devoid of the linear epitope sequence, thereby not contributing to all cases of 2S allergenicity. In the case of serum P2, there was no evidence of IgE-binding to napin proteins under either reduced or non-reduced conditions. A previous study [44] also showed that, even though the majority of mustard allergic persons reacted to Sin a 1, some patients' sera did not bind to the 2S albumin. This observation could be explained by the finding that Sin a 1 presents an important polymorphism [45], resulting in significant variability in its allergenic potential.
The IgE-binding patterns of the mustard sensitized sera revealed the presence of other reactive protein bands from both S. alba and B. juncea varieties. For the latter, sera P1 and P3 exhibited IgE-binding on the non-reduced immunoblots for polypeptide bands between 27 and 48 kDa and on reduced ones for bands between 22 and 34 kDa. These regions corresponded to cruciferin (11S globulin) and, in the case of S. alba seeds, to the allergen Sin a 2 [19]. To date, no reported allergen has been recorded for the 11S of B. juncea seeds. However, a bioinformatics evaluation of the cruciferin of B. juncea, B. napus, and S. alba showed that the cruciferin of B. juncea has a high similarity to that of S. alba [46]. As a result of this high homology, it would be possible for the 11S globulin of B. juncea to present an allergenic potential, but this still has to be clinically demonstrated. The IgE-binding to the 11S globulin was less intense than to the 2S albumin in the case of sera P2 and P3. Similar results were obtained by Menendez-Arias et al. [13]. In addition, for serum P1, intense IgE binding was observed in B. juncea varieties to bands between 27 and 31 kDa on the non-reduced immunoblot, also corresponding to the free polypeptide chains of the cruciferin. In the case of the two S. alba varieties, binding was observed at a polypeptide band around 60 kDa. However, in both cases, the binding to cruciferin was not observed on the reduced immunoblot. This is contrary to what was previously published about the 11S cruciferin of S. alba, reporting that, under reducing conditions, the two subunits of the protein were able to bind IgE from the sera [19].
Such a difference is probably due to variability in the sensitization profiles of the sera used; moreover, that study involved the use of pooled sera, and a different result could have been obtained if the sera had been used individually. A strong IgE-binding was further observed on both the reduced and the non-reduced immunoblots of serum P2 for a procruciferin band with a molecular weight of about 75 kDa. It was reported [36] that it is common to observe polypeptide bands that remain at apparent molecular weights above 54 kDa, presumably from the precursor polypeptides or procruciferins of α-β, which have not undergone regular in vivo processing [47,48]. Finally, serum P1 bound strongly to the B. juncea protein around 17 kDa, while sera P2 and P3 showed binding to a band around 55 kDa on the reduced immunoblot for the S. alba varieties. This last band was not observed on the electrophoretic profile of the phosphate buffer extracts. However, it appeared on the profiles of the borate and the carbonate extracts, thus confirming the importance of choosing a buffer that provides the most complete protein profile so as to increase the probability of revealing as many IgE reactive bands as possible. All IgE-reactive proteins were further subjected to LC-MS analysis for identification.
Identification of Mustard IgE-Binding Proteins by Mass Spectrometry
To confirm the identity of the IgE-binding bands, electrophoretic analyses of mustard varieties AC Pennant (S. alba) and AC Vulcan (B. juncea) were run again, and the immunoreactive bands were excised and further analyzed by LC/ESI-MS/MS. Figure 5 represents the SDS-PAGE and the immunoblot patterns of the two mustard varieties incubated with sera P1, P2, and P3. The list of MS identified proteins is presented in Table 2. All the allergenic protein bands (as shown in Figure 5A,B) were identified by MS/MS analysis as belonging to the Brassicaceae family. Excised bands S1, RS1, and RS2 of S. alba and bands B1, RB1, and RB2 from B. juncea from both non-reduced and reduced gels were identified as Sin a 1 (S. alba) and Bra j 1 (B. juncea), respectively, thereby confirming their allergen identity. As for the B2 and RB4 bands, these were identified as oleosin proteins (OLES2_BRANA Oleosin S2-2 and BRANA Oleosin S3-1) from the database (UniProtKB/Swiss-Prot and UniProtKB/TrEMBL; www.uniprot.org). Although this is the first formal evidence of the allergenicity of such oil body-associated proteins as potential allergens from mustard, future studies need to be performed to prove the biological activity of these newly identified allergens. Recent work has shown that two oil body-associated proteins [oleosins, Ses i 4 (17 kDa) and Ses i 5 (15 kDa)] were found to be among the most important sesame allergens [49]. Allergenic oleosins were also reported in peanuts, where five different IgE-binding oleosins with molecular weights from 14-18 kDa were identified [50][51][52], while two oleosin isoforms of 17 and 14-16 kDa, now designated Cor a 12 and Cor a 13, were identified as allergens in hazelnut [53]. Oleosin proteins were also identified from B1, B2, B3, and B4, suggesting the presence of multiple isoforms of these proteins. Notes to Table 2: (a) band numbers correspond to the IgE-binding bands detected in Figure 5; (b) only protein identifications with 100% probability were retained; (c) total percentage of the protein's amino acid sequence covered by the identified peptides in the MS/MS analysis.
Subunit bands assigned as S4 to S7 and RS3-RS12 from AC Pennant (S. alba) in addition to B3-B8 and RB3, RB5-RB9, and RB11 from AC Vulcan (B. juncea) mustard varieties were identified and confirmed as cruciferin (11S globulin) fragments. The presence of several cruciferin (α-β) polypeptides that form the subunits (protomers) of the 11S globulin molecule in mustard has been reported [36]. However, not all of them have been characterized yet as potential mustard allergens within the 11S globulin family. To date, the only 11S globulin storage protein that has been identified as an important mustard allergen is Sin a 2 [19]. Future studies need to be performed to characterize these newly identified allergens and name them in accordance with the World Health Organization/International Union of Immunological Societies (WHO/IUIS) Allergen Nomenclature Subcommittee [54].
Furthermore, the protein bands S7 and RS9 from S. alba as well as B9 and RB10 from B. juncea were identified as β-glucosidase precursors (BRANA, accession No. Q42618) showing 44-50% sequence coverage. Indeed, a β-glucosidase was previously purified from seeds of B. napus (oilseed rape), as reported by Falk and Rask [55]. The 130 kDa native enzyme consisted of a disulfide linked dimer of 64 kDa monomers. Evidence was previously reported about the potential allergenicity of a β-glucosidase from wheat [56]. The protein band RB9 was also identified, with 64% coverage and 21 unique peptides, as an enolase (BRACM, accession No. Q6W7E8). Enolase is an essential glycolytic enzyme that catalyzes the interconversion of 2-phosphoglycerate and phosphoenolpyruvate [57]. It has been recognized as an important allergen from various molds and some plants [58][59][60]. Finally, in addition to oleosin, the RB4 band was also identified as a glutathione S-transferase (GST) (accession No. Q7XZT2). Members of the GST family have been reported as relevant allergens in cockroach [61], fungi [62], and wheat [63]. The allergenicity of these newly identified IgE-binding proteins from mustard requires further investigation and should be carefully evaluated not only by in vitro IgE tests but also by in vivo and clinical tests.
Bioinformatic Assessment of Potential Cross-Reactivity of Identified Mustard IgE-Binding Proteins with Known Allergens
The purpose of this analysis was to identify relevant homology in amino acid sequences between the identified mustard IgE-binding proteins and proven or putative allergens, which could help identify proteins that may share immunologic or allergic cross-reactivity. The Food Allergy Research and Resource Program (FARRP) AllergenOnline.org database version 19 (updated on 10 February 2019; http://www.allergenonline.org/) was used for the primary comparisons to allergens. This public database only includes sequences of proteins with sufficient published evidence of allergy, at a minimum specific IgE binding from sera of subjects allergic to the source [31]. Based on the recommendation of the CODEX Alimentarius guidelines [32], the FASTA3 algorithm with the criterion of >35% identity over any segment of 80 or more amino acids as an indication of possible cross-reactivity was used to compare all possible contiguous amino acid segments of each of the identified mustard IgE-binding proteins against all sequences listed in the AllergenOnline database. Research has, however, shown that proteins with greater than 70% identical primary amino acid sequences throughout the length of the protein are commonly cross-reactive, while those with less than 50% identity are unlikely to be cross-reactive [64]. For increased confidence, only the best scoring matches (>35% identity) with E-values smaller than 1e-7 are displayed in Table 3, as it has been reported that larger E-values are unlikely to identify relevant matches, while matches with E-values smaller than 1e-30 are much more likely to be cross-reactive in at least some allergic individuals [65]. The complete search results are presented in a supplemental document (Appendix 1). The 80-mer FASTA search confirmed the extensive homology and the high cross-reactivity between the 2S albumins from Sinapis alba (Sin a 1) and Brassica species (Bra j 1), with a percentage of identity (ID) over 87% and very small E-values. Interestingly, FASTA identified highly significant alignments (>55% identity) of the IgE-binding 11S globulins (cruciferins) from both Sinapis and Brassica species (accession Nos. Q2TLWO, Q7XB53, P33525, Q2TLV9) with 11S globulin allergens from tree-nut species, most notably with black walnut (Juglans nigra), cashew (Anacardium occidentale), hazelnut (Corylus avellana), almond (Prunus dulcis), and pecan (Carya illinoinensis), suggesting a strong possibility of cross-reactivity, which would be worth testing using sera from individuals with clinical reactivity to those species.
In addition, the search identified probable homology of the 11S globulin proteins from Sinapis alba (accession Nos. Q2TLWO and Q2TLV9) with high molecular weight (HMW) glutenin from wheat (Triticum aestivum), based on 60% and 68% best identity and low E-values of 1.3e-12 and 3.5e-18, respectively. Notably, sequence alignments for the newly identified IgE-binding mustard oleosins (accession Nos. C3S7F1 and C3S7F8) found highly significant matches with oleosin allergens from hazelnut (Corylus avellana), with IDs over 70% and E-values smaller than 1e-30. These results are highly relevant for potential cross-reactivity. Significant scores were also found with oleosins from peanut (Arachis hypogaea) and sesame (Sesamum indicum). Scoring results for the identified enolase protein from Brassica juncea (accession No. Q6W7E8) gave the best alignments (over 80% ID and very small E-values) to enolase allergens from the latex tree (Hevea brasiliensis), yellowfin tuna (Thunnus albacares), Atlantic salmon (Salmo salar), chicken (Gallus gallus), and fungi (Candida albicans and Rhodotorula mucilaginosa). This finding adds to the existing knowledge that enolase is one of the most conserved glycolytic enzymes across eukaryotes (animals, plants, and fungi) [58,59]. This further suggests a strong possibility of cross-reactivity. Moreover, the identified mustard glutathione S-transferase 3 (GST) enzyme (accession No. Q7XZT2) also significantly matched the GST allergen Per a 5 from the American cockroach (Periplaneta americana), with 40% ID and an E-value of 1.2e-6. Finally, other identified mustard proteins, with accession Nos. Q42618 (β-glucosidase), O23733 (cysteine synthase), P13244 (malate synthase, glyoxysomal), and C3S7H5 (caleosin), resulted in no matches greater than 35% identity over 80 amino acids. Notes to Table 3: (a) gid: allergen group id number in the AllergenOnline database, which links to detailed information on the allergenicity references for the group, the type of allergen, other sequences belonging to the same group, and more on the allergenonline.org website. (b) Highest scoring identity for FASTA3 alignments of every possible 80 amino acid segment; the Food and Agriculture Organization/World Health Organization (FAO/WHO) 2001 expert panel recommended using a criterion of >35% identity over any segment of 80 or more amino acids as an indication of possible cross-reactivity for allergens, which was adopted by the Codex Alimentarius Commission (2003). (c) The E-value (expectation value) is a calculated value that reflects the degree of similarity of the query protein to its corresponding matches; the size of the E-value is inversely related to the similarity of the two proteins, meaning a very low E-value (e.g., 10e-30) indicates a high degree of similarity between the query sequence and the matching sequence from the database, while a value of 1 or higher indicates the proteins are not likely to be related in evolution or structure. (d) Overall percent identity (ID) (percentage of amino acids with a direct match in the alignment). (e) Length of the amino acid sequence alignment. (f) Link to the unique assigned protein identity (gi number) in the NCBI (National Center for Biotechnology Information) Protein Database.
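As a quick illustration of how these thresholds combine, the following hypothetical snippet screens alignment hits using the criteria cited above (>35% identity with E < 1e-7 as potentially relevant, E < 1e-30 as likely cross-reactive). The hit values are loosely modeled on the oleosin and 11S globulin matches discussed in the text but are illustrative, not the actual Table 3 entries.

```python
# Hypothetical screening of FASTA hits by percent identity and E-value.
hits = [
    # (query, matched allergen, best % identity over an 80-mer, E-value)
    ("mustard oleosin (C3S7F1)", "hazelnut oleosin (Cor a 12)", 71.0, 1e-31),
    ("S. alba 11S globulin (Q2TLWO)", "wheat HMW glutenin", 60.0, 1.3e-12),
    ("mustard caleosin (C3S7H5)", "no allergen match", 30.0, 0.5),
]

for query, match, identity, e_value in hits:
    if identity > 35.0 and e_value < 1e-7:      # CODEX identity + display cutoff
        label = "likely cross-reactive" if e_value < 1e-30 else "potentially relevant"
        print(f"{query} vs {match}: {identity:.0f}% ID, E = {e_value:.1g} ({label})")
```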
Conclusions
In this study, carbonate buffer was found to be an efficient non-denaturing buffer for the extraction of mustard seed proteins. Protein and IgE-binding patterns revealed important differences between the S. alba and B. juncea types of mustard, but no major differences were observed between the varieties of the same mustard type. The presence of both napin (2S albumin) and cruciferin (11S globulin) allergenic polypeptides under both non-reducing and reducing electrophoretic conditions was confirmed. Sin a 1, Bra j 1, and cruciferin polypeptides exhibited a stronger IgE reactivity under non-reducing conditions in comparison to reducing conditions, demonstrating the presence of conformational allergenic epitopes in Sin a 1, Bra j 1, and other cruciferin subunits. Therefore, the use of denaturing protein extraction and analysis conditions may lead to failure to detect important immuno-reactive epitopes due to protein modification. Results also revealed the presence, in both S. alba and B. juncea types of mustard, of a wide range of IgE-binding cruciferin (11S globulin) polypeptides/fragments with different molecular weights, indicating the existence of multiple isoforms in all types of mustard seeds. We also reported, for the first time, new mustard IgE-binding polypeptides/proteins identified as oil body-associated proteins (oleosins) and an enolase enzyme from the B. juncea type. A bioinformatics analysis to identify relevant homology in amino acid sequences between the identified mustard IgE-binding proteins and known allergens revealed strong possible cross-reactivity between mustard 11S globulin and equivalent allergen proteins from tree-nut species and wheat. Strong cross-reactivity between mustard oleosins and those of hazelnut, peanut, and sesame was also suggested. In addition, a highly significant homology between mustard enolase and that of other eukaryotes (animals, plants, and fungi) was found, confirming the highly conserved structure of this glycolytic enzyme and its high potential for allergic cross-reactions. The new putative mustard allergens revealed in this study will require further biological and structural characterization. | 2019-09-19T09:15:28.986Z | 2019-09-01T00:00:00.000 | {
"year": 2019,
"sha1": "a90a92213b6c76fe8b7ef3a7a2304c71c4d97054",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/biom9090489",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8e91527c1a76848ef5610c4a5a66de1f26d92cb8",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
} |
25404443 | pes2o/s2orc | v3-fos-license | “Without a mother”: caregivers and community members’ views about the impacts of maternal mortality on families in KwaZulu-Natal, South Africa
Background Maternal mortality in South Africa is high and a cause for concern, especially because the bulk of deaths from maternal causes are preventable. One of the proposed reasons for persistently high maternal mortality is HIV, which causes death both indirectly and directly. While there is some evidence for the impact of maternal death on children and families in South Africa, few studies have explored the impacts of maternal mortality on the well-being of the surviving infants, older children and family. This study provides qualitative insight into the consequences of maternal mortality for child and family well-being throughout the life-course. Methods This qualitative study was conducted in rural and peri-urban communities in Vulindlela, KwaZulu-Natal. The sample included 22 families directly affected by maternal mortality, 15 community stakeholders and 7 community focus group discussions. These provided unique and diverse perspectives about the causes, experiences and impacts of maternal mortality. Results and discussion Children left behind were primarily cared for by female family members, even where a father was alive and involved. The financial burden of care and children's basic needs were largely met through government grants (directly and indirectly targeted at children) and/or through an obligation for the father or his family to assist. The repercussions of losing a mother were felt more by older children, for whom it was harder for caregivers to provide educational supervision and emotional or psychological support. Respondents expressed concerns about adolescents' educational attainment, general behaviour and particularly girls' sexual risk. Conclusion These results illuminate the high costs to surviving children and their families of failing to reduce maternal mortality in South Africa. Ensuring social protection and community support is important for remaining children and families. Additional qualitative evidence is needed to explore differential effects for children by gender and to guide future research and inform policies and programs aimed at supporting maternal orphans and other vulnerable children throughout their development.
Background
South Africa has policies of free maternity care and legal abortion, almost universal antenatal care coverage (97%), and most deliveries are facility-based [1,2]. Despite this, the maternal mortality ratio (MMR) in 2010 was an estimated 300 deaths per 100,000 live births [3].
While debate exists over the accuracy of this figure, there seems to be consensus that it is high considering the socio-economic and policy context in South Africa [4][5][6]. While high, recent reductions in the MMR [5,6] suggest improvements in reproductive and obstetric services, but these improvements are slow and relatively small. The results suggest that although most maternal mortality is preventable, the risk for South African women is still large and necessitates action at multiple levels to reduce rates of mortality and morbidity [1,7,8]. Experts agree that HIV is likely to contribute greatly to this MMR; 2010 estimates suggest that 60% of deaths from maternal causes were attributed to HIV [3], with HIV-infected mothers significantly more likely to die during childbirth than HIV-negative mothers [9]. While HIV may result in an overall increase in adult mortality [4], evidence suggests that HIV-positive women are at both direct and indirect risk of maternal mortality [7,10], and that antiretroviral treatment (ART) may also contribute to increased risk during pregnancy [11][12][13].
The primary motivation to date for improving maternal health is the prevention of maternal mortality as supported by a human rights-based approach, which in South Africa rests on a strong constitution and legislation about rights to accessing health care [14,15]. This paper expands on this to explore the consequential impacts that maternal mortality has for the children and families of women within the South African context. The paper aims to provide an in-depth qualitative exploration of the family-level impacts resulting from a failure to stem what is largely preventable maternal mortality within this context, including evidence for the way in which families cope with these impacts [10]. The evidence from this study begins to suggest that, in addition to the human rights argument for investment in the prevention of maternal mortality, assessing the socio-economic impacts frames the need to address maternal health within the broader issues of development [16].
In a very high prevalence context such as South Africa, the role of HIV introduces complexities, because it becomes difficult to differentiate between the impacts of a maternal death, likely from HIV-related causes, at any stage, and maternal mortality defined as a death during pregnancy and childbirth [17,18]. Despite these difficulties, we argue that there is still value in this study, which provides evidence for the impacts of maternal mortality on the family even if it only serves to confirm existing evidence for the impacts of a maternal death at other times.
An analysis of child survival in a similar site in rural South Africa highlights the extreme risks faced by surviving children whose mothers die of maternal causes. Infant death after maternal mortality shows the relative significance of a healthy mother for children's survival. Surviving infants of women who died of maternal causes (WHO defined) were 15 times more likely to die than those whose mothers survived, and were also more likely to die than those whose mothers died of late maternal deaths or of deaths at other times [19]. The results also highlight the fact that the survival rates of infants born to mothers who die are remarkably high (83%), and this points to the potential long-term impacts of orphaning during infancy for the child and their family [19].
The impacts of HIV-related adult mortality were well researched in the early 2000s, and a developing literature exists within South Africa about the long-term impacts of orphaning as a result of HIV [20][21][22]. Few studies have explored the impacts of maternal mortality specifically, in a high-prevalence context where HIV is a likely cause of death, on the well-being of the surviving infant, older children and the family. This paper aims to provide a detailed, context-specific account of the impacts of maternal mortality on families within rural KwaZulu-Natal. In so doing we hope to highlight the need to refocus on investment in the prevention of maternal mortality, particularly the large burden of preventable causes, but also to assess the potential policies and interventions which may work both to support and to assist families in coping with the impacts of maternal mortality in the South African context.
Methods
This qualitative research conducted in South Africa is part of a four-country mixed methods study (including Tanzania, Ethiopia and Malawi) on the impacts of maternal mortality on living children. This study was conducted in Vulindlela, KwaZulu-Natal province outside of the city of Pietermaritzburg. The community includes both rural and peri-urban settlement types and the population is primarily Zulu (the largest ethnic group in South Africa). KwaZulu-Natal is the most populous province in South Africa with 10.3 million people [23]. It also has the highest prevalence of HIV, approximately 40% of women testing at antenatal clinics in the surrounding district were positive in 2011 [24]. The population within this community experiences high levels of poverty and unemployment; but these are comparable to similar rural or peri-urban communities in the rest of the country [25].
The study adopted the WHO definition of maternal mortality as the death of a woman during pregnancy, childbirth, or within 42 days of pregnancy termination, from a pregnancy-related cause. The study employed an experienced, resident community liaison officer to use snowball sampling to locate families affected by maternal mortality. Families were screened to verify that the death resulted from maternal causes. Enrolled families included at least one child orphaned after the mother's death. An attempt was made to include families with orphaned children of varying ages and time since the mother's death, to reflect a range of impacts.
The study sample included 22 families of women who were identified as dying from maternal causes (one family lost two sisters to maternal mortality). The respondents were mostly grandmothers (40%), and except for a partner, a brother, and a husband, all respondents were women including mothers-in-law, sisters and nieces. The families captured in the study were observed by field staff to be as poor as or poorer than other households within their neighbourhood and many noted that they survived on government social welfare and informal work. Over half of the deaths from maternal causes occurred amongst women who were between the ages of 20-29 years old (57%). Family member interviews addressed the general characteristics of the family for socioeconomic context; circumstances surrounding maternal mortality, the impacts on the children and family; and availability and accessibility of services for maternal orphans.
In-depth semi-structured interviews were conducted with key community stakeholders with likely insights into maternal mortality or working with affected families. Stakeholders included teaching and nursing staff, community councillors, community care workers, church officials, representatives from NGOs and social workers. In addition, 7 focus group discussions (N=60) were conducted to examine community perceptions about maternal mortality, implications for orphans and available services. Data collection occurred between March and August 2013.
An experienced research team conducted the interviews in isiZulu; these were then audio-recorded, transcribed and translated. Quality assurance was provided through independent review of a sample of translations to ensure accuracy and standardisation. Each interview and focus group took between 1.5 and 2 hours to complete. The data were coded using an iterative framework analysis approach, guided by an initial set of codes based upon the study research questions, which were expanded upon as themes emerged in repeated analyses and readings of the data. Two research staff coded each of the transcripts, discussing and editing themes as they emerged. All analyses were conducted in NVivo 10.
Study protocols were approved by the Harvard School of Public Health Institutional Review Board and the Human Sciences Research Council Research Ethics Committee in South Africa. Informed consent was read verbatim by the research coordinator and all participants indicated consent through a signature. Family member and focus group participants also received ZAR 110 (10.50 USD) for their participation.
Results
The results for this study are presented at two key levels. The first is the impacts at the family level, which, even if felt by an individual such as the caregiver, have social and possibly economic repercussions for the household. The second is the impacts for the children directly; these child-level impacts are presented in terms of the potential impacts over the child's life course from infancy into childhood and finally to adolescence, drawing on the life-course model to assess the impact at these various life stages.
Family level impacts of maternal mortality
Caregiving for children left behind

The primary impact of maternal mortality was the orphaning of the woman's children, either the surviving infant or older children, or in many cases both. Orphaned children, whether single (maternal orphans) or double orphans (maternal and paternal orphans), were almost all cared for by female family members. Childcare in this context is traditionally seen as the responsibility of women, and participants confirmed that the role was often taken on by grandmothers. The general inadequacy of men in providing care and taking primary responsibility for raising children was noted in family and stakeholder interviews, and in focus group discussions.

Men find raising a child very difficult. He now has to be in the mother's shoes and play the role of a mother, and that can be very difficult for a man. (Antenatal clinic sister)

Even where a father was present, the expectation was not for him to provide care, and he was supported by women who took on this responsibility.
Many family members felt obligated to provide care following a death from maternal causes, and in addition to taking on the roles of caregivers, some family members took on a social parenting role.
[The child] thinks her uncle is her father and the wife of her uncle is her mother. [The aunt] doesn't have children of her own, [the child] just calls them mommy and daddy. (Grandmother respondent)

Respondents felt that despite facing difficulties the care provided to children was adequate. Nevertheless, there was universal recognition that the presence of the mother would have been better for all concerned, especially the children left behind.
There is a huge difference because in our culture it is said that a person that has a mother is better than the one that doesn't have. (Community focus group)

The responsibility for caring extended throughout the life course of the child, and because of the maternal nature of the death, in almost all cases within the study the responsibility started in very early infancy for the index child.
Moreover, the responsibility of caring for children, particularly infants, had potential subsequent impacts for the women who took it on. Some had to take on new employment or informal work in order to care for the child/ren. Others were limited in their ability to look for or take on new work as a result of caring responsibilities. A number even had to give up employment or informal work to care for children, particularly infants who required intensive care.
I am no longer working… I had to stop working because I was told to fetch this child from the hospital [, when the mother died]… and I lost my job.
[Now] I work one day [a week]. (Grandmother caregiver)

For older women, grandmothers or great-grandmothers, caregiving responsibilities bore potential repercussions for physical health. As this woman describes, the obligation and desire to provide care to her infant grandchild outweighed the challenges.
I was supposed to carry him on my back and be up and down with the child when he is not well…some people thought of help[ing] me by taking the child to an orphanage. They said I'm too old, that I am not able to raise a child, and I said, I'm still alive, I will try to look after this baby. (Grandmother caregiver)

Despite fulfilling this role as well as possible, caregivers, like the one above, recognized their own inadequacies and were not always prepared for this long-term responsibility, though they were acutely aware of the gap left in the child's life by their mother's death.
Complex family arrangements and responding to maternal mortality

South African families are characterised by complexity in the form of fluid membership because of a history of economic and political upheaval [26,27]. This complexity is compounded by low rates of marriage or co-habitation and high rates of extra-marital fertility [28]. This means that, besides men being perceived as inadequate caregivers, fathers were sometimes missing from the families in which women died.
Absent fathers were not necessarily missing completely; limited co-habitation or marriage before death meant that men were often not present or residing with their children. Traditional norms dictate that maternal orphans be cared for by the woman's family, especially if this was where they had been living before her death [29].
They all stay here at home, which is something we agreed on, that after her death we would like them all to be here at home, because they were staying with her here while she was still alive. We decided we will use what we have to give them life… So nothing should be impossible to us. (Deceased's brother)

Families with multiple orphaned siblings may also have multiple fathers, increasing the complexity in orphaned children's care and relationships with their fathers. Some fathers were completely uninvolved, either because the child's family did not know who or where he was or because he chose not to be involved. In a number of the cases within the study the mother was very young when she died and had not yet provided information about the father to her relatives; this complicated the situation further by limiting the number of people responsible for surviving infants and children.
I don't know the father of this kid my child. I thought I was still going to get time to sit down with her and ask her. Kids can get pregnant and hide it from you and you only see when they go to deliver the child… To find out who was responsible for this. (Grandmother caregiver)

Regardless of the relationship and complexities in the family, and where they were known, men were obligated to provide financial support for the orphaned child or children. This support, when received, was often very important both for the child and the household in which the child resided.
It means I can say that his father is good…he helps me, although he is far because he lives [away] but he helps me. (Grandmother caregiver)

In lieu of the father, the paternal family may be called on to provide financial or in-kind support for the children, although this cannot always be guaranteed.
On this point of families and the feeding of orphans, by rights the family of the father has to help, but in most cases it becomes the problem of the mother's family, because this is where the child is born… Usually fathers stay very far away. It's as if when the person who actually connected the two family's dies that's the end of their responsibility. (Older women's focus group)

In such cases, even the paternal family may provide primary support and care for the children; this was often the case if the parents were married before the death or where the paternal family were in a better position to provide this support. Even when children lived with and were cared for by the maternal family, fathers or the paternal family often remained involved, as this maternal grandmother describes.
[the paternal] grandmother, they used to come in December and take the child… every [summer holiday], she would take the child and spend quality time with him without a problem. She sends money on [a] monthly basis. (Grandmother caregiver)

The extended family more generally also provided financial and other support for the child(ren) and assistance to the children's primary caregiver.
I would say everybody [in the family] contributes because even my brother, if I inform him the granny has not collected her pension and the baby needs this and that he normally come with it on Friday when he comes home for the weekend, his [new] wife also contributes. (Aunt caregiver)

Household coping with death from maternal causes

Care and support of orphaned children is potentially burdensome and was costly for the already vulnerable families, who were all surviving with limited income and employment.
…because now [the orphaned children] take from us. I can't even afford the fruits now because of them.
In addition, the woman's loss was felt in many ways within her household. She was lost not just as the primary caregiver of her children but also, in almost all cases, as an important financial contributor to the household, supporting herself, her children and others within the household. In addition, women provided support to their families and were responsible for any number of small household chores and activities.
She was a good person. You would have found this place amazingly clean. Dusted. Even if the house is not beautiful but it would be cleaned… she would do the washing and clean. (Grandmother caregiver)

Despite struggling to do so, most of the study families managed to rally both human and financial resources from the extended family network to absorb the extra burden of care. Families were also supported by the community.
…I would say there was really not a big problem in the way the children were provided for with their needs from both [their mother and father's] families. From the neighbours, I would say I have received some help from them as well. I've seen sometimes [the children] have some money and when I ask where they got it from they say it was given by the granny from next door. (Father caregiver)

In addition to support from the family and community, access to social grants was very important in helping families support themselves and the orphaned children. Although targeted at orphaned children, the foster care grant provided by the South African government [30] was not received by many of the children in the study. Reasons included having a living father (an exclusion criterion), as explained below.
We even tried social workers nothing worked. I only managed to register the child in March. They told me there that I wouldn't get foster care because their father is alive. (Aunt caregiver)

The complicated application process was another reason, particularly the paperwork and proof-of-death requirements that were barriers for certain families.
You see the granny will try [to apply for the foster care grant] and there will be some hiccups that will make her stop… sometimes they will ask for the letter from the councillor [as proof of father's absence] and she will not have it at that time… she doesn't have all the documents with her. (Aunt caregiver)

Social grants targeted generally to children or older people within vulnerable households were particularly important for families and children affected by maternal mortality, as illustrated by the quote below, which also highlights the consequences of the loss of the mother and her income for this family.
[The mother] was the one who was taking care of the children. She was working, [when] she was paid I would call her and inform her that we don't have food and she would give us money and I would go to town and buy the needs… The impact [of her death] has been big, [the children] are suffering; they have to wait for the [child] support grant to get their needs [met] or else they should wait for the granny to get her pension money, sometimes the granny will have debts to pay and she won't have enough [money]. (Cousin caregiver)

The following quote from a grandmother carer of surviving children highlights the relative importance of grant income for supporting both the household and the children.
In fact my child I did get support because I sometimes ask myself what would I be doing if I did not get grant from the government to support all these children, because from the R1000 I'm get for pension it was not going to be easy, I think God help me to get support so that it can be easy for me to raise these children. (Grandmother caregiver)

Another grandmother carer had similar arguments about the role of the grant.
I survive through my pension money…They got birth certificate from the hospital so that we can be able to apply for the child support grant…That young daughter of mine is the one who collect their grants. She then buys school needs, after getting the money she will make enquiries of what do they need and she will buy that. (Grandmother caregiver)

Funds from grants were particularly important for the care of infants, who had special needs that were an additional expense for the family. In some cases families were lucky enough to benefit from the receipt of formula milk support from the clinic.
We bought a large formula milk and nappies with our pension money. I took [the child] to the clinic [for a fever], I told the nursing sister that she has lost her mother, then sister wrote me a letter ensuring I will get formula milk from the clinic for 6 months. So after 6 months when they stop giving me formula, I will be able to buy it using the child support grant. (Grandmother caregiver)

Access to social grants was very important in terms of helping the household cope financially, both with the death of a mother and to facilitate adequate care for the child or children left behind.
Impact of death on children through the life-course
Maternal mortality has differential effects on surviving children depending on their age (immediate effects) and life stage (longer-term effects) at the time of death. For example, orphaned infants failed to receive the long-term benefits of bonding with and being breastfed by their mother [31]. Despite this, and possibly because they are helpless and require comprehensive care, infants' families reported relatively good care, despite financial difficulties and possible opportunity costs to carers.
As children progressed from infancy to early childhood a number became sick. While some of these were normal childhood illnesses, a few children were diagnosed with HIV after their mother's death. In many of these cases the mother's HIV status was not known to her family; consequently neither was the risk to the child until he or she became ill.
When he was sick I took him to [the clinic] and explained everything to them … they created a [clinic] card for me that I will use always when the child is sick. It was difficult because [they found that] the child is HIV-positive. He goes to the hospital, and takes the pills now. (Grandmother caregiver)

Access to treatment for prevention of vertical transmission meant that not all children of HIV-positive mothers were infected. In other cases the diagnosis of the child preceded the mother's death. Chronic long-term illness has implications for the child and their development, including potential stigma, as well as for the family and particularly the primary caregivers, who become responsible for young children or infants who require lifetime treatment, adequate nutrition and regular contact with health services. Failure to diagnose illness at birth meant that in many cases the child was already unwell and symptomatic at diagnosis, increasing pressure and potentially influencing the longer-term morbidity and mortality of the child.
The emotional impacts of a mother's death seem to be felt more acutely by older children who had known their mother in life.
…it's very important because they lose someone that they have perhaps stayed with for years and someone that they know very well. So they are very saddened. So they would need to go for counselling, so they can get assistance. (Pastor at local church)

Some of the respondents also noted that sometimes family members were not equipped to support children in dealing with their mother's death, because they were simultaneously dealing with the loss themselves and in many cases also caring for infants or younger children who require significant inputs in terms of time.
Which means the child will not get the love of the mother. Yes the granny may give him love but it will never be the same as that of the mother, because even the granny herself has been badly affected by this. (Grandmother caregiver)

These emotional difficulties sometimes manifested themselves in difficult behaviour or poor schooling outcomes, which caregiving family members sometimes struggled to control or know how to deal with.
…her child had a problem he would just scream loud and cry, even at night, if we ask what is wrong he won't give you an answer. He was a child who has been doing well at school but since his mother passed he is failing at school. You will help him with his schoolwork and you will be comfortable that he is clear but at school if they ask him he will get it wrong and if we ask what happened he will tell you that he forgot. We do realize that he misses his mother… (Aunt caregiver)

Regardless of age at death, the respondents felt that children, as they reach school-going age and interact with peers, become increasingly aware of their circumstances and that the people caring for them are not their mother. At this key stage children felt their mother's absence and lack of guidance and support acutely.
They do attend school though their performance is not the same and that sometimes prompts you to follow up on the problem of the child … (Deputy school principal)

Older children and those whose mothers died during adolescence were of the greatest concern to caregivers, as the emotional adjustment to life without a mother was perceived to be greater for them. As noted by this stakeholder:

… for the one who is fourteen and knew her mother, the pain will hit her the most… the young one can be adopted by a family member and never have to miss the mother's love. (Primary school educator)

It was acknowledged by others that even those who had been brought up by someone else and never really known their mother may experience difficulties once they reach adolescence. Adolescence is observed as a time when rebellion and behavioural problems may become particularly problematic for children orphaned through maternal mortality.
The one who is a teenager may even be rebellious because they do things they were not doing while the mother was still alive and the aunt does not tolerate that… (Mixed community focus group)

Older female children were of particular concern because they were perceived to be at increased risk of abuse, early sexual debut, teenage pregnancy and HIV acquisition.
…she has a boyfriend now and her ears are closed, she no longer listens. I am trying but I cannot reach her… she is stubborn you see. I want her to learn and finish [standard] 10 (final year of schooling, grade 12) but now since she has a boyfriend… (Grandmother caregiver)

It is important to note that the risks noted above for female adolescents also increase their own risk of maternal morbidity or mortality, and therefore result in a cycle of impacts from a failure to address maternal health issues.
Discussion
These findings highlight that a death from maternal causes has potentially complex and multi-layered impacts, both on the surviving children throughout the life course and on the families and individuals tasked with their care, within a rural and peri-urban context in KwaZulu-Natal, South Africa. Despite being relatively poor, families absorbed orphaned children, and while women are expected to fulfil traditional gender roles and provide care, the contribution of men cannot be discounted. The priority of all caregivers is satisfactory care of the child, and while cultural practices may dictate otherwise, traditions that determine the placement of children were adapted to ensure the best care of the child. While the role of the family, and to a lesser extent the community, is vital to coping, it is access to social protection in this context that is fundamental to helping families deal with the burdens caused by maternal mortality. Despite families rallying resources and ensuring care in the short term, orphaned infants in this high HIV-prevalence context were at risk of vertically transmitted HIV. Families also had difficulty responding to the emotional and psychological needs of children, particularly those who were older at the time of death or as they matured, and this had repercussions for the children's development and longer-term outcomes.
These results illustrate that the extended family provides crucial support and care for children orphaned by deaths from maternal causes. Existing South African research notes that systems of obligation and traditional norms underpin familial support in black South African families [32,33], and this research suggests that, in a similar way to enabling resilience in households affected by AIDS-related deaths [34][35][36][37][38], these systems are also at work in households affected by maternal mortality. Historical patterns of adult migration mean that social parenting and caregiving by extended family members are established practices in this context, with or without the death of a mother [33,[39][40][41]. Women have a particularly crucial role to play as primary caregivers but face potential consequences as a result. For example, this and other South African research point to the potential for physical implications, for older women especially, but also repercussions for women's participation in the labour market [42]. This is particularly marked in terms of the impacts of maternal mortality because the most urgent need for care is likely to be for an infant, which requires intensive individual and financial commitment and is where adequate care can ensure survival [19,31]. Although the obligation to support and care for family is guided by gender and cultural norms for female care and the placement of children with family based on the mother's marital status, these results suggest that these norms are adaptable and care is organised in the best interest of children rather than as a result of tradition.
The consequences of maternal mortality are felt broadly within the family, including by non-resident members of the child's extended family (aunts, uncles and paternal grandparents), with implications for the household's livelihood. The findings indicate that the well-being of the child/ren is the priority of the family and that in most cases traditions are eschewed in favour of the child's best interest. For example, according to tradition the role of the father and the paternal family is mainly concerned with financial assistance, but in this context of high extra-marital fertility paternal families may take on primary care. The importance of fathers' role in children's lives is increasingly recognized [43]. Research into the impacts of maternal death in general confirms the findings here in relation to maternal mortality specifically, and highlights the need to consider the specific role that fathers play in families affected by maternal death; despite gender norms, men can be caregivers and supportive contributing adults [44][45][46].
Maternal mortality had complex repercussions at both individual and household levels, with the potential to create a financial burden for the wider family. While it is not possible in this analysis to unpack whether a death from maternal causes specifically would have different impacts from a maternal death at any other time, it is still relevant to note that similar impacts would likely arise in the instance of other maternal deaths. The increased, and almost always unexpected, economic and social cost associated with the care of an infant and/or children was compounded by the loss of the person who would have been responsible for and assisted with this care: the mother.
The system of social protection in South Africa is well developed and it is clear from the families studied here that access to social grants was crucial to their ability to respond to both the needs of the child/ren and to cope with the other expenses associated with a death from maternal causes. While the grant system has a mechanism to directly support orphaned children, it is notable that rather than the foster care grant, it was general grants targeted at children and older people in vulnerable families that enabled household coping and were redirected to directly provide for the children at risk. The household-level redistribution and relative importance of social grant income and benefits to those most at risk have been documented in other research in the South African context [32,[47][48][49]. This finding is in contrast to similar research in other contexts where social welfare support was not nearly as robust and where families required greater support [50,51]. Again it is not necessarily possible to differentiate here between the impacts we observed in the instance of a death from maternal causes and a maternal death at other times. It does seem fair, though, to assume that the delays and problems with accessing a child support or foster care grant for the infant child documented in this study, often associated with proving need and eligibility, are likely to have long-term implications for both families and the surviving infants. The results show that despite systems to support the care of needy infants, such as subsidised formula milk, the bulk of respondents did not seem to be aware of these services. Despite the relative importance of access to grants within affected families, they only enabled survival and provided for the most immediate and essential needs of the household and children; the studied families remained relatively poor and vulnerable. It is therefore important that families are adequately informed about and supported to access available support, particularly social grants such as the foster care grant.
In addition to HIV being a potential cause of and risk factor for maternal mortality, the results show that children born to HIV-positive mothers who die of maternal causes, where transmission is not prevented, may be at risk of becoming positive themselves. The findings indicate the need for a close relationship between obstetric and paediatric services to ensure that at-risk orphaned infants are screened for HIV after 6 weeks of age and, if necessary, are directly linked to care and treatment, not only on presentation with symptoms or illness, which appears to have been the case in many of the examples from the households enrolled in this study. Quantitative evidence from the Agincourt demographic surveillance site in South Africa suggests that infants whose mothers suffer a death from maternal causes are less likely to survive than those whose mothers live, and those whose mothers die from HIV- and TB-related causes are 29 times more likely to die [19]. Infants and children qualify for free publicly provided treatment and care in South Africa and, if tested according to policy after six weeks of age, children should therefore be diagnosed and treated and not be diagnosed late or die from HIV-related causes in this context [52,53]. The findings also highlight the need to improve sexual and reproductive health services for HIV-positive women to raise awareness of and reduce the risks involved in pregnancy and childbirth for the health of both the mother and child.
Mothers play an essential role in children's lives and our findings confirm existing evidence suggesting that South African children without a mother have poorer health, educational [54][55][56], nutritional [57] and sexual health outcomes, particularly girls [58], compared to children with living mothers. In terms of the stages of the life course, the results show that caring for infants left behind as a result of maternal mortality has important implications for the household and caregiver. While families rally to support infants in particular, the financial, physical and emotional impacts of a death from maternal causes may result in long-term implications for children orphaned as very small infants. Our findings show that caregivers struggle with providing children with emotional support and care in the period after the death. This is particularly relevant for older children and adolescents in the short term, and, as children age into adolescence, in the longer term. While assistance from the extended family and access to social grants enable families to cope, their inability to provide, or difficulty with providing, sufficient emotional and psychological support to orphaned children was notable. Our findings show that maternal mortality can have ongoing repercussions for children's development outcomes regardless of life stage.
The study is limited in that these qualitative results can only be generalised to residents of this community, although certain aspects of the findings may be relevant in other similar contexts in South Africa. It is also important to note that in many ways these results confirm research already conducted within similar communities exploring the implications of maternal death from HIV at any time. Despite these limitations, these findings provide valuable and in-depth insights into the cumulative and complex implications of maternal mortality for infants, children and families within the South African context.
Conclusion
These results reveal the high costs to surviving infants, children and their families of failing to reduce maternal mortality in South Africa. Despite the country being relatively well-resourced, with very good access to antenatal care and skilled birth attendance at health facilities, high levels of maternal mortality persist in South Africa [1]. The response needs to urgently address the preventable causes of maternal mortality by ensuring adequate family planning, antenatal and emergency obstetric care. In particular, in a high HIV-prevalence context such as South Africa, where HIV-positive women are potentially at an increased risk of maternal morbidity and mortality and where prevalence has contributed to slowing progress towards Millennium Development Goal 5 [7,8,17,18,59], joint investment in interventions that address issues of maternal health and HIV is necessary [18]. In addition to highlighting the need to invest to end preventable maternal mortality [60], these findings indicate that an investment in preventing death from maternal causes is necessary to prevent the economic and social impacts that maternal mortality can have for families. The evidence for these impacts and the way in which households are affected by and respond to maternal mortality suggests that the responses to this issue should not just be about health but also include investments in other sectors, such as welfare and familial support, that enable families to respond.
While families, and particularly female family members, seem to rally resources in order to provide children with necessary care in this context, it is important to ensure that these systems of family support are adequately sustained. Ensuring satisfactory access to and knowledge of social protection is crucial for these children and families. Additional qualitative evidence is needed to explore differential effects for children by gender and to guide future research and inform policies and programs aimed at supporting maternal orphans and other vulnerable children throughout their development.
The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. The authors acknowledge the willing participants from the Vulindlela community and the valuable support of the field researchers Zakhona Ndolvu and Sanelisiwe Jali. The research was conducted while LK was based at the Human Sciences Research Council, South Africa, and the HSRC provided important administrative and logistical support for the project.
Declarations
This article has been published as part of Reproductive Health Volume 12 Supplement 1, 2015: True costs of maternal death. The full contents of the supplement are available online at http://www.reproductive-health-journal. com/supplements/12/S1. Publication charges for this supplement were funded by Family Care International and the FXB Center for Health and Human Rights. | 2017-07-11T08:16:42.325Z | 2015-05-06T00:00:00.000 | {
"year": 2015,
"sha1": "b0ec4ca44feb60637539815f6f65eea64cc85c14",
"oa_license": "CCBY",
"oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/1742-4755-12-S1-S5",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "001a7817763d8ba5ffe15245cec249212646b86f",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
249331356 | pes2o/s2orc | v3-fos-license | Three-dimensional printing of orbital computed tomography scan images for use in ophthalmology teaching
Introduction: The use of three-dimensional (3D) printing in healthcare has contributed to the development of instruments and implants. 3D printing has also been used for teaching future professionals. In order to obtain a good 3D-printed piece, it is necessary to have high-quality images, such as those from a computed tomography (CT) scan, which shows the anatomy in different cuts and allows for a good image reconstruction. Purpose: To propose a protocol
INTRODUCTION
The field of radiology has undergone a major evolution in the last century. The development of digital technology equipment for diagnostic procedures has improved the image quality, providing greater precision in diagnostics and, consequently, in treatment and life expectancy. (1) It is important for the health care professional to analyze images that contain the anatomical or pathological structures and metabolic activity of the region under study. The purpose of medical images is to assist in the diagnosis and to provide material to monitor treatments. (1)(2)(3)(4)(5) The acquisition of medical images has to follow protocols depending on how the image is acquired and the processing it will undergo after its reconstruction. After acquiring the digital image (raw image), the acquisition software automatically processes it. Thus, each region of the image can be edited and have a different grey tone value.
The computed tomography (CT) image formation process is divided into two phases, data acquisition and image reconstruction. The first phase comprises the basic operation of the CT scan, while the second is the conversion of the captured data into an image. (6) The Digital Imaging and Communications in Medicine (DICOM) standard was created to homogenize the processing, formatting and printing of images. (7) DICOM standardizes the images from all types of exams (CT, magnetic resonance, radiography, ultrasound), storing them in a single format. This enables the exchange of information.
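To illustrate how a DICOM CT series of the kind described here is typically loaded in practice, the following is a minimal sketch assuming the third-party pydicom library; the choice of library, the folder path and the variable names are illustrative and not part of the original work.

```python
# Minimal sketch of loading a DICOM CT series into a 3D array,
# assuming the pydicom library; the file path is hypothetical.
from pathlib import Path
import numpy as np
import pydicom

slices = [pydicom.dcmread(p) for p in Path("orbit_ct/").glob("*.dcm")]
# Sort slices along the scan axis using the standard DICOM position tag.
slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))

# Stack pixel data and convert stored values to Hounsfield units
# via the DICOM rescale slope/intercept tags.
volume = np.stack([s.pixel_array for s in slices]).astype(np.int16)
hu = volume * float(slices[0].RescaleSlope) + float(slices[0].RescaleIntercept)
print(hu.shape, hu.min(), hu.max())
```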
The images from the CT scan have more details when compared to conventional radiography because CT images can be reconstructed in several planes. CT images also make it possible to contrast any lesions with the other structures of the orbit, for example. (8) The CT scan has been considered the basic method of orbital imaging semiology due to its ability to display the bone structure in detail and provide accurate information about all orbital structures.
With the increasing development of technology, it would not be unexpected to witness the development of a printer that prints objects in three dimensions (3D). (9) 3D printing enabled the creation of custom objects from a virtual project. This technique consists of the automated construction of solid objects, layer by layer, with a certain type of material, from a digital file with the 3D image of the object. (3,10,11) Medical applications for 3D printing have expanded in recent years and the technology is expected to revolutionize health care by providing many benefits, such as the customization of medical products, medicines, and equipment, increasing the effectiveness of known procedures, and increasing the reproduction of innovative techniques. (10,12) Examples of medical uses of 3D printing are the manufacture of living tissues and organs and the creation and customization of prostheses, implants, and anatomical models for pharmaceutical use. (12)
The materials used to print in 3D are called filaments, and among these, the best known and most used is polylactic acid (PLA), a bioplastic polymer that has good malleability, lightness, ease of processing, and a variety of colors. (13) The 3D printer prints from a 3D digital file that must be in the stereolithography (STL) format, which is compatible with various digital design software packages and widely used for rapid prototyping and other forms of computerized manufacturing.
When the image is in STL format, it can be processed by Cura slicing software (Ultimaker), which is responsible for dividing the solid into layers, leading it to be printed layer by layer until it shapes the final object. For that, it is necessary to set up the software in advance with the information about the maximum printing area, the thickness of the filament, the desired thickness for each layer, the printing speed and the codes to start and stop the machine. By combining and using the correct parameters, it is possible to obtain a piece printed in 3D that is faithful to the original image. (13) As additive manufacturing is being used in the medical field, several ideas and projects have emerged in different areas, including the visual sciences field, where prototypes of instruments, implants, and educational models have been created at an accessible cost.
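The layering step that the slicer performs can be illustrated with a toy sketch: given the vertical extent of a model and the configured layer thickness, it lists the z-planes at which the solid is cut. This is a simplification of what Cura actually does (no toolpaths, infill, supports, or machine start/stop codes), and the function below is hypothetical.

```python
def layer_planes(z_min_mm, z_max_mm, layer_height_mm=0.2):
    """Return the z-heights of the horizontal cutting planes for a model
    spanning [z_min_mm, z_max_mm], one plane per printed layer."""
    n_layers = int(round((z_max_mm - z_min_mm) / layer_height_mm))
    return [round(z_min_mm + (i + 1) * layer_height_mm, 3)
            for i in range(n_layers)]

# A hypothetical 40 mm tall orbital model at 0.2 mm layers -> 200 planes.
planes = layer_planes(0.0, 40.0)
print(len(planes), planes[0], planes[-1])  # 200 0.2 40.0
```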
Based on this, we propose a protocol for using 3D printing to enhance teaching and promote the 3D printing technology in health care.
METHODS
This study was approved by the ethics committee of the Universidade Federal de São Paulo number 1225/2019.
Computed tomography scan images of the orbits in the DICOM format were acquired from two exams performed at the Department of Diagnostic Imaging at Hospital São Paulo of the Escola Paulista de Medicina of the Universidade Federal de São Paulo.
The software InVesalius, version 3.1.1, developed at the Renato Archer Technology and Information Center (CTI Renato Archer), and the software Blender, version 2.80, both free and licensed under the General Public License, were used to edit the images and convert planar images into three-dimensional images.
The 3D models created from the CT scan images were saved in the STL format to be transferred between the software. The 3D printer Hadron Lite and PLA plastic filaments were used to print the anatomical parts.
The protocol followed to elaborate the virtual orbit model for 3D printing was divided into four phases.
Quality and acquisition of computed tomography scan orbit images
The quality of the image acquisition will influence the quality of the 3D reconstruction of the piece. Ideally, the exam should contain a minimum of noise and movement artifacts, as these directly influence the manipulation and printing of the image in three-dimensional models.
Evaluation of anatomical structures
The exam should be performed using the highest image resolution and the thinnest cut available, usually 1 mm. The thicker the cut, the worse the image quality, making it difficult to produce a good 3D image for printing and possibly missing some important anatomical structures of the orbit.
Delimiting and cleaning the virtual model using the software InVesalius and Blender
The CT images were imported to the InVesalius software. Before selecting the regions of interest, it was important to understand the grey scale representation of the exam. The lighter shades of grey represent the denser tissues, while the darker shades represent the less dense tissues.
Masks are created to select different regions, which represent structures by their density, making it possible to select bone, muscular or other tissues. In this scenario, we selected the bone structure, which can be seen layer by layer in the axial, coronal and sagittal planes, as seen in figure 1.
After choosing which anatomical structure will be worked on, we started delimiting the regions that are relevant to the project and excluding the other regions. After the virtual model was completed, the export feature was used to save the file in STL format so that further editing could be performed in the Blender software, such as eliminating image-damaging artifacts and refining the virtual model for better reproduction by the 3D printer.
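For readers who prefer a scripted equivalent of this mask-and-export step, the sketch below thresholds a Hounsfield-unit volume at an illustrative bone value and writes an STL surface. It assumes the scikit-image and numpy-stl libraries and a volume array such as the one loaded in the earlier DICOM sketch; the paper itself performed this step interactively in InVesalius, and the threshold and spacing values here are only placeholders.

```python
# Programmatic analogue of the InVesalius mask-and-export step, assuming
# scikit-image and numpy-stl are installed; `hu` is a Hounsfield-unit
# volume and `spacing_mm` its voxel size (both hypothetical inputs).
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl

BONE_HU = 300  # illustrative threshold; the real choice depends on the scan

def export_bone_stl(hu, spacing_mm=(1.0, 0.5, 0.5), path="orbit_bone.stl"):
    # Extract the bone iso-surface at the chosen density level.
    verts, faces, _normals, _values = measure.marching_cubes(
        hu, level=BONE_HU, spacing=spacing_mm)
    # Copy each triangle of the surface into an STL mesh and save it.
    surface = mesh.Mesh(np.zeros(faces.shape[0], dtype=mesh.Mesh.dtype))
    for i, face in enumerate(faces):
        surface.vectors[i] = verts[face]
    surface.save(path)
```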
Preparation for 3D printing
The software Repetier-Host is responsible for converting the STL file into an archive with instructions for the 3D printer to perform as programmed. Features such as speed, type of support and thickness are defined using this software.
In this project, we used a layer thickness of 0.2 mm, slightly thicker than a strand of hair. These settings vary according to the type of material and the 3D printer in use.
The 3D printer we used was the 3D Hadron Lite. Its extruder nozzle is 0.2 mm, which results in good printing quality. We used a temperature of 200°C, adequate for the PLA material.
The support structure was created as lines, since this facilitates removal afterwards and reduces the risk of damaging the final piece. However, line support structures also have a smaller contact area with the piece, reducing support and consequently causing a small quality loss in structures with more angles.
During the printing process, it was necessary to use active forced ventilation to enable faster solidification of the material after extrusion, avoiding deformations and speeding up the printing process.
The support used for printing is slightly less resistant than the printed parts, facilitating its removal. Even so, there is some loss of structures that are very close together, and some remains of the line supports are left behind because they sit in very narrow places that prevent their removal, as seen in figure 3. Figure 4 shows that the quality of the printed anatomical models can vary according to the software used for refinement, even when using the same type of material.
The use of the Blender software to refine the virtual model significantly improves the quality of printing, because it polishes the piece, bringing it a little closer to the real anatomical structure.
RESULTS
The printed anatomical pieces reproduced most structures, both bone and soft structures, satisfactorily, as shown in figure 2. However, there were some problems during printing, such as the loss of small bone structures that are surrounded by muscles and lack other support; these parts ended up being lost when the line supports were removed. Since the CT scan makes 1 mm slices, it produces a kind of "step" in the model during the transition from one cut to the next, making it look rougher and coarser.
The thinnest slice provided by the CT scan is 1mm, and the printer has a 0.2mm extruder nozzle, which enabled the creation of 5 layers for each cut of the CT scan. The Blender software helps to smooth the structures of the CT scan images so that it is possible to recover part of the real structure, and then generate an image with higher definition, as shown in figure 5.
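For reference, the layer count per cut follows directly from these two figures: 1 mm (CT slice thickness) / 0.2 mm (printed layer height) = 5 printed layers per CT slice. A thinner layer height would increase this count and further smooth the "steps" between cuts, at the cost of a longer printing time.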
DISCUSSION
The image acquisition method of the CT scan will influence the quality of the 3D image created for printing. It is important to have thin cuts, of 1 mm, to allow for a better-quality 3D reconstruction of the image. (6) During the CT scan examination, it is important that the patient does not move, because movement can generate artifacts in the acquired images, making them blurry and inadequate for the 3D image reconstruction process.
InVesalius is a software package designed especially for working with medical images. Its use enables the refining of a predetermined region into a three-dimensional image to be printed. This is done by selecting darker and lighter pixels along a grey scale and reconstructing the image. It is worth noting that regions that are not of interest are also selected in this process, which is why it is important to use the Blender software afterward. This software virtually cleans the image, allowing for a printing process without significant quality loss.
The process of delimiting and cleaning the image requires time, experience, and knowledge about the anatomy of the region being studied. The orbital region is very complex and composed of many structures, so it is important for the person who is working with the images to have enough knowledge to identify the small structures that could otherwise be excluded inadvertently. The printing speed was reduced to increase the quality of printing in order to maintain resemblance with the medical image, preserving the details and getting the most out of the 3D printer capacity. The 3D printer and the material we used in this project have an accessible cost, which explains a more modest quality of printing.
The challenges encountered during the printing process were the misalignment of some layers, the detachment of supports, the collapse of regions lacking enough support, and "stringing", which is a well-known problem in 3D printing. Stringing is the creation of thin filament strands during the movement of the 3D printer across open parts of the piece. This leaves imperfections on the surfaces and also makes the piece difficult to clean.
Some problems are commonly faced while working with 3D printed pieces. One of the problems is the difficulty in removing the support structures from the piece. The support structure is necessary when printing pieces that have a slope closer to the horizontal line or that are suspended, unconnected to the main piece. The removal of the support structure may cause damage and loss of parts if the support firmly adheres to the piece.
The opposite problem may also occur: supports may be deliberately omitted to avoid damage during their removal. In this case, structures may collapse, especially in areas with more horizontal inclinations, which can cause the loss of relevant parts of the printed piece.
The use of water-soluble material for the support parts would be ideal because it makes it easier to remove the support parts, with less risk of loss of small structures and damage to the piece. The water-soluble material has a higher cost and requires a 3D printer that supports the use of two filaments simultaneously, which makes this option less accessible. The support removal in this project was manual with the aid of tools due to the material resistance during the removal.
The quality and resistance of the printed piece were considered adequate. The good resistance was due to the configuration implemented for printing, which increased the density of the piece. These configurations also impacted the printing time, which varied from 3 to 8 hours depending on the structure being printed, with soft tissues printing faster and bone structures taking longer.
Despite the problems encountered during the printing process and the removal of the supports, the anatomical piece printed in 3D managed to reproduce well most of the essential structures of the orbit needed for the student's learning.
CONCLUSION
The medical images printed in three dimensions closely resembled the actual orbital structure. This indicates the viability of this protocol to produce more three-dimensional printed pieces from computed tomography scan images with a didactic goal.
Using this same protocol, it will likely be possible to print different pieces from other anatomical areas. It is important to recognize that it is still not possible to print in three dimensions a piece that is 100% identical to the actual anatomical structure, but the resemblance is probably enough to use it as an additional resource for teaching purposes. | 2022-06-04T15:10:11.394Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "10e914524b4a27788c46cd2a9412d385e395500c",
"oa_license": "CCBY",
"oa_url": "https://www.rbojournal.org/wp-content/uploads/articles_xml/0034-7280-rbof-81-e0042/0034-7280-rbof-81-e0042.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "02f7cf7f45776bd54ee387e4c2b036f1b0daf28a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
261934212 | pes2o/s2orc | v3-fos-license | Is the Pain killing you? Could Pain interference be a warning signal for midlife mortality?
Although prior studies have documented an association between various measures of pain and mortality, none of those studies has evaluated whether the association between pain and mortality varies significantly by age. We suspect that pain—particularly pain that interferes with the ability to lead a normal life—could be an early warning sign that may portend increased risk of physical impairment and mortality later in life. In this paper, we investigated whether pain was associated with increased mortality risk, particularly in midlife. Data came from the Midlife in the US study, which sampled non-institutionalized, English-speaking adults aged 25–74 in the contiguous United States in 1995-96. Our analysis included 4041 respondents who completed a follow-up self-administered questionnaire in 2004-05, 2703 of whom completed another self-administered questionnaire in 2013-14. We modeled mortality through December 31, 2021. In demographic-adjusted models, pain interference was more strongly associated with mortality than other pain measures, and the association was stronger at younger ages. The hazard ratio for pain interference declined from 1.39 per SD (95% CI 1.26–1.54) at age 60 to 1.14 (95% CI 1.04–1.24) at age 90. Although potential confounders accounted for more than 60% of the association with premature mortality, pain interference remained significantly associated with increased mortality rates (HR = 1.13 at age 60, 95% CI 1.02–1.26). We found no evidence that the association between pain and mortality was driven by cancer. If anything, pain interference was more strongly associated with cardiovascular than cancer mortality. At the oldest ages, physical function is likely to be a better predictor of mortality than pain. Yet, pain interference may be a useful warning sign at younger ages, when there are fewer physical limitations and mortality rates are low. It may be particularly helpful in identifying risk of premature mortality in midlife, before the emergence of severe physical limitations.
Introduction
Physical function is one of the best prognostic markers of short-term mortality risk among adults (Glei et al., 2016; Goldman, Glei, & Weinstein, 2016, 2017). Pain could also have prognostic value if it represents a precursor to physical limitations that occur later in life. In light of the strong relationship between pain and physical limitations, the National Pain Strategy proposed a new construct that combines pain and physical function: high-impact chronic pain is defined as persistent pain with substantial restriction of life activities for six months or more (Interagency Pain Research Coordinating Committee, 2016; Dahlhamer, 2018; Von Korff et al., 2016).
We suspect that pain, in particular pain that interferes with the ability to lead a normal life, could be an early warning sign that may appear in midlife but portends increased risk of physical impairment and mortality later in life. Such an early warning sign would be valuable because it may provide an opportunity for interventions that stave off more severe health consequences that are difficult to reverse.
Evidence from prior studies is mixed. An early meta-analysis found that although the magnitude of the association was somewhat stronger for widespread pain, the pooled estimate was still not significant (Smith et al., 2014). A subsequent meta-analysis reported a significant association between widespread pain and mortality (Macfarlane et al., 2017). Later research evaluated the association between various measures of pain and mortality in data from two English studies; it found that the relationship with mortality was strongest for extreme pain interference, whereas the associations with any pain, widespread pain, and number of pain sites were not significant net of confounders (Smith, Wilkie, Croft, & McBeth, 2018). Smith et al. (2018, p. 242) concluded: "The impact of pain was more important than the presence or extent of pain in the relationship between pain and mortality." To our knowledge, no prior study has evaluated whether the association between pain and mortality varies significantly by age. That is, no one has demonstrated a significant interaction between age and pain on the risk of mortality. However, one prior study (Andersson, 2009) stratified the sample into younger (25-64) versus older (65-74) persons at baseline. It found that the association between widespread chronic pain and mortality appeared to be stronger in the younger group than in the older group, but there was no indication that the difference was tested for significance. More importantly, that analysis did not appear to account for age as a time-varying covariate in order to evaluate the effect on mortality by age at the time of death.
Here, we use measures of pain frequency by type (headaches, backaches, joint pain/stiffness, pain in extremities), prevalence of chronic pain, and the severity of pain interference to predict mortality over 17 years among a US national sample observed at ages 30-93. We expect the association with mortality will be strongest for pain that interferes with normal activities. We evaluate whether the association between pain and mortality is stronger at younger ages, when few people exhibit physical limitations and the mortality rate is low. In addition, the association between pain and mortality is compared with the corresponding magnitude for physical limitations, which is likely to be one of the best (albeit more proximate) predictors of mortality. Finally, we investigate the extent to which the association may result from confounding with other factors that affect both pain and mortality. Is pain merely a warning signal or could it have a causal effect on mortality?
Data
The Midlife in the United States (MIDUS) study targeted noninstitutionalized, English-speaking adults aged 25-74 in the contiguous United States (Brim et al., 2020). Details regarding the sampling strategy and response rates are provided in Text S1. At Wave 1, the original cohort included 7108 participants who completed a phone interview (fielded January 1995-September 1996), 6325 of whom also completed a mail-in self-administered questionnaire (SAQ). At Wave 2, 4963 completed a follow-up interview (fielded January 2004-August 2005) and 4041 completed the SAQ (61% of 6628 survivors from the Wave 1 cohort). At Wave 3, 3294 (55% of survivors) completed the main phone interview (fielded May 2013-April 2014) and 2924 completed the SAQ.
Because the measures of chronic pain and pain interference were not asked at Wave 1, we restricted our analysis to those who completed the SAQ at Wave 2 (N = 4041 respondents aged 30-84). Among those, 2703 also completed the SAQ at Wave 3 (when they were aged 39-93), yielding a total of 6744 observations.
The MIDUS study was approved by the Educational and Social/ Behavioral Science institutional review board at University of Wisconsin, Madison [#SE-2011-0350].Informed consent was obtained from all participants.
Mortality
Vital status was ascertained through searches of the National Death Index, survey fieldwork, and longitudinal sample maintenance (Ryff et al., 2022). To ensure the completeness of mortality follow-up, we analyzed deaths only through December 31, 2021 (see Text S2 for details). Among the analysis sample, there were 860 deaths after Wave 2; the youngest death occurred at age 42 and the oldest at age 97.
Given the total number of deaths among our analysis sample, we had limited statistical power to model cause-specific mortality. Nonetheless, we estimated auxiliary models for broad groups of causes (see Text S2 for detailed ICD-10 codes): 1) cancers (228 deaths); 2) cardiovascular disease (273 deaths); and 3) a residual category of all other causes (345 deaths).
Predictors
Pain measures, physical limitations, age, and chronic conditions were specified as time-varying covariates, measured first at baseline (Wave 2) and updated at Wave 3. The remaining analysis variables were measured only at baseline. Table S1 shows descriptive statistics by survey wave for all the covariates included in the analysis.
Pain of various types.
Respondents were asked how often, during the past 30 days, they experienced four types of pain: 1) headaches; 2) backaches; 3) aches or stiffness in joints; and 4) pain or aches in extremities (arms/hands/legs/feet). The six response categories for each of these questions ranged from "not at all" to "almost every day."
Chronic pain and pain interference.
Respondents were also asked: "Do you have chronic pain, that is do you have pain that persists beyond the time of normal healing and has lasted anywhere from a few months to many years?" Those who reported any chronic pain were asked about the extent to which pain interferes with various activities during the past week: 1) general activity; 2) mood; 3) relations with other people; 4) sleep; and 5) enjoyment of life. Each item was rated on an ordinal scale ranging from 0 ("did not interfere") to 10 ("completely interfered"). These questions represent 5 of the 7 items in the Brief Pain Inventory Short Form (Cleeland, 2009a; 2009b). The severity of pain interference index was computed as the average across the 5 items (α = 0.95 at both waves).
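As a concrete illustration, the construction of the severity index can be sketched in a few lines of Python. The column names below are hypothetical (the MIDUS variable names are not given here); the helper applies the usual Cronbach's alpha formula, which could be used to check internal consistency against the reported α = 0.95.

import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of the item total)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical names for the five 0-10 interference items.
cols = ["interf_activity", "interf_mood", "interf_relations",
        "interf_sleep", "interf_enjoyment"]
df = pd.DataFrame(np.random.randint(0, 11, size=(100, 5)), columns=cols)

df["pain_interference"] = df[cols].mean(axis=1)  # severity of pain interference index
print(f"Cronbach's alpha = {cronbach_alpha(df[cols]):.2f}")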
Index of physical limitations.
Respondents were asked, "How much does your health limit you in doing each of the following?Lifting or carrying groceries; climbing several flights of stairs; bending, kneeling, or stooping; walking more than a mile; walking several blocks; walking one block; vigorous activity (e.g., running, lifting heavy objects); moderate activity (e.g., bowling, vacuuming)."The response categories for each of the 8 physical tasks were coded on a four-point scale (0 = not at all, 1 = a little, 2 = some, 3 = a lot).Based on Long and Pavalko (2004), we constructed an index by summing the 8 items (potential range 0-24), adding a constant (0.5), and taking the logarithm of the result, which allows for relative rather than absolute effects.
Demographic characteristics.
All models controlled for age, sex, and race/ethnicity. Age was measured at the time of the phone interview. Race and ethnicity were based on self-identification and recoded into the following categories: non-Hispanic White, non-Hispanic Black or African American, non-Hispanic other race (including American Indian or Alaska Native, Asian, Native Hawaiian, or Pacific Islander), and Hispanic.
Other potential confounders.
The other potential confounders comprised marital status, a composite measure of relative socioeconomic status (SES), smoking, obesity, and various chronic conditions. Most of the confounders were measured only at baseline (Wave 2) to avoid potential endogeneity (e.g., pain at Wave 2 may have exacerbated obesity at Wave 3). The exceptions were chronic conditions, which were treated as time-varying covariates because they were likely to be a cause rather than a consequence of pain/limitations. See Text S3 for details regarding the construction of the measures.
Analytic strategy
We used standard practices of multiple imputation to handle missing data (Rubin, 1996; Schafer, 1999); see Text S4 for details. We began by examining the age pattern across various measures of pain as well as the index of physical limitations. Next, we investigated the bivariate association between pain interference and physical limitations.
Then, we fitted Cox hazard models to test the association between pain measures and mortality, using age as the time metric to estimate age-specific mortality. A robust variance estimator was used to correct for family-level clustering. In addition to age, all models controlled for sex and race/ethnicity. For comparison, we estimated the corresponding association between physical limitations and mortality, adjusted for the same demographic characteristics. In subsequent models, we adjusted sequentially for potential confounders of the association between pain interference and mortality: marital status; SES; smoking; obesity; and chronic conditions.
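A schematic version of this specification, assuming the Python lifelines package and hypothetical variable and file names (the authors report using Stata), might look as follows. Age as the time metric is handled through left truncation at the age of interview, and the cluster argument yields the family-level robust variance.

import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical long-format frame: one row per respondent observation
# interval, with age as the time metric (left truncation at entry age).
df = pd.read_csv("midus_analysis.csv")  # hypothetical file

cph = CoxPHFitter()
cph.fit(
    df,
    duration_col="exit_age",   # age at death or censoring
    event_col="died",
    entry_col="entry_age",     # age at interview (late entry)
    cluster_col="family_id",   # robust variance for family-level clustering
    formula="pain_interference + female + C(race_eth)",
)
cph.print_summary()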
We tested the proportionality assumption for each of the covariates and found evidence that the hazard ratio (HR) varied significantly by age for the following covariates: non-Hispanic Black, socioeconomic status, backaches, joint pain, extremity pain, pain interference, diabetes, and heart trouble. Thus, the final models included interactions between age and those covariates.
Results
The prevalence of pain followed different age patterns depending on the type (Fig. 1). The percentage that reported any headaches declined with age, whereas backaches decreased only slightly with age and the other two types of pain (i.e., joint, extremities) increased with age, particularly between ages 40 and 50. As shown in Fig. 2, chronic pain and pain interference also rose with age, but the age-related increase was much steeper for the prevalence of a physical limitation. That is, the association with age was much weaker for pain interference than for physical limitations. For example, the correlation between age and the continuous measures was only 0.02 for pain interference versus 0.40 for physical limitations.
Relationship between pain interference and physical limitations
One-fifth of the sample reported neither pain interference nor physical limitations (Table S2), but that share was higher among those aged 30-49 (36%) than among those aged 70 and older (7%). The largest share (33%) reported no pain interference but low physical limitations.
Fewer than 7% reported high levels of both pain interference and physical limitations, but that percentage increased with age: 4% at ages 30-49 versus 9% at ages 70 and older. Very few (<1%) reported high pain interference but no limitations.
Only 4% reported high limitations but no pain interference, although that value increased with age: 1% at ages 30-49 versus 9% at ages 70 and older. Among those aged 70 and older, 44% reported medium or high physical limitations, but low or no pain interference. Among that group, the percentage with at least "some" limitation was highest for vigorous activity such as running (92%), walking more than one mile (79%), bending/kneeling/stooping (65%), and climbing several flights of stairs (59%); only 40% reported at least some limitation walking one block.
Hazard models adjusted for demographic characteristics
In the demographic-adjusted model, frequency of headaches was weakly associated with mortality (HR = 1.09 per SD, 95% CI 1.01-1.17; Table S3, Model 1). As demonstrated in Fig. 3, the associations with other types of pain diminished with age. For example, the HR for extremity pain was 1.35 per SD (95% CI 1.18-1.55) at age 60, but declined to 0.98 (95% CI 0.88-1.09) by age 90.
Prevalence of chronic pain also exhibited a modest significant association (HR = 1.16 standardized effect size, 95% CI 1.09-1.24; Table S4, Model 5). The index for the severity of pain interference was most strongly associated with mortality at younger ages (Model 6). As shown in Fig. 4, the demographic-adjusted HR for pain interference declined from 1.39 per SD (95% CI 1.26-1.54) at age 60 to 1.14 (95% CI 1.04-1.24) at age 90. Among the pain measures, interference demonstrated the strongest relationship with mortality. Nonetheless, the association with the index of physical limitations was, by far, the strongest (HR = 1.90 per SD, 95% CI 1.69-2.12, Model 7).
Hazard models adjusted for additional confounders
Although pain interference predicted mortality, the relationship was not necessarily causal.The association could be spurious, resulting from confounding with other factors that affected both pain and mortality.
In Table 1, we adjusted sequentially for additional confounders. Model 1 controlled for marital status, which resulted in little change in the HR for pain interference. Model 2 further adjusted for SES, which substantially weakened the HR for pain interference (1.28 at age 60, 95% CI 1.15-1.43); it also weakened the age interaction, which was no longer significant. Thus, part of the reason that pain interference was associated with premature mortality appears to be that socioeconomically disadvantaged Americans were more likely to suffer pain interference as well as a higher risk of midlife mortality.
Finally, Model 5 adjusted for chronic conditions. Net of all these potential confounders, the HR for pain interference was substantially reduced (1.13 at age 60, 95% CI 1.02-1.26). As shown in Fig. 4, there was virtually no change over age in the HR for pain interference based on the fully-adjusted model. These results suggest that more than 60% of the association between pain interference and mortality at younger ages was a result of confounding with other factors (i.e., SES, smoking, obesity, and chronic conditions).
Cause-specific mortality
In auxiliary analyses, we investigated the association between pain interference and mortality from broad causes of death. In the demographic-adjusted model (Table S5, Model 1), the association with pain interference appeared to be slightly stronger for cardiovascular (HR = 1.30 per SD, 95% CI 1.17-1.44) than for cancer (HR = 1.22, 95% CI 1.08-1.39) or the residual category (HR = 1.22, 95% CI 1.11-1.34). In the fully-adjusted model (Model 2), the HR for pain interference was substantially reduced and no longer significant for cancer and the residual category. However, the association remained significant for cardiovascular mortality (HR = 1.18, 95% CI 1.05-1.32). Confounders accounted for more than half of the association with cancer mortality and nearly half of the association with mortality from the residual set of causes, but only about one-third of the association with cardiovascular mortality.
Discussion
Pain interference was a notable warning signal of heightened mortality risk, particularly in midlife. The association was stronger than the corresponding relationship with the frequency of different types of pain or a binary indicator of chronic pain.
Some researchers have suggested that the association between pain and mortality may be driven by cancer (Smith et al., 2014), but we found no evidence of that. If anything, pain interference was more strongly associated with cardiovascular deaths than cancer mortality. This result was consistent with prior work suggesting that severe chronic pain was more strongly associated with cardiovascular mortality, particularly for deaths resulting from ischemic heart disease, than with all-cause mortality (Torrance et al., 2010).
The weak relationship between age and pain interference may reflect age-related changes in people's expectations for pain and normal activity. At older ages, people may have adjusted their activity levels to accommodate pain and physical limitations. Someone in their 80s is likely to have some difficulty running, kneeling, or climbing stairs, but it may not interfere with their normal daily life because they avoid those activities.
Compared with pain interference, physical limitations were more strongly associated with mortality, probably because they were more proximate. However, pain interference could represent a precursor to physical limitations that do not emerge until later in life. The risk of mortality is low in midlife, but those early deaths can have an undue influence on life expectancy. An early warning signal could be valuable if that information can be used to identify modifiable factors that might delay mortality.
Fig. 3. Hazard ratios for the relative increase in mortality associated with measures of pain frequency for selected ages, adjusted for demographic factors. Note: The hazard ratios are based on the models shown in Table S3 and are plotted on the log scale. The error bars represent the 95% confidence intervals. We do not show the hazard ratios for mortality below age 60 because only 63 (7%) decedents died below that age.
Warning signal vs. causal factor?
There is a difference between treating pain as a warning signal versus trying to establish that pain has a causal effect on mortality. Our results suggested that more than 60% of the association between pain interference and premature mortality resulted from confounding with SES, smoking (which is potentially modifiable), obesity, and chronic conditions. Nonetheless, in the fully-adjusted model, pain interference remained significantly associated with higher mortality rates. It may be particularly helpful in identifying those at risk of premature mortality in midlife, before the emergence of severe physical limitations that are difficult to reverse.
Pain interference may contribute not only to premature mortality, but also to other adverse outcomes. One study found that pain interference at age 29 increased hazards of subsequent labor force exit and health-related work limitation (Pooleri, Yeduri, Horne, Frech, & Tumin, 2023). Another study found pain interference predicted injurious falls (Cai, Leveille, Shi, Chen, & You, 2020).
Table 1
Hazard ratios for pain interference from Cox models predicting all-cause mortality adjusted for potential confounders. Note: The 95% confidence intervals are shown in parentheses below the hazard ratio. In cases where there was evidence of non-proportional hazards, we interacted the relevant variable with (Age - 60) so that the main effect represents the hazard ratio (HR) at age 60. For example, in Model 5, the HR for pain interference at age 60 is 1.13 per SD. The corresponding HR for age x can be obtained as follows: HR_Interference × (HR_Age×Interference)^(x-60), where HR_Interference is the HR for the main effect and HR_Age×Interference is the HR for the interaction with age. For example, the HR for pain interference at age 85 based on Model 5 is: 1.131 × 0.9999^25 = 1.13. Abbreviation: NA, not applicable. a Standardized (based on the pooled distribution of observations from both waves).
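The table note's formula for recovering the HR at an arbitrary age is easy to script; the numbers below are the ones quoted in the note.

def hr_at_age(hr_main, hr_age_interaction, age):
    # Main effect anchored at age 60: HR(age) = HR_main * HR_interaction ** (age - 60)
    return hr_main * hr_age_interaction ** (age - 60)

print(round(hr_at_age(1.131, 0.9999, 85), 2))  # 1.13, matching the worked example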
If the effect is causal, what is the possible mechanism?
Zajacova et al. (2021) suggested that the mechanisms linking pain and mortality could include use of pain medication (Inoue, Ritz, & Arah, 2022) as well as the effects of pain on allostatic load, immune system suppression, and impairment of growth and tissue repair (Gatchel, Peng, Peters, Fuchs, & Turk, 2007). Torrance et al. (2010) argued that stress could be the mechanism linking severe pain with mortality from ischemic heart disease: severe chronic pain induces elevated cortisol and other manifestations of the stress response, which accelerate the atherosclerotic process.
Limitations
There are other factors that may be associated with both pain and mortality risk, but it is unclear whether they represent confounders or mediators. For example, physical activity, perceived stress, social activity, drug/alcohol abuse, and sleep quality are likely to be influenced by pain levels, which would make them mediators. However, the association could also be bidirectional (e.g., stress exposure and drug abuse could heighten pain sensitivity). Unfortunately, we could not disentangle the direction of the effects. If we wanted to adjust for these variables as confounders, we would need measures much earlier in life before pain and physical limitations were evident. If we adjusted for these variables at baseline, they would probably attenuate the association between pain interference and mortality, but we would have no way of determining whether that was because of confounding or because they acted as mediators. Pain could represent a root cause that leads to other problems, which might be more proximate determinants of mortality.
Conclusion
At the oldest ages, physical function is likely to be a better predictor of mortality than pain. Yet, pain interference may be a useful warning sign at younger ages, when there are fewer physical limitations. It could signal the need to look for other underlying problems that heighten mortality risk. Feel like the pain is killing you? It just might be.
Declaration of interests
The authors declare the following financial interests/personal relationships which may be considered as potential competing interests: Provided consulting services to the University of California-Berkeley, the University of California-Riverside, and Rose Li Associates (DAG).
Fig. 1 .
Fig. 1. Smoothed plot of pain prevalence (by type) across age. Note: We plotted the percentage reporting any pain of the specified type across age for the pooled sample of observations at Waves 2 and 3 using the "lpoly" command in Stata 16.1 (StataCorp, 2019) to perform local mean smoothing, also known as the Nadaraya-Watson estimator (Nadaraya, 1964; Watson, 1964). This graph is restricted to the age range 34-90 because we have very few observations below age 34 and above age 90.
Fig. 2 .
Fig. 2. Smoothed plot of the prevalence of chronic pain, pain interference, and physical limitations across age. Note: We plotted the percentage reporting chronic pain, pain interference, and physical limitations across age for the pooled sample of observations at Waves 2 and 3 using the "lpoly" command in Stata 16.1 to perform local mean smoothing, also known as the Nadaraya-Watson estimator (Nadaraya, 1964; Watson, 1964). This graph is restricted to the age range 34-90 because we have very few observations below age 34 and above age 90.
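For readers without Stata, the local-mean smoother used in these figures is straightforward to reimplement. The sketch below assumes lpoly's default degree-0 fit with an Epanechnikov kernel; the bandwidth value is an arbitrary placeholder, not the one used in the figures.

import numpy as np

def nadaraya_watson(x, y, grid, bandwidth):
    # Degree-0 local polynomial (local mean) with an Epanechnikov kernel,
    # mirroring Stata's "lpoly y x, degree(0)".
    x, y, grid = map(np.asarray, (x, y, grid))
    u = (grid[:, None] - x[None, :]) / bandwidth
    w = np.where(np.abs(u) < 1, 0.75 * (1 - u ** 2), 0.0)
    return (w * y[None, :]).sum(axis=1) / w.sum(axis=1)

# Example: smooth a simulated binary chronic-pain indicator over ages 34-90.
rng = np.random.default_rng(0)
age = rng.uniform(34, 90, 2000)
pain = (rng.random(2000) < 0.25 + 0.003 * age).astype(float)
smoothed = nadaraya_watson(age, pain, np.linspace(34, 90, 57), bandwidth=5.0)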
Fig. 4 .
Fig. 4. Hazard ratios for pain interference by age. Note: The hazard ratios (HRs) indicate the relative increase in mortality rates per SD of pain interference and are plotted on the log scale. The demographic-adjusted HRs are based on Model 6 from Table S4, while the fully-adjusted HRs are based on Model 5 (Table 1). The error bars represent the 95% confidence intervals. We do not show the hazard ratios for mortality below age 60 because only 63 (7%) decedents died below that age. | 2023-09-16T15:14:25.931Z | 2023-09-01T00:00:00.000 | {
"year": 2023,
"sha1": "a202dca6f81580681dc877ed6b393636ad917d68",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.ssmph.2023.101513",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c84615999c55f2eda7fd3d8e77aed85a9c4ff6de",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
235658045 | pes2o/s2orc | v3-fos-license | Schottky diode temperature sensor for pressure sensor
The small silicon chip of a Schottky diode (0.8x0.8x0.4 mm3) with a planar arrangement of electrodes (PSD chip), intended as a temperature sensor that functions under the operating conditions of a pressure sensor, was developed. The forward I-V characteristic of the PSD chip is determined by the potential barrier between Mo and n-Si (ND = 3x10^15 cm-3). A forward voltage UF = 208 mV and a temperature coefficient TC = -1.635 mV/°C (with linearity error kT < 0.4% for the temperature range of -65 to +85 °C) at a supply current IF = 1 mA are achieved. The reverse I-V characteristic has a high breakdown voltage UBR > 85 V and a low leakage current IL < 5 μA at 25 °C and IL < 130 μA at 85 °C (UR = 20 V) because the PSD chip contains a structure of two p-type guard rings along the anode perimeter. The applicability of the PSD chip over a wider temperature range, from -65 to +115 °C, is demonstrated. The separate PSD chip of the temperature sensor is located at a distance of less than 1.5 mm from the pressure sensor chip. The PSD chip provides input data for temperature compensation of pressure sensor errors by an ASIC and for direct temperature measurement.
Introduction
Semiconductor temperature sensors are among the most widely applied elements for the analysis of physical and chemical processes in a broad range of industries and research. Today temperature sensors are used in the automotive, medical, aviation, space, nuclear, metallurgical and many other industries [1-8]. These elements find application in IoT and consumer products (smart devices and smart homes, HVAC, AI mechanisms) [9,10], are created together with microprocessors and electronic devices using CMOS technologies [1,10-13], and measure temperature for scientific research (biophysics and biochemistry, robotics, metrology, etc.) [2,14,15]. The requirements are determined by the field of application through the following factors: temperature range, temperature coefficient, measurement error, overall dimensions or size, power consumption [13,16-19], and the state of aggregation and aggressiveness of the measured environment (radiation, high temperatures and corrosion) [11,16,17,20,21]. Temperature sensors are created with various types of semiconductor structures: thermistors [1-3,22,23], p-n junction diodes [12,24,25], JFETs [1], fiber-optic [26] and capacitive sensors [4,6]. Schottky diode (SD) temperature sensors should be noted separately, because a wide variety of constructions exist. SDs use epitaxial silicon [19,25,27-29] and polysilicon structures [11] of n- or p-type conductivity with an array of different metals, silicon carbide 4H-SiC with barrier metals of Ni [16,20], Ti [30,31], Pt [32] and V2O5 [18], as well as working layers of GaN [17,33] or graphene [34,35] and many other combinations. Semiconductor structures as well as sapphire [17] or diamond [19] insulators were chosen as substrates for these developments. Additionally, SDs were created on SOI wafers [12,13,34,36]. SD temperature sensors have been demonstrated as separate components or within various electrical connections, for example CTAT and PTAT [10] or Wheatstone bridge circuits [3]. Today one of the relevant directions for sensor development is the measurement of several properties of the environment in a single device [37]. For example, there are methods for simultaneous measurement of temperature and pressure by a single element [4,6,26] or by various elements within a single chip [12,23]. The choice between similar methods for one chip or for individual chips in a single case is determined by the operating conditions, but also by the balance between: 1) chip price, 2) choice of wafer structure and capabilities of the fabrication technology, 3) complexity of the technological route, especially when combining CMOS and MEMS processes, 4) yield of each element, and 5) design features of the assembly [38].
Based on all the conditions above, the structure of a separate temperature sensor chip in the form of an SD, developed for joint use with a pressure sensor, was chosen. The SD temperature sensor was formed by planar technology with the anode and cathode arranged on the same wafer surface (PSD chip). The PSD chips were then placed in a single case at a distance of less than 1.5 mm from the pressure sensor chip. Creating the temperature sensor as a separate chip allows the elements to be independent from each other regarding the choice of initial semiconductor material, the combination of technological processes and, most importantly, the methods of pressure and temperature measurement. The pressure sensor can operate on the piezoresistive effect, using a single sensitive element [39], a classical Wheatstone bridge electrical circuit [40-45] or a new development utilizing a piezosensitive differential amplifier with negative feedback loop (PDA-NFL) circuit [46-50], or on any other effects [4,6,23,26,37]. An additional advantage of creating the temperature sensor as a separate chip is the absence of residual mechanical stresses from the pressure applied to the pressure sensor membrane. Such mechanical stress could significantly influence the current-voltage (I-V) characteristics of a temperature sensor [51-53].
Development of PSD chip
The reasons for using an SD temperature sensor are determined by its properties, which guarantee that the following operational requirements are met: • Low forward voltage UF at supply current IF = 1 mA, which is required by the ASIC of the pressure sensor.
• Low leakage currents (IL < 10 µA at UR = 20 V) and high breakdown voltage (UBR > 80 V) at Troom on the reverse branch of the I-V characteristic, required for linearity of the temperature coefficient (TC) at elevated temperatures [12,13,54] and for connecting the SD cathode to the "ground" contact of the pressure sensor circuit. • High TC values (|TC| > 1.6 mV/⁰C) with low linearity error (dTC < 0.5%) in the temperature range from -65 to 85 ⁰C. • Small chip dimensions for use with the pressure sensor chip in a single case or in other small-sized devices for future developments.
As is known, the physical principle of SD operation is based on the barrier potential difference, for example, between a semiconductor of n-type conductivity with a low doping concentration and a metal with a large work function [54,55], which blocks the free emission of electrons from the metal. SDs can be created on semiconductors of both n-type and p-type conductivity, but n-type conductivity is preferable due to the higher electron mobility [56]. The current through the SD thus obtained was analysed in terms of the thermionic emission diffusion equation, or Richardson equation [17,18,34,57-59]:

I = A·A*·T^2·exp(−q·φB0/(k·T))·[exp(q·V/(n·k·T)) − 1], (1)

where A is the anode area, A* is the Richardson constant, equal to 112 A/cm^2/K^2 for n-type conductivity and 32 A/cm^2/K^2 for p-type conductivity, T is the temperature, q is the electron charge, equal to 1.6·10^-19 C, φB0 is the Schottky barrier height, k is the Boltzmann constant, equal to 8.62·10^-5 eV/K, n is the ideality factor, and V is the external voltage. It should be noted that the forward-bias I-V characteristic is significantly affected by the surface state of the semiconductor, which is determined by the cleanliness of the surface preparation before the barrier metal deposition in the production process. The TC of the SD forward voltage in the open mode (like the current itself) depends on the ideality factor n and the barrier height φB0, which is determined by the choice of metal and the carrier concentration ND in the semiconductor [27-29,36]:

TC = dUF/dT = (UF − n·φB0)/T − 2·n·k/q. (2)

The current on the reverse branch of the I-V characteristic can be calculated as [44,45,52,56]:

IR = A·A*·T^2·exp(−q·(φB0 − ∆φB)/(k·T)), with the image-force barrier lowering ∆φB = sqrt(q·E/(4·π·εS)), (3)

where εS is the permittivity (dielectric constant), equal to 11.8·ε0 = 1.04·10^-12 F/cm, and the maximum electric field E is defined as

E = sqrt(2·q·ND·(V + Vbi − k·T/q)/εS), (4)

where ND is the carrier concentration in the n-type semiconductor and Vbi is the potential of the built-in charge. The reason for a low SD breakdown voltage is the edge leakage current along the surface. A highly doped p+-type region, formed as a guard ring (GR) along the perimeter of the contact window for the SD anode, can reduce the electric field in the Schottky barrier area and, as a result, increase the breakdown voltage [31,60-63]. There are known methods of charge-carrier redistribution in which the space charge regions (SCRs) of two or more neighbouring GRs intersect each other at a voltage close to the single avalanche breakdown of their p-n junctions. This helps to increase the breakdown voltage of the SD relative to a structure with one GR. The increase in breakdown voltage reduces the sharp growth of the leakage current with increasing temperature and therefore extends the temperature range of the PSD chip while maintaining low errors.
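Numerically, Eq. (1) can be inverted at a fixed supply current to obtain UF(T) and hence the TC. The Python sketch below uses an assumed Mo/n-Si barrier height and ideality factor (neither value is stated above); they are chosen only so that the result lands near the measured UF ≈ 208 mV.

import numpy as np

k = 8.617e-5      # Boltzmann constant, eV/K
A_star = 112.0    # Richardson constant for n-Si, A/cm^2/K^2
area = 0.24e-2    # anode area, cm^2 (0.24 mm^2)
n = 1.05          # ideality factor (assumed)
phi_b = 0.63      # Mo/n-Si barrier height in eV (assumed value)

def forward_voltage(i_f, t):
    # Invert Eq. (1) for V at a fixed forward current; with k in eV/K,
    # the product n*k*t is directly in volts.
    i_s = area * A_star * t ** 2 * np.exp(-phi_b / (k * t))
    return n * k * t * np.log(1.0 + i_f / i_s)

t = np.linspace(208.15, 358.15, 151)   # -65 ... +85 degC in kelvin
v = forward_voltage(1e-3, t)           # UF at IF = 1 mA
tc = np.gradient(v, t) * 1e3           # mV/K, i.e. mV/degC
i25 = np.argmin(np.abs(t - 298.15))
print(f"UF(25 degC) ~ {v[i25] * 1e3:.0f} mV, TC ~ {tc[i25]:.2f} mV/degC")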
The PSD chip was created on epitaxial silicon wafers (epitaxial layer of n-type conductivity (100): Wep = 15 µm, ρep ≈ 2.2 Ohm·cm, ND ep = 3·10^15 cm-3; substrate of n+-type conductivity (100): Wsub = 380 µm, ρsub ≈ 0.01 Ohm·cm). The overall dimensions of the PSD chip are 0.8x0.8x0.4 mm^3 (anode area 0.24 mm^2), which are determined by the conditions of chip placement on a case pin together with the pressure sensor and by operation at Isup/F = 1 mA. One wafer with a diameter of 3 inches yields more than 1150 samples of the PSD chip. The barrier metals used in the development are Al and Mo layers. Structures of the PSD chip with one or two GRs were used to analyse the increase in breakdown voltage. The technology route followed this process sequence (with the relevant photolithography (PL) steps): • Creation of a deep highly doped region of n+-type conductivity (xn+ = 7.6 µm, RS n+ = 1.2 Ohm/sq, NS n+ = 3·10^18 cm-3) in the epitaxial layer to form an ohmic contact to the substrate. The equipotential location of this region on the front side of the wafer makes it possible to form the SD cathode and anode on a single surface.
• Formation of highly doped regions of p+-type conductivity (xp+ = 1.3 µm, RS p+ = 71 Ohm/sq) as GRs. Two types of PSD chip were implemented on the same wafer: PSD chip No. 1 has one GR, located along the perimeter of the anode contact window; PSD chip No. 2 has two GRs, where the additional GR has a wider radius. • Etching of SiO2 for contact windows in the areas where the SD electrodes are located. • Deposition of an Al or Mo barrier layer (WMo = 0.2 µm) on the anode area.
The final step is the deposition of a relatively thicker Al layer (WAl = 0.8 µm) over both SD electrode areas, which is necessary for ultrasonic welding to the chip contact pads. The preceding step (and its PL operation) is accordingly unnecessary when Al is used as the barrier metal. An annealing mode in vacuum was found for each of the barrier metals [20,64]. After that process, the two types of metal have the same low leakage currents (IL < 5 μA at UR = 20 V) and breakdown voltages (UBR > 70 V). Annealing was done for Al at T = 480 ⁰C and for Mo at T = 510 ⁰C during t = 15 min at P = 10^-3 Pa. The initial lack of improvement from the double-GR structure was due to an incorrect choice of the distance between the GRs in the PL layout. The gap between the GRs in the PL layout was reduced from 11 to 8 μm. It was necessary to take into account the lateral diffusion of the p+-type regions (≈ 1.1 μm), the SiO2 etching wedge (≈ 0.2 μm) after PL, and the potential SCR propagation of the GRs at UR ≈ 60 V (≈ 2.7 μm) [60]. The structure of PSD chip No. 2 was analyzed with TCAD software (Fig. 3). Samples of the corrected PSD chip No. 2 demonstrated the intended increase in breakdown voltage, UBR > 85 V. The PSD chip No. 2 with the Mo barrier metal is used for the further studies of the temperature sensor.
Experimental research of PSD chip
Group studies of PSD chip No. 2 as a temperature sensor were carried out using the measuring complex National Instruments PXI-1044 and the thermal chamber Espec MC-811 (Fig. 4), with a temperature imbalance over the full volume of less than 0.2 ⁰C at a temperature change rate VT = 0.6 ⁰C/min. All measurements were taken in a volume of dry air. Fig. 5 shows the dependences of the temperature sensor forward voltage change on temperature (colored bars: average values; dark areas: spread) for the two nominal supply currents IF = 1 and 10 mA. Additionally, this study covers a wider range (from -65 to 165 ⁰C) than the operating range (from -65 to 85 ⁰C) for use with the pressure sensor. The small jump in the characteristics at T ≈ 60 ⁰C is caused by the shutdown of the thermal chamber refrigerator. Fig. 6 shows the dependences of the nominal value and linearity error of the TC (at IF = 1 mA) on various temperature ranges of measurement, where the low temperature limit remains constant (Tlow = -65 ⁰C) and the high temperature limit Thigh varies from 75 to 165 ⁰C with a temperature step ∆Tstep = 10 ⁰C. The temperature sensor has average values TC1mA = (-1.640 ± 0.015) mV/⁰С (or (-7786 ± 71) ppm/⁰С) and a linearity error dTC = 0.3% in the operating temperature range of the pressure sensor (from -65 to 85 ⁰C). The high limit of the temperature range for this sensor can be increased up to 115 ⁰C without significant changes in the parameters. Sharp changes in the linear characteristic at elevated temperatures T > 115 ⁰C are seen in Figs. 5 and 6: the nominal value decreases and the linearity error increases with a change of sign. The average TC values decrease by 9.5% (TC10mA = (-1.450 ± 0.013) mV/⁰C), with a similar linearity error dTC < 0.4%, after increasing the supply current from IF = 1 mA to IF = 10 mA in the temperature range from -65 to 115 ⁰C. Thus the PSD chip can be used as a temperature sensor over a wider temperature range than the operating one thanks to the improved reverse-bias I-V characteristic. The results are summarized in Table I. More than 100 samples were included in the statistical data (excluding the less than 4% of samples with a critical defect identified during testing at Troom). The analysis of the temperature sensor error for the forward bias voltage (supply current IF = 1 mA) was carried out according to:
• The temperature hysteresis dTH for two operating temperature subranges, from -65 to 10 ⁰C and from 10 to 85 ⁰C (Fig. 7a). • The long-term stability dUst over 9 hours at T = 30 ⁰C.
• The effect of thermal cycling dUc after 5 cycles during 22 hours in the temperature range from -65 to +165 ⁰C. • The effect of all-round compression dUp at a pressure Pcom = 10 MPa (Fig. 7b). The last test, on the effect of all-round compression, should be noted separately. These studies are application-driven, and their method consists of mass pumping of compressed dry air into a limited volume (about 6000 cm^3). The PSD chip responds to the all-round pressure and, additionally, to the dynamically changing temperature of the external air, which is clearly seen in the sharp dependences in Fig. 7b. This temperature effect is thus also a reason for the rather high error dUp < 1.8%. Studies of the PSD chip under reverse bias of the I-V characteristic were also performed. The dependences of the leakage currents IL (at UR = 20 V) on temperature rising from 25 to 85 ⁰C were measured and are shown in Fig. 7c for 10 samples.
Conclusion
A temperature sensor providing input data for the temperature compensation of pressure sensor errors by an ASIC and for direct temperature measurement has been developed. The chip is located at a distance of less than 1.5 mm from the pressure sensor chip in a single case and operates under the conditions of the pressure sensor. The small-sized temperature sensor (0.8x0.8x0.4 mm^3) in the form of a PSD chip is based on the physical properties of the Schottky barrier between the metal Mo and n-type silicon (ND = 3·10^15 cm-3). Low values of the forward bias voltage, UF = (208 ± 6) mV at IF = 1 mA, are achieved by the optimal mode of metal annealing in vacuum. The breakdown voltage is UBR > 85 V, and the leakage current is IL < 5 μA at T = 25 ⁰C and IL < 130 μA at T = 85 ⁰C (at UR = 20 V) for the PSD chip. These parameters were achieved by the structure of two GRs of p+-type conductivity located at a distance sufficient for their SCRs to intersect; the SCRs merge at a voltage close to the single avalanche breakdown of each GR. The PSD chip has a temperature coefficient TC = (-1.635 ± 0.015) mV/⁰С (or (-7786 ± 71) ppm/⁰С) with a low linearity error dTC < 0.4% and temperature hysteresis dTH < 0.3% in the operating temperature range from -65 to 85 ⁰C at a supply current IF = 1 mA. Additional studies at higher temperatures demonstrated the possibility of PSD chip operation over the wider range from -65 to 115 ⁰C without significant deviations. Increasing the supply current from 1 mA to 10 mA reduces the TC by 9.5% (TC10mA = (-1.450 ± 0.013) mV/⁰С). The error under all-round compression at a pressure Pcom = 10 MPa has a high bound of dUp < 1.8% because this parameter contains two parts: the pressure itself and the additional temperature change in the measuring equipment. The PSD chip also showed a low long-term stability error over 9 hours, dUst < 0.05% at T = 30 ⁰C, and a low error after thermal cycling, dUc < 0.4% (5 cycles during 22 hours in the range from -65 to +165 ⁰C). The developed temperature sensor in the form of a PSD chip, with its small size, low power consumption, high breakdown voltage and highly linear TC, can be used in many of the previously mentioned industries or together with elements other than pressure sensors.
"year": 2021,
"sha1": "aef6d56bb729b9b56cffe1334e73952be285b96d",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2106.14746",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "aef6d56bb729b9b56cffe1334e73952be285b96d",
"s2fieldsofstudy": [
"Engineering",
"Materials Science",
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
250360317 | pes2o/s2orc | v3-fos-license | The COVID-19 pandemic and energy transitions: Evidence from low-carbon power generation in China
The Corona Virus Disease 2019 (COVID-19) has led to a decline in carbon emissions or an improvement in air quality. Yet little is known about how the pandemic has affected the “low-carbon” energy transition. Here, using difference-in-differences (DID) models with historical controls, this study analyzed the overall impact of COVID-19 on China's low-carbon power generation and examined the COVID-19 effect on the direction of the energy transition with a monthly province-specific, source-specific dataset. It was found that the COVID-19 pandemic increased the low-carbon power generation by 4.59% (0.0648 billion kWh), mainly driven by solar and wind power generation, especially solar power generation. Heterogeneous effects indicate that the pandemic has accelerated the transition of the power generation mix and the primary energy mix from carbon-intensive energy to modern renewables (such as solar and wind power). Finally, this study put forward several policy implications, including the need to promote the long-term development of renewables, green recovery, and so on.
Introduction
The Corona Virus Disease 2019 and the resulting strict containment measures have resulted in huge economic contraction and social welfare losses for many countries or regions (Baker et al., 2020;Ding et al., 2020;Nicola et al., 2020). Most governments have called on people to self-isolate for the required period, forced businesses to reduce their activity, and implemented city-wide lockdowns during the pandemic (Fang et al., 2020;Liu et al., 2020). The year 2020 witnessed the sharpest economic contraction since the great depression of the 1930s (IMF, 2020).
To prevent the spread of the virus, China took strong prevention and control measures. These measures include but are not limited to the extension of the Spring Festival holiday (from January 24 to February 10), maintaining social distance, delaying factory commencement dates, traffic control, and even locking down cities (Kraemer et al., 2020; Tian et al., 2020; Li et al., 2021). There is no doubt that the outbreak and spread of the virus are a tragedy and have exerted a tremendous impact on China's economy and society.
This study filled the gap by investigating the COVID-19 effect on energy transitions, using the decarbonization of China's power generation sector as an example. China is a distinguished case study due to its status as the world's largest emitter of carbon emissions, and it thus faces unprecedented pressure to advance energy transitions (Zhang and Chen, 2021). China committed to achieving the carbon peak by 2030 and carbon neutrality by 2060 ("Dual Carbon"). As elsewhere, the power sector will be key to helping China meet its aggressive low-carbon generation targets as well as the broader dual carbon target (Zhao et al., 2020). Information on how the COVID-19 shock has affected the energy transition is critical for China's dual carbon policymaking. However, the question is how to quantitatively analyze the COVID-19 effect on energy transitions from the perspective of low-carbon power generation. Moreover, any attempt to combat global warming depends critically on China's energy transition trajectory, and the direction of China's energy transition has a leading impact globally (Jiang et al., 2019). Therefore, from the perspectives of both academic research and industrial practice, it is necessary to discuss in a timely manner how COVID-19 has affected the direction of the energy transition under the current setting and how the energy industry can find a path to rapid recovery during and after this crisis.
The present research is different from the relevant literature in at least two aspects. To the best of our current knowledge, this is among the first empirical studies that estimate the changes in low-carbon power generation levels before and during the pandemic period relative to the previous period, which contributes to the previous empirical literature concentrating on economic variables and emission reductions (Bekkers and Koopman, 2020; Dang and Trinh, 2021; Oskoui, 2020). Then, based on the stacked data for solar power, wind power, nuclear power, and hydropower, this study used a difference-in-differences (DID) model with historical controls to quantitatively identify the overall effect of the COVID-19 pandemic on the energy transition from low-carbon power generation. The method has recently been applied in a few estimations of the COVID-19 impact (Chen et al., 2020; Wang et al., 2021). Second, this study assessed the heterogeneous impacts of the COVID-19 shock on energy production and the energy mix of different types of energy sources. In the literature, little emphasis has been placed on comparing impacts across different types of power generation or primary energy sources, even though such work is essential for investigating the implications of the COVID-19 crisis for the direction of energy transitions. In contrast, this study analyzed how the crisis has affected progress in expanding low-carbon or carbon-neutral energy sources.
The remainder of the study is organized as follows: In Section 2, we focused on the literature review, and we introduced the data and statistical methodology in Section 3. The overall results were presented in Section 4, which was followed by a further discussion of the heterogeneous results in Section 5. Section 6 concluded and provided some relevant policy implications.
Literature review
The shock of COVID-19 has stimulated intensive research activities. The majority of these studies focused on investigating the economic effects of COVID-19 from multiple perspectives, such as economic output (Morgan et al., 2021; Gharehgozli et al., 2020), household consumption (Martin et al., 2020), labor employment (Hershbein and Holzer, 2021), and the supply chain and financial market (Ali et al., 2020; Baker et al., 2020; Ding et al., 2020). The COVID-19 effect on carbon emissions or air quality (i.e., PM2.5, PM10, and SO2) has also been a hot topic. Recent studies have empirically discussed the reductions in global CO2 emissions (e.g., Liu et al., 2020; Forster et al., 2020; Le Quéré et al., 2020) and the changes in China's urban air quality (e.g., Shi and Brasseur, 2020; Huang et al., 2021; Chang et al., 2020) due to COVID-19. Most studies have found that the COVID-19 crisis has lowered carbon emissions or improved air quality.
Despite the proliferation of studies, how COVID-19 has affected energy transitions is still not clear. On the one hand, COVID-19 could have slowed down energy transitions. The COVID-19 crisis and the related containment measures have significantly reduced energy consumption in many countries, which in turn has influenced the deployment of renewables (IEA, 2020; Chiaramonti and Maniatis, 2020; Zhong et al., 2020). Disruptions caused by the crisis have taken a big toll on the investment in and construction of renewable energy projects. In several countries, the pandemic has made an already challenging investment environment worse, specifically with regard to renewables (Selmi et al., 2021; Ivanov and Dolgui, 2021). From an economic perspective, the crisis has exacerbated financing challenges, which has also slowed the support for and dampened the enthusiasm of investors for energy transitions (Karmaker et al., 2021; Mastropietro et al., 2020). Especially in countries with a strong dependence on fossil fuel industries, governments were likely to transfer the funds originally intended for the energy transition into the fields of health care and social welfare, further slowing down the switch to low-carbon or carbon-neutral energy sources (Birol, 2020; Emma, 2020).
On the other hand, COVID-19 may have accelerated energy transitions. In today's world, a dramatic fall in the costs of renewable energy has sped up the large-scale utilization of renewable energy sources in power generation (Kåberger, 2018). During this pandemic, power demand in various countries has generally decreased (IEA, 2020; Ghenai and Bettayeb, 2021). As a result, power generation capacity has exceeded demand. Grid operators may have prioritized cheap, clean, and environmentally friendly non-fossil energy. In addition, the deglobalization caused by COVID-19 isolation measures has prompted some countries to enhance the localization of supply chains or seek flexible solutions for resource development (Quitzow et al., 2021; Ba and Bai, 2020). In particular, many European countries were continuing to deploy renewable energy sources, while continuous divestment trends in the fossil fuel industries were accelerating in the wake of the crisis (European Commission, 2020; Council of the European Union, 2020).
It can be seen from the above literature that the COVID-19 effect on energy transitions is still controversial. However, the future energy system will face a more complex, diversified, and uncertain situation. Considering that the transition from high-carbon to low-carbon energy sources is a fundamental way of accelerating the power sector transformation (Wei et al., 2021), we used low-carbon power generation as the key indicator for this study. These low-carbon generation sources include renewable energy, mainly solar and wind power, as well as nuclear and hydropower, which are also actively promoted by the Chinese government. Through the use of modified DID models, this study analyzed the overall impact of COVID-19 on low-carbon power generation with a monthly province-specific, source-specific dataset. The study then compared the production of different power generation and primary energy sources before and during the pandemic and assessed how the recent COVID-19 pandemic has affected the direction of the energy transition by fuel type.
Data
This study used monthly power generation, energy production, and weather conditions in China's 30 provinces from July 2018 to June 2020. The province-level data for low-carbon power generation and the supply of other energy sources were obtained from the National Bureau of Statistics of China (NBS). In this study, low-carbon power mainly includes solar power, wind power, nuclear power, and hydropower. 1 Monthly meteorological data (average temperature, precipitation, average relative humidity, and sunshine hours) for the 30 provinces were collected from China statistical yearbooks and the National Meteorological Information Center. In addition, this study measured the energy mix by calculating the ratio of specific energy sources to the total energy supply and then examined the effects of the COVID-19 pandemic on the direction of the energy transition. In measuring the primary energy mix, the physical quantities of all primary energy sources were converted into standard coal equivalent. 2 Table 1 presents the summary statistics of our key variables.
The data show that renewable energy development initially had a certain ability to resist external shocks.

1 This is because monthly power generation data for biomass, geothermal, or other renewables are not available. In addition, compared to wind and solar power, the electricity generated by the combined category of biomass, geothermal, and other renewables is at a negligible level; for example, in the first half of 2020 in China, it accounted for 0.0012% of total power generation.
2 The primary energy supply was calculated by multiplying the activity data (i.e., energy production) by energy-type-specific conversion factors. Here, we used the standard coal conversion factors for different energy sources from the China energy statistical yearbooks to assess the total primary energy quantity. For example, the conversion factor is the same for all low-carbon power generation: 10,000 kWh of low-carbon electricity corresponds to the power produced by burning 1.229 tons of standard coal.
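As a minimal sketch of this bookkeeping, the snippet below converts hypothetical province-month production figures to standard coal equivalent and computes mix shares. All column names are assumptions, as is the raw-coal factor of 0.7143 tce/t (the conventional yearbook value); only the low-carbon electricity factor (1.229 tce per 10,000 kWh, i.e., 122.9 tce per GWh) comes from the text above.

```python
import pandas as pd

# Hypothetical monthly, province-level production data (column names assumed).
df = pd.DataFrame({
    "province": ["Yunnan", "Yunnan"],
    "month": ["2019-07", "2019-08"],
    "hydro_gwh": [250.0, 260.0],   # low-carbon electricity, GWh
    "raw_coal_t": [1.2e6, 1.1e6],  # raw coal, tonnes
})

# Conversion factors to tonnes of standard coal equivalent (tce).
TCE_PER_GWH = 122.9          # 1.229 tce per 10,000 kWh, from the text
TCE_PER_T_RAW_COAL = 0.7143  # conventional yearbook value, assumed here

df["hydro_tce"] = df["hydro_gwh"] * TCE_PER_GWH
df["coal_tce"] = df["raw_coal_t"] * TCE_PER_T_RAW_COAL
df["total_tce"] = df["hydro_tce"] + df["coal_tce"]

# Energy mix: share of each source in the total primary energy supply.
df["hydro_share"] = df["hydro_tce"] / df["total_tce"]
df["coal_share"] = df["coal_tce"] / df["total_tce"]
print(df[["province", "month", "hydro_share", "coal_share"]])
```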
In the first half of 2020, global wind and solar power generation accounted for 9.8% of total power generation, an increase of 14% over the same period in 2019 (IEA, 2020). Also, the total installed capacity of global coal power decreased for the first time in history. In China, the most impressive progress occurred in the power generation sector, where modern renewables (such as solar and wind power) advanced significantly. While total power, thermal power, and hydropower generation decreased year-on-year by 0.08%, 0.59%, and 7.17%, respectively, in the first half of 2020, domestic wind and solar power generation increased by 12.65% and 23.20%, respectively (see Fig. 1).
The modified DID models with historical controls
The study aimed to quantitatively identify the COVID-19 effect on energy transitions from the perspective of low-carbon power generation. As the COVID-19 shock was a major public health emergency and the resulting containment measures were highly exogenous, the impacts on the energy supply and energy transition also met the main assumptions of a quasi-natural experimental design (Kanda and Kivimaa, 2020). The DID model, implemented in Stata (version 15.1), was then applied to quantify the power generation changes due to the pandemic.
However, the standard DID model needs to be modified for studying the COVID-19 pandemic. All Chinese provinces were in some degree of lockdown during the pandemic period, meaning that observational data at the province level provided no contemporary untreated controls, making it difficult to estimate an average treatment effect with the standard DID model. The literature proposes identifying a comparable group that could not have received treatment, e.g., historical controls from before the treatment was possible (Newsome et al., 2021; He et al., 2020). With reference to Wang et al. (2021), this study examined how COVID-19 and national-level pandemic-related measures affected low-carbon generation relative to the trends in previous periods; the first modified DID model with historical controls was as follows:

$$lcp_{sit} = \alpha_0 + \alpha_1\,(treat_i \times post_t) + \alpha_2\,Controls_{it} + \gamma_s + \mu_i + \delta_t + \varepsilon_{sit} \quad (1)$$
where s, i, and t denote low-carbon power sources (solar power, wind power, nuclear power, or hydropower), provinces, and months, respectively. This study set low-carbon power generation from July 2019 to June 2020 as the treatment group, which was compared with a historical control group from July 2018 to June 2019. "Treat" is a grouping dummy variable, set to 1 for the period July 2019 to June 2020 and to 0 for July 2018 to June 2019. "Post" is set to 1 for months during the pandemic period (March 2019 to June 2019, or March 2020 to June 2020) within our study period. 3 "Controls" denotes the monthly weather condition variables (average temperature, precipitation, average relative humidity, and sunshine hours).
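The paper estimated Eq. (1) in Stata 15.1; as a rough sketch of the same specification in Python (statsmodels), with hypothetical file and column names and province-clustered standard errors, it might look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Stacked panel: one row per (source, province, month); columns assumed.
df = pd.read_csv("lowcarbon_panel.csv")

# Eq. (1): treat x post interaction, weather controls, and
# source / province / month fixed effects via dummy sets.
# The main effect of `post` is absorbed by the month fixed effects.
model = smf.ols(
    "lcp ~ treat + treat:post + temp + preci + humid + sun"
    " + C(source) + C(province) + C(month)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["province"]})
print(result.params["treat:post"], result.pvalues["treat:post"])
```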
To capture the overall effect of the pandemic on the energy transition through the low-carbon power supply, this study followed the approach of Duflo et al. (2013) and Li et al. (2020) and used the stacked low-carbon power generation as the explained variable (lcp). 4 The parameter of interest is α1, which reflects the COVID-19 effect on low-carbon generation. Specifically, we calculated the changes in low-carbon generation during versus before the pandemic period from 2019 to 2020 and compared these with the corresponding changes in the same periods from 2018 to 2019. γs is the set of power source fixed effects, controlling for any time-invariant source heterogeneity. μi is the set of province fixed effects, controlling for time-invariant, unobserved province characteristics, such as geographic features. δt is the set of month fixed effects, controlling for monthly shocks common to all provinces, such as business cycles. εsit is an error term. We estimated Eq. (1) allowing for province-level clustering of the errors. The baseline DID model identifies the average difference in low-carbon generation between the treatment and control groups. On this basis, the monthly differences in low-carbon generation between the two groups were further compared.

Notes (Table 1): This study used monthly power generation, energy production, and weather conditions in China's 30 provinces (excluding Hong Kong, Macao, Taiwan, and the Tibet autonomous region) from July 2018 to June 2020 (excluding January and February). Source: Authors' own conception. Due to data availability, four major low-carbon power sources were defined in this study: hydro, nuclear, wind, and solar.

3 Because the power generation data for January and February were missing, this paper defined the pandemic period (the treatment period) as March to June (2019 and 2020) and the pre-pandemic period as July to December (2018 and 2019). Based on existing evidence, excluding the Chinese Spring Festival holidays (January and February) also avoids power generation changes unrelated to the pandemic (Chen et al., 2020).
Based on Eq. (2), this study tested whether the DID model met the parallel trend requirement during the pre-pandemic period and performed a dynamic analysis of the COVID-19 effect. The test model is set as:

$$lcp_{sit} = \alpha_0 + \sum_{t}\beta_t\,(treat_i \times d_t) + \alpha_2\,Controls_{it} + \gamma_s + \mu_i + \delta_t + \varepsilon_{sit} \quad (2)$$

where d_t is a series of month dummy variables. In Eq. (2), the dummy variable indicating one month before the treatment (December) was omitted from the regression, and the focus was on the month-to-month changes in the coefficients βt within the event window. The condition under which the outcome variable follows a common trend is that the coefficients βt from July to November are nonsignificant. During the treatment period, comparing the changes in βt from March to June makes it possible to analyze the dynamic effect of the COVID-19 shock on low-carbon generation.
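A sketch of the event-study regression in the same hypothetical setup, building the treat × month interactions by hand and omitting December as the baseline month:

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lowcarbon_panel.csv")  # hypothetical columns as before
df["month"] = df["month"].astype(str)    # e.g. "Jul", ..., "Dec", "Mar", ...

# One interaction dummy per month except December, the omitted baseline.
event_months = [m for m in df["month"].unique() if m != "Dec"]
for m in event_months:
    df[f"treat_x_{m}"] = df["treat"] * (df["month"] == m).astype(int)

rhs = " + ".join(f"treat_x_{m}" for m in event_months)
model = smf.ols(
    f"lcp ~ {rhs} + temp + preci + humid + sun"
    " + C(source) + C(province) + C(month)",
    data=df,
)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["province"]})

# Pre-trend check: July-November coefficients should be insignificant;
# March-June coefficients trace the dynamic COVID-19 effect.
print(result.params.filter(like="treat_x_"))
```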
Next, to explore whether the COVID-19 effect varies across different types of power or energy sources, this study tested for the existence and direction of causality between the COVID-19 pandemic and the energy supply in China at disaggregated levels, such as solar power, wind power, nuclear power, and hydropower. Note that the heterogeneity analyses help us understand what drives the overall effects (Nicolli and Vona, 2016) and allow a comparison of the influence on the production of various energy sources. The heterogeneity analysis is based on Eq. (3) below:

$$prod_{it} = \alpha_0 + \alpha_1\,(treat_i \times post_t) + \alpha_2\,Controls_{it} + \mu_i + \delta_t + \varepsilon_{it} \quad (3)$$

where the explained variable prod is one of the energy production indexes in province i at month t, covering low-carbon power sources and other primary energy sources (such as raw coal, crude oil, and natural gas). Province and month fixed effects are included in all specifications to control for time-invariant province attributes and nationwide common time shocks, respectively.
Each energy source type is associated with a bundle of environmental effects. Moving further upstream in the energy supply chain, the transition toward low-carbon or carbon-neutral energy sources involves a gradual reduction in the exploitation of fossil fuel resources (Davidson, 2019; York and Bell, 2019). To better understand the impacts on the direction of the energy transition, this study measured the energy mix by calculating the ratio of specific energy sources to the total energy supply and then examined the heterogeneous effects of COVID-19 on the energy mix. The specification for the energy mix of each type of energy is:

$$mix_{it} = \alpha_0 + \alpha_1\,(treat_i \times post_t) + \alpha_2\,Controls_{it} + \mu_i + \delta_t + \varepsilon_{it} \quad (4)$$

where the dependent variable mix is either the share of a certain type of power source in total electricity generation or the share of a certain type of primary energy in the total primary energy supply in province i at month t. Each regression implements model (4) and controls for the weather condition variables and province and month fixed effects.
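Eqs. (3) and (4) are the same two-way fixed-effects regression run source by source with a different outcome; a compact sketch (hypothetical file and column names) could loop over both as follows:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long panel: prod and mix per (source, province, month).
df = pd.read_csv("energy_panel.csv")

for outcome in ("prod", "mix"):        # Eq. (3) and Eq. (4) outcomes
    for src, sub in df.groupby("source"):
        res = smf.ols(
            f"{outcome} ~ treat + treat:post + temp + preci + humid + sun"
            " + C(province) + C(month)",
            data=sub,
        ).fit(cov_type="cluster", cov_kwds={"groups": sub["province"]})
        print(outcome, src,
              round(res.params["treat:post"], 4),
              round(res.pvalues["treat:post"], 4))
```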
Baseline estimation
The DID model (Eq. (1)) was used to estimate the changes in low-carbon power generation before and during the pandemic period, relative to the previous period, and to quantitatively assess the overall effect of COVID-19 on the energy transition from the perspective of low-carbon power generation. Column (1) of Table 2 shows the effect of the COVID-19 pandemic on low-carbon power generation using the stacked data of solar and wind power. Using the stacked data of two different combinations of three low-carbon power sources, the estimation results are reported in columns (2) and (3) of Table 2. When all four low-carbon power sources are pooled together, column (4) presents the benchmark results for the overall effect of COVID-19 on low-carbon power generation. All regressions include controls for province fixed effects, month fixed effects, source-specific fixed effects, and weather conditions. Due to space limitations, only the coefficients of the interaction term (treat×post) are discussed here.
The results show that the interaction term was significantly positive when the weather condition variables and the three sets of fixed effects were considered. This finding means that the COVID-19 crisis had a significant promoting effect on the low-carbon energy supply compared with the same period in 2018-2019. The benchmark estimate in column (4) of Table 2 demonstrates that, across the four measures of low-carbon energy supply, the COVID-19 pandemic on average increased low-carbon power generation by 0.0648 billion kWh (by 4.59%). 5 These positive impacts of COVID-19 on low-carbon generation could be due to the following factors. First, the output of low-carbon power is largely unaffected by weak demand, because low-carbon power generation has low operating costs and priority dispatch (Quitzow et al., 2021; Liu et al., 2021). Moreover, the installed capacity of wind and solar power generation continues to expand in China, further increasing the advantages of variable renewable energy sources. Therefore, low-carbon energy has been presented with an unconventional development opportunity (Hoang et al., 2021).

Table 2. Overall effects of COVID-19 on low-carbon generations. Notes: This table presents estimates of DID regressions of the energy transition on the COVID-19 pandemic and weather condition variables. The dependent variable is the stacked low-carbon power generation (lcp) for all columns (1)-(4), with different power source types. The weather condition controls are the monthly average temperature (temp), monthly precipitation (preci), monthly average relative humidity (humid), and monthly sunshine hours (sun) for each province. All specifications control for province fixed effects, month fixed effects, and source-specific fixed effects. The estimates of weather variables, fixed-effect dummies, and constant terms are suppressed for brevity. Robust standard errors clustered by province are reported in parentheses. ***p < 0.01, **p < 0.05, *p < 0.1.

4 In unstacked data, each power source sample is in a separate column. Alternatively, all the data can be stacked in one column, that is, the four power sources are pooled together, with an added column of grouping indicators (numbers or text) that identifies each power source sample.
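Footnote 4's stacking step could be done in pandas with a melt; everything here (table layout, column names) is illustrative:

```python
import pandas as pd

# Hypothetical wide table: one generation column per power source.
wide = pd.DataFrame({
    "province": ["Yunnan", "Gansu"],
    "month": ["2019-07", "2019-07"],
    "solar": [1.2, 2.3],
    "wind": [3.4, 4.5],
    "nuclear": [0.0, 7.8],
    "hydro": [25.0, 5.6],
})

# Stack the four sources into one `lcp` column, adding a `source`
# grouping indicator as described in footnote 4.
stacked = wide.melt(
    id_vars=["province", "month"],
    value_vars=["solar", "wind", "nuclear", "hydro"],
    var_name="source",
    value_name="lcp",
)
print(stacked)
```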
Parallel trend hypothesis test and dynamic effect analysis
When applying the DID model, a commonly used validity test involves examining whether the treatment and control groups exhibit parallel pre-treatment trends. This study adopted the event study approach, estimating a series of coefficients for each month to investigate how the trends in low-carbon generation in the two groups evolved before and during the pandemic period.
The estimated coefficients for each month within the event window, along with the 95% confidence intervals, are presented in Fig. 2. The dummy variable for December (one month before the treatment) was omitted from the regression. After introducing the interactions of the month dummy variables and the term treat, all the estimates for the five months before the treatment were statistically insignificant at the 5% level. The results suggest that the trends in low-carbon generation before the pandemic period were similar to those in 2018. This finding inspires confidence that the historical control group (2018.7-2019.6) provided a good counterfactual for the treatment group (2019.7-2020.6). Meanwhile, the interaction term after the treatment (treat×d_Mar) was significantly positive, with low-carbon generation increasing by 0.1260 billion kWh (column (1) of Table 3). Despite a dip for two or three months after the Spring Festival, the estimates quickly turned positive. These results confirm the conclusion that the COVID-19 pandemic significantly increased low-carbon generation (Supplementary Note).
Province-month trend and province-energy effects
The province-month trend terms were added to the regression model to control for provincial factors that may have been omitted or changed over time (Liu and Qiu, 2016). After introducing the interactions of the province dummy variables and the monthly trend term, the COVID-19 effect in column (2) of Table 3 was still significant, thereby confirming the robustness of the baseline results. In column (3), in addition to the fixed effects considered in the baseline scenario, this study controlled for province-source fixed effects, thus ruling out any bias from unobserved changes affecting specific power generation sources in each province. The key findings regarding the COVID-19 effect on low-carbon generation were broadly consistent.
Adding the square terms of weather variables
To verify whether a non-linear relationship exists between the weather variables and power generation, following Zheng et al. (2019), column (4) added the square term of temperature to the model. The results show that the square term was not significant, while the interaction term remained significantly positive. Column (5) further added the square terms of both temperature and precipitation to the model. The direction and magnitude of the interaction term coefficient were consistent with those in Table 3.
Adding additional control variables
The commissioning of new renewable energy facilities and energy market fluctuations during the sample period could lead to estimation errors. We therefore included a renewable power commissioning indicator (measured by the newly added renewable power capacity) and an energy price indicator (measured by the fuel and power price index at 2018 constant prices) in the regression to control for the potential impact of these variables. The estimation results provided in columns (1)-(2) of Table S1 reveal that adding these control variables did not alter the conclusions of the baseline regression.
Fig. 2. Parallel trend hypothesis test and dynamic effect analysis
Source: Authors' own conception based on Stata software. Low-carbon generation levels are compared between 2018.7-2019.6 and 2019.7-2020.6. The dummy variable for December (one month before the treatment) is omitted from the regression. Excluding the Chinese Spring Festival holidays (January and February) also avoids any changes in power generation unrelated to the pandemic. Each estimate shows the difference in low-carbon generation relative to the difference one month before the treatment. The red and dashed lines represent the estimated coefficients and 95% confidence intervals, respectively.

Table 3 notes: Columns (1)-(6) with four energy types. Other notes as in Table 2.

5 The most important aspect of causal identification is to ensure the consistent estimation of causal effects (Cinelli et al., 2021). In this study, the R² values in Table 2 are acceptable once the series of robustness tests that follow is taken into account.
Sample adjustment
In light of the extent and pace of the expansion of the COVID-19 outbreak in various provinces, an infection index was applied to take into account the magnitude of the pandemic (Zhu et al., 2020). This index was constructed as the natural logarithm of one plus the number of accumulated confirmed cases each month. 6 The corresponding results reported in column (1) of Table 4 indicate that the estimated coefficient for the interaction term between the treatment group and the infection index was significantly positive. This finding confirms that the severity of the pandemic tended to affect the low-carbon energy supply positively.
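A one-line sketch of the index construction, with hypothetical file and column names:

```python
import numpy as np
import pandas as pd

# Hypothetical province-month table of accumulated confirmed cases.
df = pd.read_csv("covid_cases.csv")

# Infection index: ln(1 + accumulated confirmed cases), as described above.
df["infection_index"] = np.log1p(df["confirmed_cases"])
```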
Hubei province, where the new virus was first detected and strict epidemic prevention measures were imposed in China, was also excluded from the sample. Column (2) of Table 4 shows that the results were not dominated by the province most affected by the virus. In addition, there are some zero values in the data; in particular, this applies to marginal power generation technologies, such as nuclear power. After deleting the samples with zero values, the regression results in column (3) of Table 4 suggest that the basic conclusions were not noticeably affected.
We also used a different starting sample month to check sensitivity, i.e., we dropped the first two months and changed the start of the sample period to September. After deleting the data for July and August, the results shown in column (4) of Table 4 were consistent with the benchmark results, i.e., the level of low-carbon generation increased substantially due to the pandemic.
To mitigate the influence of potential outliers, the baseline tests were repeated with the natural logarithm of one plus total low-carbon generation as the dependent variable. The logarithmic transformation captures the percentage change in total low-carbon generation. Similar estimation results were found with this relative measure (column (5)), i.e., the estimated parameter for the interaction term was significantly positive.
Heterogeneous effects on the energy production by primary energy sources
Although the COVID-19 effect on overall low-carbon generation is significant, it conceals substantial heterogeneity across low-carbon power sources. To better understand the evolution of low-carbon power and other primary energy sources, this study took a step further and compared the influence of the COVID-19 pandemic on energy production across different primary energy sources. Fig. 3 displays the regression results of Eq. (3) for seven primary energy sources (raw coal, crude oil, natural gas, solar power, wind power, hydropower, and nuclear power). The standardized regression coefficient was reported for each primary energy source, employing a pooled panel with weather variables and fixed-effect dummies. The change in energy production was estimated before and during the pandemic period, relative to the previous period.
In Fig. 3, the dependent variables are the energy production indices. Among the four electricity generation sources, the coefficients of the interaction term between the treatment group and pandemic period were significantly positive for solar power and wind power. This finding indicates that the COVID-19 pandemic improved solar and wind power generation compared with the same period in 2018-2019. Moreover, it should be pointed out that the overall results were mainly driven by solar and wind power. In particular, the pandemic had the most significant effect on solar power, with a standardized estimated coefficient of 0.103. The pandemic, or the pandemic-related measures, appears to have had a major driving effect on renewable project development in China.
In fact, the operation of renewable power generation was less affected by fluctuations in raw materials and manpower and had apparent advantages during the COVID-19 pandemic (Kelvin and Brindley, 2020). Technological advancement and electricity market reform have substantially reduced the costs and improved the affordability of renewable energy. Thus, the competitiveness of modern renewable energy sources (such as solar and wind power) has increased significantly (IRENA, 2021; Amir and Khan, 2021). However, no significant effect was observed for hydropower and nuclear power. For technologies with a long development lead time, such as hydropower and nuclear power, electricity generation may not be significantly affected by the outbreak.
For other primary energy sources (fossil fuels), the pandemic significantly increased the supply of natural gas, at the 5% significance level and with a standardized estimated coefficient of 0.02. Yet the production of raw coal and crude oil, which remain China's base energy sources, did not change significantly during the COVID-19 period. This finding at least shows that the pandemic has been more inclined to push the development of clean and low-carbon energy.

Table 4. Robustness tests based on sample adjustment.
Heterogeneous effects on the energy mix
The COVID-19 crisis has already had significant effects on low-carbon power generation, but how has it influenced the direction of the energy transition? As the electricity sector is an important contributor to carbon dioxide emissions (Li et al., 2017), this study additionally considered a relative power generation indicator instead of the absolute amount of energy production, i.e., the ratio of specific power sources to total power generation. Through this variable transformation, the COVID-19 effect on the direction of the energy transition was examined.
On power generation mix
Given that the same set of weather control variables and fixed-effect dummies is included in each regression, Table 5 presents the heterogeneous results of the COVID-19 effect on the electricity generation mix by fuel type. Specifically, the pandemic led to a rise in the shares of solar and wind power, while the share of hydropower declined (significant at the 5% level). This finding implies that the transition of the electricity generation mix has shifted from hydropower to solar and wind power. On the power supply side, the decline in demand intensified the competition among the various power generation technologies and fuels. The non-dispatchable nature of modern renewable energy (including wind and solar) and its priority in China's power system enabled it to buck the trend and become a beneficiary in the increasingly fierce competition among power sources. The impact of the pandemic has revealed an important message, namely that renewable power generation is becoming the baseload supply of electricity, due to its low marginal cost and priority grid access.
Although hydropower accounts for a large proportion of non-fossil energy generation in China, new hydropower construction has shown a downward trend in the past few years. The estimated coefficient on the interaction term of −0.011 in the hydro regression was likely due to low precipitation in hydropower regions in the first half of 2020. In addition, the estimated COVID-19 effect on the thermal and nuclear power shares of the generation mix was statistically insignificant. Compared with modern renewable power generation with its low marginal cost, fossil fuel power generation experienced more frequent start-ups and shutdowns and did not have economic advantages during the pandemic. However, thermal power offers strong flexibility, continuous production, and strong overall anti-risk ability. Nuclear energy cannot compete with renewable energy in terms of cost and construction speed and was largely unaffected by the pandemic.
The regression results provide strong evidence that COVID-19 has advanced the transition of the power generation mix. Specifically, due to the pandemic, the power generation mix is likely to move, in relative terms, from hydropower (generated using domestic resources) toward modern, capital-intensive renewables. From the current situation, the COVID-19 crisis did not necessarily crowd out decarbonization efforts in the power industry; instead, it accelerated the electricity transition (Pianta et al., 2021).
On primary energy mix
To further understand the impacts of the COVID-19 pandemic on the primary energy mix by fuel type, this study measured the primary energy mix by calculating the ratio of specific energy sources to the total primary energy supply (in 10,000 tons of standard coal). The empirical results in Table 6 show that the COVID-19 effect on the transition of the primary energy mix away from carbon-intensive energy was significant. Specifically, the estimated COVID-19 effect was negative for the shares of raw coal and crude oil in the primary energy mix during the study period and positive for solar and wind power. The expansion of solar and wind power was closely linked to a concurrent decline in the shares of raw coal and crude oil, the most carbon-intensive forms of primary energy supply. This finding demonstrates that the primary energy mix tended to switch from raw coal and crude oil to solar and wind power. The estimates indicate that the pandemic's impacts on the shares of natural gas, hydropower, and nuclear power were insignificant. In short, the heterogeneous results reveal that the pandemic accelerated the transition of the primary energy mix from high-carbon energy (i.e., raw coal and crude oil) to modern renewables, such as solar and wind power.
The results of this study are consistent with findings from the literature. Previous studies did not quantitatively estimate the changes in low-carbon power generation induced by the COVID-19 pandemic, although they reached a near consensus that China's energy transition has been altered by the pandemic to a great extent (Quitzow et al., 2021; Liu et al., 2021; Hoang et al., 2021). For example, Quitzow et al. (2021) and Hoang et al. (2021) showed that the crisis caused unprecedented decarbonization of the power system. Similarly, we found that the COVID-19 shock significantly increased low-carbon power generation. Meanwhile, several studies argued that the crisis might have tremendous consequences for the direction of the energy transition (European Commission, 2020; Pianta et al., 2021; Kuzemko et al., 2020). In a similar vein, this study further revealed that COVID-19 has promoted the adoption of low-carbon power sources on the upper rungs of the electricity ladder (modern renewables such as solar and wind power). The results provide direct empirical evidence of the COVID-19 effect on China's low-carbon energy transition, as well as important cross-cutting insights not only for China but also for other large and emerging economies.
Conclusions and policy implications
COVID-19 has profoundly changed the economy, society, and people's lives worldwide. As a crucial part of the economy, China's energy sector has likely also been altered by the pandemic. Understanding the effects of COVID-19 on low-carbon energy transitions is necessary for China to plan toward its "Dual Carbon" targets. However, while there are quite a few studies on COVID-19, few have quantitatively investigated how it affected energy transitions.
On the one hand, investigating the epidemic's treatment effect on energy transitions can enrich the impact assessment of the epidemic, without limiting the analysis to the economy and human well-being. On the other hand, when assessing a major public safety and health event such as COVID-19, it is necessary to consider the possible welfare losses caused by the virus. To achieve more accurate and comprehensive evaluation results, this study also considers the impact on the low-carbon power supply and the direction of the energy transition. It was found that, using the stacked low-carbon power generation (four major low-carbon power sources were defined: solar, wind, nuclear, and hydro), the COVID-19 pandemic had a significant promoting effect on low-carbon power generation compared with the same period in 2018-2019. In terms of economic magnitude, the COVID-19 pandemic on average increased low-carbon power generation by 4.59% (0.0648 billion kWh). This result was robust to the parallel trend hypothesis test, dynamic effect analysis, province-month trends, province-energy effects, other model specifications, and sample adjustments.

Table 5. Heterogeneous effects of COVID-19 on the power generation mix.
The heterogeneous analysis of the effect on energy production indicates that the COVID-19 pandemic improved solar and wind power generation. It is also worth noting that the overall results were mainly driven by solar and wind power generation, especially solar power generation. The heterogeneous analysis of the effect on the energy mix indicates that the pandemic has fostered the transition of the power generation mix and the primary energy mix from high-carbon energy to modern renewables (such as solar and wind power).
Our results have the following policy implications. China needs to seize the momentum of the COVID-19 crisis to promote the low-carbon energy transition. While the pandemic disrupted the world in all aspects, our results suggest that it accelerated decarbonization efforts in the power industry and promoted the shift of the power mix toward renewable energy sources. Since renewables will play a vital role in advancing the low-carbon energy transition and achieving the dual carbon targets, they require a continued medium- and long-term policy vision. Accordingly, the next round of the energy industry's development strategy should be scientifically planned.
In addition, promoting energy transitions should be part of the recovery plan. To realize the dual carbon goals, China's post-pandemic economic stimulus measures should be closely combined with long-term low-carbon development and climate policies, such as market-oriented reform and energy transitions, so as to promote a green recovery. Investment in energy transitions may not only achieve economic recovery in the short term (after COVID-19) but could also contribute to long-term social development.
This study concludes by proposing several directions for future research. Only the short-term effects of COVID-19 on the energy transition were considered in the present work, and it is still unclear whether the impacts were a one-time shock or have permanently altered the development model of the power system. As the COVID-19 pandemic is still spreading all over the world, the long-term effects of COVID-19 on low-carbon power generation and the transition to renewables remain to be seen, which is an important field of energy transition research (Zhong et al., 2020). Also, while monthly source-specific data provide a knowledge base for assessing the decarbonization efforts of the power sector, information on the day-to-day energy production and generation patterns induced by COVID-19 is unavailable. Therefore, a high-frequency dataset on source-specific power generation is urgently needed to understand how the pandemic has affected the low-carbon power supply and generation patterns. Finally, the present study focused only on energy production and energy transition in the context of China, where the government adheres to a dynamic zero-COVID policy to stop the large-scale spread of the virus, which is quite different from most other countries. Future studies could continue to explore emerging generation patterns and cross-country differences, which can provide additional insight into the COVID-19 effects on global energy transition efforts.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Table 6 notes: This table presents the estimation results for the heterogeneous effects of COVID-19 on the primary energy mix by fuel type. The dependent variable is the primary energy mix for all columns (1)-(7) with different energy types. Other notes as in Table 2. | 2022-07-09T13:02:25.306Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "a3f66d0ea061c8581bd3a4390f3fb4772b738022",
"oa_license": null,
"oa_url": "https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9270063",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "9e8f9d8a392c97085c5d7e2d2e5c48930a56a757",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": []
} |
17457127 | pes2o/s2orc | v3-fos-license | Influence of an Oral Supplementation Based on Orthosilicic Acid Choline-Stabilized on Skin, Hair and Nails: A Clinical Study with Objective Approach
In recent years, various dietary supplements have been released on the market with the promise of health benefits and functional properties. New trends in nutrition have highlighted the potential effects of certain food ingredients on the cutaneous aging process and on the healthy appearance of skin, hair and nails. Given the growing demand for improvement in the appearance of hair, skin and nails and the population's increased interest in nutricosmetics, the aim of this study was to evaluate an oral supplementation with choline-stabilized orthosilicic acid. To this end, a randomized, placebo-controlled clinical trial was conducted. After approval by the ethics committee, 60 women aged 40 to 65 were selected and divided into two groups (treatment and placebo). The daily dose of the supplement under evaluation was 400 mg of choline-stabilized orthosilicic acid for a period of 3 months. Analyses were performed before treatment (baseline, T0) and after 30, 60 and 90 days of treatment, evaluating the structural characteristics of the dermis, the mechanical properties of hair, and the volunteers' perception of effectiveness. According to the results, an increase in dermal echogenicity was observed in the treatment group after 90 days of treatment; thus, it can be concluded that the treatment increased the density of the skin. In addition, the high-resolution images showed an improvement in the skin micro relief and in skin roughness. There was also increased hair resistance, and volunteers in the treatment group reported improvement in skin, hair and nails. Finally, oral supplementation with the choline-stabilized orthosilicic acid nutricosmetic can be suggested as an effective product to increase skin density and to improve hair and nail condition, complementary to topical treatments.
Introduction
Silicon is a trace element that plays an important structural role through continuous deposition in bones and in connective tissue proteins such as elastin, collagen and proteoglycans [1,2]. It also appears to play a role in modulating the immune and inflammatory response and to be associated with mental health by reducing the deposition of heavy metals [3].
Several chemical forms of silicon are found in nature, since it is the second most prevalent chemical element after oxygen. Knowledge of their chemical structure is critical to differentiate the forms of silicon that contribute to health from the organic xenobiotics that act as potent toxins [4].
Silicon is usually found in the human diet, but it does not always have good bioavailability, due to enzymatic action that results in silica or silicate [5]. It has been demonstrated that bioavailability increases when silicon is in the form of orthosilicic acid stabilized with choline [1] or collagen, water-soluble chemical forms that can be found in food, drinks or dietary supplements [6].
In organic structures, silicon is deposited on macromolecules such as lipids, carbohydrates and proteins, and it can be found in the form of orthosilicic acid in bones, tendons, the aorta, the liver and the kidneys [2]. Silicon deficiency may be associated with the deterioration of cartilage and collagen, and mineral imbalance contributes to osteoporosis [4].
Silicon has long been considered an agent capable of improving the quality of hair, skin and nails. Its topical use on hair has been reported in cosmeceuticals, together with other ingredients such as fats, hydrolyzed proteins, cationic polymers or quaternized cationic derivatives. These products are designed to provide softness and shine, strengthen the fibers and make detangling easier [7].
Intrinsic and extrinsic skin aging occurs through the reduction of collagen synthesis by dermal fibroblasts and its increased degradation by metalloproteinases, leading to reduced skin elasticity, sagging and the appearance of wrinkles that alter the skin micro relief [8].
Silicon appears to influence the dermis structure through several mechanisms: 1) it modulates the action of the enzyme responsible for the hydroxylation required for the cross-linking of collagen fibers [9]; 2) it participates in the activity of the enzyme proline hydroxylase, which is involved in collagen synthesis [10]; 3) it activates the enzyme ornithine aminotransferase, which participates in collagen synthesis, an action demonstrated in silicon-deprived animals that showed a decrease of this enzyme in the liver and a reduced hydroxyproline concentration in the tibia [11]; 4) it binds to the hydroxyl groups of polyols, interfering with the binding of glycosaminoglycans to water and with mucopolysaccharide and collagen production [12]; 5) it neutralizes free radicals and decreases collagen glycation reactions; one study demonstrated that silicon plus vitamin C stimulated the synthesis of hyaluronic acid and proteoglycans, reducing the disruption of the dermal matrix [13]; 6) it has anti-inflammatory action, demonstrated in vitro by a reduction in interleukin production and in vivo by the reduction of erythema and edema [14]; 7) it exhibits inhibitory activity against cyclooxygenase I (COX1) [15].
In recent years, various dietary supplements have been launched on the market with the promise of health benefits and functional properties. New trends involving nutrition, skin, hair and nails have been highlighted due to the potential beneficial effects of certain ingredients on the cutaneous aging process and on the healthy appearance of skin, hair and nails. Clinical studies have demonstrated positive effects of such ingredients on the biomechanical properties of the skin as well as on its barrier functions [1,4,5,12,14]. In parallel, in vitro and animal experiments have revealed the mechanisms of action of food ingredients in the cellular and molecular biology of skin cells. Healthy skin is a manifestation of general health and, as such, may be influenced by the consumption of food ingredients, including vitamins, minerals, antioxidants and bioactive peptides.
In view of the growing demand for improvement in the appearance of hair, skin and nails and the population's increased interest in nutricosmetics, this study aims to conduct a clinical efficacy test, using biophysical and skin imaging techniques, of oral supplementation with choline-stabilized orthosilicic acid, a product already available on the Brazilian market. This new clinical study employs more objective tests (skin ultrasound and hair resistance tests, for example) and high-resolution images to evaluate changes in hair, skin and nails, making the results more visible and measurable for consumers.
Materials and Methods
The study was placebo-controlled, randomized and double-blind; 60 women aged between 40 and 65 years were recruited, with the primary objective of evaluating the effect of a nutricosmetic containing choline-stabilized orthosilicic acid on the skin tissue, the hair shaft and the nails. The study was approved by the Ethics Committee of the Faculty of Pharmaceutical Sciences of Ribeirão Preto/SP (CEP/FCFRP 339) and followed current Good Clinical Practice regulations [16]. The study duration was 90 days, with four evaluations at 0, 30, 60 and 90 days. Participants signed the Informed Consent form before accepting participation in the study.
After evaluation of the inclusion and exclusion criteria, acceptance to participate in the study and signing of the consent form, participants were divided into two groups: Group 1, the test group, received 400 mg/day of choline-stabilized orthosilicic acid, and Group 2, the placebo group, received 400 mg/day of a maltodextrin placebo. The duration was 3 months for each participant, with a total of 4 visits: visit 1 (week 0, D0), visit 2 (week 4, D30 ± 3), visit 3 (week 8, D60 ± 3) and visit 4 (week 12, D90 ± 3).
Inclusion criteria
The inclusion criteria were as follows: healthy females aged 40 to 65 years (homogeneous distribution between treatment groups); good general health and mental condition; personal informed consent to participate in the study; personal presence on the predefined days at the institute; willingness and capability to follow the study rules and a fixed schedule; and agreement that the data could be used in the project. The volunteers were also instructed not to use other oral supplements or cosmetic products (except sunscreen) and not to change their dietary habits during the study period [16]. All inclusion criteria were evaluated at every visit by means of a questionnaire that assessed whether any criterion was unmet.
Exclusion criteria
The exclusion criteria were as follows: treatment with topical retinoids, alpha hydroxy acids, polyhydroxy acids, beta hydroxy acids or ascorbic acid within the previous three months; topical treatment of the facial skin with anti-aging products; pre-treatment with oral retinoids within the previous 6 months; pre-treatment with nutraceuticals within the previous 3 months; skin treatment with superficial chemical peels, microdermabrasion and/or ablative laser within the previous three months; treatment with oral products and/or procedures targeting the hair shaft within the previous 3 months; stomach diseases such as gastritis and ulcers; use of antacids and medicines such as omeprazole; hypothyroidism; current smoking; chronic use of corticosteroids (systemic or topical); chronic kidney diseases; chronic liver diseases; diabetes mellitus; transplanted patients; presence of photodermatoses; presence of inflammatory or infectious skin disease on the face; chemotherapy within the previous 3 months; clinical evidence of immunosuppression; and women on hormone replacement therapy [16].
Randomization
A simple randomization based on a table of random numbers was used to allocate the participants to the treatment group (n = 30) and the placebo group (n = 30). Only the age and skin color of the volunteers were taken into account; menstrual status (pre/post-menopause) was not considered, nor was the weight of the volunteers recorded (Table 1).
Study limitations
The study depended on volunteers, since the ethics committee prohibits any payment or bonus to participants in clinical studies. Thus, visits were scheduled according to the availability of the participants, and they were free to discontinue the project at any time.
Another limitation of the study was that the inclusion and exclusion criteria were assessed through questionnaires, making acceptance fully dependent on the volunteers' responses.
Test areas
The test areas were the frontal, periorbital and nasolabial regions of the face, with the periorbital and nasolabial sides chosen randomly. On every measurement day, the subjects had to expose their uncovered test areas to the indoor climate conditions (21.5 ± 1 °C and 50 ± 5% relative humidity).
Biophysics techniques and skin image analysis
Evaluation of dermal characteristics: For the evaluation of dermal thickness and echogenicity, 20 MHz ultrasound equipment was used (Dermascan® C, Cortex Technology). The thickness of the skin was determined with an image analyzer. Echogenicity was analyzed per unit area, in pixels, with the aid of software; this parameter is related to water retention between the collagen fibers and to aging and photoaging [17].
Evaluation by high-resolution image of the skin: This study used the Visioface® equipment, which analyzes spots visible under white light and spots visible only under UV light, by means of image analysis of skin illuminated by white light-emitting diodes (white LEDs) and UV-like light-emitting diodes (UV-like LEDs), respectively [18]. The equipment's software can compare images of the same facial region and quantify wrinkles and pores.
Hair analysis techniques
Evaluation of the mechanical properties: The mechanical properties (tensile strength) were tested with the Texturômetro® TA.XT Plus device, operating according to the method described for tensile strength. The hairs used were provided by the study volunteers, and the measurements were made in triplicate at days 0 and 90 only. The strands were removed intact, and the 5 cm segment closest to the hair root was used. The tensile strength was calculated from the maximum force at fiber break (measured in newtons).
Subjective evaluation of skin, hair and nails:
The volunteers underwent a sensory analysis in which a questionnaire was applied before the use of the nutraceutical and after the end of use, on days 0 and 90 of treatment. The volunteers were asked about the condition of their skin, hair and nails before treatment, as well as the changes perceived after using the nutraceutical.
Statistical evaluation:
The experimental data obtained in the clinical efficacy assessment were submitted to statistical analysis for interpretation. When the distribution was normal, the indicated test was analysis of variance; when the distribution was not normal, nonparametric statistics were applied, using the Kruskal-Wallis test for unpaired data [16]. The age of the volunteers was not used as a covariate in this study. The results are presented as graphs, tables and figures and discussed against data from the literature. Differences were accepted as statistically significant at p < 0.05.
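As a minimal sketch of the described decision logic (the study does not specify its software; all values below are hypothetical, and normality is checked here with a Shapiro-Wilk test):

```python
from scipy import stats

# Hypothetical echogenicity readings for the two groups at one time point.
treatment = [52.1, 48.7, 55.3, 50.9, 53.2]
placebo = [45.4, 47.1, 44.8, 46.5, 45.9]

# Normal distributions -> analysis of variance; otherwise -> Kruskal-Wallis.
if all(stats.shapiro(g).pvalue > 0.05 for g in (treatment, placebo)):
    stat, p = stats.f_oneway(treatment, placebo)
else:
    stat, p = stats.kruskal(treatment, placebo)
print(f"statistic={stat:.3f}, p={p:.4f}, significant={p < 0.05}")
```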
Evaluation of dermal characteristics
High-frequency ultrasound provides measures of parameters related to skin histology, allowing analysis of the skin aging process and of the echogenicity of the dermis, which is important because it varies with chronological (intrinsic) aging and photoaging (extrinsic aging). This equipment also quantifies and qualifies the collagen and elastin fibers in the skin, making it an important tool for clinical efficacy studies of dermocosmetic formulations [19].
This study showed an increase in facial skin echogenicity in the treatment group compared with the baseline values (initial time), indicating an improvement in skin condition (Figure 1). In the placebo group, there was no increase in echogenicity, and some areas showed decreased echogenicity compared with the initial measurement (baseline). Thus, the product acted on the dermis by increasing fibroblast density, enhancing the formation of collagen fibrils and repairing existing damage, slowing the chronological aging and photoaging processes, increasing the echogenicity of the dermis and improving skin density [20-22].
High dermal echogenicity is related to a high content of collagen fibers. Thus, the treatment with choline-stabilized orthosilicic acid improved the dermal echogenicity compared with the placebo treatment in all regions. Figure 2 shows the echogenicity ratio for all study regions. This parameter reflects the number of hypoechoic pixels divided by the total number of pixels, which increases during the aging process; therefore, the lower this ratio, the more echogenic the skin.
Evaluation by high resolution image of the skin
The evaluation by high-resolution imaging (Visioface®) also showed improvement in the skin of the treatment group, although each volunteer showed a different degree of improvement in each of the studied regions. Figures 3-5 show three-dimensional images in which an evident reduction of forehead wrinkles after treatment with the product is visible. Alterations of collagen and elastin can directly affect wrinkles [23]. Similar wrinkle-reduction results after treatment with oral supplementation, analyzed with other techniques, have been reported in the literature, demonstrating the effectiveness of this type of product in improving skin appearance [24]. The hair resistance test performed with the Texturemeter® TA.XT Plus showed that hair from the treatment group required greater force (in newtons) to break, whereas no significant change in breaking strength was seen in the placebo group (Figure 6). Thus, oral supplementation with choline-stabilized orthosilicic acid promoted greater hair resistance, improving hair fiber quality in general.
Subjective evaluation of skin, hair and nails
The evaluation of perceived efficacy was performed to identify the changes noticed by the volunteers and to check the effect of placebo capsules in this type of treatment. Both groups noted changes related to skin, hair and nails, which highlights the importance of comparing the treatment group with the placebo group. Among the responses, volunteers in the treatment group reported greater improvement in the appearance of skin, hair and nails (Figures 7-9).
Conclusion
From the results obtained in this study, it was possible to assess the clinical efficacy of choline-stabilized orthosilicic acid as a nutricosmetic. Oral supplementation with this compound yielded satisfactory results regarding the improvement of the volunteers' skin, hair and nails. Skin image analysis techniques showed positive results, with increased dermal echogenicity and improved micro relief in the studied regions. It was also concluded that the supplementation improved hair quality, since the hair was more resistant after treatment. In the volunteers' assessment of perceived efficacy, the results were better in the treatment group than in the placebo group, in agreement with the objective tests performed. In summary, the use of choline-stabilized orthosilicic acid showed significant effects in improving the general condition of the skin, hair and nails, and it can therefore be suggested as an effective nutricosmetic for treating changes resulting from aging, complementary to the use of cosmetic products.
"year": 2016,
"sha1": "499705f18d45195141a8d559835964266102fea6",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.4172/2167-065x.1000160",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "296717f5dca4bc81e607d120f5697d53475b3548",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
204783134 | pes2o/s2orc | v3-fos-license | Pistacia weinmannifolia root exerts a protective role in ovalbumin-induced lung inflammation in a mouse allergic asthma model
Pistacia weinmannifolia (Anacardiaceae) has been used in herbal medicine for the treatment of influenza, dysentery and enteritis in China. It was recently observed that P. weinmannifolia root extract (PWRE) exerts anti-inflammatory effects in both in vitro and in vivo models. Based on the results from previous studies, the present study investigated the protective effect of PWRE on airway inflammation and mucus hypersecretion. Treatment with PWRE significantly decreased the number of eosinophils and the levels of Th2 cytokines, such as interleukin (IL)-4, IL-5 and IL-13, in the bronchoalveolar lavage fluid (BALF) of OVA-exposed mice. PWRE decreased the high serum levels of total and OVA-specific immunoglobulin E. PWRE also effectively inhibited the influx of inflammatory cells into the lung, as well as airway mucus hypersecretion. In addition, the increased level of monocyte chemoattractant protein-1 was significantly decreased by the PWRE treatment in the BALF of OVA-exposed mice and in lipopolysaccharide-stimulated RAW264.7 macrophages. These protective effects of PWRE on OVA-induced pulmonary inflammation were accompanied by the downregulation of mitogen-activated protein kinases and nuclear factor-κB activation. Thus, the results from the present study indicate that PWRE could be a valuable adjuvant for the treatment of asthma.
Introduction
Allergic asthma is a chronic inflammatory disease and a major health issue, and its prevalence is increasing worldwide (1). The major features of asthma pathophysiology include airway inflammation and mucus hypersecretion (2,3). It is well known that increased eosinophil recruitment and elevated levels of T helper 2 (Th2) cytokines, such as interleukin-4 (IL-4), IL-5 and IL-13, are closely associated with sustained airway inflammation (4). Macrophage-derived chemokines such as monocyte chemoattractant protein-1 (MCP-1) increase the recruitment of inflammatory cells, including eosinophils, in asthma pathogenesis (5,6). An increased concentration of immunoglobulin E (IgE) plays a pivotal role in allergic reactions and is much higher in asthmatic patients (7). Changes in the number of goblet cells and in mucus production are key to airway inflammation and obstruction (8). The mitogen-activated protein kinase (MAPK) signaling pathways have an important role in the inflammatory processes of allergic asthma (9). The activation of c-Jun N-terminal kinase (JNK) has been implicated in IgE class switching (10). Extracellular signal-regulated kinase (ERK) and p38 have been reported to play a role in the production of cytokines, including IL-5 (11). Nuclear factor (NF)-κB plays an important role in inflammatory cell influx, Th2 cytokine levels and inflammatory molecule expression in allergic asthma (12,13).
In recent years, approaches to reducing the side effects of asthma medication have become a focus of allergic asthma research (14), and natural herbal extracts have attracted increased attention due to their prominent biological activities and minimal side effects (15). Pistacia weinmannifolia (PW) is used as a herbal medicine in China (16,17), and its major metabolites possess biological activities such as inhibition of histamine release (16,18,19). A previous study confirmed the anti-inflammatory activities of P. weinmannifolia root extract (PWRE) in PMA/tumour necrosis factor-α-stimulated airway epithelial cells and in the pulmonary inflammatory response induced by cigarette smoke and lipopolysaccharide (LPS) (20). Based on these results and those of other studies (16-20), which reflect the anti-inflammatory activities of PWRE in pulmonary inflammation, it was hypothesized that PWRE could exert a protective effect against ovalbumin (OVA)-induced lung inflammation. Therefore, the aim of the present study was to evaluate the regulatory effects of PWRE on eosinophil recruitment, Th2 cytokines, IgE and mucus overproduction, which are the major characteristics of allergic asthma.
Materials and methods
Preparation of PWRE. PWRE was prepared as previously described (20).

Counting the inflammatory cells. BALF collection was performed in order to count the inflammatory cells and evaluate the levels of inflammatory cytokines, as previously described (22). The mice were anesthetized with Zoletil 50® (30-50 mg/kg IP; Virbac) and xylazine (5-10 mg/kg IP; Bayer Korea) on day 25, based on a prior anesthesia condition (20). Briefly, on day 25, the trachea was cannulated and infused with 0.7 ml PBS for the collection of BALF (the infusion was performed twice, with a total volume of 1.4 ml), and blood was collected for the detection of IgE. Mice were sacrificed under Zoletil/xylazine anaesthesia and exsanguinated. In order to distinguish the different cells, 0.1 ml of BALF was centrifuged at 246 x g for 5 min at room temperature to transfer the cells to a glass slide, and the slide was then stained with Diff-Quik® solution (IMEB, Inc.) according to the manufacturer's protocol.
Measuring the Th2 cytokines and IgE production. The degree of inflammation and mucus production in each group was assessed by two independent observers in the laboratory using a semi-quantitative scoring system. The H&E staining was scored as follows: 0, no recruitment of inflammatory cells; 1, small amount of recruitment; 2, moderate recruitment; 3, large amount of recruitment. The PAS staining was scored as follows: 0, no mucus production; 1, mild mucus production; 2, moderate mucus production; 3, distinct mucus production; 4, severe mucus production.
Cell culture. The macrophage cell line RAW264.7 was obtained from the American Type Culture Collection. The cells were grown in Dulbecco's modified Eagle's medium (DMEM; Gibco; Thermo Fisher Scientific, Inc.) with 10% fetal bovine serum (FBS; Hyclone; GE Healthcare Life Sciences), 100 U/ml penicillin and 100 µg/ml streptomycin, and were incubated at 37°C in a humidified chamber with 5% CO2. The cells were activated with lipopolysaccharide (LPS; 0.5 µg/ml) 1 h after PWRE treatment (1.25, 2.5 and 5 µg/ml). The dose of LPS was based on a previous study (23). The level of MCP-1 in the culture supernatant was determined by ELISA.
Statistical analysis. All values are expressed as the mean ± standard deviation of at least three independent experiments. Statistical significance was determined by a two-tailed Student's t-test for comparisons between two groups, and by one-way analysis of variance followed by Dunnett's multiple comparison test for comparisons among multiple groups. Data were analyzed using SPSS 20.0 (IBM Corp.). P<0.05 was considered to indicate a statistically significant result.
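Although the analyses in the present study were performed in SPSS, the same workflow can be sketched in Python. The sketch below is purely illustrative: the group values are invented placeholders, not data from this study, and scipy.stats.dunnett requires SciPy ≥ 1.11.

```python
# Illustrative sketch of the statistical workflow described above,
# using SciPy instead of SPSS. All numbers are invented placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
nc = rng.normal(10.0, 1.0, size=6)       # e.g., NC (normal control) group
ova = rng.normal(15.0, 1.5, size=6)      # e.g., OVA-exposed group
pwre15 = rng.normal(12.0, 1.2, size=6)   # e.g., OVA + 15 mg/kg PWRE group

# Two-group comparison: two-tailed Student's t-test
t, p = stats.ttest_ind(ova, nc)
print(f"OVA vs NC: t = {t:.2f}, P = {p:.4f}")

# Multiple groups: one-way ANOVA, then Dunnett's test against the OVA group
f, p_anova = stats.f_oneway(nc, ova, pwre15)
print(f"ANOVA: F = {f:.2f}, P = {p_anova:.4f}")
res = stats.dunnett(nc, pwre15, control=ova)
print("Dunnett P-values vs OVA:", res.pvalue)
```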
Results
Effect of PWRE on alleviating the eosinophil numbers in the BALF. The significant increase in eosinophils and macrophages is well established in the OVA-induced pulmonary inflammatory response (24,25). Therefore, the present study focused on the inhibitory effect of PWRE on these cell numbers. To distinguish the inflammatory cells and count the cell numbers, Diff-Quik® staining was performed according to the manufacturer's protocol. As presented in Fig. 1, the numbers of eosinophils and macrophages were significantly increased in the OVA-exposed group compared with the NC group (P<0.05). Conversely, this increase in inflammatory cell numbers was significantly reduced in the PWRE-treated group (P<0.05; Fig. 1A and B).
Effect of PWRE on attenuating Th2 cytokines in the BALF.
The present study next investigated the regulatory effect of PWRE on the production of the Th2 cytokines that are closely associated with the pathophysiology of asthma. ELISAs were performed in order to evaluate the levels of Th2 cytokines. It was revealed that IL-4, IL-5 and IL-13 were significantly increased in the OVA group when compared with the NC group (P<0.05; Fig. 2A-C). However, treatment with PWRE decreased the levels of these cytokines induced by OVA. In particular, the inhibitory effects of 15 mg/kg PWRE on the production of these cytokines were similar to those of 30 mg/kg MON, which was used as a positive control.
Effect of PWRE on downregulating IgE production. The serum total IgE level is highly elevated in patients with allergic diseases such as bronchial asthma, and is known to increase with the onset and aggravation of the disease (26,27). A specific IgE test is needed together with total IgE for the proper evaluation of allergic diseases (28). Based on the importance of the IgE-mediated immune response in asthma (29), the present study investigated the inhibitory activity of PWRE on OVA-induced IgE production. As presented in Fig. 3, the concentrations of total IgE and OVA-specific IgE in the serum were significantly increased in the asthmatic group compared with those in the NC group (P<0.05), whereas treatment with PWRE effectively decreased the levels of total IgE and OVA-specific IgE (Fig. 3).
Effect of PWRE on inhibiting inflammatory cell influx and mucus hypersecretion.
In order to investigate whether PWRE suppresses the OVA-induced inflammatory cell influx into the lungs, paraffin lung sections were stained with H&E in the present study. A significantly increased level of inflammatory cell influx was observed in the OVA group compared with the NC group (P<0.05; Fig. 4A). Notably, this level was downregulated in the PWRE-treated group. The arrows point to the influx of inflammatory cells. The increased secretion of MCP-1 is closely associated with airway inflammation through induction of the influx of inflammatory cells (5,30). Therefore, the present study next assessed the inhibitory effect of PWRE on OVA-induced MCP-1 secretion. As presented in Fig. 4B, a marked increase in MCP-1 was observed in the BALF of the OVA group, whereas treatment with PWRE inhibited this secretion. In order to further investigate the regulatory effect of PWRE on MCP-1 secretion, the inhibitory effect of PWRE on MCP-1 was assessed in LPS-stimulated RAW264.7 macrophages. As presented in Fig. 4C, the administration of LPS significantly increased MCP-1 secretion (P<0.05). However, pretreatment with PWRE significantly downregulated this secretion (P<0.05; Fig. 4C). Mucus hypersecretion is a prominent characteristic of the pathophysiology of allergic asthma (31). Therefore, the present study assessed whether PWRE led to an attenuation of the OVA-induced mucus overproduction. The paraffin lung sections were stained with PAS reagent to measure mucus production around the airways. As presented in Fig. 5, the levels of mucus production were significantly increased in the OVA group when compared with the NC group (P<0.05). However, a decrease in this level was observed in the PWRE group (Fig. 5). The mucus was stained a purple color by the PAS reagent.
Effect of PWRE on decreasing MAPKs and NF-κB activation in the lungs.
In order to investigate whether the airway inflammatory response was mediated by MAPK-responsive mechanisms, the present study evaluated the levels of ERK, JNK and p38 phosphorylation. As presented in Fig. 6, the levels of JNK, p38 and ERK were significantly upregulated in the OVA group compared with the NC group (P<0.05). However, 15 mg/kg PWRE significantly downregulated the enhanced activation of JNK, p38 and ERK in the lungs (P<0.05; Fig. 6).
In order to further investigate the mechanism of PWRE, the NF-κB signaling pathway was assessed in the present study. As presented in Fig. 7, the activation of NF-κB p65 and IκBα was significantly upregulated in the OVA-exposed group compared with the NC group. However, this increase was effectively blocked by the PWRE treatment.
Effect of PWRE on LPS-stimulated MAPKs and NF-κB activation in RAW264.7 macrophages.
In the present study, PWRE exerted a protective effect against pulmonary inflammation in OVA-exposed mice. Its effects were accompanied by MAPK and NF-κB inactivation (Figs. 6 and 7). In particular, NF-κB activation was effectively downregulated upon PWRE administration. The results from the present study also demonstrated that PWRE regulates MCP-1 production in the BALF of OVA-exposed mice and in LPS-stimulated RAW264.7 macrophages (Fig. 4B and C). The regulatory effect of PWRE on LPS-stimulated MAPK and NF-κB activation was therefore investigated in RAW264.7 macrophages. The administration of LPS significantly upregulated the activation of MAPKs and NF-κB (P<0.05; Figs. 8 and 9). However, the levels of JNK, p38 and ERK activation were not significantly downregulated by PWRE pretreatment (Fig. 8). Similar to the results presented in Fig. 7, the activation of NF-κB p65 and IκBα was significantly downregulated by ≥2.5 µg/ml PWRE pretreatment (P<0.05; Fig. 9).
Discussion
Previously, studies have demonstrated that PWRE exerts anti-inflammatory effects via downregulation of inflammatory molecules, including IL-6 and IL-8, which are important parameters in chronic obstructive pulmonary disease (16,18,20). The present study extends the results of these previous publications by demonstrating the protective effects of PWRE in OVA-induced pulmonary inflammation.
The airway inflammatory response is well known as a major cause of allergic asthma and is driven by a variety of inflammatory cells and molecules. IL-4 has been reported to differentiate naive T cells into Th2 cells and to induce class switching to IgE production in B cells (32,33). IL-5 has an important role in the maturation and recruitment of eosinophils, and IL-13 is recognized as a dominant factor for IgE class switching, eosinophilic inflammation and mucus production (9). Eosinophil infiltration is well known as an indispensable indicator of airway inflammation, and the increase of eosinophil cationic proteins leads to airway hyper-responsiveness (34). A high level of macrophages is also a well-known characteristic of the allergic asthma murine model, and macrophage-derived MCP-1 is known as a potent eosinophil chemoattractant (5,30). Therefore, the regulation of eosinophil influx, Th2 cytokine secretion and IgE production are important therapeutic approaches in the treatment of asthma. OVA has been used as an allergen in asthma animal models; the utility of the OVA-induced asthma model has been well established, and this model has been widely used to evaluate anti-asthmatic effects and the immunological mechanisms involved in the pathogenesis of asthma (35). In this study, an allergic asthma mouse model was established, in which the levels of Th2 cytokines, IgE and mucus production were successfully upregulated by OVA compared with the NC control. In the present study, it was confirmed that PWRE administration attenuated the OVA-induced recruitment of eosinophils and macrophages. OVA-induced IL-4, IL-5, IL-13 and IgE were suppressed by the treatment with PWRE. In addition, the increased levels of MCP-1 were downregulated following PWRE treatment in both the in vivo and in vitro studies. Therefore, the results from the present study suggest that PWRE has a protective role against OVA-induced pulmonary inflammation.
Under normal circumstances, goblet cell-derived mucus exerts protective roles against harmful agents. However, the excessive production of mucus can easily obstruct breathing (36,37). Therefore, the regulation of mucus hypersecretion may be a valuable therapeutic strategy in alleviating airway obstruction. MUC5AC is a major oligomeric mucin in airway mucus, and its level is upregulated in patients with asthma (38). The inhibitory activities of PWRE on MUC5AC secretion in PMA-stimulated airway epithelial cells have already been confirmed (20). Therefore, a regulatory effect of PWRE on mucus overproduction was expected in the present study, and it was observed that PWRE ameliorated the OVA-induced mucus hypersecretion.
The MAPK and NF-κB signaling pathways are known as key mediators in allergic asthma, and are closely associated with the activation of various immune cells (39,40). Accumulating evidence emphasizes the importance of the inhibition of the MAPK pathway in airway inflammatory diseases such as asthma (9). Accordingly, the inhibitory effect of PWRE on MAPK activation was assessed in the present study. It was subsequently confirmed that OVA-induced MAPK activation was significantly decreased by PWRE treatment. In LPS-stimulated RAW264.7 macrophages, however, PWRE did not exert any inhibitory effects on MAPK activation. It is well established that the activation of IκB leads to airway inflammation by inducing NF-κB activation and the production of inflammatory molecules (41)(42)(43); therefore, the present study next investigated the ability of PWRE to inactivate NF-κB and IκB. Notably, PWRE exerted an inhibitory effect on OVA-induced NF-κB p65 and IκBα activation. Similarly, an inhibitory effect of PWRE on IκBα and NF-κB activation was observed in LPS-stimulated RAW264.7 macrophages. Therefore, the results from the present study suggest that the molecular mechanism underlying the protective effects of PWRE on pulmonary inflammation primarily involves the downregulation of NF-κB activation.
In the present study, PWRE inhibited the pulmonary inflammatory response by diminishing the recruitment of inflammatory cells and the concentrations of IL-4, IL-5, IL-13 and IgE. PWRE also downregulated the levels of MCP-1 and mucus production. Notably, the effects of PWRE were accompanied by MAPK and NF-κB inactivation. Abnormal weight changes and toxicological changes (such as intraperitoneal changes) were not observed after administration of PWRE. Therefore, the results from the present study suggest that PWRE may ameliorate airway inflammation and mucus hypersecretion in allergic asthma as a potential anti-inflammatory adjuvant. However, an accurate count of the inflammatory cells using flow cytometry was not performed. The levels of T-cell activation and eotaxin production in the pathogenesis of OVA-induced pulmonary inflammation have also not been investigated. It is also necessary to confirm the inhibitory effect of PWRE on MCP-1 in alveolar macrophages. These limitations should be addressed in the near future. In addition, the present study provides only limited evidence on the efficacy of PWRE in OVA-induced pulmonary inflammation; therefore, clinical trials should be performed to elucidate this efficacy. | 2019-10-10T09:16:39.915Z | 2019-10-07T00:00:00.000 | {
"year": 2019,
"sha1": "ad1338944bd02511cae7c3019935c82b4a81359a",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ijmm.2019.4367/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "c3ab7bb671ec4ceb1ce4f30b0f4698d6a84a4968",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
138807065 | pes2o/s2orc | v3-fos-license | Uses of X-ray 3D-Computed-Tomography to Monitor the Development of Garlic Shooting Inside the Intact Cloves
(Received September 1, 2015; Accepted November 9, 2015)

X-ray high resolution three-dimensional computed tomography (XHR3DCT) is a non-invasive technique to monitor the inner morphology of an object. It permits one to obtain a series of horizontal stacks of the structure, which allows a 3D reconstruction of images by computer post-processing analysis. This technology is commonly used for medical analysis on humans, rarely on animals, and its utilization in the plant field has only recently been discussed. As we are engaged in investigating the possibility of using XHR3DCT for monitoring the storage quality and/or post-harvest development of fresh produce such as vegetables, here we report on a minimal demonstration performed on garlic bulbs. In particular, immediately after harvest from the soil, cloves of garlic bulbs were maintained under conditions that differed in temperature and humidity, with and without irradiation by red (660 nm) or infra-red (735 nm) light. At an intermediate time, some cloves were non-invasively monitored by XHR3DCT to predict the changes in the size (volume) of the growing inner shoots (sprouts). To determine the sprout volume based on the XHR3DCT-scanned images, several mathematical approaches were tested. With approximation of the garlic sprout shape as a parabolic cone, estimation of shoot volume could be readily achieved. By analyzing the inner shoot size in garlic cloves kept under different conditions, increases in shoot size under red light or under higher temperature and relative humidity could be monitored non-invasively, suggesting that XHR3DCT can be used to monitor the inner structure of a garlic clove without damaging the sample. Future applications of this technique during post-harvest management of a wide range of fresh produce are expected.

Keywords: garlic, plant morphology, sprout volume, X-ray computed tomography

Plant Eco-physiology Section: Short Communication. Environ. Control Biol., 54 (1), 39-44, 2016. DOI: 10.2525
INTRODUCTION
The X-ray scan technology, initially born as 2D computed tomography and lately evolved into X-ray high resolution 3D computed tomography (XHR3DCT), has attracted the attention of researchers over the last three decades. This noninvasive technique has many applications, including imaging the structure of biological samples. Since its invention in 1973 by Godfrey Hounsfield, this technique has been applied mainly to medical analysis and rarely to animals. Recently, this technique has found applications even in the field of botany, such as the detection and visualization of the root system hidden below ground, as the growth of roots is central to the eco-physiology of terrestrial plants, including agriculturally important crops and forest trees (Pires et al., 2010). However, XHR3DCT can be used for this purpose only if the plants are grown in laboratory-scale small pots. Due to the size of the equipment and other restrictions on the use of X-rays in open space, X-ray scanning of roots cannot be readily employed on-site in most plant-production facilities. For the purpose of belowground plant tissue sensing, we have recently employed acoustic probes by developing a novel non-invasive sensing technology for detection of belowground plant tissues based on sound propagation in the soil (Iwase et al., 2015a; 2015b).
Apart from root visualization in pots, the utilization of XHR3DCT had been limited until its potential use for analyzing inner plant tissue structure emerged (Stuppy et al., 2003). Recently, the applications and capabilities of XHR3DCT have moved forward with the improvement of the available technologies (Mooney et al., 2012). The time required for the entire analysis and image processing has been largely shortened, while the resolution of the data acquisition and the ability to precisely scan large-sized objects have been enhanced year by year. Furthermore, recent efforts in manufacturing short-sized devices may allow application of XHR3DCT to fresh agricultural products on-site (Dhondt et al., 2010). Past studies demonstrated the ability of X-rays to perform real-time monitoring of growing plant tissue (Brodersen et al., 2010; Ferreira et al., 2010), and although the ionizing effect of the X-rays should be taken into account, sporadic examinations can be a perfect approach to monitor the quality and the development of living plant material, especially in the case of monitoring a limited number of samples to be used non-destructively in long-term experiments. However, even today, XHR3DCT remains strongly underused in plant sciences despite its high potential in delivering detailed 3D phenotypical information, because of the low X-ray absorption of most plant tissues (Staedler et al., 2013).
In the present paper, a case study in which XHR3DCT was applied to monitor garlic sprouts is presented. The aim of the work was to investigate long-term garlic storage. In fact, due to the many problems related to sprout growth, the majority of growers today sell their garlic as a fresh harvest at markets. Only a limited portion of the total garlic crop is commercially available over the winter, owing to the difficulty of maintaining fresh, unfrozen garlic cloves without development of sprouts during storage without the use of chemicals, despite indications in public guidelines suggesting that garlic can be kept for 6-7 months if it is stored at 0°C at 65-70% relative humidity (RH) (Bachmann et al., 2008). In this study, we tested the utilization of XHR3DCT for non-invasive monitoring of the growth and development of garlic sprouts within the cloves, under different model storage conditions. Furthermore, a mathematically assisted estimation of the size (volume) of the growing inner shoots (sprouts) was performed, in order to evaluate whether XHR3DCT can be a promising approach permitting the analysis of hidden growth stages without interfering with the physiological state of living samples.
MATERIALS AND METHODS
Garlic was obtained fresh, immediately after harvest, from growers in Mizumaki town, Fukuoka prefecture, Japan. For the analysis of garlic storage, several intact garlic cloves covered by the outer tunic layer were placed under different conditions. A cold incubator maintaining the samples at ca. 10-13°C was used for low-temperature storage of the garlic cloves. One group was subjected to storage in the cold incubator at a temperature of 13.09 ± 0.5°C with a RH of 83 ± 11.5%, another group at 10 ± 1.15°C with a RH of 21 ± 11%. The whole cloves were scanned after a period of 130-150 days for each treatment. The scanning procedure was completed in one day during the storage by transporting the garlic samples in an insulated bag to minimize the shock due to temperature changes.
Some sub-groups of cloves were maintained under completely dark conditions, whilst others were irradiated at a photon flux density of 25 µmol m⁻² s⁻¹ under different light-emitting diodes (LEDs), namely, a red LED (peak emission at 660 nm) and an infra-red LED (peak emission at 735 nm). At the end of storage, the garlic bulbs were sliced in half by knife and analyzed.
The X-ray scanning device (Fig. 1) used for the experiment was an HMX225-ACTIS+3 (X-Tek Systems Ltd., Tring, Hertfordshire, United Kingdom) available at the Fukuoka Industrial Technology Center (Kitakyushu, Japan). The scanning conditions were as follows: source voltage 100 kV, source current 80 µA, and scan width 0.2 mm. Images were post-processed with the Actis Multi Planar Reconstruction software. Measurements of morphological parameters on the images were performed using the software ImageJ.
Shoot volume estimation after XHR3DCT
For the estimation of the garlic sprout volume, several simplified mathematical approaches (Table 1) have been tested. The easiest way to calculate the garlic sprout volume is to assume the shape of the shoot to be a cone with an oval (occasionally circular) base, whose volume can be calculated with the following formula: 1) Oval cone model: Volume = πdDh/12, where d is the minor diameter, D is the major diameter and h is the height of the cone. A second calculation was made by analogy with the methodology usually used to calculate the volume of a tree trunk (Leverett et al., 2008). The procedure requires sectioning the garlic sprout shape into a series of pieces with horizontal cuts (Fig. 2A), and the total volume was calculated as a sum of several conical frustums (Fig. 2B). Starting from the top, the base of each cut is the top of the successive part, and the height is the distance between the lower and the upper section. The base section of each block of the garlic sprout shape was neither circular, oval nor elliptical, so the section base areas were calculated by manually measuring the size of the section in each image from the horizontal scan stacks obtained for the multi-planar reconstruction. The total volume calculation using the cut-surface areas of each section (Fig. 2, right) was performed using the following formula: 2) Section area-based cone model: Volume = h[a1 + a2 + (a1a2)^(1/2)]/3, where a1 is the area of the bottom section and a2 is the area of the upper section. This calculation can be further simplified by measuring the minor and major diameters of each base instead of measuring the total area. Since this approach lacks the data for the upper portion of the shoot, we have to calculate the volume of the conical frustum portions using the formula: 3.1) Conical frustum compartment model: Volume of a single conical frustum = πh[d1D1 + d2D2 + (d1D1d2D2)^(1/2)]/12, where d1 is the minor diameter and D1 the major diameter of the base at the bottom, and d2 and D2 the diameters of the upper base section. The total volume is then calculated as the sum of all conical frustums plus the top part treated as an oval cone (because it has no upper base): 3.2) Sum of compartments: Total Volume = Σ(i=1..n) Volume of conical frustum i + Volume of the top part (= πdDh/12), where n is the number of conical frustums assumed. These calculations can be considered accurate if the number of sections is large enough, but they are not really suited for the analysis of multiple samples, because they require time-consuming steps in which all the morphological parameters have to be manually measured. For this reason, these calculations have been compared with the simplified assumption that the volume of the garlic sprout can be considered as the volume of a parabolic or an elliptical cone. These calculations are relatively simple, requiring only the determination of the diameters at the base and the height of the whole sprout. The calculations of the volume under the parabolic and elliptical assumptions have been performed using the parabolic cone model: Volume = πdDh/8, and the elliptical cone model: Volume = πdDh/6.
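For concreteness, the formulae above can be implemented in a few lines of Python. The sketch below is our illustration of formulas (1), (3.1), (3.2), (4) and (5); the example measurement at the end is invented for demonstration and is not data from this study.

```python
import math

def oval_cone(d, D, h):
    # Model 1: cone with an oval base, V = pi*d*D*h/12
    return math.pi * d * D * h / 12

def frustum(d1, D1, d2, D2, h):
    # Model 3.1: single conical frustum with oval bases
    return math.pi * h * (d1*D1 + d2*D2 + math.sqrt(d1*D1*d2*D2)) / 12

def frustum_sum(diams, heights):
    # Model 3.2: sum of conical frustums plus an oval-cone top.
    # diams: (minor, major) diameter pairs from bottom to top;
    # heights: distances between successive sections (len(diams)-1 values),
    # plus one final value for the height of the top cone.
    v = sum(frustum(*diams[i], *diams[i+1], heights[i])
            for i in range(len(diams) - 1))
    d_top, D_top = diams[-1]
    return v + oval_cone(d_top, D_top, heights[-1])

def parabolic_cone(d, D, h):
    # Model 4: paraboloid with oval base, V = pi*d*D*h/8
    return math.pi * d * D * h / 8

def elliptical_cone(d, D, h):
    # Model 5: half-ellipsoid ("elliptical cone"), V = pi*d*D*h/6
    return math.pi * d * D * h / 6

# Illustrative measurements (mm), not from the paper:
print(round(parabolic_cone(4.0, 5.0, 20.0), 1))  # ~157.1 mm^3
```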
RESULTS AND DISCUSSION
XHR3DCT is a non-invasive tool to determine the structure of an object, exploiting the behavior of an X-ray beam as it passes through a material and reaches a detector. In particular, it is possible to monitor the bulk compactness of the sample through its intrinsic ability to attenuate the X-ray beam in proportion to that compactness. For this reason, it is possible to obtain a 2D image of the section cut by the X-ray beam, with different gray tonalities proportional to the density, derived from the interactions between the beam and the object. This operation is repeated over the whole height of the object to obtain a stack of images of all the cross-sections, which are finally combined to form the 3D representation.
Here, the quality of the images permitted a reconstruction and the calculation of all the standard parameters (e.g. perimeter, area, etc.) for the determination of clove morphology. As shown in Fig. 3, XHR3DCT resulted in a stack of images that can be analyzed individually or processed through a multi-planar reconstruction (MPR) to obtain all the views of the sprout. After the MPR, it is possible to manually modulate and analyze the cutting plane in the horizontal, vertical and lateral views, eventually obtaining any desired oriented section, as shown in Fig. 4.

Table 1. List of formulae used for estimation of shoot volume based on the size data from XHR3DCT images of intact garlic.
1) Oval cone model: V = πdDh/12, where d is the minor diameter, D the major diameter and h the height of the cone.
2) Section area-based cone model: V = h[a1 + a2 + (a1a2)^(1/2)]/3, where a1 is the area of the bottom section and a2 the area of the upper section.
3.1) Compartment model, conical frustum: Vcf = πh[d1D1 + d2D2 + (d1D1d2D2)^(1/2)]/12, where d1 and D1 are the minor and major diameters of the bottom base, and d2 and D2 those of the upper base.
3.2) Compartment model, sum of compartments: V = Σ(i=1..n) Vcf,i + Vtop, where n is the number of conical frusta assumed and the top part is treated as an oval cone.
4) Parabolic cone model: Vs = πdDh/8.
5) Elliptical cone model: Vs = πdDh/6.

The quality of the obtained scans was high enough for structural analysis defining the size of the shoot tissue. For example, it is possible to observe and identify the shape of single leaves being developed, with a good balance between luminosity and contrast. In Fig. 5, two examples in which the number and shape of the leaves are clearly detected are shown and compared with conventional anatomical sections. All the methodologies tested here for the estimation of the garlic shoot volume have been compared in order to select the approach best suited to this work. It is obvious that the methodology summing conical frustums described in formula (2) is the most accurate if the number of sections is large enough; it is equally obvious that such an approach is time-consuming and requires higher resolution, obtained by spending longer scanning time with finer spatial intervals, and is thus no longer non-invasive. Including the simplified parabolic cone model and the elliptical cone model, all the methodologies have been compared. The differences among the calculations have been expressed as ratios with respect to formula (2), which is taken as 100% of the volume (Table 2).
The comparison of the different approaches for evaluation of the shoot volume in the garlic cloves showed that the one closest to the calculation by formula (2) was the approach using formula (3.2), which gave 92% of the volume. The elliptical cone calculation (5) was shown to be less accurate, with a tendency to overestimate by 50%, whilst the simple oval cone model showed a tendency to underestimate the volume by 25%. The parabolic cone calculation (4), which is a simple and easy method to estimate the total volume of the sprout, resulted in a good approximation, at about 112% of the calculation by formula (2). For this reason, further calculations of the shoot volume were performed with this model, which is much more rapid than the use of formula (3.2).
Impact of light
The dark treatment under cold conditions resulted in lower size and weight of the garlic sprouts inside the cloves. Samples harvested at the end of 210 d of storage under red and infra-red light treatments showed greater fresh weight compared with the dark control, thus reflecting light-dependent changes in volume (Fig. 6A). Compared with the 735 nm infra-red light treatment, the 660 nm red light treatment resulted in a much greater enhancement in the size of the sprouts (Fig. 6A). By XHR3DCT-based shoot volume calculation using formula (4), performed non-destructively during storage (130-150 days of storage), the increment in fresh weight due to the light treatments (especially red light) could be well predicted (Fig. 6B). As predicted by the XHR3DCT-based shoot volume estimation prior to harvest, enhanced shooting under red light was anatomically confirmed by harvesting the samples at the end of storage (Fig. 7).

CONCLUSION

XHR3DCT successfully detected the inner structure of garlic cloves and permitted us to estimate the shooting status under different conditions. Among the models we examined, the simple mathematical model based on approximation of the garlic sprout shape as a parabolic cone was shown to be the easiest and most reliable approach for estimation of garlic shoot volume.
Cold storage under high RH caused an increase in sprout growth and development in all treatments. In both light treatments, the growth of the sprouts was enhanced with respect to the dark control. In the absence of light, garlic development both at 13.09 ± 0.5°C with a RH of 83 ± 11.5% and at 10 ± 1.15°C with a RH of 21 ± 11% was shown to be minimal. The XHR3DCT-based prediction of shoot volume showed a trend similar to the actual increase in fresh weight measured on fresh garlic. Therefore, we can conclude that it is possible to predict the development of the sprout hidden inside the cloves without destroying the sample. | 2019-04-29T13:09:12.471Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "0e41621202c413f7564c29b88184dd7446ebdb9b",
"oa_license": null,
"oa_url": "https://www.jstage.jst.go.jp/article/ecb/54/1/54_39/_pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "1bdd338c1232de5909439f9106761e4899a6d53f",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
248572119 | pes2o/s2orc | v3-fos-license | Ground states of atomic Fermi gases in a two-dimensional optical lattice with and without population imbalance
We study the ground state phase diagram of population balanced and imbalanced ultracold atomic Fermi gases with a short range attractive interaction throughout the crossover from BCS to Bose-Einstein condensation (BEC), in a two-dimensional optical lattice (2DOL) comprised of two lattice and one continuum dimensions. We find that the mixing of lattice and continuum dimensions, together with population imbalance, has an extraordinary effect on pairing and the superfluidity of atomic Fermi gases. In the balanced case, the superfluid ground state prevails over the majority of the phase space. However, for relatively small lattice hopping integral $t$ and large lattice constant $d$, a pair density wave (PDW) emerges unexpectedly at intermediate coupling strength, and the nature of the in-plane and overall pairing changes from particle-like to hole-like in the BCS and unitary regimes, associated with an abnormal increase in the Fermi volume with the pairing strength. In the imbalanced case, the stable polarized superfluid phase shrinks to only a small portion of the entire phase space spanned by $t$, $d$, imbalance $p$ and interaction strength $U$, mainly in the bosonic regime of low $p$, moderately strong pairing, and relatively large $t$ and small $d$. Due to the Pauli exclusion between paired and excessive fermions within the confined momentum space, a PDW phase emerges and the overall pairing evolves from particle-like into hole-like, as the pairing strength grows stronger in the BEC regime. In both cases, the ground state property is largely governed by the Fermi surface topology. These findings are very different from the cases of pure 3D continuum, 3D lattice or 1DOL.
I. INTRODUCTION
Ultracold Fermi gases provide an ideal platform for investigating pairing and superfluid physics, primarily owing to the high tunability of multiple parameters [1,2]. Using a Feshbach resonance [3], one can tune the effective pairing strength from the weak coupling BCS limit all the way through to the strong pairing Bose-Einstein condensation (BEC) limit. There have been a great number of experimental and theoretical studies on ultracold Fermi gases in recent years, with many tunable parameters which have been made accessible experimentally, including the pairing interaction strength [1], population imbalance [4][5][6][7][8][9][10][11][12], and dimensionality [13][14][15]. In particular, ultracold Fermi gases in an optical lattice exhibit rich physics due to the tunable geometry [16][17][18]. As is well known, population imbalance suppresses or destroys superfluidity in three-dimensional (3D) homogeneous systems [9,19]. For example, superfluidity at zero temperature is completely destroyed at unitarity and in the BCS regime, whereas a stable polarized superfluid (pSF) with a finite imbalance p exists only in the BEC regime [19]. Meanwhile, in the absence of population imbalance in a 3D lattice, one finds the superfluid transition temperature T_c ∝ −t²/U in the BEC regime, due to virtual pair unbinding in the pair hopping process [20,21], which makes it hard to reach the superfluid phase in the BEC regime. (Here t is the lattice hopping integral, and U < 0 is the onsite attractive interaction.) While the superfluid transition for both population balanced and imbalanced Fermi gases has been realized experimentally in the 3D continuum case (often in a trap), it has not been realized even for the balanced case in 3D lattices. However, superfluidity, long-range or Berezinskii-Kosterlitz-Thouless (BKT)-like [22], as well as pairing phenomena, have been explored experimentally in 2D and 1D optical lattices [23][24][25][26][27][28] or quasi-2D traps [29][30][31][32][33]. Common to these experiments is the presence of one or two continuum dimensions. Until further breakthroughs are made in cooling techniques, the presence of continuum dimensions seems to be crucial for the superfluid phase to be experimentally accessible in low dimensional optical lattices (and quasi-2D traps), besides the 3D continuum. We note, however, that these optical lattice experiments have mostly been restricted to the small t limit, such that the coupling between different pancakes (2D planes) or cigar-shaped tubes (1D lines) is negligible. Therefore, a systematic investigation of the vast unexplored parameter space of low dimensional optical lattices is important in order to uncover possible exotic and interesting new quantum phenomena.
In the presence of population imbalance, an open Fermi surface of Fermi gases in a one-dimensional optical lattice (1DOL), caused by large d and/or small t, often leads to destruction of the superfluid ground state in the BEC regime [34]. Our recent study on pairing and superfluidity of atomic Fermi gases in a two-dimensional optical lattice (2DOL), which is comprised of two lattice and one continuum dimensions, reveals that for relatively large d and small t, a pair density wave (PDW) ground state emerges in the regime of intermediate pairing strength, and the nature of the in-plane and overall pairing changes from particle-like to hole-like in the unitary and BCS regimes, with an unexpected nonmonotonic dependence of the chemical potential on the pairing strength [35].
In this paper, we focus on the ground state superfluid behavior of atomic Fermi gases in 2DOL, under the effects of lattice-continuum mixing, population imbalance and its interplay with the lattice parameters. We first investigate the evolution of the Fermi surface as a function of hopping integral t and lattice constant d, and then calculate the zero T superfluid phase diagram using the BCS-Leggett mean-field equations [36], but supplemented with various stability conditions, including those derived from finite-temperature formalism [9]. We explore the superfluid phase diagrams in various phase planes, as a function of lattice constant, hopping integral and interaction strength for population balanced cases and also of polarization for population imbalanced cases.
We find that in the population balanced case, while the phase diagram at zero T is dominated by the superfluid phase, a PDW ground state may emerge at intermediate pairing strength, for relatively small t and large d, and the nature of the in-plane and overall pairing changes from particle-like to hole-like in the BCS and unitary regimes. This is associated with an open Fermi surface, where the effective number density in the lattice dimensions can go above half filling. The PDW state originates from strong inter-pair repulsive interactions and relatively large pair size at intermediate pairing strength, which is also found in dipolar Fermi gases within the pairing fluctuation theory [37].
In the population imbalanced case, due to the constraint of various stability conditions, stable superfluid ground states are found to exist only in a small portion of the multi-dimensional phase space, spanned by the parameters t, d, p and U , mainly in the low p and bosonic regime of intermediate pairing strength, and for relatively large t and small d. As the pairing interaction becomes stronger in the BEC regime, the nature of the overall pairing of a polarized Fermi gas in 2DOL evolves from particle-like into hole-like. As manifested in the momentum distribution of the paired fermions and excessive majority fermions, there is a strong Pauli exclusion between them for small t and large d. Therefore, decreasing t and increasing d and p help to extend the hole-like pairing regime toward weaker coupling. These results are very different from their counterpart in pure 3D continuum, 3D lattices and 1DOL.
We mention that the values of t and d for which one finds hole-like pairing in the weaker coupling regime in the balanced case and in the stronger coupling regime in the imbalanced case do not overlap. This can be understood as the balanced case and the p → 0 + case are not continuously connected at T = 0.
A. General theory
Here we consider a two-component ultracold Fermi gas with a short-range pairing interaction, V_{k,k'} = U < 0, in 2DOL. The dispersion of noninteracting atoms without population imbalance is given by

ξ_k = k_z²/2m + 2t[2 − cos(k_x d) − cos(k_y d)] − µ,

where k_z is the momentum in the z direction in the continuum dimension, k_x and k_y are the momenta in the lattice plane, t and d are the hopping integral and lattice constant in the xy plane, respectively, and µ is the chemical potential. Following our recent works [15,34,38,39], we take t to be physically accessible, under the constraint 2mtd² < 1, in our calculation. The critical coupling strength U_c for forming a two-body bound state of zero binding energy is given by 1/U_c = −Σ_k 1/(2ε_k), where ε_k = ξ_k + µ. Here and throughout, we take natural units and set ℏ = k_B = 1.
At zero temperature, the mean-field BCS-Leggett ground state follows the gap and number equations [36]

1 + U Σ_k 1/(2E_k) = 0,    (1)
n = Σ_k (1 − ξ_k/E_k),    (2)

where E_k = √(ξ_k² + Δ²) is the Bogoliubov quasiparticle dispersion, with an energy gap Δ.
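As an aside, Eqs. (1)-(2) can be solved numerically in a straightforward way. The following Python sketch (our illustration, not the authors' code) discretizes the momentum sum over the first Brillouin zone and the continuum direction, in units ℏ = 2m = k_F = E_F = 1; the grid sizes, cutoff and the chosen U/U_c are illustrative.

```python
# Minimal sketch of solving the T = 0 gap and number equations (1)-(2)
# for (mu, Delta) in the 2DOL geometry. Units: hbar = 2m = kF = EF = 1.
import numpy as np
from scipy.optimize import fsolve

def make_grid(d, nxy=48, nz=300, kz_max=30.0):
    kxy = np.linspace(-np.pi/d, np.pi/d, nxy, endpoint=False)
    dkz = kz_max / nz
    kz = (np.arange(nz) + 0.5) * dkz          # midpoint rule, avoids kz = 0
    KX, KY, KZ = np.meshgrid(kxy, kxy, kz, indexing="ij")
    # measure dkx dky dkz/(2pi)^3, with a factor 2 for kz -> -kz symmetry
    w = (kxy[1] - kxy[0])**2 * 2 * dkz / (2*np.pi)**3
    return KX, KY, KZ, w

def eps(KX, KY, KZ, t, d):
    # continuum along z plus a tight-binding band in the lattice plane
    return KZ**2 + 2*t*(2 - np.cos(KX*d) - np.cos(KY*d))

def Uc(t, d, grid):
    # two-body bound-state threshold: 1/Uc = -sum_k 1/(2 eps_k)
    KX, KY, KZ, w = grid
    return -1.0 / np.sum(w / (2*eps(KX, KY, KZ, t, d)))

def equations(vars, n, t, d, U, grid):
    mu, Delta = vars
    KX, KY, KZ, w = grid
    xi = eps(KX, KY, KZ, t, d) - mu
    Ek = np.sqrt(xi**2 + Delta**2)
    gap = 1.0 + U*np.sum(w/(2*Ek))             # Eq. (1), U < 0
    num = np.sum(w*(1.0 - xi/Ek)) - n          # Eq. (2)
    return [gap, num]

n = 1.0/(3*np.pi**2)                           # density for kF = 1
t, d = 0.05, 3.0                               # illustrative parameters
grid = make_grid(d)
U = 2.0*Uc(t, d, grid)                         # moderately strong pairing
mu, Delta = fsolve(equations, x0=[0.3, 0.5], args=(n, t, d, U, grid))
print(f"U/Uc = 2: mu/EF = {mu:.3f}, Delta/EF = {Delta:.3f}")
```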
To make sure the mean-field solution is stable, we impose the requirement that the dispersion of the Cooper pairs be nonnegative, both in the lattice plane and along the z direction. To this end, we extract the inverse pair mass (tensor) using the fluctuating pair propagator, as given in the pairing fluctuation theory which was previously developed for the pseudogap physics in the cuprates [40] and extended to address the BCS-BEC crossover in ultracold atomic Fermi gases [1]. In particular, we mention that, compared with rival T-matrix approximations for the pairing physics, the pair dispersion as extracted from this theory is gapless below T_c, fully compatible with the mean-field gap equation. Here the pairing T matrix is given by t_pg(Q) = U/[1 + Uχ(Q)], with the pair susceptibility χ(Q) = Σ_K G_0(Q − K)G(K), the bare Green's function G_0(K) = (ω − ξ_k)⁻¹, and the full Green's function G(K) = u_k²/(ω − E_k) + v_k²/(ω + E_k), where u_k² = (1 + ξ_k/E_k)/2 and v_k² = (1 − ξ_k/E_k)/2 are the BCS coherence factors, and K ≡ (ω, k), Q ≡ (Ω, q) are four-momenta.
The inverse T-matrix t_pg⁻¹(Q) can be expanded for small Q as

t_pg⁻¹(Q) ≈ a_1Ω² + a_0(Ω − Ω_q + µ_p), with Ω_q = Bq_z² + 2t_B[2 − cos(q_x d) − cos(q_y d)],

and µ_p = 0 in the superfluid phase. Then we extract B = 1/2M, with M being the effective pair mass in the z direction, and t_B the effective pair hopping integral in the xy plane. The sign of a_0 determines whether the fermion pairs are particle-like or hole-like, with positive a_0 for particle-like pairing and negative a_0 for hole-like pairing. For example, in a 3D lattice, in general one finds a_0 > 0 for fermion density below half filling, a_0 = 0 at half filling due to particle-hole symmetry, and a_0 < 0 above half filling. The sign of a_0 is controlled by the average of the inverse band mass [41]. While one could perform a particle-hole transformation for a pure lattice case, it does not seem to be feasible in our case, since both lattice and continuum dimensions are present. The expressions for the coefficients a_1, a_0, B and t_B can be readily derived during the Taylor expansion. In this way, using the solution for (µ, Δ) from Eqs. (1)-(2), we can extract the pair dispersion

Ω̃_q = [√(a_0² + 4a_1a_0Ω_q) − a_0]/(2a_1).

The non-negativeness of the pair dispersion implies that the pairing correlation lengths (squared) ξ² = a_0B and ξ²_xy = a_0t_Bd² must be positive. For the population imbalanced case, the spin polarization is defined via p = (n_↑ − n_↓)/(n_↑ + n_↓), where the spin index σ = ↑, ↓ refers to the majority and minority components, respectively. Then the dispersion of noninteracting atoms is modified as ξ_kσ = ε_k − µ_σ, with µ_σ the chemical potential for spin σ.
Now the bare and full Green's functions are given by

G_0σ(K) = (ω − ξ_kσ)⁻¹,
G_σ(K) = u_k²/(ω − E_kσ) + v_k²/(ω + E_kσ̄),

respectively, where σ̄ is the opposite spin of σ, E_k↑,↓ = E_k ∓ h, with E_k = √(ξ_k² + Δ²), ξ_k = (ξ_k↑ + ξ_k↓)/2 = ε_k − µ, µ = (µ_↑ + µ_↓)/2, and h = (µ_↑ − µ_↓)/2. Thus E_k↑ becomes gapless, as it should, in order to accommodate the excessive majority fermions [see Eq. (5) below]. These gapless fermions will contribute in both the gap and number equations. Following the BCS self-consistency condition and the number constraint, we arrive at the gap and number equations at zero T in the presence of population imbalance:

1 + U Σ_k [1 − Θ(−E_k↑) − Θ(−E_k↓)]/(2E_k) = 0,    (3)
n = Σ_k {1 − (ξ_k/E_k)[1 − Θ(−E_k↑) − Θ(−E_k↓)]},    (4)
δn = Σ_k [Θ(−E_k↑) − Θ(−E_k↓)],    (5)

where Θ(x) is the Heaviside step function, and n = n_↑ + n_↓ and δn = n_↑ − n_↓ = pn are the total density and the density difference, respectively. In the imbalanced case, the pair susceptibility is modified as χ(Q) = Σ_{K,σ} G_0σ(Q − K)G_σ̄(K)/2, which is consistent with the BCS self-consistency condition, so that the pair dispersion remains gapless at q = 0. Then we follow the same procedure as in the balanced case, and extract the inverse pair mass tensor along with the coefficients a_0 and a_1 via the Taylor expansion of the inverse T matrix, t_pg⁻¹(Q). Equations (3)-(5) form a closed set of self-consistent equations, and can be used to solve for (µ, h, Δ) as a function of (U, t, d, p), which is then further constrained by various stability conditions.

Figure 1. Qualitative behavior of the pair dispersion Ω̃_q for different signs of a_0 and ξ². For illustration purposes, a simple isotropic quadratic Ω_q = ξ²q²/a_0 is used. The three columns are for a_0 > 0, a_0 = 0 and a_0 < 0 from left to right, and the top and bottom rows are ξ² > 0 and ξ² < 0, respectively. The black solid curves in the top row represent propagating modes.
B. Stability analysis
As shown in the 3D continuum and 1DOL cases, in the presence of population imbalance, not all solutions of Eqs. (3)-(5) correspond to a stable superfluid ground state.
Following the stability analysis of Refs. [9,19], the stability condition for the superfluid phase requires that, for fixed µ and h, the solution for the excitation gap Δ be a minimum of the thermodynamic potential Ω_S, which has been demonstrated to be equivalent to the positive definiteness of the generalized compressibility matrix [9,43]. Thus we have

∂²Ω_S/∂Δ² = Σ_k (Δ²/E_k²)[Θ(E_k↑)/E_k − δ(E_k↑)] > 0,

where δ(x) is the delta function. In addition, the positivity of the pair dispersion in the entire momentum space imposes another strong stability condition. Illustrated in Fig. 1 are the qualitative behaviors of the pair dispersion for different signs of a_0 and ξ². For illustration purposes, a simple isotropic quadratic dispersion is assumed. In general, there are two branches of the dispersion, from the inverse T-matrix expansion up to the Ω² order. The positive branch represents a propagating mode, while the negative branch represents a hole-like mode which contributes to quantum fluctuations. The case of a_0 > 0 and ξ² > 0 (Fig. 1(a)) corresponds to particle-like pairing, with a monotonically increasing energy and a positive effective pair mass, B > 0 and t_B > 0, so that q = 0 is the bottom of the pair energy. For the a_0 < 0 case (Fig. 1(c)), this dispersion flips upside down into the blue-dashed hole mode. This corresponds to hole-like pairing, for which q = 0 becomes a local maximum, with B < 0 and t_B < 0, similar to the hole band in a semiconductor. In the case of a pure lattice, one could flip the sign of a_0 via a particle-hole transformation so that this blue-dashed line is flipped back to become positive, as the dispersion for hole pairs. However, for our present case, due to the presence of the continuum dimension, there is no easy way to do a particle-hole transformation, so that we have to stay with the (black solid) gapped positive branch, which is a flip of the hole branch in Fig. 1(a), as the dispersion of particle-like Cooper pairs. When a_0 = 0, the two branches become symmetric, without a gap. For all three cases, the coefficients of the q² terms in the inverse T-matrix expansion, ξ² and ξ²_xy, must be positive. (Note that a_1 is always positive.) Indeed, as shown in Figs. 1(d-f), for a negative ξ², the dispersion Ω̃_q of both particle-like (Fig. 1(d)) and hole-like (Fig. 1(f)) pairs quickly becomes diffusive and thus ceases to exist, unless higher order terms, e.g., the q⁴ terms, are included. In that case, the pair dispersion will reach a minimum at a nonzero q. Our numerics shows that in 2DOL, ξ² in the continuum dimension remains positive in general, but ξ²_xy ∝ a_0t_B in the lattice plane may indeed change sign, so that ξ²_xy > 0 constitutes another stability requirement for the superfluid phase.
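To summarize the bookkeeping of these conditions, the short helper below (our illustration; the coefficients a_0, a_1, B and t_B fed to it are assumed to have been extracted from the Taylor expansion of t_pg⁻¹(Q)) evaluates the pair dispersion Ω̃_q and flags the pairing character, the sign conditions ξ², ξ²_xy > 0, and the PDW instability.

```python
import numpy as np

def pair_dispersion(qx, qy, qz, a0, a1, B, tB, d):
    """Omega-tilde_q from the small-Q expansion of the inverse T matrix."""
    Omega_q = B*qz**2 + 2*tB*(2 - np.cos(qx*d) - np.cos(qy*d))
    # root of a1*W^2 + a0*W - a0*Omega_q = 0 that vanishes at q = 0
    return (np.sqrt(a0**2 + 4*a1*a0*Omega_q) - a0) / (2*a1)

def classify(a0, a1, B, tB, d):
    particle_like = a0 > 0                      # a0 < 0: hole-like pairing
    stable = (a0*B > 0) and (a0*tB*d**2 > 0)    # xi^2 > 0 and xi_xy^2 > 0
    pdw = tB < 0     # in-plane minimum moves to q = (pi/d, pi/d)
    return particle_like, stable, pdw
```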
Finally, the superfluid density must also be positive definite in a stable superfluid. This, however, has been found to be a relatively weaker constraint in the cases of 3D continuum [9,19].
C. Superfluid density
As a representative transport property, the superfluid density is an important quantity in the superfluid phase. While it is always given by n/m at zero T for the balanced case in the 3D continuum, it takes the average of the inverse band mass in the presence of a lattice. Furthermore, in the presence of population imbalance, it may become negative [9,19,44], signaling an instability of the superfluid state. Here we shall also investigate the behavior of the anisotropic superfluid density (n_s/m), and pay close attention to the population imbalanced case and the situations where it becomes negative.
The expression for the superfluid density can be derived using the linear response theory. Following Refs. [9,19,40,44,45], we obtain for zero T

(n_s/m)_i = 2Σ_k (∂ξ_k/∂k_i)² (Δ²/E_k²)[Θ(E_k↑)/E_k − δ(E_k↑)],

where i = x, y and z for the lattice and the continuum directions, respectively.
III. NUMERICAL RESULTS AND DISCUSSIONS
Due to the multiple tunable parameters of the present 2DOL, the complete multidimensional phase diagram can be extremely complex. Therefore, we shall focus on the lattice effect for the p = 0 case, together with the population imbalance for the p ≠ 0 case, to give several representative and informative phase diagrams. For our numerics, it is convenient to define the Fermi momentum k_F = (3π²n)^(1/3) and Fermi energy E_F ≡ k_BT_F = ℏ²k_F²/2m as the units of momentum and energy, respectively, which also sets 2m = 1. Note, however, that this E_F is not equal to the chemical potential in the noninteracting limit.
A. Fermi surfaces in the noninteracting limit

The Fermi surface plays an important role in the superfluid and pairing behavior of atomic Fermi gases. For 2DOL, it is very different from the 3D continuum or 3D lattice case, as well as from 1DOL [15,34,38]. This will lead to different physics. Here we first present the shape and topology of the Fermi surface for a series of representative sets of lattice parameters (t, d). Shown in Fig. 2 is the typical evolution behavior of the Fermi surface, calculated self-consistently in the noninteracting limit at zero temperature. The top row shows the evolution with the lattice constant, for k_Fd = 1, 2, 3 and 4 at fixed hopping integral t/E_F = 0.05. The bottom row then shows the effect of the hopping integral, with t/E_F = 0.01, 0.04, 0.07, and 0.1 and fixed k_Fd = 3.
The lattice constant d provides a confinement in the momentum space; the larger d, the stronger the confinement. The top row in Fig. 2 suggests that the Fermi surface becomes thicker along the z direction as d increases for fixed t. Indeed, fermions feel a stronger confinement in the lattice dimensions with a shrinking first Brillouin zone (BZ), as k_Fd increases from 1 to 4, and thus need to occupy higher k_z states to keep the Fermi volume unchanged, so that the noninteracting fermionic chemical potential is pushed up. As a rough estimate, the maximum occupied k_z increases by a factor of 16 from left to right. For relatively small t/E_F = 0.05, the shape and topology of the Fermi surface evolve from a closed plate for k_Fd = 1 into one with only the top and bottom faces while completely open on the four sides at the BZ boundary of the lattice dimensions for k_Fd = 3 and 4. For the intermediate k_Fd = 2, the Fermi surface is open only at the center of the four side faces at the BZ boundary. At the same time, the effective filling factor in the lattice dimensions increases to nearly unity as k_Fd increases from 1 to 4. In this way, for large d, the fermion dispersion on the Fermi surface on average becomes hole-like in the lattice plane, while it always remains particle-like in the continuum dimension.
On the other hand, a smaller t makes the fermion energy less dispersive in the lattice dimensions, and thus the lattice band becomes narrower and more fully filled. In other words, fermions will tend not to go to higher k_z states until the BZ at lower k_z is fully occupied, leading to a flatter top and bottom of the Fermi surface. This will also pull down the noninteracting fermionic chemical potential. As shown in the bottom row of Fig. 2, the Fermi surface becomes thinner and flatter in the z direction as t/E_F decreases from 0.1 to 0.01 for fixed k_Fd = 3. In contrast, the t/E_F = 0.07 and 0.1 cases have a much more dispersive Fermi surface as a function of the in-plane momentum (k_x, k_y). Fermions at high (k_x, k_y) states are removed for the relatively large hopping integrals t/E_F = 0.07 and 0.1.
The evolution of the Fermi surface reveals that the in-plane fermion motion on the Fermi surface becomes hole-like for relatively small t and large d. As a result, the nature of the in-plane and overall pairing in this case will also change from particle-like to hole-like when the contributions from lattice dimensions are dominant in the BCS and unitary regimes [35].
It should be mentioned that in the strong pairing regime, the detailed shape of the Fermi surface is no longer relevant, as pairing extends essentially to the entire momentum space. However, the confinement in the momentum space imposed by the lattice periodicity is always present and will govern the physical behavior in the BEC regime.
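As a concrete check of the openness criterion discussed in this subsection, the sketch below (our own illustration, not from the paper) computes the noninteracting T = 0 chemical potential for a given density and then tests whether states at the in-plane BZ boundary points X = (π/d, 0) and M = (π/d, π/d) are occupied at k_z = 0, i.e., whether the Fermi surface is open there. Units are ℏ = 2m = k_F = E_F = 1; grid sizes are illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def density(mu, t, d, nxy=128):
    # T = 0 density: for each in-plane (kx, ky), the occupied kz segment
    # has length 2*sqrt(mu - e_xy) whenever mu > e_xy (since xi = kz^2 +
    # e_xy - mu with 2m = 1).
    k = np.linspace(-np.pi/d, np.pi/d, nxy, endpoint=False)
    KX, KY = np.meshgrid(k, k, indexing="ij")
    exy = 2*t*(2 - np.cos(KX*d) - np.cos(KY*d))
    occ = np.where(mu > exy, 2*np.sqrt(np.maximum(mu - exy, 0.0)), 0.0)
    return np.sum(occ) * (k[1] - k[0])**2 / (2*np.pi)**3

def mu_and_topology(n, t, d):
    mu = brentq(lambda m: density(m, t, d) - n, 1e-9, 100.0)
    open_at_X = mu > 4*t   # in-plane band energy at (pi/d, 0) is 4t
    open_at_M = mu > 8*t   # in-plane band energy at (pi/d, pi/d) is 8t
    return mu, open_at_X, open_at_M

n = 1.0/(3*np.pi**2)       # kF = 1
for d in (1.0, 2.0, 3.0, 4.0):
    mu, oX, oM = mu_and_topology(n, t=0.05, d=d)
    print(f"kF*d = {d}: mu/EF = {mu:.3f}, open at X: {oX}, open at M: {oM}")
```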
B. Phase diagram for the population balanced case
It is known from the 3D continuum case that the balanced case and the imbalanced case with p → 0⁺ are not continuously connected in the BCS and unitary regimes at T = 0 [19,46]. Population imbalance leads to very distinct behaviors. Therefore, we present in this section the balanced results only. In Fig. 3, we present a typical phase diagram (a) in the d-U plane, for fixed relatively small t/E_F = 0.05, and (b) in the t-U plane, for relatively large k_Fd = 3, corresponding to the cases of the top and bottom rows in Fig. 2, respectively. The lattice constant in panel (a) ranges from a relatively small k_Fd = 1 with 2mtd² = 0.05 to the upper limit k_Fd = 2√5 with 2mtd² = 1, denoted by the horizontal (cyan) dotted line, and the hopping integral in panel (b) ranges from a relatively small t/E_F = 0.01 with 2mtd² = 0.09 to the upper limit t/E_F = 1/9 with 2mtd² = 1, denoted by the horizontal (cyan) dotted line. In either panel, the (black dot-dashed) µ = 0 curve defines the boundary between the fermionic and the bosonic regimes. The (yellow) shaded region to the left of the (orange) dashed a_0 = 0 curve is a hole-like pairing regime with a_0 < 0, whereas the overall pairing evolves from hole-like into particle-like with a_0 > 0 across the a_0 = 0 curve. A PDW ground state with t_B < 0 emerges within the grey shaded region, enclosed by the (green) t_B = 0 curve. The entire phase space is a superfluid except for the PDW phase. Note that the PDW phase usually starts immediately before µ decreases down to zero, as the pairing strength increases. The fact that there are two branches of the t_B = 0 curve indicates that there is a reentrant behavior of T_c as a function of pairing strength. In the absence of population imbalance, a similar reentrant behavior of superfluidity and associated PDW ground state have not been found in any other balanced systems with a short-range pairing interaction, except in a very narrow range of density slightly above 0.53 in the attractive Hubbard model [45,47,48]. With a long-range anisotropic dipole-dipole interaction, however, such a reentrant behavior and PDW state have been predicted for the p-wave superfluid in dipolar Fermi gases [37].
As shown in Fig. 3, the interaction range for hole-like pairing extends toward the stronger pairing regime with (a) increasing d or (b) decreasing t. This can be explained by the evolution of the shape and topology of the Fermi surface, as shown in Fig. 2. As d increases or t decreases, the Fermi surface gradually opens up at the four X or Y points located at (k_x, k_y) = (±π/d, 0) and (0, ±π/d), and becomes fully open at the first BZ boundary for large d or small t, leading to an effective filling factor above 1/2 in the lattice dimensions. In contrast to the 1DOL case, the existence of two lattice dimensions is enough to dominate the contribution of the remaining continuum dimension (which is always particle-like due to its parabolic fermion dispersion), so that both the in-plane and the overall pairing become hole-like when d is large or t is small, with a_0 < 0 in the linear frequency term of the inverse T-matrix expansion. This is especially true in the weak coupling regime, where the superfluidity is more sensitive to the underlying Fermi surface. As the interaction becomes stronger toward the BEC regime, the gap becomes large and the Fermi level (i.e., chemical potential µ) decreases and then becomes negative, so that the shape of the non-interacting Fermi surface is no longer important. In this case, the contributions from the lattice dimensions will spread evenly across the entire BZ, so that the continuum dimension becomes dominant, and the overall pairing eventually changes from hole-like to particle-like (with a_0 > 0). As shown in Fig. 2, within the occupied range of k_z, the average (or effective) filling factor within the first BZ in the xy plane increases with increasing d and/or decreasing t. Therefore, as d increases, or t decreases, the effect of the above-half-filling status persists into the stronger pairing regime, and thus the hole-like pairing region in Fig. 3 extends toward the right. Shown in Fig. 4 is the behavior of (a) µ as a function of U, along with (b) 2n_p/n, where n_p ≡ a_0 ∆², for t/E_F = 0.05 and k_F d = 3. Also plotted are a_0 and ∆. This corresponds to a horizontal cut at k_F d = 3 in Fig. 3(a) or at t/E_F = 0.05 in Fig. 3(b). Inside the hole-like pairing regime, a_0 < 0 and thus the chemical potential µ goes above its noninteracting value. This can be seen from the total fermion number constraint of Refs. [35,45]. The chemical potential µ increases with the pairing strength, until it reaches a maximum where n_p reaches a minimum.
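For orientation, this argument can be made explicit with a schematic T = 0 number constraint; the following is a sketch based on the definition n_p ≡ a_0 ∆² used above and on standard BCS notation, not the exact expression of Refs. [35,45]:

n = Σ_k [1 − (ε_k − µ)/E_k] + 2n_p, with n_p ≡ a_0 ∆² and E_k = √[(ε_k − µ)² + ∆²].

With the total density n fixed, a negative pair contribution n_p (i.e., a_0 < 0) must be compensated by a larger fermionic term, which pushes µ above its noninteracting value, consistent with Fig. 4(a).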
Here 2n_p/n is roughly the pair fraction, which reaches unity in the BEC regime. This plot is very close to its counterpart at T_c, which can be found in Ref. [35], since the temperature dependencies of both µ and a_0 are weak, except that here a_0 changes sign at a slightly larger U/U_c. As usual, the excitation gap ∆ increases with U/U_c. The PDW ground state in Fig. 3 with t_B < 0 at an intermediate coupling strength, for (a) relatively large k_F d with fixed t/E_F = 0.05 or (b) small t with fixed k_F d = 3, is associated with the strong inter-pair repulsive interaction, relatively large pair size, and high pair density. Close to µ = 0, nearly all fermions have paired up with a relatively large pair size and a heavy effective pair mass, and the inter-pair repulsive interaction becomes strong. A large d or small t strongly suppresses the pair hopping kinetic energy, and the large pair size and high pair density strongly reduce the pair mobility. All these factors lead to Wigner crystallization and hence a PDW in the xy plane, which can also be called a Cooper pair insulator. The negative sign of t_B within the grey shaded region indicates that the minimum of the pair dispersion Ω_q has shifted from q = 0 to q = (π/d, π/d, 0), with crystallization wave vector (q_x, q_y) in the xy plane. As the pairing interaction increases in the BEC regime, the pair size shrinks and the inter-pair repulsive interaction becomes weak; hence t_B changes from negative back to positive, corresponding to a quantum phase transition from a PDW insulator to a superfluid.
Combining Figs. 2 and 3, we find that the emergence of hole-like pairing and the PDW phase is associated with the open Fermi surface topology. Once the Fermi surface is closed, both hole-like pairing and the PDW phase disappear.
In the case of a closed Fermi surface, typical behaviors of the chemical potential µ and the excitation gap ∆ for the balanced case can be seen from the p = 0 lines in Fig. 5, calculated for t/E_F = 0.05 and k_F d = 1. Here µ decreases monotonically with U/U_c. Without a hole-like pairing regime, these solutions look qualitatively similar to other cases, e.g., in the 3D continuum or a 3D lattice, except that they follow a different asymptotic behavior in the BEC limit [35].
C. Phase diagram for the population imbalanced case
We now proceed to present our results for the population imbalanced case. With the added parameter p, the phase diagram becomes much more complicated. Imbalance renders the otherwise superfluid state unstable in vast areas of the phase space.
To make the comparison easier, we begin by presenting phase diagrams in Fig. 6 in the same (a) d-U and (b) t-U planes as in Fig. 3, but with a tiny nonzero p = 0.001. Here a normal gas phase (grey shaded) emerges in the weak coupling regime, delineated by the (black solid) T_c^MF = 0 line, which is given by Eqs. (3)-(5) with ∆ = 0. Indeed, in the presence of an imbalance, pairing cannot take place for an arbitrarily weak interaction. There exists a stable pSF phase (yellow shaded), defined by the (green solid) t_B = 0 line and further confined by the stability condition (red solid line). The pSF phase resides in the low-d and large-t regime. A PDW ground state emerges in the dot-shaded region, enclosed by the t_B = 0 line and the dashed part of the (red) stability line. The rest of the (unshaded) phase space allows for an unstable mean-field superfluid solution, which may yield to phase separation. Now that the underlying lattice in the xy plane breaks the continuous translational symmetry, exotic Fulde-Ferrell-Larkin-Ovchinnikov (FFLO) states may possibly exist in part of the unstable region [49][50][51].
One can immediately tell that the vertical axes in Fig. 6 take different parameter ranges from those in Fig. 3, even though the imbalance p = 0.001 is very small. While the d-U phase diagram in Fig. 6(a) is still calculated with t/E_F = 0.05, the stable pSF phase is now restricted to relatively small d (yellow shaded area). However, the t-U phase diagram has to be calculated at a much smaller d, with k_F d = 1.5, as there is no stable pSF phase for k_F d = 3 within the constraint 2mtd² ≤ 1 (i.e., t/E_F ≤ 1/9). In both cases in Fig. 6, the Fermi surface is closed. Unlike the balanced cases, one cannot find a stable superfluid solution with an open Fermi surface. For this reason, one does not find a hole-like pairing region in the weak coupling regime, but rather one in the strong coupling regime, on the right of the (blue dashed) a_0 = 0 line. Note that in the superfluid phase of hole-like pairing (on the right of the blue dashed line), both a_0 and t_B are negative but the product ξ²_xy is positive. Outside the t_B = 0 curve, we have ξ²_xy < 0, so that the mean-field superfluid solution becomes unstable, yielding to the PDW phase. The smallness of p suggests that the ground state of p → 0⁺ is not continuously connected to the p = 0 case, consistent with the 3D continuum result [19]. In comparison with Fig. 3, the current large PDW phase in the bosonic regime is entirely a consequence of population imbalance.

Now we take p as a varying parameter and explore phase diagrams in the p-U plane. Shown in Fig. 7 are the phase diagrams for (a) (t/E_F, k_F d) = (0.15, 1), (b) (0.05, 1), and (c) (0.15, 1.5). Panels (b) and (c) show the effect of changing t and d, respectively. In all three cases, there are three different phases, delineated by solid lines, as well as a PDW phase. A normal gas phase (grey shaded) takes the weaker coupling and larger p area, on the left of the T_c^MF = 0 curve. The vast majority is an unstable mean-field superfluid (unshaded), which should yield to phase separation or FFLO solutions. The stable pSF phase (yellow shaded) occupies only a small area. Finally, the PDW phase (dot shaded) takes the small region next to the pSF phase, bounded by the (red dashed) stability ∂²Ω_S/∂∆² = 0 line and the (green solid) t_B = 0 line. When compared with panel (a), one readily sees that the pSF phase shrinks as t decreases (panel (b)) and/or as d increases (panel (c)). This is because both increasing d and reducing t lead to stronger momentum confinement in the lattice dimensions. In agreement with Fig. 6, the Fermi surfaces for all three cases are closed. Note that the (red) stability line and the (green) t_B = 0 line cross each other, and the pSF phase is bounded by the stronger of these two conditions. Also plotted are the lines along which the superfluid density vanishes. As found in the 3D continuum, the positivity of the superfluid density constitutes a much weaker stability constraint, as both the line of (n_s/m)_x = 0 in the lattice dimension and that of (n_s/m)_z = 0 in the continuum dimension lie completely within the unstable area. Note that while the (n_s/m)_z = 0 line looks very similar to its 3D continuum counterpart, the (n_s/m)_x = 0 line exhibits an unusual nonmonotonic behavior, caused by the lattice effect. From the (violet dotted) µ = 0 curve, one readily sees that, as in Fig. 6, the pSF phase resides completely within the bosonic regime.
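For orientation, the quantities a_0, t_B, and ξ²_xy enter through the small-frequency, small-momentum expansion of the inverse T matrix. A schematic form (the precise coefficients and conventions are those of Refs. [35,45] and are assumptions here) is

t⁻¹(Ω, q) ≈ a_0 [Ω − Ω_q], with Ω_q ≈ 2t_B [2 − cos(q_x d) − cos(q_y d)] + q_z²/(2M_z),

so that the in-plane stiffness scales as ξ²_xy ∝ a_0 t_B d². When a_0 and t_B are both negative, their product keeps ξ²_xy positive and the superfluid stable, whereas opposite signs yield ξ²_xy < 0 and signal the PDW instability.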
The fact that the pSF phase exists only in a small bosonic region (in both Fig. 6 and Fig. 7) is in stark contrast with the 3D continuum case, for which the stability line ∂²Ω_S/∂∆² = 0 extends monotonically up to p = 1, and a polarized superfluid exists for arbitrary imbalance p in the BEC regime [19]. Apparently, this difference can be attributed to the presence of two lattice dimensions. Indeed, for 1DOL, with only one lattice dimension, the stability line already cannot extend to p = 1. However, the pSF phase in 1DOL can extend all the way to the deep BEC limit [42]. This is also supported by the fact that with three lattice dimensions, in a 3D attractive Hubbard model, one can barely find a pSF state except at very low density and extremely low p [52]. Therefore, one can conclude that more lattice dimensions make it more difficult to have a stable pSF ground state.
This phenomenon can be easily understood from the momentum distribution of paired fermions, which would be given by v²_k had there been no imbalance. In the 3D continuum, v²_k in the deep BEC regime extends over the entire infinitely large momentum space in all directions, leading to a vanishingly small occupation for paired fermions. Therefore, the excessive majority fermions can readily occupy the low momentum states, with essentially no Pauli blocking from paired fermions. However, when one or more lattice dimensions are present, the momentum in these dimensions is restricted to the first BZ, so that v²_k in these dimensions cannot be infinitesimally small even in the extreme BEC limit, which will cause a repulsion to excessive majority fermions. This repulsion increases with p, and may become costly enough to render the mean-field superfluid solution unstable. As a result, the distribution of paired fermions is now roughly given by that of the minority fermions, n_k↓ = Θ(E_k↑) v²_k, which reduces to v²_k for p = 0. Unlike the p = 0 case, for which hole-like pairing takes place in the weaker coupling regime when t is small and/or d is large, here hole-like pairing occurs in the BEC regime via a completely different mechanism. As mentioned above, all three cases shown in Fig. 7 have a closed noninteracting Fermi surface. As the pairing becomes stronger, the momentum distribution v²_k in the xy plane extends over the entire first BZ, and becomes roughly a constant at strong coupling; in the absence of population imbalance, this would lead to a rough cancellation (via averaging over the inverse fermion band mass) due to the particle-hole symmetry of the lattice band. However, for any finite p, the excessive majority fermions will tend to occupy the low (k_x, k_y) states, and thus expel paired fermions toward higher (k_x, k_y) states, which have a negative (i.e., hole-like) band mass, leading to a net hole-like contribution to a_0 in the pair propagator, when integrated over the entire BZ. This also explains why the a_0 = 0 line leans toward weaker coupling with increasing p.
Shown in Fig. 8 is an example of the momentum distributions of v²_k (left), n_k↓ (middle), and δn_k (right column) in the (k_x, k_y) plane at different k_z/k_F = 0 (top), 0.2 (middle), and 0.4 (bottom row), with U/U_c = 4 and p = 0.05, for t/E_F = 0.15 and k_F d = 1.5. This corresponds to a PDW state in Fig. 7(c). Indeed, the excessive fermion distribution, δn_k = Θ(−E_k↑), occupies the low in-plane momentum part and the region below k_z/k_F = 0.4 (right column). In addition, v²_k (left column) remains roughly constant over the entire BZ and for |k_z/k_F| ≤ 0.4. Most interestingly, the minority fermion distribution n_k↓ (middle column) is given by v²_k but with a hole dug out at the center, due to the Pauli repulsion with the excessive fermions.
As a representative example, we show in Fig. 5 the behavior of (a) µ_σ and (b) the gap ∆ for p = 0.05 (red) and 0.1 (blue) with fixed t/E_F = 0.05 and k_F d = 1, as a function of U. They correspond to horizontal cuts at p = 0.05 and 0.1 in Fig. 7(b), and should be compared with the p = 0 case (black solid curves). The solid parts of these lines are stable pSF solutions, while the dashed lines are unstable mean-field solutions. There are a few remarkable features. Firstly, the excitation gap changes only slowly with imbalance p, except that it has no solution below a certain threshold of interaction strength. Secondly, at given pairing strength, µ_σ for p = 0.05 and p = 0.1 are very close to each other, but both are far separated from the µ curve for p = 0. This again indicates that the p → 0⁺ case is not continuously connected to the p = 0 case; with a tiny bit of imbalance, µ_↑ and µ_↓ immediately split up. Lastly, µ_↑ increases slowly with pairing strength in the BEC regime. This is different from its counterparts in the 3D continuum and the 1DOL; for the former, µ_↑ decreases, while for the latter µ_↑ approaches a p-dependent constant asymptote, as the pairing strength increases toward the BEC limit. This can be attributed to the emergence of hole-like pairing (with a_0 < 0) in the strong pairing regime as the number of lattice dimensions increases. To verify this idea, we have also checked the mean-field solution for the imbalanced 3DOL, and found that, indeed, µ_↑ also increases with the pairing strength in the BEC regime at T = 0, along with a negative a_0.

Finally, we present the typical behavior of the superfluid density in the imbalanced case. Shown in Fig. 9 are (a) (n_s/m)_z and (b) (n_s/m)_x in the continuum and lattice dimensions, respectively, as a function of U/U_c for p = 0, 0.05, and 0.1 at fixed t/E_F = 0.05 and k_F d = 1. Here solid and dashed lines are stable and unstable solutions, respectively. As expected, both are always positive for the balanced case. In addition, (n_s/m)_x is much smaller than (n_s/m)_z, because it involves the average of the inverse band mass. For the imbalanced case, the superfluid density deviates continuously from its positive p = 0 value as p increases from 0. However, in the unitary and weak coupling regimes, both the continuum and lattice components become negative for p ≠ 0. Furthermore, the superfluid density is more negative for smaller (but finite) p. This implies an immediate discontinuous jump from the p = 0 value to a large negative value for p = 0⁺ in this regime. Note that for strong enough interaction, (n_s/m)_x will again change sign to negative, but gradually rather than abruptly, as can already be seen from the p = 0.1 curve. This has to do with the lattice-induced confinement in the momentum space and the Pauli exclusion between paired and excessive fermions.
So far, it is not yet clear whether the PDW state can sustain a superfluid order, with and without an imbalance. If the answer is yes, then it will become a supersolid state rather than a Cooper pair insulator. We leave this to a future study.
It should be noted that we have worked with a system with homogeneous fixed densities. For this reason, we have not chosen to use µ and h as control variables, which are more appropriate for systems connected with a large reservoir, so that the chemical potentials are fixed or can be tuned separately. In such a case, all h < √[min(0, µ)² + ∆²] correspond to the population balanced state. One can, however, convert between these two approaches by calculating the corresponding densities (and Fermi energy) for given µ and h, and performing a rescaling.
IV. CONCLUSIONS
In summary, we have studied the superfluid phase diagram of Fermi gases with a short-range pairing interaction in a 2DOL at zero temperature, with and without population imbalance, in the context of the BCS-BEC crossover. We find that the mixing of lattice and continuum dimensions, together with population imbalance, has an extraordinary effect on pairing and the superfluidity of atomic Fermi gases. For the balanced case, the ground state is a stable superfluid, except that a PDW ground state emerges for a finite range of intermediate pairing strength for relatively small t and large d. For such t and d, the nature of the in-plane and overall pairing may also change from particle-like to hole-like in the BCS and unitary regimes, associated with an open Fermi surface on the BZ boundary of the lattice dimensions. Thus the phase space for the PDW ground state and hole-like pairing shrinks with increasing t and/or decreasing d.
For the imbalanced case, the presence of population imbalance has a dramatic detrimental effect, in that the stable polarized superfluid phase occupies only a small region of the bosonic regime in the multi-dimensional phase space, and it shrinks and disappears with increasing d and p and decreasing t. The pSF phase can be found only for relatively large t and small d, associated with a closed non-interacting Fermi surface, as well as for low p. In comparison with the 3D continuum, the presence of lattice dimensions introduces confinement in the momentum space, which leads to strong Pauli repulsion between paired and excessive fermions. Due to this repulsion, the nature of pairing changes from particle-like to hole-like in the strong pairing regime, and a PDW phase emerges next to the pSF phase. In addition to the normal gas phase, stability analysis shows that an unstable mean-field solution exists and may yield to phase separation (and possibly FFLO) in the rest of the phase diagram. These findings for the 2DOL are very different from those for the pure 3D continuum, 3D lattices, and 1DOL, and should be tested in future experiments. | 2022-05-10T06:47:55.833Z | 2022-05-09T00:00:00.000 | {
"year": 2022,
"sha1": "e11fd3b94c2e7981f26ef43aee25910dcd830159",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/2205.04045",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "2b5f28d1f69f5568927984e59c6b31cafa0c7ee7",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
17027506 | pes2o/s2orc | v3-fos-license | A Preliminary Survey of Knowledge Discovery on Smartphone Applications (apps): Principles, Techniques and Research Directions for E-health
People usually seek out varied information to deal with their health problems. However, the large volume of information available may make it hard for the public to distinguish good from suboptimal advice. How to ensure the right information for the right person at the right time and place has always been a challenge. For example, smart phone application vendor markets provide a varied selection of health applications for users. However, there is a lack of substantive reference information for consumers to make well-informed decisions about whether or not to adopt the applications they review, and to ascertain the validity of the information provided by these e-health solutions. This study therefore aims to review the existing relevant research about smart phone applications and to identify pertinent research questions in the field of knowledge discovery for health applications that can be addressed in future research. As such, it can be seen as an important step for researchers to explore this domain and extend this line of work for the well-being of the public.
INTRODUCTION
The rapid development of smart phone applications (hereafter referred to as "apps" per the popular usage and for brevity) has brought many business opportunities for their suppliers and relevant stakeholders. However, this rapid development also causes many potential problems for users. For health apps in particular, the question is how to guarantee the quality of these products, especially as serious concerns regarding the safety and validity of the information they offer users have arisen in recent years [1][2]. Several principles are currently under discussion to ascertain the quality of medical mobile apps for the public, for instance, the regulations drafted by the U.S. FDA [3] and the principles selected from the HONcode [4] by TL [2]. However, none of these principles is applied to the regulation of the smart phone application market for the benefit of consumer health. In addition, the large volume of information may make it hard for the public to distinguish good from suboptimal advice. Therefore, ensuring the right health apps for the right person at the right time and place has always been a challenge [1][2][5][6]. This study aims to review possible techniques, applications, and principles which can be used for clustering, validating, and classifying health smart phone apps across vendor markets, and to discuss how to develop relevant studies that facilitate consumers' search for and selection of e-health apps, thus contributing to greater public health in the near future. In addition, this study is a first step in exploring knowledge discovery via health apps, offering research insights from different perspectives (e.g., a social network analysis approach) to broaden the scope of this line of research.
II. RESEARCH METHOD

Current research on health apps focuses on manually analyzing small sets of health apps to find evidence of their effectiveness. As a first step toward further investigation, we analyzed and compared these studies to see whether there are opportunities for large-scale analyses (e.g., classification and clustering) in future e-health research.
A. Identifying Relevant Studies
For this study, IS and CS studies were searched to find any manual or (semi-)automatic analysis approach for health apps. In addition, a pilot review of current analytic results involving medical apps research is also provided. In May 2013, data was collected from five major scientific databases: IEEE Xplore, ScienceDirect, ACM Digital Library, SpringerLink, and PubMed.
In addition, studies of the use of existing e-health solutions, designs of new apps for particular experimental treatments, studies not in the English language, studies whose app selection criteria are not clear, and studies not providing full-text articles via the host organization's online library system were all excluded. Our aim is to find research focusing on a "review" of health apps on major public app vendor markets [7].
Two evaluations of the relevant research literature were conducted. In the first run, the author screened the titles and abstracts of all relevant articles found, to select possible studies. Selections were made with the express agreement of the authors of those studies and collaborating experts. After that, a full-text study of all candidate cases was made to determine the final selection.
III. RESULTS
After reviewing these articles and filtering out the studies which did not meet the aforementioned criteria, 30 articles pertinent to this study remain. The results of this analysis are listed and discussed in Table I.

(Table I. Summary of the reviewed studies. For each study the table lists the category of apps examined, the intended users, the platform(s), and the number of apps reviewed; entries include HIV/STD-related apps, paid health and fitness apps (West JH et al. [22]), cell biology e-learning apps for students (Stark [23]), hernia apps for clinicians and their patients (Connor et al. [24]), medical, health and fitness apps [31], smoking cessation apps (Abroms et al. [32]), education, telemedicine and global health apps for plastic surgeons (Al-Hadithy et al. [33]), and GPS and geosocial apps for children and adolescents (Boulos et al. [34]).)
A. Findings
As can be seen in the survey of approaches listed in Table I, 8 out of the 30 selected articles focus on evaluating diabetes diagnostic-related applications. The second most discussed topic concerns applications offering advice on physical activities, for example, an e-coach [11] and injury and pain management solutions [14][15][22]. This reflects a need in our community for advice on personal health challenges. According to statistics cited by AppBrain [37], there are more than 28,000 health apps on the Android market. Thus, the studies analyzed here cover only a very small proportion of the health apps on the market. In sum, methods are needed to help researchers discover useful information about these apps and to speed up research progress, so that researchers can focus more on the content of their studies that benefits society. To that end, currently proposed principles for categorizing medical smart phone apps, techniques for recommending apps, and methods of knowledge discovery on the smart phone app market are introduced hereafter, to help us understand the gap between the information available to app users and that available to suppliers.
A. Recommendation of Apps
Users tend to rely on the search facilities provided by vendors to find desirable apps. However, inexperienced users employing suboptimal search methods, and/or inadequate descriptions of apps, may lead to unexpected results and associated costs. Several studies endeavor to deal with this issue. Jiang, Vosecky et al. [38] provide a semantic-based search and ranking method for Android vendors to resolve the problems of poorly organized app descriptions and poor ranking results, claiming that similar circumstances exist among other major vendors. Instead of recommending new apps, Yin, Luo et al. [39] consider the need to replace an old app with a new one; they estimate the "tempting" value of a new app and compare it with the actual satisfaction value of the old app recommended for replacement. Lulu and Kuflik [40] propose an unsupervised learning algorithm that elicits app descriptions and information from professional blog sites on the Internet to cluster apps and present them hierarchically for ease of search. Böhmer, Ganev et al. [41] argue that the performance of a recommendation system should be evaluated through the life cycle of a user's engagement with a mobile app, reviewing not only the acceptance of new apps by users, but also the stages of viewing, installation, direct usage, and long-term usage, to gain insight into the user's reactions at each step. In addition to these works, a hybrid social recommendation system which employs tag and context information is proposed in [42]; its authors designed an app that runs in background mode and interacts with a recommendation system on the Internet to achieve good results while dealing with the first-rater, cold-start, and sparsity problems. However, none of the research analyzed focuses on the recommendation of health apps. More specifically, one must first consider the clustering and classification of apps according to varied principles (e.g., FDA [3]). After clustering and classification, a user can then effectively search among apps manually, or adopt suggestions from a recommendation system based on the organized hierarchy. In addition, the explicit information given for apps, for instance descriptions and comments, is not sufficient for the recommendation stage. The major considerations are the correctness of the health information provided and the potential malicious behavior of these apps. Further investigation before recommending health apps to users is necessary.
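To make the clustering step concrete, the sketch below groups app-store descriptions with TF-IDF features and k-means using scikit-learn. It is a minimal illustration, not any of the algorithms of [38]-[42]; the sample descriptions and the choice of three clusters are assumptions for the example.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Hypothetical app-store descriptions; real input would be crawled metadata.
descriptions = [
    "Track your blood glucose and insulin doses daily",
    "Log meals and carbohydrates for diabetes management",
    "Guided running plans and step counting for fitness",
    "Workout timer and exercise videos for home training",
    "Quit smoking with daily motivational reminders",
    "Smoking cessation progress tracker and craving diary",
]

# Turn free-text descriptions into TF-IDF vectors, dropping English stop words.
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(descriptions)

# Cluster into k groups; in practice k would be chosen by validation, not fixed.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

for desc, label in zip(descriptions, labels):
    print(label, desc)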
B. Knowledge Discovery on the Android Market
In order to understand the apps marketed by the major vendors (e.g., Google Play for Android and the Apple App Store), ever more studies are starting to analyse the information provided by, and the different features of, apps. The Android system is designed around open source concepts and has become the first choice for conducting research and building applications in both academia and industry. Thus, our review focuses on relevant analytic methods and technical analyses of apps on the Android market.
1) Security and Privacy Concerns of Apps
To date, a considerable body of knowledge (and relevant tools) has been produced by interested parties, directly or indirectly helping consumers understand these apps from different perspectives. For example, app developers can use open-source APIs to download apps and their relevant information from the Android market [43]. After that, they can reverse-engineer the apps into Java source code [44]. Varied studies have used these tools to evaluate apps and help people understand their nature, such as a study of the different privacy behaviours of apps conducted to help people understand their impact on users. More specifically, the researchers decompiled the code of the apps, extracted the APIs the designers had used, and linked them to privacy-related behaviours affecting users' activities. In their experiments, they conducted a static analysis of nearly 80,000 apps available from Android vendors, sent the results to users, and asked for feedback on any potential effect on their adoption patterns [45].
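A much-reduced sketch of this kind of static analysis is shown below: after decompiling an app (e.g., with a tool such as apktool), one can scan the recovered sources for privacy-sensitive Android API calls. The directory layout and the mapping from API to privacy behaviour are illustrative assumptions, not the method of [45].

import os
import re

# Illustrative mapping from Android API calls to the private data they expose.
SENSITIVE_APIS = {
    r"getDeviceId\(": "IMEI / device identifier",
    r"getLastKnownLocation\(": "GPS location",
    r"ContactsContract": "contacts",
    r"SmsManager": "SMS sending/reading",
}

def scan_decompiled_app(root_dir):
    """Walk decompiled .java/.smali files and report sensitive API hits."""
    hits = []
    for dirpath, _, filenames in os.walk(root_dir):
        for name in filenames:
            if not name.endswith((".java", ".smali")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="ignore") as f:
                text = f.read()
            for pattern, behaviour in SENSITIVE_APIS.items():
                if re.search(pattern, text):
                    hits.append((path, behaviour))
    return hits

# Hypothetical usage: the path is an assumption for the example.
for path, behaviour in scan_decompiled_app("decompiled/com.example.healthapp"):
    print(behaviour, "->", path)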
In addition, security concerns regarding the use of apps are an important topic explored and discussed by researchers. The current major app vendor markets do not provide adequate verification mechanisms to check for malicious behaviour in downloaded apps. One analysis of 47 SMS-based Android apps, examining their potential to be used for the theft of users' financially related data, found that nearly 90% of the apps had such potential [46]. In addition, Zhou et al. [47] find that some apps are designed to steal advertisement revenues from the app designers and to allow remote control of victims' smart phones.
2) Network Concepts of Apps

The potential leakage of information by suspicious apps has also been uncovered by examining the network destinations of around 4,000 apps on the Android market. According to the report by Rastogi, Chen et al. [48], the designers of these apps and the information flows from the apps to problematic domains on the Internet can be seen as network structures. Multiple vulnerable apps use these domains as destinations that receive sensitive information (e.g., GPS data, contacts, and IMEI/IMSI numbers) about their users for unknown purposes. The researchers conclude that the authors of these free apps use third-party advertisement libraries, which in turn cause their apps to fall victim to malicious behavior designed to leak the personal private information of their users.
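Viewing such information flows as a network can be made explicit with a small bipartite graph of apps and destination domains, for example with the networkx library; the edge list here is invented for illustration and is not the data of [48].

import networkx as nx

# Hypothetical (app, destination-domain) flows observed during analysis.
flows = [
    ("app_A", "ads.tracker-one.example"),
    ("app_B", "ads.tracker-one.example"),
    ("app_B", "stats.collector.example"),
    ("app_C", "ads.tracker-one.example"),
]

G = nx.Graph()
for app, domain in flows:
    G.add_node(app, kind="app")
    G.add_node(domain, kind="domain")
    G.add_edge(app, domain)

# Rank domains by how many distinct apps send data to them.
domains = [n for n, d in G.nodes(data=True) if d["kind"] == "domain"]
for domain in sorted(domains, key=G.degree, reverse=True):
    print(domain, "receives data from", G.degree(domain), "apps")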
3) Dynamic Analysis

In addition to the previous work, Zheng, Zhu et al. [49] focus on dynamic flow analyses of programs, claiming that static analysis cannot effectively detect the full behaviour of a program. The authors designed a tool for analysing the UI (User Interface) interactions behind these apps and effectively detected execution paths which lead to sensitive information. To sum up, security and privacy issues are certainly manifest among apps on the Android market. However, no specific research focuses on health apps. Thus, a further and complete analysis according to currently accepted verification principles is necessary.
4) Categorization Principles of E-health Apps
In order to further categorize these e-health apps, we have to understand the current development of relevant principles on the market. These potential principles are framed around abstract concepts of consumer usage, target groups, the subject content of apps, or disease-based perspectives intended to protect consumer health. The aims, intended end users, and principles of each categorization approach are listed in Table II. Notably, no major vendor or health informatics study has yet applied these features to the (semi-)automatic selection of qualified medical apps for providing better user experiences. Thus, we argue that a first academic approach for validating these medical apps should be proposed and tested for the benefit of public health.

(Table II. Categorization principles for e-health apps.)

U.S. FDA [3]
Aim: the FDA intends to apply the principles below, under its regulatory authority, to different types of mobile health apps.
Intended end users: industry and Food and Drug Administration staff.
Principles:
(a) Mobile medical apps that are extensions of a regulated medical device for purposes of controlling the medical device or for the purpose of displaying, storing, analyzing, or transmitting patient-specific medical device data.
(b) Mobile medical apps that transform or make the mobile platform into a regulated medical device by using attachments, sensors, or similar medical device functions.
(c) Mobile medical apps that allow the user to input patient-specific information and, using formulae or a processing algorithm, output a patient-specific result, diagnosis, or treatment recommendation that is used in clinical practice or to assist in making clinical decisions.

TL [2]
Aim: building a systematic self-certification model for mobile medical apps.
Intended end users: health care professionals and app designers.
Principles: information must be authoritative; purpose of the website (to vendor market); confidentiality; information must be documented, referenced, and dated; justification of claims; contact details; financial disclosure; and advertising policy.
A. Research Direction of Profiling Health Apps
Recent research [1,[5][6] has discussed several suggestions to resolve health app overload and quality issues, for example, management policies (e.g., standardized medical information and the use of authorized open-source medical data for app development) and peer review of the content of health apps. However, these potential measures could take considerable time and resources to become effective. The studies reviewed here provide deep insight into manual analyses of health apps, principles for categorizing medical apps, and methods for uncovering and managing security and privacy issues of products on the Android market. Even so, these studies cannot be used as a certification framework for health apps. As mentioned above, different principles and concerns may call for different methods of clustering and classifying apps. From the author's perspective, some possibilities should be pursued to address these questions effectively. For example, in Fig. 1, an app crawler is designed to obtain e-health apps from the Android market and decompile their code with open-source tools. Secondly, newly developed algorithms supporting static and dynamic analyses of app content are proposed to understand the profiles of health apps on the Android market. Based on the resulting app profiles and the current potential principles for categorizing medical apps, a further large-scale analysis can then be executed to cluster, validate, and classify e-health apps on the Android market.
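The first stage of such a pipeline might look like the sketch below, which drives the open-source apktool decompiler from Python and hands each decompiled app to a feature-extraction step; the file paths and the extract_profile function are placeholders for whatever profiling logic a study adopts, not a prescribed design.

import pathlib
import subprocess

def decompile(apk_path, out_root="decompiled"):
    """Decompile one APK with apktool ('apktool d <apk> -o <dir>')."""
    out_dir = pathlib.Path(out_root) / pathlib.Path(apk_path).stem
    subprocess.run(
        ["apktool", "d", str(apk_path), "-o", str(out_dir), "-f"],
        check=True,
    )
    return out_dir

def extract_profile(decompiled_dir):
    # Placeholder: count source files as a stand-in for real feature
    # extraction (permissions, API calls, embedded URLs, etc.).
    sources = list(pathlib.Path(decompiled_dir).rglob("*.smali"))
    return {"n_source_files": len(sources)}

# Hypothetical local APK collection gathered by a crawler.
for apk in pathlib.Path("apks").glob("*.apk"):
    profile = extract_profile(decompile(apk))
    print(apk.name, profile)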
B. Research Direction of Analysing Social Network Patterns among App Vendor Markets
In addition, given the prior discussion of the network concepts of apps, there are also opportunities for analyzing these vendor markets with social network analysis approaches [50]. In Fig. 2 and Fig. 3, one can clearly see the relationship between each actor in the network according to their roles. These relationships can be seen as explicit and/or implicit networks. In Fig. 2, an app creator can design two different versions of similarly functioning apps and market them via both Google Play and the Apple App Store. The creator can also cooperate with advertising companies and embed their advertisements in the apps to gain extra revenue. Users can then download, use, and/or evaluate these apps. If the apps have malicious designs, they may collect a user's private information and send it back to the advertisement networks for unknown purposes. Furthermore, as seen in Fig. 3, designer 1 and designer 2 work in the same organization, which means they may share their knowledge and experience of designing apps. Through these relationships, they can learn from each other and apply new skills to future designs, attracting more users and producing more creative apps. Thus, a deep analysis can help us utilize this information to deal with issues of security and privacy across these apps and vendor markets. Future research may arise from these scenarios, and further studies are required to explore this theme for the benefit of consumer health.
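As a minimal illustration of this research direction, the sketch below encodes designers, apps, markets, and ad networks as a single graph and ranks actors by degree centrality; all node names and edges are invented for the example and merely mirror the scenario of Figs. 2 and 3.

import networkx as nx

G = nx.Graph()
# Hypothetical actors and ties: a shared organization, app authorship,
# publication on markets, and embedded advertisement libraries.
G.add_edge("designer_1", "designer_2", tie="same_organization")
G.add_edge("designer_1", "app_v1", tie="created")
G.add_edge("designer_1", "app_v2", tie="created")
G.add_edge("app_v1", "Google Play", tie="published_on")
G.add_edge("app_v2", "Apple App Store", tie="published_on")
G.add_edge("app_v1", "ad_network_X", tie="embeds_ads")
G.add_edge("app_v2", "ad_network_X", tie="embeds_ads")

# Degree centrality highlights actors bridging many relationships,
# e.g., an ad network reached by several apps across markets.
for node, score in sorted(nx.degree_centrality(G).items(),
                          key=lambda kv: kv[1], reverse=True):
    print(f"{node}: {score:.2f}")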
In conclusion, this paper has reviewed current methods, approaches (manual and automatic), and principles which can be used for clustering, validating, and classifying health smart phone apps. In addition, it is suggested that a social network analysis approach would help researchers to assess the positive and negative influences on e-health apps from different perspectives, and allow them to take the actions necessary to avoid, or even counter, malicious app behaviors across these popular markets, thereby promoting reliable and comfortable user experiences. Most importantly, support for well-informed, intelligent decisions by users in selecting appropriate e-health apps becomes possible in the future. | 2014-07-27T20:49:34.000Z | 2014-07-27T00:00:00.000 | {
"year": 2014,
"sha1": "b17ff6e04d9cfca82d023428b7069cc388af80f3",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "b17ff6e04d9cfca82d023428b7069cc388af80f3",
"s2fieldsofstudy": [
"Computer Science",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
52919769 | pes2o/s2orc | v3-fos-license | Intervention Integrity in Mindfulness-Based Research
Assessing program or intervention fidelity/integrity is an important methodological consideration in clinical and educational research. These critical variables influence the degree to which outcomes can be attributed to the program and the success of the transition from research to practice and back again. Research in the Mindfulness-Based Program (MBP) field has been expanding rapidly over the last 20 years, but little attention has been given to how to assess intervention integrity within research and practice settings. The proliferation of different program forms, inconsistency in adhering to published curriculum guides, and variability of training levels and competency of trial teachers all pose grave risks to the sustainable development of the science of MBPs going forward. Three tools for assessing intervention integrity in the MBP field have been developed and researched to assess adherence and/or teaching competence: the Mindfulness-Based Cognitive Therapy-Adherence Scale (MBCT-AS), the Mindfulness-Based Relapse Prevention-Adherence and Competence Scale (MBRP-AC), and the Mindfulness-Based Interventions: Teaching Assessment Criteria (MBI:TAC). Further research is needed on these tools to better define their inter-rater reliability and their ability to measure elements of teaching competence that are important for participant outcomes. Research going forward needs to include systematic and consistent methods for demonstrating and verifying that the MBP was delivered as intended, both to ensure the rigor of individual studies and to enable different studies of the same MBP to be fairly and validly compared with each other. The critical variable of the teaching also needs direct investigation in future research. We recommend the use of the “Template for Intervention Description and Replication” (TIDieR) guidelines for addressing and reporting on intervention integrity during the various phases of the conduct of research and provide specific suggestions about how to implement these guidelines when reporting studies of mindfulness-based programs.
Introduction
The scientific investigation of Mindfulness-Based Programs (MBPs) has progressed rapidly in the last 20 years. A frequently employed and effective way to demonstrate this expansion is by citing the number of peer-reviewed publications with "mindfulness" in the title. In 1984, there were two papers, whereas in 2016, there were 856 such papers (based on a search of the Web of Science database on 26 June 2017). There have been voices of caution within the field regarding this proliferation of research, the potential for gaps in the methodical development of the science, and calls for greater levels of rigor and strategic thought in research developments going forward (Dimidjian and Segal 2015; Van Dam et al. 2017).
A central issue in the study of MBPs, which we believe needs to be better addressed for the field to advance, is the issue of intervention integrity. Intervention integrity is defined as ensuring that the intervention was delivered as intended (Perepletchikova et al. 2007). Intervention integrity is a delicate and challenging area in many types of nonpharmacological intervention research in which the intervention is delivered by a person. Randomized controlled trials (RCTs) were initially designed to investigate drugs, for which it is straightforward to standardize dose and ingredients. It is difficult to standardize and operationalize the behavior of the person delivering the program. MBPs are complex interventions with multiple elements to be accounted for during implementation (Craig et al. 2006). One key emphasis within MBP teacher training and program delivery is the importance of embodied communication of mindfulness by the teacher, which draws on the teacher's personal practice of mindfulness. This strong reliance on a certain sort of inner work within the teacher to enable effective teaching practice presents challenges to researchers in their work of unpacking and analyzing the critical ingredients of MBPs, and ensuring that the intervention was delivered as intended.
One approach to ensuring intervention integrity in the context of complex interventions, including some MBPs, has been the development of detailed intervention manuals and assessment of whether the manual was adhered to. This approach has been encouraged by the National Center for Complementary and Integrative Health (NCCIH) (2017), which funds a substantial amount of the MBP research in the USA, and it has been applied in different trials of mindfulness interventions (Daubenmier et al. 2016; Mackenzie et al. 2006; Vieten and Astin 2008). Simply assessing whether manualized curriculum topics and pacing were adhered to, however, may overlook some of the most important elements of intervention delivery. As one example, Daubenmier et al. conducted a clinical trial testing whether adding mindfulness components (mindful eating and many elements from MBSR) to a diet and exercise intervention was more effective than diet and exercise alone for weight loss maintenance for people with obesity (Daubenmier et al. 2016). At 18 months, there were statistically significant differences in weight loss between participant groups within the mindfulness arm, depending on who led the groups. Weight loss at 18 months was correlated with participant ratings of how helpful the teacher was 1 year earlier. Although there were only three teachers to compare, the differences did not appear to be explained by experience (all teachers had substantial experience), nor by adherence to the intervention manual. In fact, the teacher with the weakest outcomes appeared to be most adherent to the timing elements specified in the manual. Although our data cannot establish this with any certainty, our experience suggested that the effort to adhere closely to delivering elements specified in the intervention manual might have detracted from elements important to intervention potency, such as the ability to convey course themes through interactive inquiry, and the capacity to embody the practice of mindfulness. This implies that manualization alone is not the answer to assuring intervention integrity in MBPs, and underlines the potential importance of methods to assess the components of teacher competence that matter most for intervention potency. In another example, Huijbers et al. (2017) analyzed the links between MBP teacher competence and participant outcome. While no significant link was found in this particular study, there were differences between teachers. Preliminary evidence in the MBP field indicates that teacher factors could produce effects of medium size in an adequately powered study (Prowse et al. 2015). Taken together, these findings suggest that the issue of teacher effects is an area ripe for investigation.
Intervention integrity is a critical issue for the field going forward because the systematic process of building the evidence base relies on the integrity of each individual research study, and the comparability of research outcomes from different studies on the same programs relies on whether they were delivered in similar ways. The intervention delivery is a critical variable within the research process, and if it cannot be verified that it was delivered as intended, it is difficult to meaningfully interpret the outcomes of the study (Sharpless and Barber 2009). Meaningful fidelity checks may enable nuanced analysis of the potential reasons for particular study outcomes. For example, it becomes possible to analyze whether outcomes may have been influenced by differing levels and sorts of teacher training, adherence to good practice norms, or whether specific domains of teacher competence are important for particular outcomes. All these issues can feed into the development of future research questions (Herschell 2010).
No single trial is enough to give definitive results. It is through each trial contributing to a larger corpus of knowledge, synthesized in systematic reviews and meta-analyses, that we can begin to see patterns based on overlaps and differences in populations, comparator conditions, outcomes, and characteristics of the program itself. It therefore becomes a critical issue that each contributing trial is of the highest quality possible.
In the current wave of expanded interest in MBPs, there is a proliferation of new program forms. This is part of a creative response to the need to adapt programs to new contexts and populations, but it creates challenges in building an evidence base for MBPs. There can be an assumption that research results derived from one MBP form can be interpreted in light of results derived from another. Factors that can confound this include deviation from a published curriculum while still labelling it with the original title, and variations in the quality of the teaching itself. If an MBP does not adhere to existing curriculum protocols, it is an important matter of accuracy, ethics, and careful science to ensure that it is given a new title, or that deviations and adaptations are carefully documented in the paper.
We summarize the status of understanding on teacher integrity/fidelity issues in the MBP field, underline the importance of assessing intervention integrity for the forward development of the science, and offer guidance on addressing it within the various phases of conducting research. We discuss a number of related areas: the level of adherence to the program being researched, the level of competence of the teacher(s) delivering the program, the teacher's adherence to norms of good practice, and their training and experience prior to teaching within a research trial. The aim is to lay out good practice guidance for researchers of MBPs during the design, conduct, and reporting phases of research on the issues of integrity of the MBP within their research. We use the term MBP in the way it is defined by Crane et al. (2017). The term "intervention" is used at points to emphasize linkage to the broader literature on intervention integrity. However, in the context of the mindfulness field, the term "program" is preferred because it speaks to the wider use of MBPs in a range of contexts beyond health care.
Status of Understanding on Teaching and Program Integrity in the MBP Field
The concept of intervention integrity or fidelity arises out of research on educational and psychotherapeutic programs. Several conceptual models of treatment integrity have been proposed (Sanetti and Kratochwill 2009). A commonly used conceptual model of treatment integrity in the psychotherapy field uses three dimensions: adherence, differentiation, and competence (Borrelli 2011; Weck et al. 2011). Adherence and differentiation are closely related content aspects of integrity: how frequently the teacher/therapist delivers prescribed intervention procedures (adherence) and omits proscribed elements (differentiation), and to what degree these procedures are employed to ensure intervention "purity". Competence is the skill level of the therapist/teacher in delivering the intervention. While adherence, differentiation, and competence are related, they do not presuppose each other. In particular, delivering an intervention with adherence and differentiation does not necessarily mean the intervention has been delivered competently.
Intervention integrity, particularly the dimension of teacher competence, links to three interconnected areas: standards/guidelines for good practice for teachers, models for training teachers, and methods of understanding and assessing program integrity (Crane et al. 2012) (see Fig. 1).
Good Practice Guidelines (GPGs)
In recent years in the MBP field, there have been concerted efforts to develop and communicate agreed-upon norms of good practice for both teachers and trainers of teachers. Some have arisen in national and regional collaborations of trainers (UK Network for Mindfulness-Based Teacher Training Organisations 2016) and of teachers (European Association of Mindfulness-Based Approaches (EAMBA) 2017); in other examples, they have been coordinated by a training organization in collaboration with international colleagues (Center for Mindfulness in Medicine, Health Care and Society, University of Massachusetts Medical School 2014; Segal et al. 2016). There are differences in detail, but much alignment on general principles within these guidelines. They all outline minimum teacher training levels and stipulate that the teacher engages in a personal daily mindfulness practice combined with periodic intensive residential mindfulness practice opportunities, a commitment to ongoing development through further training, keeping up with the evidence base, supervision, linkage with colleagues, and adherence to an ethical code of conduct. There is currently no direct empirical support for particular ingredients within GPGs, and there is ample room for scientific study of the effects of, for example, regular supervision on teaching practice, or of attendance at residential mindfulness practice intensives on the teacher's capacity to embody and communicate mindfulness. The GPGs have nonetheless emerged through a rigorous process of consensus building by highly experienced MBP trainers, and are based on evidence in related fields and on understanding of MBP pedagogy.
Teacher Training Models
There is considerable practice-based evidence and understanding on this theme, which has been disseminated both informally and via journal articles (e.g., Crane et al. 2010;Dobkin and Hassed 2016;Marx et al. 2015). Similar to the GPG issue above, there is little empirical analysis of the effects of teacher training models on building competence and on participant outcomes. There is the beginning of research activity in this area, however. For example, van Aalderen et al.
(2014) conducted a triangulated qualitative analysis of how the MBCT teacher-participant relationship impacts participants. This study found that teacher embodiment of mindfulness, empowerment of participants, teacher non-reactivity, and group support were important factors in the teaching process. Ruijgrok-Lupton et al. (2017) conducted an investigation of the impact of teacher training on participant outcomes. They found that participants' gains after taking an MBSR program were correlated with teacher training and experience: gains in wellbeing and reductions in perceived stress were significantly larger for the participant cohort taught by teachers who had completed an additional year of mindfulness-based teacher training that involved assessment of teaching competence. Kuyken et al. (2017) have integrated investigation of the comparative effects of lighter and more substantial teacher training on outcomes of school children into the protocol for a trial on mindfulness in schools.

(Fig. 1. Three interconnected aspects of quality and integrity in teaching mindfulness-based courses (from Crane et al. 2012).)
Methods of Assessing Intervention Integrity
The development and validation of assessment methods for MBP competence is at an early stage in the field (see Table 1 for a summary of the methods currently available). Currently, the MBI:TAC (Crane et al. 2013; Crane et al. 2016) is the most commonly used tool within the field in both training and research contexts. It focuses primarily on assessing teaching competence within the context of MBSR and MBCT, though an addendum has been developed for the Mindfulness in Schools program (Mindfulness in School Project 2017), and work is underway to develop an addendum for MBP teaching in workplace contexts. The MBI:TAC was a collaborative development led by Bangor University with the Exeter and Oxford University mindfulness centers. The primary aim of the initial development was to create a reliable and valid system for assessing MBSR/MBCT teacher trainees' teaching practice within post-graduate training programs. It describes six domains within the teaching process: coverage, pacing, and organization of session curriculum; relational skills; embodiment of mindfulness; guiding mindfulness practices; conveying course themes through interactive and didactic teaching; and holding the group-learning environment. Within each domain, it identifies key features that unpack the elements within that domain, and levels of competence (incompetent, beginner, advanced beginner, competent, proficient, and advanced). The person performing an assessment using the MBI:TAC needs to be an experienced teacher of MBPs, experienced in teaching the particular MBP that is the subject of the assessment, and trained to use the tool reliably. S/he gathers observational data via experiential participation in a piece of teaching (either in person or through audio-visual recordings), and then systematically applies the criteria to make an assessment within each domain.
Preliminary research on the psychometric properties of the tool demonstrated good inter-rater reliability (intra-class correlation coefficient; r = .81, p < .01). The evaluations of validity that were possible at this early stage in the tool's development were encouraging, but there are important limitations of this initial validation work. Although 43 different teachers were rated, only two assessments were used for assessing reliability, which limits the precision of the estimates of inter-rater reliability. In addition, raters were aware of the level of experience of the teachers they were rating, which may have influenced ratings. Further research in a range of contexts is needed to clarify the MBI:TAC's reliability and validity. The only study so far to use the MBI:TAC to investigate links between teacher competence and participant outcome did not find significant effects on mediators and outcome variables in MBCT for recurrent depression (Huijbers et al. 2017). Further work is required to systematically investigate these important issues. The MBI:TAC is a set of criteria rather than a measure of teacher competence. As such, it requires the user of the tool to have training to ensure that the criteria are being applied consistently: one person's idea of "competent" might be another person's idea of "advanced". It is therefore important to ensure that the use of the tool does not rely on the ideas and interpretations of the user (which are inevitably biased by cultural, educational, and personal conditioning) but is based on training towards centralized norms of what (for example) a competent teaching of a sitting meditation in week 5 of an MBSR course looks like. Assessors therefore need to engage in a training process to build their reliability in using the tool and the alignment of their assessments with central benchmarked assessments.
The MBI:TAC does seem to have face validity in that it is being implemented in MBP training centers worldwide, both as an assessment tool and as a tool to support reflection on skills development (Evans et al. 2014; Marx et al. 2015). It offers trainers and trainees a useful orienting map of the territory of the competencies being developed.
There are other tools that have been developed to assess MBP integrity/fidelity. The MBCT-Adherence Scale (MBCT-AS) is a 17-item scale designed to assess the presence/absence of MBCT curriculum elements and principles (Segal et al. 2002). Individual items are rated as "no evidence", "slight evidence" or "definite evidence". Inter-rater reliability was tested during the original MBCT research trials (Ma and Teasdale 2004; Teasdale et al. 2000), with intra-class correlation coefficients (ICCs) of .59 for the cognitive therapy subscale, .97 for the mindfulness subscale, and .82 for global ratings. A subsequent study employing the MBCT-AS (Prowse et al. 2015) demonstrated the value of implementing fidelity assessment within the delivery of an RCT: fidelity assessment "proved critical in diagnosing program weaknesses and identifying program strengths to support improved treatment delivery" (p. 1407). There are several limitations of this scale at present for assessing MBP integrity/fidelity. First, the instrument focuses mainly on adherence to intervention content rather than teacher competence; second, the scale is primarily intended for use with MBCT and, to our knowledge, has not been adapted for use with other MBPs; third, the initial assessment of inter-rater reliability was done with only 3 raters rating 16 audiotapes. This is a small number for assessing inter-rater reliability (Saito et al. 2006); hence, the inter-rater reliability is not fully established. Finally, like other instruments, the relationship between items on this instrument and participant outcomes has not been fully assessed.
The Mindfulness-Based Relapse Prevention Adherence and Competence Scale (MBRP-AC) (Chawla et al. 2010) is a measure of the intervention integrity of MBRP that was developed in the context of a randomized controlled trial. A strength of this scale is that it includes both an adherence section (level of fidelity to individual components of MBRP and delivery of key concepts), and a competence section (ratings of teaching style and approach). Inter-rater reliability was generally good, and ratings on the adherence section were positively related to changes in mindfulness over the duration of the program. Like the MBCT-AS, it was designed for a particular intervention, and adaptation may be needed to apply it to other MBPs, although the competence domains (inquiry, attitude/modeling of mindfulness, use of key questions, and clarifying expectations) may readily transfer to other MBPs. In assessing inter-rater reliability, a substantial number of sessions were assessed (44) but only by 2 raters, limiting the precision of the estimates of inter-rater reliability. In addition, some of the ICC results on scale items were just above the threshold of 0.5, which has been considered the lower range of moderate reliability (Koo and Li 2016): of 13 items, 4 had ICCs between 0.5 and 0.6. If 95% confidence intervals had been provided, as would be ideal for evaluating the precision of the ICC estimate, the lower bound would almost certainly have been below 0.5, an ICC that is considered to show poor inter-rater reliability.
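Since both scales lean heavily on intra-class correlation coefficients, a short illustration of how an ICC is computed may help in interpreting the figures quoted above. The sketch below is purely illustrative: the rating matrix is synthetic, and the choice of the two-way random-effects, single-rater model (ICC(2,1)) is our assumption, not necessarily the model used in the studies cited.

```python
import numpy as np

def icc_2_1(ratings):
    """Two-way random-effects, single-rater ICC (Shrout-Fleiss ICC(2,1)).

    ratings: (n_subjects, k_raters) array of scores.
    """
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    # Mean squares from a two-way ANOVA without replication.
    ms_rows = k * np.sum((x.mean(axis=1) - grand) ** 2) / (n - 1)  # subjects
    ms_cols = n * np.sum((x.mean(axis=0) - grand) ** 2) / (k - 1)  # raters
    sse = np.sum((x - x.mean(axis=1, keepdims=True)
                    - x.mean(axis=0, keepdims=True) + grand) ** 2)
    ms_err = sse / ((n - 1) * (k - 1))                              # residual
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Illustrative example: 16 recorded sessions scored by 3 raters.
rng = np.random.default_rng(0)
true_quality = rng.uniform(1, 5, size=16)
scores = true_quality[:, None] + rng.normal(0, 0.5, size=(16, 3))
print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
```

Point estimates near the .5 boundary mentioned above sit at the lower edge of moderate reliability, which is why confidence intervals around the ICC matter when judging a scale.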
Integrating Assessment of Intervention Integrity into the Phases of Research
The CONSORT (Consolidated Standards of Reporting Trials) guidelines provide an important set of good practices for reporting clinical trials (Schulz et al. 2010). These include standard elements for authors to describe when preparing reports of trial findings, facilitating their complete and transparent reporting, and aiding their critical appraisal and interpretation. The element most applicable to the issue of intervention fidelity is item 5, which involves describing the "interventions for each group with sufficient detail to allow replication, including how and when they were actually administered." The CONSORT guidelines also include an extension for reporting non-pharmacological intervention trials that is helpful in addressing the additional issues involved in reporting MBPs (Boutron et al. 2008). Item 4 in this extension outlines additional elements for non-pharmacologic trial intervention reporting, including details of the intervention components, how the interventions were standardized, and how adherence to the protocol was assessed.
Another recent set of recommendations, which expands item 5 within the CONSORT guidelines by providing detailed guidance on how to report intervention integrity issues, is the Template for Intervention Description and Replication (TIDieR) guidelines (Hoffmann et al. 2014). These provide a much more detailed set of recommendations for how to report interventions so that adequate information is provided to allow replication. We believe the TIDieR guidelines provide an important roadmap for improving reporting on the intervention component of MBP trials in general, and on how intervention fidelity was addressed. Such guidelines are important not only for researchers, but for all of us who read the research literature to inform our practice. In the following sections, we describe how we suggest researchers performing trials of MBPs might best apply the TIDieR guidelines when planning and conducting MBP trials, and how to report these steps when publishing the trial. Table 2 summarizes these TIDieR guidelines and their relevance to the MBP research context.
Item 1 of the TIDieR guidelines is to "provide the name or a phrase that describes the intervention." For planning and reporting MBPs, this means addressing a critical first question: defining which MBP is being studied. If an existing MBP is being employed, it is important to ensure that the delivered curriculum maps exactly onto the manual or curriculum guide for this MBP (Hoffmann et al. 2014); for MBCT see the published guide (Segal et al. 2013), and for MBSR and other MBPs specific curriculum guides are available. A challenging question is how much adaptation can take place before an MBP needs a new title (Dobkin et al. 2013); if the adaptations are significant, the MBP needs a new name. Crane et al. (2017) provide a meta-perspective on this question in the context of all MBPs by defining the essential and variant ingredients and qualities of any program that is based on mindfulness. Researchers then need to narrow these questions down to the specifics of the program under consideration. There are no definitive answers, but there are some important elements, including (a) the dosage (i.e., a program calling itself MBSR needs to include a minimum of 31 hours of direct instruction plus the assignment of 45 min per day of formal home practice); (b) the delivery and sequencing of the core meditation practices (i.e., in MBCT these are the body scan, mindful movement, sitting meditation, and the 3-min breathing space, each taught over particular durations, in particular ways, at particular time points within the program); and (c) the core themes of each session as laid out within the curriculum guide. An acceptable level of adaptation (while retaining the particular MBP title) might therefore be adjusting the psychoeducational material to a particular population (which in turn is informed by an understanding of the mechanisms by which vulnerability is created and perpetuated in this population), or adjusting the delivery format (but not the overall dosage) to suit the constraints of a particular context.
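As a concrete illustration of how the dosage criterion adds up, the back-of-envelope calculation below combines the minimums stated above; the figure of six home-practice days per week is our own assumption, since curricula prescribe daily practice with some leeway in how it is counted.

```python
# Rough dose arithmetic for a standard MBSR course.
direct_instruction_h = 31           # minimum direct instruction, per the text
home_practice_h = 45 / 60 * 6 * 8   # 45 min/day, ~6 days/week, 8 weeks (assumed)
print(f"Total intended dose ~ {direct_instruction_h + home_practice_h:.0f} h")
# -> Total intended dose ~ 67 h
```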
Item 2 in the TIDieR guidelines is to describe the rationale or theory of the intervention elements. For MBPs, this means defining and reporting why the particular MBP was selected for study, and the theoretical model by which it is hypothesized to be effective in the study context. If program adaptations are made, investigators should make sure they have a clear rationale for the adaptations and describe it in publications. How does the MBP interface with the particular vulnerabilities/life themes of the participants? How do these vulnerabilities present themselves? How are they perpetuated? How does the MBP interface with the context for delivery? See Crane et al. (2017).
Items 3, 4, 7, and 8 of the TIDieR guidelines include describing a set of detailed curriculum-related items that are challenging for MBPs due to the complexity of most MBPs. Addressing these items will typically require either referencing an existing manual/curriculum guide, together with noting any adaptations, or publishing a new manual/curriculum guide if this represents a new MBP. While these items might be concisely summarized within the methods section of a trial results publication, a new manual/curriculum guide or a lengthy description of adaptations will typically require publication in one of four formats: (1) a separate trial protocol publication in an appropriate journal (for example, a series of on-line journals now publish detailed trial protocols); (2) an on-line appendix to the article, if the journal provides such an option; (3) an on-line resource on a website that will serve as a long-term reference (i.e., is not likely to have its URL change or be abandoned); or (4) a book (e.g., Segal et al. 2013).
TIDieR item 3 covers describing what informational or physical materials are used in an intervention. For MBPs, this would typically involve describing (and ideally providing examples of) materials such as handouts for participants and guided meditation audio-tracks.
Item 4 involves describing the procedures and activities used. For MBPs, this will typically involve noting the types of mindfulness practices performed during in-person sessions (e.g., a 15-min body scan at the beginning of the class meeting) or for home practice. Other in-class activities, such as didactic teaching (e.g., on stress reactivity and mindfulness) and group exercises, should be described with enough detail to support consistency across multiple teachers within a trial, or to facilitate replication by other investigators. While specifying detail is challenging for elements such as group exercises, outlining issues such as the themes that group leaders aim to address can facilitate replication and provide items that are useful in assessing fidelity to the intervention curriculum. All the teachers within a trial need to be working to the same curriculum guide.
Clarity is needed within trial teacher training processes regarding how to address adherence. For example, some trials take the line of requiring the inclusion of certain poems within certain sessions, and the standardization of the audio recordings of meditations given to the participants for home practice. However, another approach is to address adherence by seeing it as adherence to the essence of the process of teaching MBPs. In this case, the teachers are encouraged to work responsively in the moment by selecting poetry that meets emergent themes in the teaching space, by working flexibly with the curriculum to enable responsiveness to a theme that has spontaneously emerged, and by offering participants meditation practice recordings in their own teacher's voice. The field is tending towards the latter. This level of fluidity is entirely in keeping with the spirit of MBP teaching, but the challenge is to ensure that it continues to flourish within overarching agreed norms of understanding about program fidelity.

Table 2 TIDieR checklist items and their application to the MBP research context

Brief name. 1. Provide the name or a phrase that describes the intervention, with a reference to the most recent curriculum guide (i.e., MBSR).

Why. 2. Describe any rationale, theory, or goal of the elements essential to the intervention. In addition to referencing published literature on this issue, theoretical rationales are needed for any adaptations or tailoring to a particular population or context.

What. 3. Materials: Describe any physical or informational materials used in the intervention, including those provided to participants or used in intervention delivery or in training of intervention providers. Provide information on where the materials can be accessed (such as online appendix, URL). For example, written course materials and guided mindfulness meditation practices. 4. Procedures: Describe each of the procedures, activities, and/or processes used in the intervention. If using a published MBP curriculum guide this is not needed; only include descriptions of adaptations. Detail in full if delivering a new MBP.

Who provided. 5. For each category of intervention provider, describe their expertise, background, and any specific training given. Describe (1) what MBP teacher training has been undertaken by trial teachers, (2) how they adhere to ongoing MBP Good Practice Guidelines, such as on-going practice, and (3) the measures of teacher competence that were used to select trial teachers.

How. 6. Describe the modes of delivery (such as face to face or by some other mechanism, such as internet or telephone) of the intervention and whether it was provided individually or in a group. If following a standard MBP curriculum guide this is not required; only detail deviations/adaptations from standard protocols, or, if a new curriculum, detail in full, including delivery method (i.e., in-person teacher-led group sessions, digital delivery, etc.).

Where. 7. Describe the type(s) of location(s) where the intervention occurred, including any necessary infrastructure or relevant features.

When and how much. 8. Describe the number of times the intervention was delivered and over what period of time, including the number of sessions, their schedule, and their duration, intensity, or dose. If following a standard MBP curriculum guide this is not required; only detail deviations/adaptations from standard protocols, or give full details of new MBPs.

Tailoring. 9. If the intervention was planned to be personalized, titrated, or adapted, then describe what, why, when, and how. Describe how individual needs/vulnerabilities of MBP group participants were handled by the trial teacher(s), and whether any steps such as individualized additional meetings with the teacher were used to address issues that varied by participant.

Modifications. 10. If the intervention was modified during the course of the study, describe the changes (what, why, when, and how).

How well. 11. Planned: If intervention adherence or fidelity was assessed, describe how and by whom, and if any strategies were used to maintain or improve fidelity, describe them. Describe whether an MBP fidelity tool was used to assess intervention delivery via reviews of recorded sessions, by whom, and how. Describe the rationales for the choices made. 12. Actual: If intervention adherence or fidelity was assessed, describe the extent to which the intervention was delivered as planned. Detail the assessed level of MBP teaching competence, adherence, and differentiation in the results section of the paper.

Adapted from Table 1 in Hoffmann et al. (2014)
Item 5 of the TIDieR guidelines involves describing who delivered the intervention, and what their background, expertise, and specific training were. This encompasses the critical question of whether the teachers selected for teaching on an MBP trial are at an acceptable level of competence, have trained to acceptable levels, and are adhering to accepted norms of good practice. Good trial governance asks that competence checks are conducted on the teachers in advance of embarking on research trial classes. The requirements for this vary depending on the nature and stage of the research. In this section, we refer to the phases of clinical research, as adapted to behavioral intervention research by Onken et al. (2014).
Stage II efficacy research trial (Onken et al. 2014). For this kind of trial, it is important to choose the best available teachers because the trial is asking a proof-of-concept question. If the teaching is of poor quality, it will not be possible to determine whether a lack of efficacy was the result of poor teaching or a weakness in the intervention itself. If the teaching is of high quality, this variable has effectively been eliminated, and the outcomes can be interpreted in the light of other issues. While more research is needed about the best ways to assess teacher competence, a couple of options currently exist. One is to establish certain criteria for the type of training that teachers have received and their level of teaching experience, and to report these in the intervention methods. While this may be useful, as noted earlier, it may not fully establish teacher competence. The second method, which can be combined with the first, is to use an instrument such as the MBI:TAC. If the MBI:TAC is being used to assess competence, we recommend that (for stage II trials) the teaching is at "proficient" level or above.
Stage III and IV trials (Onken et al. 2014). For these trials, the core research questions are different. By this phase of the research journey, the MBP has been shown to be of value in a carefully controlled research environment. The next phases of investigation ask whether it can stand up to the challenge of being implemented in a real-world/community setting. During these phases, a legitimate research question could be: what are the effects of different levels of experience/training/good practice/competence within the trial teachers? These could be manipulated in the trial design, or their natural expression captured in the data so that these questions can be analyzed. In this phase of research, the key issue is to accurately assess the level of skill and experience of the teacher. If the MBI:TAC is being used to assess competence, the "advanced beginner" level is "fit for practice" in that the participants would come to no harm (although their opportunities for learning might be compromised); "competent" is the level at which teacher trainees are able to graduate from post-graduate programs in the UK context and is generally recommended as a minimum level for trial teaching. Teaching that is at competent level as assessed by the MBI:TAC is a solid demonstration of good practice, with some areas for development.
TIDieR item 6 involves describing the mode of delivery of the intervention (i.e., face-to-face, digital, individual or group).
TIDieR item 7 involves describing where the intervention was conducted, and any infrastructure (e.g., a large, carpeted room) that was needed for the intervention.
Item 8 involves describing the number of sessions involved in the intervention, length of session, and over what period the intervention was delivered.
Item 9 involves noting any plans to personalize or adapt the intervention for individual participants. Examples of how this might be applied for MBPs include whether any of the practices are modified for specific participant groups (e.g., mindful yoga postures modified for participants with limited mobility), or whether individual attention is available for certain participants (e.g., participants reporting difficulty with the mindfulness practices being offered an optional 15-min individual meeting with the mindfulness teacher).
TIDieR items 11 and 12 (planning for and conducting assessments of intervention fidelity): In studies of MBPs, item 11 of the TIDieR guidelines should typically involve creating a plan to assess intervention fidelity during the trial, as well as plans to ensure that the teachers are supported and adhering to field norms of good practice. In the UK context, this includes regular engagement in Mindfulness Supervision (Evans et al. 2014) and (at least annual) residential, teacher-led mindfulness practice intensives (Peacock et al. 2016; UK Network for Mindfulness-Based Teacher Training Organisations 2016).
Assessing intervention integrity involves having at least some sessions observed or recorded and reviewed to assess the degree to which the intervention is implemented in the way it was intended. It is important to decide what protocol to follow in terms of the selection of teaching for integrity checks, and who conducts the checking. These issues need to be carefully addressed in the context of the overall trial and reported in trial publications. Decisions will depend on the overall amount of teaching within the trial, the resources available, and the core purpose of the integrity checks. Is intervention integrity part of the research hypotheses/questions, or are the checks there to ensure confidence in answering the primary efficacy or effectiveness question? If the former, there will need to be inter-rater reliability checks on the assessment process itself. If the latter, the fidelity assessment outcomes will be important in enabling the trial to be benchmarked against other trials within the field. Typically, if the check is part of trial governance rather than actually contributing to the trial data, an independent assessor will randomly sample one to two sessions per eight-session course for rating. The outcomes will be reported as part of the trial conduct (TIDieR item 12). The assessor conducting the integrity checks needs to be an experienced teacher of the MBP that is being researched, and trained to use the integrity assessment tool to acceptable levels of reliability.
Research governance requires that the trial protocol is established, and ideally published, and the trial registered before embarking on the research. The trial's approach to intervention integrity, teacher training, and good practice for the teachers therefore needs to be addressed and included in the reported protocol. When reporting MBP trials, we recommend that authors use the TIDieR guidelines, with the specific adaptations for MBPs outlined here, as a guide to how to achieve a high-quality section on intervention integrity.
Conclusions
The main theme that we address is how to integrate teaching integrity questions into the conduct of MBP effectiveness and efficacy trials. We hope this paper offers journal editors and peer reviewers clear guidance which will enable them to offer constructive commentary to authors and will in turn shape practice in this area. It also urges the field to focus future research directly on teaching integrity/fidelity issues. Relative to the overall expansion in research on MBPs, there has been little attention to the way that these effects are created: the curriculum and the teaching process themselves. While current developments offer a foundation for next steps, it is also clear that the methodologies to assess teaching integrity within the MBP field are themselves at an emergent stage and need on-going development and refinement informed by empirical work. As Dimidjian and Segal (2015) pointed out, developing an empirical understanding of intervention integrity will be a critical foundation for the rigorous and sustainable development of the science. Research on teaching integrity is also important for the process of implementation (both the research on it and the practice of it). At this point in time, there is little direct evidence to support the length and type of teacher training that is stipulated in current GPGs (though see Ruijgrok-Lupton et al. 2017 for a small-scale exception to this). Indirect evidence from rigorous trials that do report teaching integrity underlines that the teachers were working to published norms of training and good practice, which supports the GPGs, but direct investigation of these issues is needed going forward. We recommend that researchers of MBPs use the TIDieR framework and supporting resources for ensuring completeness of reporting of the intervention(s) within their study (Hoffmann et al. 2014).
Ultimately, if a research trial is useful to the world, it will contribute to the emerging evidence base, whether its results are positive or negative. Building empirical understanding is an extraordinary process of interconnected human endeavor, with each researcher contributing one piece to an overall jigsaw of understanding. This collaborative knowledge generation works well if each researcher takes responsibility to do what they say they are doing, to do it well, and then to report it transparently and clearly. We hope that this paper provides clarity on one aspect of "doing it well" within the MBP research process. Current understandings of MBP teaching integrity are themselves preliminary and subject to evolution as evidence builds. They do, however, offer us ground to stand on for now and a platform for future development.
Compliance with Ethical Standards This article does not contain any studies with human participants or animals performed by any of the authors. | 2018-10-22T06:13:30.805Z | 2018-01-24T00:00:00.000 | {
"year": 2018,
"sha1": "62bf75ce3992f30a9d9971ecfea035d526e038ee",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12671-018-0886-3.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "736fc0130af51d49c35033d3c7e88bf65a4bfc95",
"s2fieldsofstudy": [
"Psychology",
"Education"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
236944845 | pes2o/s2orc | v3-fos-license | Antimicrobial Properties of Palladium and Platinum Nanoparticles: A New Tool for Combating Food-Borne Pathogens
Although some metallic nanoparticles (NPs) are commonly used in food processing plants as nanomaterials for food packaging, or as coatings on food handling equipment, little is known about the antimicrobial properties of palladium (PdNPs) and platinum (PtNPs) nanoparticles and their potential use in the food industry. In this study, the common food-borne pathogens Salmonella enterica Infantis, Escherichia coli, Listeria monocytogenes and Staphylococcus aureus were tested. Both NPs reduced viable cells, with log10 CFU reductions of 0.3–2.4 (PdNPs) and 0.8–2.0 (PtNPs) and average inhibitory rates of 55.2–99% for PdNPs and 83.8–99% for PtNPs. However, both NPs seemed to be less effective at preventing biofilm formation and at reducing preformed biofilms. The most effective concentrations were evaluated to be 22.25–44.5 mg/L for PdNPs and 50.5–101 mg/L for PtNPs. Furthermore, the interactions of the tested NPs with bacterial cells were visualized by transmission electron microscopy (TEM). TEM visualization confirmed that NPs entered bacteria and caused direct damage to the cell walls, which resulted in bacterial disruption. The in vitro cytotoxicity of the individual NPs was determined in primary human renal tubular epithelial cells (HRTECs), human keratinocytes (HaCat), human dermal fibroblasts (HDFs), human epithelial kidney cells (HEK 293), and primary human coronary artery endothelial cells (HCAECs). Due to their antimicrobial effects on bacterial cells and the absence of acute cytotoxicity, both types of NPs could potentially fight food-borne pathogens.
Introduction
Food-borne pathogens are among the most common causes of bacterial contamination in food processing plants [1][2][3]. They predominantly exist as communities of sessile cells that develop as biofilms [4]. Biofilm formation as a microbial growth strategy offers numerous advantages to microorganisms in comparison to the planktonic lifestyle, such as better protection from environmental hazards and higher resistance to antimicrobial agents, bacteriophages and other hostile environmental conditions [5,6]. Biofilm development is commonly considered to occur in four main stages: (I) bacterial attachment to a surface, (II) microcolony formation, when bacteria begin to produce excessive extracellular matrix, (III) biofilm maturation and (IV) detachment (also termed dispersal) of bacteria, which may then colonize new areas [7]. To enhance food safety, the inhibition of initial bacterial attachment is an essential strategy to prevent biofilm formation on food processing surfaces [8,9]. In the later stages, bacteria generate the extracellular matrix consisting of extracellular polymeric substances (EPSs) such as exopolysaccharides, extracellular DNA (eDNA), proteins and lipids, which contribute to cell survival and the resistance of the biofilm mass to environmental conditions. These EPSs directly influence a variety of the biofilm's physico-chemical characteristics, such as its porosity, density, water content, permeability, absorption, hydrophobic properties and mechanical resistance [10][11][12].
In spite of intensive efforts to improve sanitization strategies, microbial contamination involving antimicrobial-resistant food-borne pathogens persists as a problem in the food industry [8,13]. Therefore, novel strategies must be explored in the effort to inhibit bacterial colonization and reduce the risk of associated potential food-borne diseases, which are an increasingly common public health problem [1,14,15]. Novel antimicrobial strategies could be found in the field of nanotechnology. An earlier report demonstrated the advantage of the use of metallic NPs over other commonly employed antimicrobials, as they do not differentiate between resistant and susceptible bacteria [16]. In addition, they disturb the biofilm integrity by interacting with the EPSs, eDNA, proteins, and lipids of biofilms [16,17]. The interactions of NPs with bacteria induce oxidative stress via reactive oxygen species, which damage bacterial cell envelopes, cell membranes, cellular structures and biomolecules [16][17][18][19]. Thus, nanoparticles may be particularly advantageous in treating bacterial infection, preventing infections in the form of antibacterial coatings of implantable devices and medicinal materials, promoting wound healing, or serving as antibiotic delivery systems to treat diseases [17,20]. On the other hand, different types of NPs have distinct disadvantages, such as a short shelf life, poor stability and insufficiently explored cytotoxicity [17,21].
In the food industry, nanotechnology is already being used, for example, to generate antimicrobial nanomaterials commercially available as food packaging, or as antimicrobial coatings on food handling equipment [22,23]. Materials used for antimicrobial applications may consist of polymers, organic/inorganic nanoparticles, plastics or ceramics [18]. Various syntheses have been developed to obtain NPs of the desired quality while avoiding the aggregation, oxidation, and inactivation of the NPs during synthesis [24]. Unfortunately, chemical synthesis involves toxic chemicals in the synthesis protocol. To avoid the presence of chemical agents associated with environmental toxicity, eco-friendly synthesis approaches are in demand [25]. For instance, an earlier study demonstrated a robust, simple and rapid green synthesis of a gold nanoparticle-alginate biohydrogel using thermostable nisin, while retaining strong antimicrobial activity [24].
Further, nanomaterials may be created from pure metals, or from their composites, with variable sizes and shapes [17,32,33]. The alteration of NPs' size and shape changes their properties on the atomic level and has the potential to design their optimal physicochemical, optical and biological properties for various applications [32,34]. The distinctive physicochemical and optical properties of nanoparticles allow the design of systems with high sensitivity, large surface areas, special surface effects, high functional density, catalytic effects and enhanced optical emission [34,35]. In addition, variable NP sizes and shapes are likely to influence particle transport behavior in biological systems, as well as how cells sense and respond to the particle [36].
In our previous study, we reported the antimicrobial properties of gold (AuNPs) and silver nanoparticles (AgNPs) [37]. In this follow-up study, we aimed to examine the potential antimicrobial properties of palladium (PdNPs) and platinum (PtNPs) nanoparticles and their mechanism of action. While PtNPs are believed to induce the intracellular hyper-production of ATP and oxygen radicals, in turn causing bacterial growth inhibition, DNA damage and bacteriotoxic effects [38][39][40], the precise mechanism of action of PdNPs has not been reported to date. Further, we investigated the acute cytotoxicity of NPs on selected cell lines to elucidate the potential impacts of NP exposure on the human population, as there is a gap in the current literature regarding their nanotoxicity [21,22,41].
In the presented study, four significant food-borne pathogens (Salmonella enterica, Escherichia coli, Listeria monocytogenes and Staphylococcus aureus) were selected to test the antimicrobial properties of PdNPs and PtNPs. These pathogens are well known for being potential biofilm-related sources of food-borne diseases with significant effects on human health and adverse economic impacts for the food industry. The effectiveness of the NPs was assessed by determining their minimum inhibitory concentrations needed for the inhibition of bacterial growth, biofilm formation, metabolic activity, and for biofilm reduction. TEM imaging was used to visualize the interactions of metallic NPs with planktonic cells and potentially reveal their mechanisms of action, which is schematically illustrated in Figure 1. The acute cytotoxicity of individual NPs was verified in vitro.
Results
Ten concentrations of NPs were tested to determine the minimum inhibitory concentration for planktonic growth, and six concentrations were applied to preformed biofilms (as the lowest concentrations were known to be ineffective). The MIC was defined as the lowest substance concentration able to inhibit at least 80% of microbial growth (MICPC80 for planktonic cells, MICBC80 for further growth of biofilm cells), inhibit 80% of metabolic activity (MICBM80 for biofilm metabolic activity, MICMPB80 for the metabolic activity of preformed biofilm), prevent biofilm formation by at least 80% (MICBF80 for biofilm formation), or reduce a preformed biofilm by at least 80% (MICBR80 for biofilm reduction). The results for the MICs, log10 CFU reductions and inhibitions are summarized in Tables 1-6. Complete data are provided in the Supplementary Materials.
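Since each MIC variant above is simply the lowest concentration whose inhibition reaches 80% relative to the positive control, the underlying computation can be expressed compactly. The Python sketch below uses invented A620 readings and hypothetical helper names; it illustrates the thresholding logic only and is not the authors' analysis code.

```python
import numpy as np

def inhibition_percent(signal_treated, signal_control):
    """Percent inhibition relative to the positive control, where each value
    is the 24 h increase in A620 (or an MTT / crystal violet signal)."""
    return 100.0 * (1.0 - signal_treated / signal_control)

def mic80(concentrations, signals, control_signal):
    """Lowest concentration whose inhibition is >= 80%, or None if the
    threshold is never reached within the tested range."""
    conc = np.asarray(concentrations, dtype=float)
    inhib = inhibition_percent(np.asarray(signals, dtype=float), control_signal)
    order = np.argsort(conc)  # evaluate from the lowest concentration upwards
    for c, i in zip(conc[order], inhib[order]):
        if i >= 80.0:
            return c
    return None

# Illustrative two-fold series (mg/L) and mean growth signals per concentration.
conc = [44.5 / 2**i for i in range(10)]
signal = [0.04, 0.07, 0.12, 0.18, 0.25, 0.33, 0.40, 0.46, 0.50, 0.52]
print(mic80(conc, signal, control_signal=0.55))  # -> 22.25 in this toy example
```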
The Effect of Palladium Nanoparticles
According to the A620 values, planktonic growth was only inhibited in the case of two E. coli strains (683/17 and 693/17), for which the MICPC80 was determined as 22.25 mg/L. For the other strains, MICPC80 values could not be determined, as they were higher than the maximal tested concentration (22.25 mg/L). The average A620 inhibition ranged from 28.6 to 92% (Table 1, Table S1 and Figure S1). Similarly, the MICBM80 values for biofilm metabolic activity (MTT assay) could be determined neither for Gram-positive nor for Gram-negative bacteria. The maximum inhibition of metabolic activity ranged from 3.3 to 52.1% (Table 1). For preformed biofilms, PdNPs were able to prevent further growth of biofilm cells and inhibit their metabolic activity in all strains (Table 3, Table S2 and Figure S4). In addition, PdNPs were able to prevent the biofilm formation of both S. aureus strains and reduce the biofilms of S. aureus 816 and both strains of S. Infantis (Tables 1 and 3, Figures S2, S3, S5 and S6).
The Effect of Platinum Nanoparticles
The results for PtNPs resemble those for PdNPs. According to the A620 values, the MICPC80 values could not be determined, as they were higher than the maximal tested concentration (50.5 mg/L) for all strains. The average A620 inhibition ranged from 28.9 to 77.8% (Table 2, Table S3 and Figure S7). For biofilm metabolic activity (MICBM80), the MTT reduction assay showed maximum inhibition ranging from 5.8 to 64.3% (Table 2). Thus, MICBM80 values could not be determined for any tested strain. However, for preformed biofilms, PtNPs were able to inhibit further growth of biofilm cells and inhibit their metabolic activity in all tested strains (Table 4, Table S4 and Figure S10). Furthermore, PtNPs were able to prevent biofilm formation to the same degree as PdNPs for S. aureus 816 and were able to reduce preformed biofilm in both S. aureus strains (Tables 2 and 4, Figures S8, S9, S11 and S12).
Colony Plate Counting and Inhibitory Rate Method
PdNPs' and PtNPs' effects on bacterial growth were further studied by colony plate counting and calculation of the inhibitory rate (Tables 5 and 6). The log10 CFU reduction ranged from 0.3 to 2.4 (PdNPs) and from 0.8 to 2.0 (PtNPs), which represents the complete inhibition of bacterial growth at the maximal tested concentration (22.25 mg/L PdNPs or 50.5 mg/L PtNPs), except for L. monocytogenes 149 when PdNPs were applied (Table 5). The average inhibitory rates ranged from 55.2 to 99% in the case of PdNPs (Table 5) and from 83.8 to 99% in the case of PtNPs (Table 6).
Transmission Electron Microscopy Imaging
To better understand the mechanism of action, selected bacterial strains were exposed to the highest effective concentration of the metallic NPs for different durations (for 4, 8 and 24 h), and were then observed with TEM. The application of NPs resulted in bacterial disruption and leakage of intracellular components (Figures 2 and 3). These observations were not detected in the planktonic cells without NPs.
Acute Cytotoxicity of Metallic Nanoparticles
The cytotoxic effects of the metallic NPs on HRTECs, HaCat, HDFs, HEK 293 and HCAECs were evaluated by a resazurin assay over 72 h to determine the concentration that halved cellular viability (IC50). The IC50 (mg/L) values are shown in Table 7. No IC50 values were obtained for either PdNPs or PtNPs, because they did not cause any acute cytotoxicity in concentration ranges up to 4.45 mg/L (PdNPs) and 10.1 mg/L (PtNPs).
Discussion
In this work, two types of metallic NPs (PdNPs and PtNPs) were tested for their ability to inhibit cell growth, prevent biofilm formation, and reduce the biofilm mass of four selected bacterial food-borne pathogens (Gram-positive L. monocytogenes and S. aureus, and Gram-negative E. coli and S. Infantis). The highest concentrations applied in this study (44.5 mg/L PdNPs and 101 mg/L PtNPs) were prepared using the cathodic sputtering approach, which requires a specific deposition time.
PdNPs and PtNPs were characterized by TEM and high-resolution TEM (round shape, size 4-6 nm). Both NPs exhibited greater antimicrobial effects on the further growth of biofilm cells and on the metabolic activity of preformed biofilms than on planktonic cells. Nevertheless, further investigations, such as colony plate counting and TEM visualization, confirmed their antimicrobial properties. These effects were mainly observed at the highest concentrations applied (PdNPs 22.25-44.5 mg/L, PtNPs 50.5-101 mg/L), which may make their application in food processing plants significantly more expensive. In a previous study [37], we demonstrated a similar result for gold and silver NPs.
According to our review of the literature, the antimicrobial activity of PdNPs and PtNPs against L. monocytogenes and Salmonella Infantis has not been reported to date. A small handful of studies have described the antimicrobial activity of PdNPs and PtNPs for other bacterial species [38,[42][43][44]. A study by Adams et al. [42] demonstrated greater antimicrobial activity of PdNPs (size 2 nm) at concentrations as low as 2.5 nM against Gram-positive S. aureus compared to Gram-negative E. coli. Nevertheless, the antimicrobial effect for Gram-negative E. coli required higher concentrations of PdNPs and longer exposure times before an inhibitory growth effect became evident, which corresponds with our current work. Their study further confirmed that the antimicrobial activity of NPs is size-dependent, as the most effective NP size was established as <1 nm. However, NPs < 1 nm may pose a relatively high ecological risk if they enter the environment. Therefore, comparatively "large" NPs were studied first. To the best of our knowledge, NP size can be successfully altered by adjusting the concentration of PEG or adding certain additives. This size-dependent correlation with antimicrobial activity was also demonstrated in studies describing that NP size plays a major role in activity against both Gram-positive and Gram-negative bacteria [43,44]. For instance, NPs bigger than 5 nm only interact with the cell membrane, while smaller NPs have the potential to enter bacteria. As well as NPs entering bacteria, TEM visualization further confirmed interactions that enable better binding of NPs to the bacterial cell wall. This observation was also made in our earlier study [37] and is explained by Slavin et al. [45], who described this affinity for a wide spectrum of bacteria.
Similarly, the potential antimicrobial activity of PtNPs has only been demonstrated in a few studies. Hashimoto et al. [38] reported the antimicrobial effect of PtNPs at concentrations of 400 mg/L with an NP size < 5 nm. According to their work, PtNPs exhibited an inhibitory effect on biofilm formation. Our study only indicated an inhibitory effect on biofilm formation for S. aureus 816.
As previously mentioned, the discrepancies of published results may be explained by differences in the nanoparticle sizes tested, nanoparticle concentrations or shapes, or by different testing conditions [37]. Additionally, there is limited understanding of the potential nanotoxicity associated with the use of metallic NPs. To date, many studies have explored the potential impacts of NP exposure on the human population, associated safety concerns, and environmental concerns [21,22,41]. There are only a few studies that offer useful conclusions regarding the safety of NPs [41]. Furthermore, it was demonstrated that it is not possible to make a single overarching recommendation concerning the safety of all nanoparticle types [21]. Instead, the toxicity of NPs should be judged on a case-by-case basis. Our results report no acute cytotoxic activity of PdNPs and PtNPs. However, each type of NP should be thoroughly investigated, especially regarding their composition, size and dose, before guaranteeing their safe application in the food industry [22].
For future studies, there needs to be a renewed focus on evaluating antimicrobial activity as a function of NP size. Although NPs seem to be a theoretically promising tool for combating bacterial growth in food processing plants, it may be difficult to strike a balance between their efficient use and their toxicity. Therefore, it is very important to continue testing the efficacy and safety of NPs, in all their permutations, in the greater effort to find the most convenient and safe surface strategy required in the food industry.
Chemicals and Reagents
The liquid media used for the cultivation of bacteria were Brain Heart Infusion (BHI) and Tryptone Soya Broth supplemented with 1% glucose (TSB + 1% Glc). The following selective-diagnostic solid media were used: Baird-Parker (BP) agar, agar Listeria according to Ottaviani and Agosti (ALOA), xylose lysine deoxycholate (XLD) agar and tryptone bile X-glucuronide (TBX) agar.
Preparation of Metallic Nanoparticles
The metallic NPs (PdNPs and PtNPs) were prepared by the Department of Solid State Engineering, University of Chemistry and Technology in Prague, by cathodic sputtering using a BAL-TEC SCD 050 sputter coater, with the metal deposited directly into 2 mL of polyethylene glycol pipetted into a Petri dish. The deposition was carried out under constant conditions: room temperature, argon pressure in the chamber of 8 Pa, current of 30 mA, electrode gap of 50 mm and deposition time of 1000 s. After sputtering, the nanoparticulate polyethylene glycol was immediately diluted with 18 mL of distilled water, i.e., 1:9 by volume (PEG:H2O). The NPs were characterized by TEM (Figure 4) and HR-TEM (Figure 5) as being of a round shape with a size of 4-6 nm (Table 8).
Bacterial Stock Cultures Preparation
Isolates were refreshed from a deep-frozen aliquot by inoculating one loopful onto the following agar plates: ALOA for L. monocytogenes, BPA for S. aureus, XLD for Salmonella Infantis and TBX for E. coli. Strains were incubated at 37 °C for 24 h. Grown cultures were stored at 4 °C for up to one month and used for inoculum preparation.
Inoculum Preparation and Preparation of Dilution Series for Metallic Nanoparticles
A single colony from an agar plate was inoculated into 2 mL of BHI and incubated at 37 °C overnight. To obtain the starting cultures, strains of S. aureus, L. monocytogenes and E. coli were centrifuged (6000× g, 10 min) and the resulting pellet was resuspended in 2 mL of TSB + 1% Glc, which was previously shown to be an optimal medium for their biofilm growth [37]. For Salmonella strains, the overnight-grown culture was used directly as the starting culture, since the same medium (BHI) was used for inoculum preparation [37]. In all cases, the inoculum was prepared by mixing the chosen fresh medium for biofilm formation with the starting culture to reach a bacterial density of 0.5 McFarland standard. Dilution series of the tested antimicrobial substances (metallic NPs) were prepared by diluting the substances in the appropriate culture medium (BHI, TSB + 1% Glc) in a 1:1 ratio. The concentration range for PdNPs was 0.05-44.5 mg/L and for PtNPs, 0.1-101 mg/L. The highest available concentrations of PdNPs and PtNPs were used only for biofilm reduction testing, where the NPs were applied directly to a preformed biofilm. Ten different concentrations of NPs were prepared as a two-fold dilution series, generated as in the sketch below, by mixing the appropriate concentrations in the ratio 1:1.
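For readers who want to reproduce the concentration ladder, the two-fold series can be generated as follows. The starting stocks (44.5 mg/L PdNPs, 101 mg/L PtNPs) come from the text; the number of steps, the rounding, and whether the quoted values refer to pre- or post-mixing concentrations are our own illustrative choices.

```python
def twofold_series(stock_mg_per_l, n=10):
    """n concentrations obtained by successive two-fold dilutions of a stock."""
    return [stock_mg_per_l / 2**i for i in range(n)]

pd_series = twofold_series(44.5)   # PdNPs, mg/L
pt_series = twofold_series(101.0)  # PtNPs, mg/L
print([round(c, 2) for c in pd_series])
# -> [44.5, 22.25, 11.12, 5.56, 2.78, 1.39, 0.7, 0.35, 0.17, 0.09]
```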
Determination of Minimum Inhibitory Concentrations
The minimum inhibitory concentrations were determined as described by Chlumsky et al. [37]. Briefly, 75 µL of inoculum (0.5 McFarland) was transferred into a pre-sterilized polystyrene 96-well flat-bottomed microtiter plate in three replicates and then carefully mixed with 75 µL of a test substance at a particular concentration. For a positive control of bacterial growth, the inoculum was mixed with pure sterile medium. Furthermore, sterile medium was included in the plate as a marker of potential microbial contamination.
Evaluation of Planktonic Cells Growth
For the determination of MICPC80, the optical density of the contents of the microtiter plates was measured spectrophotometrically at 620 nm before and after 24 h of cultivation at 37 °C (25 °C for S. Infantis strains [37]). The difference in A620 was considered a measure of the ability of planktonic cells to grow in the presence of the tested NPs and was used to determine MICPC80. After cultivation, the biofilm was quantified using the crystal violet assay (4.7.2.) or tested for metabolic activity (4.7.3.).
Quantification of Biofilm Formation
For the determination of MICBF80, biofilms were quantified using crystal violet staining [37]. The wells of microtiter plates with grown bacterial culture were washed five times with 200 µL of distilled water using an automated microtiter plate washer and dried at room temperature for 45 min. Then, 150 µL of 0.1% crystal violet solution in sterile distilled water was added to each well, staining the biofilm for 45 min. After staining, the wells were washed again as described above. Then, 200 µL of 96% ethanol was added for 15 min to elute the stain from the biofilm. Next, 100 µL of each eluted solution was transferred into a new microtiter plate and measured spectrophotometrically at 595 nm.
Evaluation of Metabolic Activity
The determination of MICBM80 was performed using the MTT (thiazolyl tetrazolium bromide) reduction assay. The bacterial cultures in a microtiter plate were drained off and the wells were washed twice with 200 µL of PBS. Next, 80 µL of glucose solution (57.4 mg/mL) and 70 µL of MTT solution (1 mg/mL) were added into each well and mixed. The microtiter plate was wrapped in tinfoil and incubated for 2 h at 37 °C (25 °C for S. Infantis). Then, 100 µL of washing solution was added and the microtiter plate was statically incubated for at least 30 min at 37 °C (25 °C for S. Infantis) in order to dissolve the formazan formed. Next, the solution was mixed by pipetting five times, and 100 µL of each solution was transferred into a new microtiter plate and assessed spectrophotometrically at 595 nm.
Evaluation of Nanoparticles Effect on Preformed Biofilms
For the determination of MICBR80, 100 µL of inoculum (0.5 McFarland) was added into a microtiter plate well in three replicates for each strain and concentration. The plate was incubated for 18 h at 25 °C (S. Infantis) or at 37 °C (other species) to allow the cells to form biofilms. The plate was then washed four times with 200 µL of sterile distilled water by manual pipetting in order to avoid the cross-contamination that can occur when using the plate washer. Then, 100 µL of the tested substances diluted with medium was added onto the preformed biofilms. Positive and sterility controls were included in the experiment. The resulting suspensions were measured spectrophotometrically at 620 nm before and after the following 24 h of cultivation at 37 °C (25 °C for S. Infantis). The difference in A620 was considered a measure of the ability of biofilm cells to grow in the presence of the tested NPs and was used for the determination of MICBC80. After the cultivation, the biofilm was quantified using the crystal violet assay (MICBR80) or tested for biofilm metabolic activity (MICMPB80) as described above.
Evaluation of Growth Inhibition Using the Plate Counting Agar
The highest concentrations of the metallic NPs (44.5 mg/L PdNPs or 101 mg/L PtNPs) were mixed with the individual bacterial suspensions (10^7 to 10^8 CFU/mL) in the ratio 1:1 and cultivated for 24 h at 37 °C with shaking at 135 rpm. Before and after cultivation, the suspensions were serially decimally diluted and compared by quantifying their CFU/mL. The three most diluted suspensions were applied in 20 µL droplets onto plate count agar (PCA, Oxoid, Cheshire, UK) in two parallels and incubated for 24 h at 37 °C. After the cultivation, the grown bacterial colonies were counted and quantified according to Lencova et al. [27]. Four independent replicates were performed for each bacterial strain with the specific metallic nanoparticles. Bacterial suspensions without any added NPs were used as controls.
From the CFU/mL determination, the log10 CFU reduction was assessed according to Equation (2) (the log10 CFU reduction expresses the difference between bacterial growth in the control and in the suspension with the PdNPs or PtNPs) [47]. The inhibitory effect was calculated using the modified formula below (Equation (3)).

log10 CFU reduction = log10 CFU(control) − log10 CFU(nanoparticles)    (2)

where CFU(control) is the number of bacterial cells in the suspension itself and CFU(nanoparticles) is the number of bacterial cells in the suspension with the added PdNPs or PtNPs.
Inhibitory rate (%) = 100 × [CFU(control) − CFU(nanoparticles)] / CFU(control)    (3)

where CFU(control) is the number of CFU/mL in the bacterial suspension itself and CFU(nanoparticles) is the number of CFU/mL in the bacterial suspension with the added PdNPs or PtNPs.
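To make the two metrics concrete, the short sketch below evaluates Equations (2) and (3) for an illustrative pair of counts; the numbers are invented for demonstration and are not taken from the trial data.

```python
import math

def log10_cfu_reduction(cfu_control, cfu_np):
    """Equation (2): difference of log10 CFU/mL between control and treated."""
    return math.log10(cfu_control) - math.log10(cfu_np)

def inhibitory_rate(cfu_control, cfu_np):
    """Equation (3): percentage reduction in viable counts."""
    return 100.0 * (cfu_control - cfu_np) / cfu_control

cfu_control, cfu_np = 3.2e7, 3.2e5   # hypothetical 24 h counts, CFU/mL
print(log10_cfu_reduction(cfu_control, cfu_np))  # -> 2.0
print(inhibitory_rate(cfu_control, cfu_np))      # -> 99.0
```

Note how a 2.0 log10 reduction corresponds to a 99% inhibitory rate, consistent with the upper ends of the ranges reported in Tables 5 and 6.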
Transmission Electron Microscopy Imaging
The interactions between the tested metallic NPs and planktonic cells were visualized by TEM. A volume of 0.75 mL of inoculum (10^7 or 10^8 CFU/mL) was added into a 2 mL centrifuge tube and mixed with 0.75 mL of metallic NPs at the selected concentration or with 0.75 mL of sterile medium (control). After cultivation (37 °C for 4, 8 and 24 h) in a shaking incubator, a drop of the bacterial culture suspension was deposited on a copper carbon-coated electron microscopy grid and incubated at room temperature for about 10 min. After that, the excess liquid was removed with filter paper and the grid was quickly rinsed with distilled water. The grid was then deposited into a solution of 1% sodium silicotungstate (pH 7.4) and negatively stained for about 10 s. After the staining, the grid was left to dry and subsequently inserted into the column of a JEOL JEM-1010 TEM (JEOL Ltd., Tokyo, Japan) operated at 80 kV at various magnifications. The micrographs were recorded with a SIS Megaview III CCD camera and analyzed using AnalySIS v3.2 software (Olympus Soft Imaging Systems, Münster, Germany).
Cytotoxicity Assay
The cell lines were maintained in the appropriate medium: HaCat, HDFs and HEK 293 in DMEM; HRTECs in VCB; HCAECs in ProxUp. The cytotoxicity experiment was carried out according to Tran et al. [48]. Briefly, the cells were counted with a Cellometer Auto T4 (Nexcelom Bioscience, Lawrence, MA, USA) and a cell suspension with a density of 10^5 cells/mL was split into a 96-well plate, 100 µL per well. The plates were then incubated for 24 h at 37 °C in a humidified atmosphere of 5% CO2. Then, the plates were washed three times with PBS and the tested NPs, diluted in the respective medium, were added using a binary serial dilution. After 72 h of incubation, cell viability was tested by a resazurin assay. The fluorescence was measured with a SpectraMax i3x microplate reader (San Jose, CA, USA) at a wavelength of 560 nm excitation/590 nm emission.
Statistical Analysis
All MIC measurements were performed in at least two independent experiments, each with three replicates. The MICs were calculated as an average of all measured values and represent the minimum concentrations which resulted in at least 80% inhibition of growth (MICPC80, MICBC80), metabolism (MICBM80, MICMPB80) or biofilm formation (MICBF80), or in at least an 80% reduction in preformed biofilms (MICBR80). The significance of the results was verified by t-test (p ≤ 0.05) using Statistica v13.5.0 (TIBCO Software Inc., Palo Alto, CA, USA).
The cytotoxicity results are expressed as the average IC50 ± standard error of the mean (SEM). Values of IC50 were obtained using the online tool Quest Graph IC50 Calculator (AAT Bioquest Inc., Sunnyvale, CA, USA). One-way analysis of variance (ANOVA) was used, followed by Duncan's post hoc test (p < 0.05), to show the differences between the groups. For ANOVA, Statistica v12 (TIBCO Software Inc., Palo Alto, CA, USA) was used.
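The IC50 values in Table 7 were obtained with an online calculator; for readers who prefer a scriptable route, a four-parameter logistic fit yields the same quantity. The sketch below applies SciPy's curve_fit to synthetic viability data; the data points and starting parameters are illustrative assumptions, not values from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

# Synthetic resazurin viability (% of untreated) over a binary dilution series.
conc = np.array([0.16, 0.31, 0.63, 1.25, 2.5, 5.0, 10.0, 20.0])  # mg/L
viability = np.array([98, 96, 91, 78, 52, 28, 12, 6], dtype=float)

popt, _ = curve_fit(four_pl, conc, viability,
                    p0=[0.0, 100.0, 2.5, 1.0], maxfev=10000)
print(f"Estimated IC50 = {popt[2]:.2f} mg/L")
```

For PdNPs and PtNPs no IC50 could be reported because viability never fell below 50% within the tested concentration range, so such a fit would place the midpoint outside the data.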
Conclusions
The aims of this study were to investigate the effectiveness of PdNPs and PtNPs against important food-borne pathogens and to evaluate their mechanisms of action. The interactions of the NPs with bacteria did not depend on their Gram-negative or Gram-positive characteristics. NPs bound to the bacterial cell wall and subsequently entered the cell through the wall and membrane, which resulted in bacterial disruption and leakage of intracellular components. The in vitro cytotoxicity study confirmed that PdNPs and PtNPs did not exhibit any acute cytotoxicity. Both types of NPs were able to inhibit viable bacterial cells. However, the most significant antimicrobial effects were observed at the highest concentrations tested, and the NPs seemed to be less effective at preventing biofilm formation and at reducing preformed biofilms. Hence, the regular use of NPs in food processing plants as an antimicrobial strategy may be challenging and potentially costly at this stage. Therefore, more studies are needed to elucidate the effects of NP size on antimicrobial efficacy and the NPs' potential chronic cytotoxicity prior to their application in the food industry.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijms22157892/s1, Table S1: Absorbance values (A620) of the effect of PdNPs on planktonic growth. Figure S1: Inhibition effect of PdNPs on planktonic growth. Figure S2: Inhibition effect of PdNPs on biofilm formation. Figure S3: Quantification of biofilm formation with the use of 10 different PdNPs concentrations. Table S2: Absorbance values (A620) of the effect of PdNPs on further growth of biofilm cells. Figure S4: Inhibition effect of PdNPs on further growth of biofilm cells. Figure S5: Reduction effect of PdNPs on preformed biofilms. Figure S6: Quantification of biofilm reduction with the use of 6 different PdNPs concentrations. Table S3: Absorbance values (A620) of the effect of PtNPs on planktonic growth. Figure S7: Inhibition effect of PtNPs on planktonic growth. Figure S8: Inhibition effect of PtNPs on biofilm formation. Figure S9: Quantification of biofilm formation with the use of 10 different PtNPs concentrations. Table S4: Absorbance values (A620) of the effect of PtNPs on further growth of biofilm cells. Figure S10: Inhibition effect of PtNPs on further growth of biofilm cells. Figure S11: Reduction effect of PtNPs on preformed biofilms. Figure | 2021-08-08T05:24:20.822Z | 2021-07-23T00:00:00.000 | {
"year": 2021,
"sha1": "25f705cfeb4c127e0c1c92c56739310a0c2dc874",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/15/7892/pdf",
"oa_status": "GREEN",
"pdf_src": "PubMedCentral",
"pdf_hash": "25f705cfeb4c127e0c1c92c56739310a0c2dc874",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
53293082 | pes2o/s2orc | v3-fos-license | Lithium treatment in bipolar adolescents: a follow-up naturalistic study
Background Although lithium is currently approved for the treatment of bipolar disorders in youth, long-term data are still scant. The aim of this study was to describe the safety and efficacy of lithium in referred bipolar adolescents, who were followed up at the 4th (T1) and 8th (T2) month of treatment. Methods The design was naturalistic and retrospective, based on a clinical database, including 30 patients (18 males, mean age 14.2±2.1 years). Results Mean blood level of lithium was 0.69±0.20 mEq/L at T1 and 0.70±0.18 mEq/L at T2. Both Clinical Global Impression-Severity (CGI-S) and Children Global Assessment Scale (C-GAS) scores improved from baseline (CGI-S 5.7±0.5, C-GAS 35.1±3.7) to T1 (CGI-S 4.2±0.70, C-GAS 46.4±6.5; P<0.001), without significant differences from T1 to T2. Thyroid-stimulating hormone significantly increased from 2.16±1.8 mU/mL at baseline to 3.9±2.7 mU/mL at T2, remaining within the normal range, without changes in T3/T4 levels; two patients needed thyroid hormone supplementation. Creatinine blood level did not change. No cardiac symptoms or electrocardiogram QTc changes occurred. White blood cell count significantly increased from 6.93±1.68 10³/mm³ at baseline to 7.94±1.94 10³/mm³ at T2, and serum calcium significantly increased from 9.68±0.3 mg/dL at baseline to 9.97±0.29 mg/dL at T2, both remaining within the normal range; all the other electrolyte levels were stable and normal during the follow-up. The treatment with lithium was well tolerated, probably due to the relatively low lithium blood levels. Gastrointestinal symptoms (16.7%), sedation (9.7%) and tremor (6.4%) were the most frequently reported side effects. Conclusion Lithium was effective and safe in adolescent bipolar patients followed up for eight months.
Introduction
In the context of a multimodal approach, including psychosocial, family and psychotherapeutic interventions, the primary treatment for bipolar disorder (BD) with onset during childhood or adolescence is pharmacological. 1 First-line medications include mood stabilizers, such as lithium, valproate and carbamazepine, and atypical antipsychotics, such as aripiprazole, olanzapine, quetiapine and risperidone. 2 Although risperidone showed higher response rates in bipolar adolescents, compared to both lithium and divalproex, 3 growing concerns are associated with the use of atypical antipsychotics in children and adolescents. 4 Lithium is currently approved by most regulatory agencies, including the Food and Drug Administration and the European Medicines Agency, for the treatment of adolescents with BD older than 12 years. Although a recent double-blind, placebo-controlled study further supported the efficacy of lithium in youths with BD, 5 studies reporting the efficacy of lithium in youths, long-term data in particular, are still scant. 6 Among the long-term studies, Strober et al 7 reported that, at the end of the follow-up period, 21 patients (56.8%) relapsed, and that the relapse rate was three times higher among patients who discontinued lithium earlier. Findling et al 8 explored the comparative effectiveness of lithium and divalproex in the maintenance treatment of juvenile BD over 76 weeks, finding no differences in time to mood relapse or treatment discontinuation. Our aim was to describe the medium-term safety and effectiveness of lithium in referred adolescent bipolar patients, who were naturalistically followed up for 8 months.
Sample and measures
This was a naturalistic study based on a clinical database of consecutive Caucasian youths, aged between 12 and 18 years, with three inclusion criteria: a Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition-Text Revision (DSM-IV-TR) and DSM-5 diagnosis of BD; pharmacological treatment with lithium carbonate; and three follow-up points, at baseline, after 4 months and after 8 months. These time points were selected according to the routine follow-up procedures of our department. All the subjects were screened for psychiatric disorders, using historical information and a structured clinical interview according to DSM criteria, the Schedule for Affective Disorders and Schizophrenia for School-Age Children -Present and Lifetime Version (K-SADS-PL). 9 Patients with schizophrenia, autism spectrum disorder and intellectual disability were excluded. Given the naturalistic design of the study, patients receiving other pharmacological treatments were not excluded, provided that the dosages of the other treatments had been stable for at least 4 weeks before starting lithium and remained unchanged during the follow-up. Among 45 patients who started lithium, 15 patients were excluded because their clinical condition required the stable introduction of a new medication during the follow-up. Thirty patients remained on this treatment for at least 8 months and were included in the study (18 males, mean age of 14.2±2.1 years). Seventeen patients (56.7%) presented with anxiety comorbidities, 14 (46.7%) had oppositional defiant disorder or conduct disorder and 12 (40%) had attention-deficit hyperactivity disorder (ADHD). Seventeen patients received second-generation antipsychotics (SGAs), four valproic acid, two fluoxetine and 16 concomitant psychotherapy. Although our assessment was limited to the 4th and 8th month of treatment, patients' blood levels of lithium were monitored more frequently, and the available data supported good compliance with treatment.
Our primary outcome measure was the improvement in symptoms according to the Clinical Global Impression-Severity (CGI-S) score, 10 assessed at baseline (T0), after 4 months (T1) and after 8 months (T2). The secondary outcome measure was functional improvement, assessed using the Children Global Assessment Scale (C-GAS), 11 also at baseline, T1 and T2. Safety data were assessed at baseline and at follow-up and included physical examination, blood cell count, blood chemistry, electrolytes, thyroid function and electrocardiogram (ECG) (including the QTc interval).
All the diagnostic and therapeutic procedures, including lithium treatment, as well as the follow-up visits and data collection, were part of our routine procedures. The institutional review board of the Scientific Institute Stella Maris (Pisa) approved the study. All subjects and parents received detailed information on the assessment measures and different treatment options and gave their written informed consent to the treatment with lithium.
Statistical analyses
A paired t-test was used to compare the values of continuous variables from baseline to T1 and T2. Analyses were run in SPSS Version 20 (IBM Corporation, Armonk, NY, USA).
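For readers without SPSS, the same paired comparison can be reproduced with open-source tools; the sketch below uses hypothetical CGI-S scores and scipy's paired t-test, and is not the study's actual analysis code.

```python
from scipy import stats

# Hypothetical CGI-S scores for the same patients at baseline and at T1.
baseline = [6, 5, 6, 6, 5, 6, 5, 6]
t1 = [4, 4, 5, 4, 3, 5, 4, 4]
t_stat, p_value = stats.ttest_rel(baseline, t1)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```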
Results
Patients presented a severe baseline status (CGI-S 5.7±0.5, C-GAS 35.1±3.7). Mean lithium dosage at T1 was 843±176.6 mg/day (range 600-1,200 mg/day) and at T2 was 858±149 mg/day (range 600-1,200 mg/day). Mean blood level of lithium was 0.69±0.20 mEq/L at T1 and 0.70±0.18 mEq/L at T2. A summary of baseline characteristics of the sample and efficacy results is provided in Table 1.
Regarding efficacy, CGI-S score improved from 5.7±0.53 at baseline to 4.2±0.70 at T1 (t=8.7, P<0.001) and 4.0±0.9 at T2 (t=7.8, P<0.001; moderately to markedly ill); from T1 to T2, the difference was not significant (t=1.6, P=0.1). Similarly, C-GAS score improved from 35.1±3.7 at baseline to 46.4±6.5 at T1 (t=−9.0, P=0.001) and 47.5±8.3 (t=−7.4, P<0.001) at T2 (moderate impairment in most areas or severe in one area); from T1 to T2, the difference was not significant (t=−0.8; P=0.4). No patients relapsed during the follow-up. No emergence of suicidal ideation or behavior was reported during the entire follow-up. The presence of any group of comorbidities (anxiety disorder, disruptive behavioral disorder and ADHD) had no effect on the results.
Regarding thyroid function, thyroid-stimulating hormone (TSH) significantly increased from 2.16±1.8 mU/mL at baseline to 3.9±2.7 mU/mL at T2, remaining within the normal range and without changes in T3/T4 levels; two patients needed thyroid hormone supplementation. Regarding kidney functioning, mean creatinine serum levels were 0.69±0.14 mg/dL at baseline, 0.72±0.15 mg/dL at T1 and 0.75±0.14 mg/dL at T2 (P=NS). No patients exceeded the normal creatinine serum level of 1.1 mg/dL. Two patients (6.4%) presented with polyuria and polydipsia. Urine examination was normal for all the patients during the follow-up.
The other reported side effects were early gastrointestinal symptoms (five patients, 16.7%), asthenia and sedation (three patients, 9.7%) and tremors (two patients, 6.4%), which did not require lithium discontinuation. No patients experienced skin rash. In a subset of patients, we gathered body mass index (BMI) data at the two follow-ups. At baseline, mean BMI was 18.7 (SD 3.4); at T1 it was 20.1 (SD 3.2; P=NS) and at T2 19.7 (SD 3; P=NS). No patients discontinued lithium due to excessive weight gain.
Discussion
In our sample of severely impaired adolescents with BD, lithium treatment significantly improved clinical severity and functional impairment. The clinical and functional improvement occurred in the first 4 months and was confirmed in the following 4 months, without relapses. Consistent with the antisuicidal properties of lithium in mood disorders, 12 no emergence of suicidal ideation or behavior was reported during the follow-up. It is noteworthy that effective lithium blood levels were relatively low (ranging from 0.4 mEq/L to 0.8 mEq/L). Regarding safety data, treatment with lithium was well tolerated, which is in line with previous studies. 6 A moderate increase in TSH was found, although two patients needed add-on thyroid hormone supplementation. Renal function was preserved. Neither cardiac symptoms nor QTc alterations emerged during the follow-up. White blood cell count and calcium levels increased, but within the normal range and without clinical implications. Other electrolyte levels (including sodium and potassium) remained normal during the follow-up. Gastrointestinal symptoms usually occurred in the early phase of treatment, while sedation and tremor were more rarely reported during the follow-up. The rate of side effects was lower, compared to similar studies in adult populations, probably due to the relatively lower blood levels of lithium in our sample of adolescents. Our naturalistic study presents several methodological limitations. This is not a randomized, controlled study. We only gathered data from patients who completed 8 months of follow-up, and this may have biased the results in favor of better efficacy and safety. Another confounding element is that many patients received other medications, namely SGAs or valproic acid, with anti-manic properties even in monotherapy. However, the dosage of other medications was stable for at least 4 weeks before starting lithium and remained unchanged during the follow-up. Although not supported by all guidelines, polypharmacy is the rule rather than the exception in the real world of bipolar patients receiving medications. In the study by Bhangoo et al, 13 youths were receiving 3.40±1.48 medications, and 77% of them had had a trial of an antipsychotic. In the study by Pavuluri et al, 14 only 17.5% of bipolar youths without psychotic symptoms were effectively controlled by monotherapy with a mood stabilizer for at least 6 months, while 66.3% of those receiving a combination of a mood stabilizer and an SGA were responders. Another relevant limitation, considering the high rate of comorbidities, is that we have used the CGI-S, CGI-I and C-GAS as outcome measures, which are not specific for manic/mixed symptoms' severity and improvement, but give an overall rating of effectiveness. An improvement in the global measure scores may have been affected by other comorbid disorders and not by a specific effect of lithium on BD. However, the CGI and C-GAS criteria correspond to what clinicians use to determine whether to continue or interrupt a medication trial in naturalistic settings, as the course of the clinical picture as a whole is more reliably captured by a more global measure. Finally, we cannot extend our findings to longer follow-ups, as patients were subsequently monitored and cared for by their own local facilities. In our study, all the patients were treated as needed (mono- or polypharmacy) and followed up in a routine clinical setting, and this may actually be one of the strengths of our study. Long-term naturalistic prospective studies might represent an important source of information on the effectiveness and safety of treatment under ordinary clinical conditions, and the present findings are therefore of particular relevance to clinical practice in treating child and adolescent BD. In addition, the reassuring safety profile supports considering early intervention with lithium, especially in special populations of juvenile bipolar patients, including patients who suffered from traumatic experiences, 15 or those with high suicidal risk, 12 or with comorbid conduct disorder and substance abuse. 16
"year": 2018,
"sha1": "80c49d7c297e4687cdfbf051cfa2843759a6e82d",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=45416",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d285ed3e6e886e74817e6581370f4e56e87e6f07",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
221510164 | pes2o/s2orc | v3-fos-license | Phospho-islands and the evolution of phosphorylated amino acids in mammals
Background Protein phosphorylation is the best studied post-translational modification strongly influencing protein function. Phosphorylated amino acids not only differ in physico-chemical properties from non-phosphorylated counterparts, but also exhibit different evolutionary patterns, tending to mutate to and originate from negatively charged amino acids (NCAs). The distribution of phosphosites along protein sequences is non-uniform, as phosphosites tend to cluster, forming so-called phospho-islands. Methods Here, we have developed a hidden Markov model-based procedure for the identification of phospho-islands and studied the properties of the obtained phosphorylation clusters. To check robustness of evolutionary analysis, we consider different models for the reconstructions of ancestral phosphorylation states. Results Clustered phosphosites differ from individual phosphosites in several functional and evolutionary aspects including underrepresentation of phosphotyrosines, higher conservation, more frequent mutations to NCAs. The spectrum of tissues, frequencies of specific phosphorylation contexts, and mutational patterns observed near clustered sites also are different.
Protein phosphorylation is likely both the most common and the best studied PTM (Ptacek & Snyder, 2006;Schweiger & Linial, 2010;Huang et al., 2017). Phosphorylation introduces a negative charge and a large chemical group to the local protein structure, hence strongly affecting the protein conformation (Pearlman, Serber & Ferrell, 2011;Nishi, Shaytan & Panchenko, 2014). As a result, diverse cellular signaling pathways are based on phosphorylation. Earlier studies have described general evolutionary patterns of phosphosites, such as mutations to NCAs and altered mutational patterns of amino acids in the phosphosite vicinity. Our study complements these observations with the analysis of mutations in non-serine phosphosites and the demonstration of differences in the evolution of clustered and individual phosphorylated residues.
Data
The phosphosite data for human, mouse and rat proteomes were downloaded from the iPTMnet database (Huang et al., 2017). The phosphorylation breadth values for the mouse dataset were obtained from Huttlin et al. (2010). Human, mouse and rat proteomes were obtained from the UniProt database (The UniProt Consortium, 2018). Vertebrate orthologous gene groups (OGGs) for human and mouse proteomes were downloaded from the OMA database (Altenhoff et al., 2017). Then, all paralogous sequences and all non-mammalian sequences were excluded from the obtained OGGs.
Alignments and trees
We searched for homologous proteins in three proteomes with pairwise BLASTp alignments (Altschul et al., 1990). Pairs of proteins with highest scores were considered closest homologs. The information about closest homologs was subsequently used to predict phosphosites conserved between human and rat or human and mouse which we hereinafter refer to as HMR phosphosites. OGGs were aligned by the ClustalO multiple protein alignment (Sievers et al., 2014) while the HMR phosphosites were identified based on Muscle pairwise protein alignments (Edgar, 2004). The mammalian phylogenetic tree was obtained from Timetree (Kumar et al., 2017).
Phosphorylation retention upon mutations
After the identification of homologous protein pairs in the human/mouse and human/rat proteomes and the construction of the proteome alignments, we identified homologous phosphosites as homologous STY residues which were shown to be phosphorylated in both species. We have shown that phosphorylation is retained upon S-T and T-S mutations by comparing two pairs of retention probabilities (Fig. 1C): p(pS-pS) with p(pS-pT | S) and p(pT-pT) with p(pS-pT | T) (analogously for the phosphorylation of tyrosines), p(pX-pX) being defined as the fraction of X amino acids phosphorylated in both considered species, and p(pX1-pX2 | Xi), as the fraction of phosphorylated X1 residues in one species given that in the other species another amino acid residue (X2) is also phosphorylated:

p(pX1-pX2 | X1) = #(pX1-pX2) / (#(pX1-pX2) + #(pX1-X2))
p(pX1-pX2 | X2) = #(pX1-pX2) / (#(pX1-pX2) + #(X1-pX2))

Homologous phosphosite lists from the human/mouse and human/rat pairs were merged to produce the HMR list of human phosphosites.
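A minimal sketch of the retention-probability computation is given below; the tuple layout of 'pairs' (aligned homologous STY positions with their phosphorylation flags in two species) is an assumption about the data structure, not the authors' code.

```python
def retention(pairs, x1, x2):
    """p(pX1-pX2 | X1): fraction of phosphorylated X1 residues whose
    homologous X2 residue in the other species is also phosphorylated.
    'pairs' holds (residue_1, is_phospho_1, residue_2, is_phospho_2) tuples."""
    both = sum(1 for r1, p1, r2, p2 in pairs
               if r1 == x1 and r2 == x2 and p1 and p2)
    x1_only = sum(1 for r1, p1, r2, p2 in pairs
                  if r1 == x1 and r2 == x2 and p1 and not p2)
    return both / (both + x1_only) if (both + x1_only) else float("nan")
```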
False-positive rates of phosphorylation identification by homologous propagation
We assessed the quality of the phosphorylation prediction via homologous propagation approaches by counting false-positive rates of phosphosite predictions in species with large phosphosite lists. As the numbers of predicted phosphosites drastically differed between species (Huang et al., 2017), we considered multiway predictions in each case as characteristics of the procedure performance. Hence, considering mouse phosphosites predicted by homology with known human phosphosites, we also considered human phosphosites predicted based on known mouse phosphosites. The false-positive rate was assessed as the proportion of incorrectly predicted phosphosites among the STY amino acids in one species homologous to phosphosite positions in other considered species.
When assessing the quality of phosphosite predictions based on phosphosites experimentally identified in at least two species, we considered human, mouse and rat and the lists of phosphosites homologous between human and mouse and between human and rat. In these cases, predictions were made for rat and mouse, respectively with the false-positive rate assessed by the same approach as in the previous case.
Mutation matrices
To obtain single amino acid mutation matrices, we first reconstructed ancestral states with the PAML software (Yang, 2007). For the reconstruction, we used OGG alignments which did not contain paralogs and pruned mammalian trees retaining only the organisms contributing to the corresponding OGG alignments. The alignment of both extant and reconstructed ancestral sequences and the corresponding trees were then used to construct mutation matrices, in which we distinguished the phosphorylated and non-phosphorylated states of STY amino acids. Here, the phosphorylation state was assigned to STY amino acids using the phosphorylation propagation approach described above. When calculating the mutation matrix, we did not count mutations predicted to happen on branches leading from the root to first-order nodes, as PAML does not reconstruct them well without an outgroup (Koshi & Goldstein, 1996;Yang, 2007). Tree pruning and the mutation matrix counting were implemented in ad hoc Python scripts using functions from the ete3 Python module (http://etetoolkit.org/).
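The tree-pruning step can be sketched with ete3 as below; the file name and species labels are placeholders, not the authors' actual scripts (those are in the repository referenced under "Code availability").

```python
from ete3 import Tree

mammal_tree = Tree("mammals.nwk")        # assumed Newick file with the full tree
ogg_species = ["HUMAN", "MOUSE", "RAT"]  # species present in one OGG alignment

pruned = mammal_tree.copy()
# Keep only the leaves contributing to this OGG, preserving path lengths.
pruned.prune(ogg_species, preserve_branch_length=True)
```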
Disordered regions and identification of phospho-islands
Intrinsically disordered protein regions are defined here, following (Xue et al., 2010), as regions of proteins lacking stable and well-defined three-dimensional structure. IDRs were predicted with the PONDR VSL2 software with default parameters (Xue et al., 2010). This algorithm was selected, firstly, as one of the best IDR predictors yielding results highly consistent with other top-IDR predictors (Peng & Kurgan, 2012;Zhou et al., 2020), and, secondly, as the one efficiently predicting long IDRs (Peng & Kurgan, 2012), which is essential for the present study.
Phosphorylated amino acids were divided into those located in predicted IDRs and those located in ordered regions (ORs). On the HMR set construction, phosphosites with conflicting IDR/OR labels were excluded from the analysis. In the analyses of separate IDR/OR mutations, we considered IDR and OR labels of amino acids to be conserved along the mammalian tree and hence inferred the remaining extant and ancestral IDR/OR states from homology with both mouse and human ORs and IDRs. We consider the premise of conserved mammalian IDRs justified here, as it is known that protein tertiary structure elements, including IDRs, are evolving slowly (Chen et al., 2006;Toth-Petroczy et al., 2008).
Phospho-islands were identified by a hidden Markov model (HMM) built upon the distributions of distances between clustered and individual phosphosites. The most likely clustered/individual phosphosite assignments were obtained with the Viterbi algorithm, which is guaranteed to maximize the posterior probability of the state path (Viterbi, 1967). The emission probabilities for the HMM were obtained as the ratio of density values in the decomposition of S, the distribution of amino acid distances between adjacent phosphosites in IDRs (the likelihood ratio normalized to 1) (Fig. 2A). To select the optimal transition probability values, we performed a stability check by analyzing the dependence of the fraction of clustered phosphosites on the transition probability values (Fig. S1). The percentages of clustered phosphosites turned out to be extremely stable with respect to transition probability values whenever the latter were smaller than 0.3. Hence, the transition probabilities were set to 0.2 (Fig. 2B).
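A minimal two-state Viterbi decoder consistent with this setup is sketched below; here 'log_emis' would hold, for each between-phosphosite distance, the log emission probabilities derived from the two fitted gamma densities, and the 0.2 transition probability follows the text. The state labels and array layout are assumptions.

```python
import numpy as np

def viterbi(log_emis, log_trans, log_init):
    """Most likely state path for a two-state HMM.
    log_emis: (T, 2) log emission probs, log_trans: (2, 2), log_init: (2,)."""
    T = log_emis.shape[0]
    score = np.zeros((T, 2))
    back = np.zeros((T, 2), dtype=int)
    score[0] = log_init + log_emis[0]
    for t in range(1, T):
        for s in range(2):
            prev = score[t - 1] + log_trans[:, s]
            back[t, s] = prev.argmax()
            score[t, s] = prev.max() + log_emis[t, s]
    path = [int(score[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]  # e.g., state 0 = "in island", state 1 = "individual"

log_trans = np.log([[0.8, 0.2], [0.2, 0.8]])  # transition probability 0.2
```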
Phosphosite contexts
We employed the list of phosphosite contexts as well as the binary decision-tree procedure for defining the context of a given phosphosite from Villen et al. (2007). The procedure is as follows. (i) The proline context is assigned if there is a proline at position +1 relative to the phosphosite. (ii) The acidic context is assigned if there are five or six E/D amino acids at positions +1 to +6 relative to the phosphosite. (iii) The basic context is assigned if there is an R/K amino acid at position −3. (iv) The acidic context is assigned if there are D/E amino acids at any of positions +1, +2 or +3. (v) The basic context is assigned if there are at least two R/K amino acids at positions −6 to −1. Otherwise, no context is assigned, and we denote this as the "O" (other) context. We consider tyrosine phosphosites separately and formally assign them the "Y" (tyrosine) context.
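The decision tree above translates directly into code; the sketch below is a straightforward reimplementation of the published rules (the function and variable names are ours).

```python
def phospho_context(seq, i):
    """Context class ('Y', 'P', 'A', 'B' or 'O') of the phosphosite at
    index i of protein sequence seq; offsets are relative to the site."""
    aa = lambda k: seq[i + k] if 0 <= i + k < len(seq) else "-"
    if seq[i] == "Y":
        return "Y"                                      # tyrosine class
    if aa(+1) == "P":
        return "P"                                      # rule (i)
    if sum(aa(k) in "ED" for k in range(1, 7)) >= 5:
        return "A"                                      # rule (ii)
    if aa(-3) in "RK":
        return "B"                                      # rule (iii)
    if any(aa(k) in "DE" for k in (1, 2, 3)):
        return "A"                                      # rule (iv)
    if sum(aa(k) in "RK" for k in range(-6, 0)) >= 2:
        return "B"                                      # rule (v)
    return "O"
```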
Local mutation matrices
We computed local substitution matrices (LSMs) as the substitution matrices for amino acids located within a frame of radius k centered at a phosphorylated serine or threonine. When computing LSMs, we did not count mutations of, or resulting in, STY amino acids, to exclude the effects introduced by the presence and abundance of phospho-islands. We tested k values of 1, 3, 5, and 7 and selected k = 3, as this value yielded the strongest effect, that is, the largest number of mutations with frequencies statistically different from those for non-phosphorylated serines and threonines.
Statistics
When comparing frequencies, we used the χ 2 test if all values in the contingency matrix exceeded 20 and Fisher's exact test otherwise. To correct for multiple testing, we used the Bonferroni correction with the scaling factor set to 17 for the substitution vector comparison and to 17 × 17 for the comparison of substitution matrices with excluded STY amino acids. The 95% two-tailed confidence intervals shown in the figures were computed by the χ 2 or Fisher's exact test. The significance of the obtained Pearson's correlation coefficients was assessed with the F-statistic.
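A compact version of this test-selection rule, using scipy, might look as follows; note that Fisher's exact test applies to 2×2 tables only, and the Bonferroni factor is passed in as in the text.

```python
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

def compare_counts(table, n_tests):
    """Chi-squared test if all cells exceed 20, otherwise Fisher's exact
    test (2x2 tables only); returns the Bonferroni-corrected p-value."""
    table = np.asarray(table)
    if (table > 20).all():
        _, p, _, _ = chi2_contingency(table)
    else:
        _, p = fisher_exact(table)
    return min(1.0, p * n_tests)

print(compare_counts([[120, 80], [95, 105]], n_tests=17))
```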
Code availability
Ad hoc scripts were written in Python. Graphs were built using R. All scripts and data analysis protocols are available online at https://github.com/mikemoldovan/phosphosites.
Conserved phosphosites
As protein phosphorylation in a vast majority of organisms has not been studied or has been studied rather poorly (Huang et al., 2017), the evolutionary analyses of phosphosites typically rely on the assumption of absolute conservation of the phosphorylation label assigned to STY amino acids on a considered tree (Kurmangaliyev, Goland & Gelfand, 2011;Miao et al., 2018). Thus, if, for instance, a serine is phosphorylated in human, we, following this approach, would consider any mutation in the homologous position of the type S-to-X to be a mutation of a phosphorylated serine to amino acid X (Fig. 1B). However, the comprehensive analysis of yeast phosphosites has shown low conservation of the phosphorylation label at the timescales of the order 100 My and more (Studer et al., 2016). Thus, we have considered only orthologous groups of mammalian proteins, present in the OMA database (Altenhoff et al., 2017). The mammalian phylogenetic tree is about 177 My deep (Kumar et al., 2017), which corresponds to about 50% of the phosphorylation loss in the 182 My-deep yeast Saccharomyces-Lachancea evolutionary path (Studer et al., 2016). The tree contains three organisms with well-studied phosphoproteomes: human (227,834 sites), mouse (92,943 sites), and rat (24,466 sites) (Huang et al., 2017) (Fig. 1A).
Still, the expected 50% of mispredicted phosphosites could render an accurate evolutionary analysis impossible. This could be partially offset by considering phosphosites conserved in well-studied lineages. Thus, we compiled a set of human phosphosites homologous to residues phosphorylated also in mouse and/or rat, which we will further refer to as human-mouse/rat (HMR) phosphosites. The HMR set consists of 53,437 sites covering 54.6% and 61.2% of known mouse and rat phosphosites, respectively, which is consistent with the above-mentioned observation about 50% phosphorylation loss in yeast on evolutionary distances similar to the ones between the human and rodent lineages (Figs. 1A and 1B).
We consider the sites predicted by homology with the HMR set to be enriched in accurately identified phosphosites, as by retaining only conserved phosphosites we substantially reduce the number of mispredictions. If we simply propagated human phosphorylation labels to mouse and vice versa, we would get about 77.6% and 42.3% false-positive labels, respectively. However, sites conserved between human and rat or sites conserved between rat and mouse would yield about twofold lower percentages of 41.9% and 19.9% false positives in mouse and human, respectively. The obtained percentages can be considered upper estimates of the false-positive rates, as the current experimental phosphosite coverage in mammals cannot guarantee the identification of all conserved phosphosites (Huang et al., 2017). Thus, the HMR dataset is sufficiently robust for the prediction of phosphorylation labels in less-studied mammalian lineages.

Figure 1 Phosphosites considered in the study. (A) Venn diagram of iPTMnet human, mouse and rat phosphosites. Intersections correspond to conserved phosphosites. The HMR phosphosite dataset is shown in pink. (B) Phosphosite assignment procedures. Given a tree of a mammalian orthologous gene group and a column in the respective alignment, we assign phosphorylation labels to ancestral and extant amino acids, firstly, by propagating labels from one species to all other species in the tree (shown as separate red and blue arrows) and, secondly, by propagating labels predicted both in the selected species (e.g., human, as shown) and in one of the remaining species (mouse and rat); this corresponds to blue and red arrows entering a given node in the tree. Phosphosites obtained by the latter procedure are referred to as the HMR phosphosite dataset. In both procedures, phosphorylation is considered to be retained both for direct and indirect STY-to-STY mutations. (C) Retention of phosphorylation upon mutation. Bars represent the probability of a conserved modification for the human dataset in the case of mutation and if mutation has not occurred. The letter after the vertical bar is an amino acid over which the probability was normalized. Three asterisks represent p < 0.001 (χ 2 test). (D) STY amino acid content of three groups of phosphosite datasets.
Treatment of STY amino acids homologous to phosphorylated ones as phosphorylated yields another possible caveat, stemming from the possible loss of phosphorylation upon STY-to-STY mutations. To assess this effect, we compared the probabilities of phosphosite retention upon pSTY-to-STY mutation, pSTY indicating the phosphorylated state, and the respective probabilities in the situation when a mutation has not occurred for a pair of species with well-established phosphosite lists, that is, human and mouse (Fig. 1C). We have observed only a minor, insignificant decrease of the probabilities of the phosphorylation retention in the cases of pS-pT and pS-pY mismatches relative to the pT-pT states in mouse and human, indicating the general conservation of the phosphorylation label upon amino acid substitution. An interesting observation here is that the pS-pS states appear to be the most conserved ones (Fig. 1C). Taken together, these results indicate the evolutionary stability of phosphorylation states upon mutation.
The increased evolutionary robustness of the pS state relative to the pT and pY states should manifest as overrepresentation of phosphoserines among phosphosites with respect to non-phosphorylated amino acid positions. Thus, we assessed the relative abundances of pSTY amino acids in the HMR dataset relative to the established human phosphosite set and to the set of non-phosphorylated STY amino acids. Serines and threonines, comprising the vast majority of the pSTY amino acids, are, respectively, over- and underrepresented in the phosphosite sets (Figs. 1C and 1D). This effect is significantly more pronounced in the HMR dataset relative to the total human phosphosite dataset, further supporting the observation about lower conservation of pT relative to pS, as the HMR dataset is enriched in conserved phosphosites by design.
Phosphorylation islands
The distribution of distances between phosphosites is different from that of randomly chosen serines and threonines, even accounting for the tendency of phosphosites to occur in intrinsically disordered regions (IDRs) (Schweiger & Linial, 2010) (Fig. 2A). However, this observation depends on an arbitrary definition of phosphorylation islands as groups of phosphosites separated by at most four amino acids (Schweiger & Linial, 2010). We have developed an approach that reduces the degree of arbitrariness in the definition of phospho-islands, based on a statistical model of the distances between phosphorylated residues within phospho-islands and between individual phosphosites.
We consider only phospho-islands located in IDRs, due to two reasons. First, IDR phosphosites, being more abundant, yield reliable statistics. Second, ordered regions are largely non-uniform in terms of local structural features, for example, being localized in the protein hydrophobic core or at the surface (Van der Lee et al., 2014). This would render construction of the null model of between-phosphosite distances impossible without considering all protein structures of the mammalian proteome, which is currently not feasible. Hence, we will hereinafter refer to phospho-islands located in IDRs simply as phospho-islands and to non-clustered phosphosites located in IDRs as individual phosphosites.
Let S be the distribution of amino acid distances between adjacent phosphosites in IDRs. The logarithm of S is not unimodal (Fig. 2A), and we suggest that it is a superposition of two distributions: one generated by phosphosites in phospho-islands and the other reflecting phosphosites outside phospho-islands (left and right peaks, respectively). The latter distribution can be obtained from random sampling from IDRs of non-phosphorylated STY amino acids while preserving the amino acid composition and the sample size, as we expect individual phosphosites to emerge independently while maintaining the preference towards IDRs (Fig. 2C). A gamma distribution has a good continuous fit to log(S+1) for randomly sampled STY amino acids located in IDRs. Given its universality and low number of parameters (Friedman, Cai & Xie, 2006;Reiss, Facciotti & Baliga, 2007;Mendoza-Parra et al., 2013), we selected the gamma distribution as a reasonable model for log(S+1) (Fig. 2C). Assuming that the distribution of log(S+1) values for phosphosites located in phospho-islands should belong to the same family, and fixing the parameters of the previously obtained distribution, we decomposed the distribution of log(S+1) values into the weighted sum of two gamma distributions, one of which corresponds to STYs located in phospho-islands and the other one, to remaining STYs in IDRs (Fig. 2A, red and gray curves, respectively). From these two gamma distributions we obtained parameters for a hidden Markov model, which, in turn, was used to map phosphorylation islands. The distributions of S values for phosphosites in identified islands and the distribution for other phosphosites yielded a good match to the expected ones (Figs. 2B and 2D). Both for the HMR and mouse datasets, more than half of the phosphosites are located in phospho-islands (61% and 56%, respectively) (Fig. 2E; Figs. S2A and S2B). For human phosphosites, however, we see a larger proportion of sites (53%) located outside phospho-islands. In the latter case the distributions in the decomposition differ less, compared to the former two cases (Fig. 2A; Fig. S2). This could be caused by a larger density of phosphosites in IDRs of the human proteome, resulting from higher experimental coverage; that would lead to generally lower S values, which, in turn, could cause the right peak in the log(S+1) distribution to merge with the left peak, rendering the underlying gamma distributions less distinguishable. To validate this explanation, we randomly sampled 40% of human phosphosites, so that the sample size matched the one for mouse phosphosites; however, the results on this rarefied dataset did not change (Fig. 2E; Fig. S2C), indicating that our procedure is robust with respect to phosphosite sample sizes. Hence, phospho-islands for the human dataset are identified with a lower accuracy than those for the HMR and mouse datasets. This could be caused by the different experimental techniques applied to the human phosphosites, compared to those used for mouse and rat phosphosites, and by a possibly large number of false-positive phosphosites in the former case (Huttlin et al., 2010;Bekker-Jensen et al., 2017;Xu et al., 2017) (see "Discussion").
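A sketch of this constrained mixture fit is given below: the gamma parameters of the "individual" component (a0, scale0) are fixed from the random STY sample, while the island component's parameters and the mixture weight are estimated by maximum likelihood. Variable names and starting values are assumptions, not the authors' code.

```python
import numpy as np
from scipy.stats import gamma
from scipy.optimize import minimize

def fit_mixture(log_s, a0, scale0):
    """Fit w * Gamma(a1, scale1) + (1 - w) * Gamma(a0, scale0) to log(S+1)
    values, with the second component fixed; returns (w, a1, scale1)."""
    def nll(params):
        w, a1, scale1 = params
        dens = (w * gamma.pdf(log_s, a1, scale=scale1)
                + (1.0 - w) * gamma.pdf(log_s, a0, scale=scale0))
        return -np.log(np.clip(dens, 1e-300, None)).sum()
    res = minimize(nll, x0=[0.5, 1.0, 1.0],
                   bounds=[(0.01, 0.99), (0.05, 50.0), (0.05, 50.0)])
    return res.x
```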
In phospho-islands, the overall pSTY-amino acid composition differs from that of individual phosphosites, mainly because the fraction of threonines is significantly higher in phospho-islands at the expense of the lower fraction of tyrosines (Fig. 2F). Also, the conservation of residues in phospho-islands is larger than that of the individual sites (Fig. 2G). Overall, the general properties of clustered phosphosites seem to differ from those of individual phosphosites.
A similar attempt to decompose the S distribution for phosphosites located in ordered regions yielded the distribution of log(S+1) values highly skewed to the left (small distances), even relative to the distribution of log(S+1) values in phospho-islands in IDRs (Fig. S2E). This precluded decomposition of the S distribution into a weighted sum of two distributions. A more complex model possibly incorporating features of the tertiary protein structure might be required to infer and analyze phospho-islands located in ORs, which is beyond the scope of the present study.
Mutational patterns of phosphorylated amino acids
Next, we have reconstructed the ancestral states for all mammalian orthologous protein groups not containing paralogs and calculated the proportions of mutations P(X1→X2), where X1 and X2 are different amino acids. We treated phosphorylated and non-phosphorylated states of STY amino acids as distinct states. We then introduced a measure of the difference in mutation rates between phosphorylated STY amino acids and their non-phosphorylated counterparts. For a mutation of an STY amino acid X to a non-STY amino acid Z, we define R(X, Z) = P(pX → Z)/P(X → Z). If X* is another STY amino acid, R(X, X*) = P(pX → pX*)/P(X → X*). Thus, the R value for a given type of mutation is the proportion of the considered mutation of a phosphorylated STY amino acid among other mutations, normalized by the fraction of the respective mutations of the non-phosphorylated STY counterpart. The R values are thus not affected by differences in the mutation rates between phosphorylated and non-phosphorylated amino acids, as all probabilities are implicitly normalized by the mutation rates of pX and X.
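Given a table of substitution counts with phosphorylation-aware states, R values can be computed as in the sketch below; the 'counts' layout (keys like ("pS", "E") or ("S", "E")) is an assumed representation of the mutation matrix.

```python
def mutation_fraction(counts, src, dst):
    # Proportion of src -> dst substitutions among all substitutions of src.
    total = sum(n for (a, b), n in counts.items() if a == src)
    return counts.get((src, dst), 0) / total if total else float("nan")

def r_value(counts, x, z):
    """R(X, Z) = P(pX -> Z) / P(X -> Z); if Z is an STY amino acid, the
    phospho-branch target is its phosphorylated state pZ."""
    dst_p = "p" + z if z in "STY" else z
    return (mutation_fraction(counts, "p" + x, dst_p)
            / mutation_fraction(counts, x, z))
```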
In earlier studies, only mutations of serines or to serines had been considered, as the available data did not allow for statistically significant results for threonine and tyrosine (Kurmangaliyev, Goland & Gelfand, 2011;Miao et al., 2018). Here, we see that phosphorylated threonines from the HMR dataset tend to mutate to serines (Fig. 3B). At that, phosphorylated serines mutate to threonines more frequently than their non-phosphorylated counterparts for all considered samples, that is, for the human, mouse and HMR sets, as well as to isoleucine (p < 0.05, χ 2 test) and, for human samples, to arginine (p < 0.05, χ 2 test) and glycine (p < 0.001, χ 2 test) (Fig. 3B; Figs. S3-S8). Phospho-tyrosines in the mouse dataset show a weaker tendency for the avoidance of the mutations to aspartate than the non-phosphorylated ones (p < 0.05, χ 2 test), while the rate of pY-to-I mutations is higher (Fig. 3B). Separate analysis of mutations in phospho-islands and in individual phosphosites yields three observations. Firstly, alterations of mutation patterns of phosphoserines and phosphothreonines (pST) in IDRs relative to non-phosphorylated ST in IDRs are similar to the patterns observed for the clustered pST and, to a lesser extent, to those observed for individual pSTs (Fig. 3B). This is mostly due to the fact that the mutational patterns of clustered pSTs generally differ from those of their non-phosphorylated counterparts to a greater extent than the mutational patterns of individual phosphoserines do (Fig. 3B; Fig. S1). Secondly, for phosphotyrosines, alterations in their mutational patterns brought about by phosphorylation are mostly explained by individual phosphotyrosines. The mutational patterns of individual sites deviate from the ones observed for non-phosphorylated tyrosines more than those of clustered phosphotyrosines (Fig. 3B; Figs. S3-S8). Also, if we compare the R values calculated for all possible mutations in clustered vs. individual phosphosites, the R value corresponding to the S-to-E mutation is significantly higher for the set of clustered phosphosites (p = 0.009, χ 2 test, Fig. S10). Hence, we posit that the general phosphosite mutational pattern alterations can be explained mostly by mutations in clustered phosphosites for phosphoserines and phosphothreonines and by individual sites when phosphotyrosines are considered.
We also studied mutation patterns in ordered regions (ORs) and observed that phosphothreonines located in ORs demonstrate higher T-to-S mutation rates relative to those of non-phosphorylated threonines located in ORs (Fig. 3B). Also, phosphosites located in ORs demonstrate enhanced S-to-T and Y-to-T mutation rates relative to non-phosphorylated serines and tyrosines in ORs, respectively (Fig. 3B).
Phosphosite contexts
Sequence contexts of phosphosites generally fall into three categories: acidic (A), basic (B), and proline (P) motifs, with tyrosine phosphosites comprising a special class (Y) (Villen et al., 2007;Huttlin et al., 2010). For each phosphosite from each dataset we have identified its context. As in previous studies (Villen et al., 2007;Huttlin et al., 2010), phosphosites not assigned with any of these context classes were considered as having "other" (O) motif. We studied the distribution of these motifs for all classes of phosphosites.
In IDRs, relative to ORs, we observed a higher percentage of phosphosites with assigned contexts (Fig. 4A). P-phosphosites demonstrate the highest overrepresentation in IDRs, with 25% of IDR phosphosites having the proline motif. Phospho-islands contain more phosphosites with assigned motifs than individual phosphosites do. In IDRs, there are more B- and P-phosphosites and fewer A-phosphosites among clustered sites than among individual ones. Notably, the fraction of phospho-tyrosines is substantially higher in ordered regions. However, this effect could be at least partially explained by the general tendency of aromatic residues, including tyrosine, to occur in ordered protein regions (Receveur-Bréchot et al., 2005).
Phosphorylation breadth
An important feature of a phosphosite is its "phosphorylation breadth", that is, the number of tissues where it is phosphorylated. In this study, the maximal phosphorylation breadth is nine, as phosphorylation data for nine mouse tissues are available (Huttlin et al., 2010). Among broadly expressed phosphosites (present in all nine tissues), compared to tissue-specific ones (present in only one tissue), very few sites have unassigned contexts (O) and almost none are tyrosine phosphosites. The fraction of acidic phosphosites is substantially lower among tissue-specific sites (24%) relative to broadly phosphorylated ones (37%) (p < 0.001, χ 2 test) (Fig. 4A). As mentioned above, the pS-to-E mutation yields the highest value, R(S,E) (Fig. 3A), and represents the only mutation with significantly different R values in phospho-islands and individual sites (p = 0.009, χ 2 test, Fig. S1). At that, R(S,E) significantly increases with increasing breadth of expression (Fig. 4B), from R(S,E) = 1.14 for tissue-specific phosphosites to R(S,E) = 6.64 for broadly expressed phosphosites (p = 0.016, t-test).
Finally, we compared percentages of phosphosites with different breadths in ORs vs. IDRs and in phospho-islands vs. individual phosphosites (Figs. 4C and 4D). As the phosphorylation breadth increases, so does the fraction of clustered phosphosites, reaching 85% for sites phosphorylated in nine tissues; the fraction of phosphosites in IDRs also increases, reaching 95.4%.
Hence, broadly expressed phosphosites have well-defined motifs, tend towards disordered regions and to phospho-islands, have mostly acidic context, and mutate to NCA more frequently than tissue-specific phosphosites.
Mutation patterns in the proximity of phosphosites
We now show that not only do phosphosites require special motifs (Huttlin et al., 2010), but the mutational context of clustered phosphosites also differs from that of individual sites. To assess the evolutionary dynamics associated with phosphosite motifs, we analyzed mutational patterns in ±3 amino acid windows around HMR ST phosphosites located in IDRs and compared them with those of non-phosphorylated ST amino acids from IDRs. The ±3 window was selected as it yielded the strongest effect in terms of the number of mutations with rates statistically distinct from the expected ones (Figs. S11A and S11B). We did not consider phosphotyrosines, as they have not been shown to possess any discernible general motif apart from the phosphorylated tyrosine itself (Huttlin et al., 2010).
We introduce the measure Q(X1→X2) = P(X1p→X2p) / P(X1n→X2n), where X1p and X2p denote amino acids near phosphorylated serines and threonines and X1n and X2n denote amino acids near non-phosphorylated serines and threonines. Q measures the overrepresentation of a given mutation in the proximity of pST amino acids relative to ST amino acids. We also considered sites located in phospho-islands and individual phosphosites separately (Fig. 5; Figs. S11C and S11D).
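Computed from substitution counts collected over the ±3-residue windows, Q takes the same ratio-of-proportions form as R; the two count dictionaries below ('near_p' for windows around pST, 'near_n' for windows around non-phosphorylated ST) are an assumed data layout.

```python
def q_value(near_p, near_n, x1, x2):
    """Q(X1 -> X2): frequency of the X1 -> X2 substitution near
    phosphorylated ST residues over its frequency near non-phosphorylated ST."""
    freq_p = near_p.get((x1, x2), 0) / sum(near_p.values())
    freq_n = near_n.get((x1, x2), 0) / sum(near_n.values())
    return freq_p / freq_n if freq_n else float("nan")
```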
In the whole HMR dataset, 22 types of non-phosphorylated amino acid substitutions out of the total of 289 have Q values statistically different from the expected value 1 (p < 0.05, χ 2 test with the Bonferroni correction), among them three pairs of mutually reverse mutations (Fig. 5). As expected from the conservation of the phosphosite contexts, mutations between positively charged amino acids and NCAs, potentially changing acidic to basic contexts and vice versa, are underrepresented, whereas E-to-D, D-to-E and K-to-R, not changing the context type, are overrepresented. The P-to-A substitution is overrepresented, thus indicating the instability of proline contexts. Interestingly, all three mutations with Q values exceeding 2.5 involve lysine, two of them being reverse mutations F-to-K and K-to-F. The fourth most overrepresented mutation, Y-to-G with Q (Y→G) = 2.5, could explain the lack of tyrosine phosphosites in IDRs, as a large fraction of IDR phosphosites are clustered with the distances between sites not exceeding three amino acids. Thus, a large Q(Y→G) value would lead to general underrepresentation of tyrosines in IDRs.
Types of mutations with significant Q values generally differ near clustered and individual phosphosites (Figs. S11C and S11D). E-to-D, not changing the local acidic context type (Huttlin et al., 2010), is overrepresented, and E-to-K, disrupting the acidic context (Huttlin et al., 2010), is underrepresented in both cases. On the other hand, around individual phosphosites, Q(F→K) = 3.4 and Q(P→A) = 1.12, indicating an enhanced birth rate of the basic context and disruption of the proline context, respectively. The R-to-D mutation, disrupting the local basic context, also is overrepresented near individual phosphosites. In general, among seven overrepresented mutations near clustered phosphosites, only the K-to-P mutation disrupts the local basic context in favor of the proline context, whereas among seven overrepresented mutations near individual phosphosites, three mutations (E-to-F, R-to-D, and P-to-A) could be regarded as context-disrupting. Hence, the contexts of individual phosphosites are somewhat less evolutionarily stable, and the lower percentage of individual phosphosites with identifiable contexts might thus be due to specific local context-disrupting mutation patterns for these phosphosites.
DISCUSSION Clustered vs. individual phosphosites
We have demonstrated that clustered phosphosites differ from non-clustered ones in a number of aspects: (i) overrepresentation of phosphothreonines and underrepresentation of phosphotyrosines in phospho-islands (Fig. 2F); (ii) stronger conservation of clustered phosphoserines and phosphothreonines (Fig. 2G); (iii) larger proportion of sites phosphorylated in many tissues (Fig. 4C); (iv) significantly larger probability of mutations to glutamate for clustered relative to the individual phosphoserines; (v) larger fraction of sites with specific motifs in phospho-islands (Fig. 4A); (vi) mutational patterns in the proximity of phosphosites consistent with the context-retention hypothesis (Fig. 5). What are possible explanations for the observed effects? Underrepresentation of phosphotyrosines in phospho-islands could be explained by phosphorylation of clustered phosphosites being co-operative. As serines and threonines are more similar to each other in their tendency to being phosphorylated by similar enzymes than they are to tyrosine (Villen et al., 2007;Huttlin et al., 2010;Landry et al., 2014;Studer et al., 2016), one would expect phospho-tyrosines to disrupt co-operative phosphorylation of adjacent ST amino acids by being phosphorylated independently, thus introducing a negative charge which would affect phosphorylation probabilities of the neighboring amino acids (Landry et al., 2014). Hence phospho-tyrosines could have been purged by selection from pST clusters.
Secondly, phosphosites located in phospho-islands are more conserved than individual ones (Fig. 2G), as opposed to an earlier hypothesis that individual phosphosites are more conserved than their clustered counterparts (Landry et al., 2014). Our result seems to contradict the notion that the cellular function of phosphosites in an island depends on the number of phosphorylated residues rather than specific phosphorylated sites, whereas individual phosphosites operate as single-site switches and hence should be more conserved (Landry et al., 2014). However, this argument implies that phosphorylation of most individual phosphosites is important for the organism's fitness, which may be not true (Landry et al., 2014;Miao et al., 2018) and hence our results do not contradict the model of evolution of functionally important phosphosites.
Overrepresentation of phosphosites with defined motifs among the clustered ones (Fig. 4A) and reduced numbers of mutations disrupting the local contexts of the clustered sites (Figs. S10C and S10D) may indicate enhanced selective pressure on clustered phosphosites and their contexts. An indirect support of this claim comes from the overrepresentation of ubiquitously phosphorylated sites among the clustered ones (Fig. 4C). Indeed, broad phosphorylation requires a stronger local context and indicates the reduced probability of a phosphosite being detected simply due to the noise inherent to the phosphorylation machinery (Landry et al., 2014).
Mutations of phosphoserines located in IDRs to NCA are generally overrepresented among all mutations of the type pS-to-X relative to the corresponding mutations of non-phosphorylated serines (Fig. 3B). This effect is stronger for clustered phosphosites and for ubiquitously phosphorylated sites. Together with the observation about clustered phosphosites being on average more broadly phosphorylated than the individual ones, this suggests that a large fraction of phosphosite clusters might be phosphorylated (nearly) constitutively, and thus changes of individual phospho-serines to NCAs could experience lesser degrees of negative selection acting upon the corresponding mutations, as these mutations introduce smaller degrees of local electric charge shifts on the protein globule than the mutations of non-phosphorylated serines to NCAs do.
Two types of mutations
In all considered phosphosite datasets, we have observed two types of pSTY-to-X mutations overrepresented relative to STY-to-X mutations (Fig. 3B): (i) pSTY-to-pSTY, especially pT-to-pS mutation and (ii) pSTY-to-NCA, especially pS-to-E mutations. The former effect could be explained by the relaxed selection against pST-to-pST mutations due to the phosphorylation machinery often not distinguishing between serines and threonines (Huttlin et al., 2010;Miao et al., 2018). The overrepresentation of pT-to-pS mutation for all datasets, including sites located in ORs, could stem from the higher probability of phosphosite retention following a pT-to-pS mutation relative to the probability of phosphorylated threonine retention when no mutations have occurred (Fig. 1C). Thus, the observed enhanced pT-to-pS mutation rate could be due to the enhanced evolutionary stability of serine phosphorylation relative to the threonine phosphorylation.
The enhanced serine-to-NCA mutation rates could stem from the physico-chemical similarity of phosphorylated serines and NCAs: both types of residues introduce negatively charged groups of similar size to the protein globule. Thus, if phosphorylation is (almost) constitutive, that is, happens very frequently in a large number of tissues, we would expect the serine-to-NCA mutation rate to be enhanced. Indeed, ubiquitous phosphorylated serines have the pS-to-E mutation rate more than six-fold larger than the S-to-E mutation rate (Fig. 4B). However, the same pattern does not hold for phospho-threonines (Fig. 3B).
The differences in the mutation rates observed for phosphosites are stronger when clustered phosphosites are considered. Although this might be explained by individual phosphosites likely resulting from noise generated by the phosphorylation machinery (Landry et al., 2014), this could also indicate a general pattern of phosphosites constantly arising at random points of the proteome due to a constant evolutionary process. If phosphorylation at a focal site turns out to be advantageous, its individual context could be reinforced yielding broader phosphorylation pattern of this site or, alternatively, other phosphosites could emerge in the vicinity of this phosphosite, thus forming phosphoislands. As the vast majority of broadly phosphorylated sites are clustered, and clustered phosphosites demonstrate stronger phosphosite-specific features than individual phosphosites do, we suggest that formation of phosphorylation clusters around beneficial phosphosites is the prevalent process compared to context reinforcement of just one site. However, this hypothesis requires further verification.
Human phosphosites
The results obtained for the human set of phosphosites differ somewhat from those for the mouse and HMR sets, like in cases with different STY amino acids representation among phosphorylated amino acids (Fig. 1D), proportion of phosphosites located in phospho-islands (Fig. 2E) or some mutational patterns of phosphorylated STY amino acids (Fig. 3B). This could be explained by differences in experimental procedures used to obtain phosphosite lists for human and for mouse and rat. Whereas for classic laboratory organisms, phosphosites are obtained directly from the analysis of an organism or an analysis of its live organ (Huttlin et al., 2010), for human phosphosite inference immortalized cell lines, such as HeLa, are used (Bekker-Jensen et al., 2017;Xu et al., 2017), with conditions differing from those in vivo, and hence one could expect different patterns of phosphorylation. In particular, the lower rate of mutations to NCA could be explained by overrepresentation of sites with noisy phosphorylation manifesting only in cell lines under the conditions of experiments. The mutation of such a residue to NCA would most likely result in the deleterious effect of an average non-phosphorylated serine mutation to NCA (Jin & Pawson, 2012). Thus, we propose that phosphosites conserved between human and rodent lineages, called here HMR sites, are more robust with respect to experimental techniques, and hence are better suited for phosphosite evolutionary studies.
Evolution of non-studied phosphosite groups
Previous studies dedicated to the evolution of phosphosites have focused on phosphoserines located in IDRs. The large datasets employed in the present study enabled us to assess the patterns of phosphothreonines, phosphotyrosines and sites located in ORs. Apart from the largely enhanced pT-to-pS mutation proportions relative to T-to-S ones (Fig. 3B) no patterns with straightforward biological explanation were observed in these cases. However, an interesting observation here is the consistent, significantly enhanced rate of pY-to-I mutations relative to the Y-to-I mutations in the mouse and HMR datasets (Fig. 3B).
Perspectives
We propose a simple yet accurate homology-based approach for ancestral phosphosite inference, yielding in our case the set of HMR phosphosites. As the predicted fractions of phosphorylation labels falsely assigned to internal tree nodes are much smaller than those for other phosphosite datasets, the HMR set provides a valuable source of data for evolutionary studies. A practical extension of our homology-based approach could be a phosphosite prediction procedure incorporating additional pieces of information, such as the tendency of phosphosites to cluster, the local phosphosite contexts, and the tree structure, into the probabilistic model, which would predict phosphosites with a high degree of accuracy. On the other hand, it would be interesting to infer the interplay between phosphorylation and selection using population-genetics data. | 2020-09-03T09:11:11.023Z | 2020-09-01T00:00:00.000 | {
"year": 2020,
"sha1": "3b5c743cd6bd8cd908c790a020464d5c85b9595f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.7717/peerj.10436",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aec1ceee177bee75ef572d27dc3fa7c43d060f2f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Chemistry",
"Medicine"
]
} |
Low-Fat or Low Carb for Weight Loss? It Depends on Your Glucose Metabolism
Over the past 30 years there has been a controversy about the optimal diet composition for weight loss and maintenance. Some have defended the more conventional low-fat high-carbohydrate diet (Astrup et al., 2000), whereas others point at a restriction in carbohydrates as being more effective. Multiple variations of popular diets exist, from ketogenic very-low-carbohydrate diets (Astrup et al., 2004) to diets relying on a slight increase in protein and a lowering of the glycemic index of the carbohydrates as the most effective strategy (Larsen et al., 2010). Numerous randomized controlled trials (RCTs) have compared the various diets for the treatment of overweight and obesity based on the assumption that one diet fits all, without being able to provide strong evidence for one or the other. Even meta-analyses comparing the different diet options have been unable to identify a clear winner.
In this issue of EBioMedicine, Wan et al. compared three different diets in an RCT conducted in a Chinese population of overweight individuals (BMI < 28 kg/m2) without traits of the metabolic syndrome: a LF-HC diet (fat 20%, carbohydrate 66% of energy), a MF-MC diet (fat 30%, carbohydrate 56%), and a HF-LC diet (fat 40%, carbohydrate 46%) (Wan et al., 2017).
About 300 individuals were randomized into the three treatment arms, and after 6 months 245 (79.8%) were retained in the study. The reduction in body weight was significantly greater in the LF-HC group throughout the intervention; after 6 months the weight loss was 0.5 kg greater than in the MF-MC group and 0.7 kg greater than in the HF-LC group. Effects on cardio-metabolic risk factors were broadly similar across the three diets. Wan et al. should be congratulated on an excellent trial conducted with state-of-the-art methodology, and this trial is one of the few larger ones conducted in an Asian population and lasting for 6 months.
So how should the superiority of the low-fat high-carbohydrate diet in this population be viewed? A recent discovery has shown that the effectiveness of these diets depends on the glucose metabolism of the overweight and obese participants (Hjorth et al., 2017). Briefly, normoglycemic individuals lost the most weight on a low-fat high-carbohydrate diet, whereas pre-diabetic individuals are much more likely to lose weight on a diet with more focus on the quality of the carbohydrate content, i.e. lower glycemic index, more fiber and wholegrain (Hjorth et al., 2017). Notably, these effects are quite pronounced even under ad libitum conditions, i.e. without putting any limit on the caloric intake (Fig. 1). For overweight and obese diabetics a reduction in carbohydrate amount is pivotal, and for this group a relatively higher amount of fat and protein in the diet is beneficial for weight control and glycemic status (Snorgaard et al., 2017). With these studies it is obvious that one diet does not fit all, and a personalized dietary approach is warranted. Participants in the study by Wan et al. had a mean age of 23 years, a mean BMI of 21.8 kg/m2 and a mean fasting glucose of 4.1 mmol/L. Therefore, the population must be characterized as a healthy, insulin-sensitive group that could indeed have been predicted to respond better to the low-fat high-carbohydrate diet. It would be interesting to extend the studies to pre-diabetic and diabetic Asian obese individuals to examine whether the findings in a predominantly Caucasian population are also valid for Asians.
Disclosure
AA and MFH are co-inventors on an international patent application (PCT/US17/35537) on the use of biomarkers for prediction of weight loss responses based on fasting plasma glucose and insulin.
"year": 2017,
"sha1": "76bdd485fa7783a593929230ffeac31c25ad8e2f",
"oa_license": "CCBYNCND",
"oa_url": "http://www.ebiomedicine.com/article/S2352396417302645/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "76bdd485fa7783a593929230ffeac31c25ad8e2f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Adaptive optics in super-resolution microscopy
Fluorescence microscopy has become a routine tool in biology for interrogating life activities with minimal perturbation. While the resolution of fluorescence microscopy is in theory governed only by the diffraction of light, the resolution obtainable in practice is also constrained by the presence of optical aberrations. The past two decades have witnessed the advent of super-resolution microscopy that overcomes the diffraction barrier, enabling numerous biological investigations at the nanoscale. Adaptive optics, a technique borrowed from astronomical imaging, has been applied to correct for optical aberrations in essentially every microscopy modality, especially in super-resolution microscopy in the last decade, to restore optimal image quality and resolution. In this review, we briefly introduce the fundamental concepts of adaptive optics and the operating principles of the major super-resolution imaging techniques. We highlight some recent implementations and advances in adaptive optics for active and dynamic aberration correction in super-resolution microscopy.
INTRODUCTION
Fluorescence microscopy is one of the most straightforward and effective approaches to observe the intrinsic processes of life activities and reveal their differences and variations in a non-invasive manner. When exploring the underlying molecular mechanisms at the subcellular level, the resolution of the imaging technique largely determines the depth of the studies. Super-resolution microscopy, representing a range of fluorescence microscopy techniques with different principles, breaks the diffraction limit and bridges the resolution gap between traditional optical microscopy and electron microscopy, promoting biological research at the nanoscale (Sahl et al. 2017;Schermelleh et al. 2019;Sigal et al. 2018).
For conventional microscopes, the obtainable resolution is constrained not only by the diffraction of light but also by the presence of optical aberrations. Super-resolution microscopes, pursuing perfect images with molecular-level resolution, are even more sensitive to optical aberrations. In super-resolution microscopy, optical aberrations distort the point spread function (PSF) of the imaging system in three dimensions, thereby reducing image contrast and resolution. In some scenarios, aberrations can even eliminate any resolution improvement brought by the super-resolution techniques. Furthermore, optical aberrations reduce the efficiency of light delivery and collection, thus decreasing the signal-to-noise ratio. In most cases, the magnitude of optical aberrations grows substantially with increasing imaging depth, thus becoming a formidable obstacle for deep-tissue imaging. To address these challenges, adaptive optics (AO), a technique widely used in astronomy, has been introduced to correct for optical aberrations and restore image quality and resolution in microscopy, including super-resolution microscopy (Booth 2014; Booth et al. 2015). The basic idea of AO is to compensate for the estimated optical aberration by adding an equal but opposite amount of distortion to the wavefront of the imaging system through an active wavefront shaping device.
In the past decade, aberration correction in microscopy with AO has proven to be a tractable solution and has been applied to essentially all microscope modalities (Ahn et al. 2019;Ji 2017;Rodriguez and Ji 2018). In this review, we focus on the advances and applications of AO in super-resolution microscopy techniques. We briefly introduce the fundamental concepts and principles of AO as well as its implementations in super-resolution microscopy.
FUNDAMENTALS OF ADAPTIVE OPTICS
In this section, we introduce several fundamental but important concepts about adaptive optics.
Optical aberrations
In fluorescence microscopes, light from a point emitter, propagating in the form of a spherical wavefront, is first collected by the objective lens and converted into a planar wavefront, then converges as a spherical wavefront after the tube lens, eventually forming an ideal diffraction-limited spot (the so-called PSF) at the detector. Aberrations are imperfections that cause the light to deviate from the ideal optical path, distorting the wavefront and smearing the focus, thereby reducing image quality and resolution. In fluorescence microscopes, the aberrations shared by the illumination and detection paths can be corrected simultaneously in the common beam path (Fig. 1A). The phase information at the back pupil plane of the objective lens is commonly used to quantify the aberrations. In particular, phase aberrations can be analyzed by decomposing the pupil function into Zernike polynomials, an infinite set of orthogonal polynomials defined over the unit disk (the normalized back aperture in microscopy) (Mahajan 1994). Zernike polynomials are closely related to classic aberrations such as spherical, coma, and astigmatism, making them the most popular and convenient choice for aberration representation (Fig. 1B).
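As an illustration of this modal description, the following minimal Python sketch builds a few low-order Zernike modes on a sampled unit pupil and recovers their coefficients from a wavefront by least-squares fitting. The mode list, grid size, and (unnormalized) radial forms are illustrative choices rather than a prescription; normalization and ordering conventions vary between references.

```python
# A minimal sketch of modal aberration analysis: a sampled pupil phase is
# decomposed into a few low-order Zernike modes by least-squares fitting.
import numpy as np

# Sample the unit pupil on a Cartesian grid.
n = 128
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
r, theta = np.hypot(x, y), np.arctan2(y, x)
pupil = r <= 1.0

# A few classic Zernike modes (unnormalized radial forms).
modes = {
    "tip":       r * np.cos(theta),
    "tilt":      r * np.sin(theta),
    "defocus":   2 * r**2 - 1,
    "astig_0":   r**2 * np.cos(2 * theta),
    "astig_45":  r**2 * np.sin(2 * theta),
    "coma_x":    (3 * r**3 - 2 * r) * np.cos(theta),
    "coma_y":    (3 * r**3 - 2 * r) * np.sin(theta),
    "spherical": 6 * r**4 - 6 * r**2 + 1,
}

def decompose(phase):
    """Least-squares Zernike coefficients of `phase` (radians) over the pupil."""
    A = np.stack([m[pupil] for m in modes.values()], axis=1)
    coeffs, *_ = np.linalg.lstsq(A, phase[pupil], rcond=None)
    return dict(zip(modes.keys(), coeffs))

# Example: a wavefront with 0.5 rad of defocus and 0.2 rad of x-coma.
w = 0.5 * modes["defocus"] + 0.2 * modes["coma_x"]
print(decompose(w))  # recovers ~0.5 for 'defocus' and ~0.2 for 'coma_x'
```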
Both optical systems and biological samples can introduce aberrations. System-induced aberrations arise from the fact that optical components are nonideal and suffer from manufacturing defects. For example, dichroic beamsplitters or mirrors may introduce a certain degree of aberration (mainly astigmatism) if the surface is not sufficiently flat. Additionally, misalignments of the optical system can generate different kinds of aberrations. For example, a beam that does not pass through the lenses co-axially at their centers, or a slight tip/tilt between the objective and the coverslip, can introduce coma. In general, system-induced aberrations are relatively stable over time and can be minimized by using high-quality optical components and careful alignment. Sample-induced aberrations, however, are much more difficult to deal with. They can arise from the refractive index mismatch between the objective immersion medium and the samples, primarily appearing as spherical aberrations. Moreover, heterogeneity within biological samples can generate complex aberrations that vary among samples and even among different regions of the same sample. While system-induced aberrations can be well characterized using calibration samples (fluorescent beads embedded in agarose, etc.), aberrations from biological specimens are almost unpredictable in practice. Therefore, when imaging thick samples, sample-induced aberrations are arguably the most prominent aberration source and are also much more challenging to correct.
Aberration measurement and correction
The first step, which is also the key step in AO correction, is aberration measurement. Generally, there are two main approaches: direct and indirect wavefront measurement.
In direct wavefront measurement, or direct wavefront sensing, a wavefront sensor (WFS) is used to measure the phase aberration directly from the received wavefront. The Shack-Hartmann wavefront sensor (SH-WFS) is most commonly used due to its compact size, low cost, simple structure, and easy operation. A SH-WFS normally consists of a 2D microlens array that segments the wavefront into subapertures and focuses them onto individual spots on a camera. A perfect wavefront forms a uniformly distributed spot pattern, while any aberration causes lateral shifts of the spots. Therefore, the aberrations in the wavefront can be extracted from the displacements of the spots from their non-aberrated positions. Typically, a guide star (a point-like light source) is required. There are two popular approaches to generate such guide stars in microscopy. The first is to use the fluorescent signal generated at the focal spot of two-photon excitation. Guide stars can also be generated by using fluorescent beads or gold nanoparticles with sizes well below the diffraction limit. Ideally, the aberration information can be obtained from a single measurement, allowing for high-speed AO correction. However, the implementation of a SH-WFS adds complexity and extra cost to the optical system, and guide stars are not always available. Moreover, the addition of a WFS introduces non-common-path aberrations that cannot be corrected easily (Sulai and Dubra 2014). Due to the lack of effective guide stars, direct wavefront sensing is not convenient for many widefield microscopes and is more often preferred in laser scanning systems.
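The following Python sketch illustrates the core of Shack-Hartmann processing: intensity-weighted centroids are computed per subaperture, and their shifts relative to reference positions are converted into local wavefront slopes. The grid size, pixel pitch, and lenslet focal length are illustrative assumptions, and `ref_spots` stands for calibrated reference centroids that a real instrument would provide.

```python
# A minimal sketch of Shack-Hartmann spot processing. Spot centroid shifts
# relative to reference positions give local wavefront slopes via
# slope = shift * pixel_pitch / f_lenslet (small-angle approximation).
import numpy as np

def centroid(sub):
    """Intensity-weighted centroid (x, y) of one subaperture image, in pixels."""
    sub = np.clip(sub - sub.mean(), 0, None)   # crude background removal
    total = sub.sum()
    if total == 0:
        return np.array([np.nan, np.nan])
    iy, ix = np.indices(sub.shape)
    return np.array([(ix * sub).sum(), (iy * sub).sum()]) / total

def sh_slopes(frame, ref_spots, n_sub=16, pix=6.5e-6, f_lenslet=5e-3):
    """Local wavefront slopes (rad) from an n_sub x n_sub lenslet grid.

    `ref_spots[i, j]` holds the reference (x, y) centroid of subaperture
    (i, j) in the same local pixel coordinates used by `centroid`.
    """
    h, w = frame.shape
    sy, sx = h // n_sub, w // n_sub
    slopes = np.zeros((n_sub, n_sub, 2))
    for i in range(n_sub):
        for j in range(n_sub):
            sub = frame[i * sy:(i + 1) * sy, j * sx:(j + 1) * sx]
            shift = centroid(sub) - ref_spots[i, j]    # pixels
            slopes[i, j] = shift * pix / f_lenslet     # radians
    return slopes  # feed into a zonal or modal wavefront reconstructor
```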
As early adopters of AO techniques, astronomical and ophthalmological imaging typically implement the direct wavefront sensing approach in a closed-loop scheme, so that the aberrated light is measured by the WFS after passing through the AO device. This approach allows incremental improvement of the wavefront correction over iterations, yielding robust wavefront correction performance. However, closed-loop AO requires splitting off a fraction of the fluorescence signal for wavefront sensing, which is not ideal for scenarios with low photon budgets, such as single-molecule imaging. In many microscopy studies, aberrations are either static or slowly changing during data acquisition, so closed-loop AO is not necessary. However, if aberrations in the sample or system vary temporally or spatially, direct wavefront sensing and closed-loop correction are highly desirable.
Indirect wavefront measurement, or indirect wavefront sensing, is a sensorless approach that estimates the aberrations indirectly from the images produced by the microscope. Sensorless AO can be implemented in different ways (Wright et al. 2005). One approach is to use phase retrieval to reconstruct the pupil function by imaging fluorescent beads; aberrations can then be extracted from the pupil function and corrected afterwards (Fig. 2A). This approach is simple and useful for correcting system-induced aberrations.
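As a sketch of the phase-retrieval idea, the following Python code runs a simple Gerchberg-Saxton-type iteration between the pupil and focal planes using a single measured in-focus PSF. Practical pupil retrieval in microscopy typically uses several defocused bead images and a more complete imaging model; this scalar two-plane version only illustrates the alternating-constraint iteration.

```python
# A minimal sketch of Gerchberg-Saxton-style phase retrieval between the
# pupil plane and the focal plane, given a measured PSF intensity `psf`
# and a known binary pupil support `pupil_mask` (same grid size).
import numpy as np

def retrieve_pupil_phase(psf, pupil_mask, n_iter=100):
    """Estimate the pupil phase (radians) from a measured PSF intensity."""
    amp_focal = np.sqrt(np.maximum(psf, 0))           # measured amplitude
    field_pupil = pupil_mask.astype(complex)          # start with flat phase
    for _ in range(n_iter):
        field_focal = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(field_pupil)))
        # Impose the measured focal-plane amplitude, keep the phase.
        field_focal = amp_focal * np.exp(1j * np.angle(field_focal))
        field_pupil = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(field_focal)))
        # Impose the known pupil support, keep the phase.
        field_pupil = pupil_mask * np.exp(1j * np.angle(field_pupil))
    return np.angle(field_pupil) * pupil_mask
```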
Another sensorless approach, based on image quality metrics, requires recording a series of images while intentionally applying a bias for each aberration mode. The optimal amplitude of each aberration mode is determined by maximizing the image quality metric (brightness, contrast, sharpness, resolution, etc.).
The procedure needs to be run iteratively for each aberration mode until the desired image quality is reached (Fig. 2B). This metric-based approach is conceptually compatible with any type of microscope and can be used for correcting sample-induced aberrations. With carefully chosen metrics, sensorless approaches can yield performance comparable to direct wavefront sensing in many situations (Wahl et al. 2019). However, correction speed is the main limitation, as N aberration modes require at least N + 1 measurements (typically 2N + 1). This is not necessarily problematic when aberrations in the samples are relatively static over the timescale of imaging. Thus, sensorless AO approaches have been commonly used in the field of super-resolution microscopy.
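A minimal Python sketch of such a metric-based loop is given below. The `acquire` callable stands for the hardware and acquisition routines of a real system and is purely a placeholder; the parabola-fit update assumes the metric is approximately quadratic near its optimum, and published implementations differ in the metric, bias amplitude, and number of measurements per mode.

```python
# A minimal sketch of modal, metric-based sensorless AO. `acquire(coeffs)`
# is assumed to apply the given Zernike coefficients to the corrector and
# return an image; it is a placeholder, not a real API. Three biased
# measurements per mode (-b, 0, +b) locate the metric peak by a parabola
# fit; this sketch re-measures the reference after each update, so it uses
# slightly more images than the minimal 2N + 1 scheme.
import numpy as np

def sharpness(img):
    """A simple image-quality metric: mean squared intensity."""
    return float(np.mean(img.astype(float) ** 2))

def sensorless_correct(acquire, n_modes, bias=0.5, metric=sharpness):
    coeffs = np.zeros(n_modes)
    m0 = metric(acquire(coeffs))               # zero-bias reference
    for k in range(n_modes):
        probe = np.zeros(n_modes)
        probe[k] = bias
        m_plus = metric(acquire(coeffs + probe))
        m_minus = metric(acquire(coeffs - probe))
        denom = m_plus - 2 * m0 + m_minus
        if denom < 0:                          # concave: parabola peak valid
            coeffs[k] += 0.5 * bias * (m_plus - m_minus) / -denom
        m0 = metric(acquire(coeffs))           # re-measure after each update
    return coeffs                              # estimated correction amplitudes
```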
Adaptive optical devices
The simplest adaptive optical device in microscopy is the objective correction collar, which allows correction for spherical aberrations induced by varying thicknesses of the coverslip, distance between the sample and the coverslip, or temperature changes. Many high-end objectives are now equipped with manual correction collars, and some offer motorized correction collars as an option. AO correction requires active wavefront shaping devices that can perform complex wavefront modulations in the conjugated pupil plane of the objective lens. These devices need to have enough degrees of freedom to compensate for complex aberrations. In general, there are two main optical devices used for active aberration corrections: deformable mirrors (DMs) and liquid crystal spatial light modulators (SLMs). Although they share the same purpose, i.e. to compensate for optical aberrations by adding an equal but opposite shape to the aberrated wavefront, their different architectures and characteristics make them suitable for different applications.
DMs are wavefront control devices widely used in astronomy to compensate for aberrations due to atmospheric turbulence. The DM surface can be either continuous or segmented. Continuous DMs usually have a thin membrane coated with a reflective metal layer. The membrane can be shaped by a number of electrically controlled actuators. For AO correction in microscope systems with prominent low-order aberration modes, a continuous surface is usually preferred. The stroke of continuous DMs ranges from a few micrometers to tens of micrometers. Large strokes are useful for correcting extreme aberrations or for remote focusing but may suffer from drift and hysteresis. In segmented DMs, by contrast, a single or a small number of actuators control certain degrees of freedom of a miniature reflective surface. The coupling between adjacent actuators is minimal or absent in segmented DMs, making them more suitable for generating high-order aberration modes. The actuators of DMs can be based on magnetic, micro-electromechanical system (MEMS), electrostatic-electrode, or piezoelectric devices. Thanks to the reflective nature of the metal coating, DMs are insensitive to polarization and wavelength while providing high optical efficiency. This is particularly important as fluorescence is unpolarized and broadband. Thus, DMs are widely used in fluorescence microscopes that are often designed for multiple illumination and detection wavelengths. When employed in the common path, a single DM is sufficient to correct for aberrations in both the illumination and detection beam paths. However, due to coupling between adjacent actuators and manufacturing imperfections, a calibration or training step is usually necessary before a DM can be used for accurate wavefront control, especially in sensorless AO. The calibration can be done by using a wavefront sensor, either an interferometer (Antonello et al. 2020b) or a SH-WFS, in situ or ex situ.
SLMs are another popular type of device used to modulate the wavefront of light in AO systems. In general, SLMs are devices that can manipulate properties of light, including amplitude, phase, and polarization. The most common types of SLMs are built on an array of liquid-crystal-on-silicon cells. They usually have a large number of cells (pixels) that can individually modulate the phase of the incident light over a range of at least 2π. This type of SLM is typically used for phase-only correction. The large number of pixels provides great flexibility in phase correction and manipulation. For example, a single SLM device can be spatially divided into multiple windows and used as multiple AO devices in a multi-pass configuration (Lenz et al. 2014). Phase wrapping techniques can be used to increase the range of phase modulation (Hacker et al. 2003), as sketched below. Although SLMs can be used for certain applications without precise calibration, recent studies have shown that pixel-wise calibration is as important as careful alignment for optimal performance (Dai et al. 2019; Siemons et al. 2018).
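The phase-wrapping step can be illustrated with a short Python sketch that folds an arbitrary correction phase into the modulation range of a phase-only SLM and maps it to 8-bit gray levels. The linear gray-to-phase response assumed here is an idealization; as noted above, real devices require careful calibration.

```python
# A minimal sketch of preparing a phase pattern for a phase-only SLM: the
# desired (unbounded) correction phase is wrapped modulo 2*pi and quantized
# to the device's gray levels, assuming an idealized linear response.
import numpy as np

def to_slm_grays(phase, levels=256):
    """Wrap `phase` (radians) into [0, 2*pi) and quantize to gray levels."""
    wrapped = np.mod(phase, 2 * np.pi)
    grays = np.round(wrapped / (2 * np.pi) * levels) % levels
    return grays.astype(np.uint8)

# Example: many waves of defocus exceed the 2*pi modulation range but are
# displayable once wrapped (producing a Fresnel-lens-like pattern).
n = 512
y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
defocus = 40.0 * (2 * (x**2 + y**2) - 1)     # far beyond one wave of phase
pattern = to_slm_grays(defocus)
```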
In contrast to DMs, SLMs are sensitive to polarization and wavelength. Therefore, they are mainly used for phase modulation in the illumination path, although they can also be used in the detection path for a single wavelength band at the cost of half the fluorescence. Compared to DMs, SLMs have many more actuators (~100,000 vs ~100) and can generate high-order wavefront aberrations, but at much lower refresh rates (~100 Hz vs >2 kHz). It is worth noting that both DMs and SLMs should be placed in a plane conjugate to the back pupil of the objective lens, and their effective working aperture should match the pupil size for best performance. Since all DMs and most SLMs are used in a reflection configuration, it is common to have a small angle between the incident and reflected beams, making the effective projection of the beam slightly elliptical rather than circular. For this reason, in practice, the angle should not be too large (typically less than 15 degrees), and the elliptical projection effect can be handled by calibration. While both DMs and SLMs are typically implemented as reflective devices in AO systems, transmissive devices, such as adaptive lenses or liquid lenses, have recently become available for microscopy, allowing easier integration of AO optics into existing microscope systems (Banerjee et al. 2018; Chiu et al. 2012; Pozzi et al. 2020).
ABERRATION CORRECTION IN SUPER-RESOLUTION MICROSCOPY
Super-resolution microscopy has revolutionized biological imaging over the past two decades. However, higher resolutions make it highly dependent on the optimal performance of the imaging systems and thus more susceptible to optical aberrations. In this section, we briefly describe the basic principles of three main super-resolution imaging techniques and the implementation of adaptive optics in these approaches.
Single-molecule localization microscopy
Single-molecule localization microscopy (SMLM) is an umbrella term for a series of methods that share the same operating principles, such as (F)PALM, (d)STORM, GSDIM, and PAINT (Baddeley and Bewersdorf 2018; Möckl and Moerner 2020; Sauer and Heilemann 2017). In conventional fluorescence microscopy, all fluorophores emit photons simultaneously, and their PSFs overlap with each other, forming a diffraction-limited image. The key to SMLM is to separate molecules with overlapping PSFs in time rather than in space. That is, at each time point, only a few emitters are switched to the "on" state and become visible (or show detectable signals over the background, as in PAINT), while most of the remaining emitters stay in the "off" state. The fluorescence signal from the emitters is typically recorded by a camera. This on-off switching (so-called blinking) is cycled over thousands of frames until most fluorophores are photobleached or the desired localization density is reached. The on/off contrast is critical and to a great extent determines the resolution of SMLM. During data processing, emitters in each frame are identified and localized with nanometer precision and eventually combined to render a super-resolution image. The attainable resolution of SMLM depends on how well we can estimate the positions of the emitters from the emission PSFs. It has been shown that optical aberrations have a strong impact on the resolution of SMLM (Coles et al. 2016; Deng and Shaevitz 2009).
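The localization step at the heart of SMLM can be illustrated by the following Python sketch, which fits a 2D Gaussian PSF model to a small camera region of interest with least squares. Production SMLM software generally uses maximum-likelihood estimation with an explicit camera noise model and a calibrated PSF; this simplified version only conveys the principle.

```python
# A minimal sketch of single-emitter localization: a 2D Gaussian PSF model
# is fitted to a small region of interest (ROI) by nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, x0, y0, sigma, amp, bg):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
            + bg).ravel()

def localize(roi):
    """Return the (x0, y0) position of the emitter in `roi`, in pixels."""
    y, x = np.indices(roi.shape)
    p0 = (roi.shape[1] / 2, roi.shape[0] / 2, 1.3,   # initial guesses
          roi.max() - roi.min(), roi.min())
    popt, _ = curve_fit(gauss2d, (x, y), roi.ravel().astype(float), p0=p0)
    return popt[0], popt[1]

# The theoretical localization precision scales roughly as sigma / sqrt(N)
# for N detected photons (ignoring background and pixelation), which is why
# aberrations that broaden or dim the PSF directly degrade the resolution.
```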
SMLM is normally based on widefield illumination. The illumination profile typically follows a Gaussian distribution but can be shaped into a uniform distribution. The illumination intensity can affect the blinking properties of fluorophores but does not introduce aberrations. Thus, aberration correction is only necessary for the emission path, and DMs are often used for this purpose due to their zero chromatic aberration and insensitivity to polarization, although SLMs have also been used for PSF engineering and aberration correction in SMLM (Siemons et al. 2018; Wang et al. 2018).
Aberration correction in SMLM by DMs is relatively simple and straightforward, but aberration estimation is much trickier. While system-induced aberrations can be corrected in a predictable manner (Izeddin et al. 2012), correction for sample-induced aberrations is not trivial. Luckily, the blinking images of SMLM are essentially the convolution of the emitters with the system PSF, making it possible to estimate the optical aberrations directly from the raw images. Thus, aberration correction can be performed during image acquisition. Burke et al. designed a sensorless AO scheme capable of performing feedback correction for sample-induced aberrations on a dSTORM microscope (Burke et al. 2015). They established an image-based metric in Fourier space (M1), estimated the aberrations from the first few hundred blinking images, and then performed model-based correction throughout the rest of the data acquisition (Fig. 3A and 3B). Another group combined an intensity-insensitive Fourier metric with a genetic algorithm to correct for the aberrations and optimize the PSFs in real time (Tehrani et al. 2015). However, this approach requires a few thousand frames to converge, as it needs random mutation to avoid local minima. Therefore, they adopted a particle swarm optimization algorithm (M2) to speed up the convergence by an order of magnitude (Tehrani et al. 2017). Mlodzianoski et al. developed an approach that combines adaptive PSF shaping with an efficient sensorless AO method based on simplex optimization (M3) to allow robust volumetric 3D imaging through thick specimens. Siemons et al. systematically compared the three metrics mentioned above (M1, M2 and M3) and found that M3 was able to achieve consistent correction (Siemons et al. 2021). They further improved the M3 metric by combining it with model-based optimization to robustly correct aberrations at realistic signal and noise levels up to a depth of 50 μm in tissue (Fig. 3C-3F).
[Fig. 3: (A, B) adapted from Burke et al. (2015). (C) SMLM reconstruction of microtubules in COS-7 cells through a 50-μm thick brain section without/with AO correction; inset, widefield image before correction; scale bar, 2 μm. (D) SMLM reconstruction of a layer 5 pyramidal neuron AIS stained for V-spectrin in a rat brain slice at 50-μm depth; scale bar, 2 μm. (E) The average autocorrelation shows a clear periodicity with a peak at 203 nm. (F) Cross-section of the rectangular area indicated in D; scale bar, 500 nm. (C-F) adapted from Siemons et al. (2021).]
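For illustration, a generic Fourier-space sharpness measure in the spirit of such metrics (not a reimplementation of any published one) can be written in a few lines of Python: the fraction of spectral power in a mid-to-high-frequency annulus drops as aberrations blur the single-molecule spots, so maximizing it drives the correction.

```python
# A minimal sketch of a Fourier-space image-quality metric: the normalized
# power in a frequency annulus. The band limits are illustrative choices.
import numpy as np

def fourier_metric(img, f_lo=0.1, f_hi=0.4):
    """Power in the annulus f_lo < f < f_hi (cycles/pixel), normalized."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(img - img.mean()))) ** 2
    ny, nx = img.shape
    fy = np.fft.fftshift(np.fft.fftfreq(ny))[:, None]
    fx = np.fft.fftshift(np.fft.fftfreq(nx))[None, :]
    fr = np.hypot(fx, fy)
    band = (fr > f_lo) & (fr < f_hi)
    return float(F[band].sum() / F.sum())
```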
As the final super-resolution image is constructed from localizations in SMLM, aberrations can also be handled offline during post-processing, in the absence of active adaptive optical elements. Several groups have implemented different strategies to deal with depth-dependent aberrations (spherical, coma, etc.) (Cabriel et al. 2018; Carlini et al. 2015; McGorty et al. 2014) or field-dependent aberrations (von Diezmann et al. 2015), but other, more complex aberrations require more sophisticated algorithms. One approach is to use an experimental PSF model that contains system-specific aberrations instead of a theoretical one (like a Gaussian). Alternatively, Liu et al. reported a procedure to retrieve the pupil function from fluorescent bead images and generate PSFs for accurate 3D single-molecule localization (Liu et al. 2013). Although their approach can also be modified to include depth-dependent aberrations, it cannot account for sample-induced aberrations due to heterogeneity within the specimen. To address this challenge, Xu et al. proposed a novel phase retrieval strategy (INSPR) that enables the construction of an in situ 3D PSF of single emitters directly from single-molecule blinking images. They further demonstrated that their approach can correct for both system- and sample-induced aberrations, thus resolving ultrastructures within whole cells and tissues with high resolution and fidelity. In principle, aberrations estimated from the blinking images by INSPR can be immediately corrected by the DM, producing distortion-free raw images for subsequent data analysis.
Structured illumination microscopy
Structured illumination microscopy (SIM) is another widefield-based super-resolution approach that theoretically doubles the resolution of a fluorescence microscope with standard fluorescent probes (Heintzmann and Huser 2017;Prakash et al. 2021;Wu and Shroff 2018). In the most common type of SIM, sinusoidal stripe illumination patterns at different orientations and phases are generated to illuminate the sample, which shifts higher spatial frequency information of the structure into the observable region of the microscope. To cover the entire expanded optical transfer function (OTF) range, nine images (three phases and three structure illumination orientations) are typically required to reconstruct a super-resolution image in 2D-SIM, while 15 images (five phases and three orientations) are required for 3D-SIM. Since only a small number of acquisitions are required, SIM provides a good balance between spatial and temporal resolution. Higher resolution can also be achieved in saturated SIM (SSIM) or nonlinear SIM (NLSIM) but at the cost of lower imaging speeds and higher phototoxicity.
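The illumination side of 2D-SIM can be sketched in a few lines of Python that generate the nine sinusoidal patterns (three orientations, three phases). The pattern frequency below is an arbitrary illustrative value; in a real instrument it is set close to the cutoff of the detection OTF.

```python
# A minimal sketch of generating the nine 2D-SIM illumination patterns
# (three orientations x three phases), normalized to the range [0, 1].
import numpy as np

def sim_patterns(n=512, freq=0.08, n_angles=3, n_phases=3, mod=1.0):
    """Return an (n_angles, n_phases, n, n) array of sinusoidal patterns."""
    y, x = np.mgrid[0:n, 0:n]
    patterns = np.empty((n_angles, n_phases, n, n))
    for a in range(n_angles):
        theta = a * np.pi / n_angles
        kx, ky = freq * np.cos(theta), freq * np.sin(theta)
        for p in range(n_phases):
            phi = 2 * np.pi * p / n_phases
            patterns[a, p] = 0.5 * (1 + mod * np.cos(
                2 * np.pi * (kx * x + ky * y) + phi))
    return patterns

# Each raw image is (sample * pattern) blurred by the PSF; the reconstruction
# unmixes the frequency-shifted components to double the support of the OTF.
```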
Unlike SMLM, the illumination profile in SIM is critical, as the image is essentially the multiplication of the sinusoidal stripe pattern and the distribution of fluorophores in the target structure. Aberrations in the excitation beam path can distort and smear the illumination pattern, thereby reducing image contrast and resolution, introducing artifacts, and even causing complete failure of the image reconstruction (Arigovindan et al. 2012; Liu et al. 2020). Therefore, aberration corrections are required in both the illumination and detection beam paths, which can be performed simultaneously by using a single DM in the common beam path. Both direct and indirect wavefront sensing methods have been applied to SIM to correct for optical aberrations and improve image quality. Turcotte et al. implemented a direct wavefront sensing module using multiphoton guide stars to facilitate super-resolution imaging of the brain in live zebrafish larvae and mice (Turcotte et al. 2019). They observed the dynamics of dendrites and dendritic spines at nanoscale resolution with the help of AO to correct for sample-induced aberrations (Fig. 4A). Similarly, Zheng et al. used a nonlinear guide star in two-photon instant SIM (2P-ISIM) to measure optical aberrations in both the excitation and emission paths and corrected them with a DM (Zheng et al. 2018). They demonstrated up to 40-fold intensity enhancement and substantial resolution recovery in cells and tissues at depths up to 250 μm (Fig. 4B). Using an indirect wavefront sensing method, Debarre et al. investigated the effect of different aberration modes on the illumination patterns in SIM and corrected each mode independently using an image quality metric (Debarre et al. 2008). Thomas et al. introduced a phase retrieval approach to correct for aberrations in SIM, which improved the image contrast of fluorescent beads and achieved a resolution of 140 nm through 35 μm of tissue (Thomas et al. 2015). Zurauskas et al. reported a sensorless AO strategy based on image quality with improved sensitivity and reliability for aberration correction in 2D-SIM (Zurauskas et al. 2019). They combined it with a customized illumination pattern to enhance the sampling of the OTF, producing more isotropic and better overall correction results (Fig. 4C). Lin et al. further extended the sensorless approach to 3D-SIM to recover information severely distorted by optical aberrations and to restore image quality and resolution when imaging a variety of biological samples (Fig. 4D) (Lin et al. 2021).
Stimulated emission depletion microscopy
Stimulated emission depletion (STED) microscopy is a point-scanning approach that uses a nonlinear saturation process to induce transitions between the on and off states (Blom and Widengren 2017; Egner et al. 2020; Vicidomini et al. 2018). A STED microscope is essentially a confocal microscope with an added depletion laser. The off-switching is performed by the depletion laser, which features an intensity minimum (ideally zero) at the focus. In 2D-STED, the depletion laser creates a doughnut-shaped focus by means of a vortex phase mask. To produce a depletion focus in 3D-STED, a top-hat phase mask is used in addition to the vortex phase mask to deplete the fluorescence above and below the focal plane. In STED, only fluorophores at the focus center, where the depletion laser intensity is (ideally) zero, emit detectable fluorescence (normal emission), while all other fluorophores within the excitation focus are subjected to stimulated emission. Thus, the effective PSF size (i.e. the resolution) is determined by the width of the central intensity minimum of the depletion focus rather than by the excitation focus.
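Two of these ingredients are easy to illustrate in Python: the vortex phase ramp applied to the depletion-beam pupil, and the widely quoted square-root scaling of the effective STED resolution with depletion power, d ≈ λ / (2 NA sqrt(1 + I/I_sat)). The scaling law is an approximation that assumes an ideal zero-intensity minimum, and the numbers in the example are illustrative only.

```python
# A minimal sketch of the 2D-STED ingredients: the helical (vortex) phase
# mask for the depletion-beam pupil, and the square-root resolution scaling.
import numpy as np

def vortex_mask(n=512):
    """0..2*pi helical phase ramp over the unit pupil (zero outside)."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]
    phase = np.mod(np.arctan2(y, x), 2 * np.pi)
    phase[np.hypot(x, y) > 1] = 0.0            # restrict to the pupil
    return phase

def sted_resolution(wavelength, na, saturation_factor):
    """Approximate effective FWHM for depletion intensity I = s * I_sat."""
    return wavelength / (2 * na * np.sqrt(1 + saturation_factor))

# Illustrative numbers: lambda = 775 nm, NA = 1.4.
print(sted_resolution(775e-9, 1.4, 30))   # ~50 nm with I = 30 * I_sat
print(sted_resolution(775e-9, 1.4, 0))    # ~277 nm, the diffraction-limited
                                          # width lambda / (2 NA) at I = 0
```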
A STED microscope consists of three independent beam paths (excitation, emission, and depletion), all of which suffer from optical aberrations and therefore require correction. Among the three beam paths, the profile of the depletion beam is arguably the most critical one, and it is sensitive to many types of aberrations (Fig. 5A) (Antonello et al. 2016; Antonello et al. 2017; Deng et al. 2010). In general, aberration modes that distort the focal distribution but maintain the zero intensity will reduce the STED resolution towards the confocal level. In contrast, aberration modes that destroy the zero intensity will deplete most of the fluorescence with little or no improvement in resolution. Several studies in STED microscopy have dealt with spherical aberration, as it is one of the most common aberration modes. By simply using a glycerol objective and its correction collar to correct for spherical aberrations, Urban et al. pushed the imaging depth of STED microscopy up to 120 μm (Urban et al. 2011). Angibaud et al. used a high refractive index mounting medium (CFM3) as a clearing reagent for fixed samples, which greatly increased the penetration depth and performance of STED microscopy (Angibaud et al. 2020).
As an SLM is commonly used to provide the phase masks for creating the intensity minima in depletion beams, it is most suitable for correcting the aberrations in the depletion beam path. Lenz et al. implemented a non-iterative approach to correct the aberrations in the depletion beam path by generating a look-up table considering the depth and the specific refractive index under study (Lenz et al. 2014). Similarly, Bancelin et al. measured the aberrations from agarose bead samples as a function of depth and used them as prior information to correct the aberrations when imaging living brain tissue (Fig. 5B) (Bancelin et al. 2021). Although these non-iterative approaches are simpler and faster, they tend to work well only in certain scenarios. One challenge in sensorless AO-STED is the lack of a universal image quality metric that works reliably in different situations. For example, while image brightness is an effective metric for many kinds of microscopy techniques, it is not well suited for STED, as the stimulated emission is not detected, resulting in a dimmer image. Thus, several groups introduced iterative strategies based on combined image quality metrics (brightness and sharpness) that allow correction for both system-induced and sample-induced aberrations in 3D-STED microscopy (Gould et al. 2012; Zdankowski et al. 2019; Zdankowski et al. 2020). Using the same metric, Gould et al. demonstrated the use of an SLM for automatic alignment of the depletion focus to the excitation focus in both 2D- and 3D-STED microscopes (Gould et al. 2013). If placed in the common beam path, the SLM can be used to correct for the aberrations in the excitation path as well (Gorlitz et al. 2018; Antonello et al. 2020a). STED is naturally compatible with multiphoton excitation, which is the preferred modality for imaging deep in scattering tissues. Velasco et al. built a 2PE-STED microscope by combining 3D-STED with 2PE, red-emitting organic dyes, and WFS-based aberration correction (Velasco et al. 2021). They demonstrated aberration-corrected 3D super-resolution imaging at 62-μm depth in fixed mouse brain tissue (Fig. 5C) and at 76-μm depth in the living mouse brain (Fig. 5D).
[Fig. 5: (B) adapted from Bancelin et al. (2021). (C) Images of H2B-GFP within a nucleus located 62 μm below the tissue surface with different imaging modalities; scale bars, 1 μm. (D) 2PE-STED images of a dendrite 76 μm below the cortical surface with AO correction; scale bar, 1 μm. (C, D) adapted from Velasco et al. (2021).]
DISCUSSIONS
Aberration correction in microscopy with AO is still a fast-growing field. Established AO methods are being combined with other techniques to improve the aberration correction accuracy and speed, indicating that AO correction is entering the application phase.
Deep learning is a rapidly emerging technique that has been applied to almost every corner of microscopy (Belthangady et al. 2019; Tian et al. 2021). Recently, it has been introduced to aberration correction in fluorescence microscopy. Convolutional neural networks (CNNs) have been employed to estimate the aberrations from an intensity image (Nishizaki et al. 2019; Saha et al. 2020) or from a SH-WFS pattern (Hu et al. 2021). Additionally, CNNs have been applied to aberration correction in super-resolution microscopy for accurate localization of single molecules in SMLM (Zhang et al. 2018), recovery of the doughnut-shaped focus in STED microscopy, and phase prediction in SIM (Zheng et al. 2021). With the fast evolution of the deep learning field, we expect to see robust and effective AO correction with CNNs in super-resolution microscopy in the foreseeable future.
Most AO techniques apply phase correction in the conjugated pupil plane of the objective lens, which is most straightforward and easy to implement. However, pupil AO may not provide optimal correction over a large field of view with spatially variable aberrations, in which cases another technique that involves placing the AO device conjugate to the main source of aberrations, called conjugate AO, has its advantages (Mertz et al. 2015). Applying conjugate AO to multiphoton neuroimaging, Park et al. demonstrated dynamic imaging of neural dendrites and microglia dynamics through extremely turbid biological tissue, intact mouse skulls, over an extended corrected field of view (Park et al. 2015). Moreover, Park et al. developed a multi-pupil AO strategy to expand the correction area by nine-fold using a multifaceted prism array (Park et al. 2017).
The potential of AO in microscopy has not yet been fully unleashed. With the maturing of AO techniques that allow imaging deeper in tissues, we expect high-order aberration correction or scattering correction to become more important. Although scattering correction and aberration correction share some basic principles, scattering correction typically requires adaptive optical devices with significantly greater degrees of freedom, as well as more sophisticated image metrics and algorithms, which is beyond the scope of this review. Besides aberrations, the image quality and resolution of fluorescence microscopy are also affected by other factors such as photobleaching, mechanical stability, and detector noise. However, we believe that with the development of brighter dyes and more sensitive detectors, AO-assisted fluorescence microscopy will play an increasingly important role in interrogating cutting-edge biological questions.
"year": 2021,
"sha1": "d75e4fd553053016c5fb324d40df92a2c3251712",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "186aabfc118aaca43422eb1b2547e8a0f32f43e1",
"s2fieldsofstudy": [
"Engineering",
"Physics"
],
"extfieldsofstudy": [
"Medicine",
"Computer Science"
]
} |
Healing of Early Stage Fatigue Damage in Ionomer/Fe3O4 Nanoparticle Composites
This work reports on the healing of early stage fatigue damage in ionomer/nano-particulate composites. A series of poly(ethylene-co-methacrylic acid) zinc ionomer/Fe3O4 nanoparticle composites with varying amounts of ionic clusters were developed and subjected to different levels of fatigue loading. The initiated damage was healed upon localized inductive heating of the embedded nanoparticles by exposure of the particulate composite to an alternating magnetic field. It is here demonstrated that healing of this early stage damage in ionomer particulate composites occurs in two different steps. First, the deformation is restored by the free-shrinkage of the polymer at temperatures below the melt temperature. At these temperatures, the polymer network is recovered thereby resetting the fatigue induced strain hardening. Then, at temperatures above the melting point of the polymer phase, fatigue-induced microcracks are sealed, hereby preventing crack propagation upon further loading. It is shown that the thermally induced free-shrinkage of these polymers does not depend on the presence of ionic clusters, but that the ability to heal cracks by localized melting while maintaining sufficient mechanical integrity is reserved for ionomers that contain a sufficient amount of ionic clusters guaranteeing an acceptable level of mechanical stability during healing.
Introduction
Polymer based composites are susceptible to many different types of mechanical damage, which reduce their reliability and potentially decrease the overall lifetime of the material. By implementation of self-healing technologies the overall lifetime of polymer composites can be prolonged [1,2]. Within self-healing composites, most attention so far has been on extrinsic healing strategies, where an external (liquid) healing agent capable of restoring either the matrix or the filler-matrix interface is encapsulated and embedded in the matrix [3-5]. The mechanism certainly works, but there are many issues still to be resolved. Even when these are solved, the fact remains that the healing reaction locally works only once, which is a major shortcoming [2]. The use of intrinsically healing polymer matrices in such composites is considered more optimal because it offers the potential of an infinite number of healing cycles. Additionally, intrinsic healing leaves the optimized macroscopic fiber and ply architecture required for high-level mechanical properties unaffected [2,6,7].
Ionomers are among the most frequently studied polymer matrices for intrinsic self-healing particulate [8,9] or fiber reinforced composites [10]. Ionomers have pendant acid groups distributed along the polymer backbone that are neutralized by ionic metal salts. These ionic groups have the tendency to form ionic clusters which create additional physical crosslinks within the polymer network [7]. Ionomers have proven to be capable of restoring mechanical stability by healing of ballistic impact damage using a combination of shape recovery (sometimes called "shape memory") and re-bonding across former damage site surfaces [11][12][13][14][15]. The combination of the shape recovery effect and healing is not exclusive for ionomers but is also found in other polymer systems [16,17]. Besides healing after ballistic impact, which is investigated in the majority of self-healing ionomer studies, ionomers were also used to heal scratch damage [18] and damage on composite toughening interlayers [19]. The shape restoration of the polymer after puncture is made possible by the heat that is generated upon impact [13,14]. Strain recovery after deforming a polymer beyond its yield strain and subsequent heating is found to be typical for all semi-crystalline polyethylene-based polymers [20,21] and is attributed to the dominance of the decrease in the carbon bond angle over the overall carbon-carbon stretching when these polymers are deformed and heated consecutively [22,23]. Semi-crystalline ionomers were also reported to behave like traditional shape memory polymers by Dolog and Weiss [20]. Since this form of thermal contraction after deformation does not correspond to the definition of shape memory polymers, the phenomenon was more accurately defined as free-shrinkage [21] and we will use this terminology in the present work. Although there seems to be consensus about the mechanisms responsible for the restoration after polymer deformation, there is currently no general agreement on the role that the ionic clusters have on the healing effect [24], i.e., the reformation of mechanical strength across a former crack.
As is the case for the majority of intrinsically healing polymers, ionomers need a thermal stimulus to activate their healing behavior. This poses a direct disadvantage for future applications, in which the required energy input has to be delivered to the composite structure from its surroundings (e.g., using an oven) [25-27]. To overcome this disadvantage, the energy input can be delivered locally from within the structure by making the ionomer suitable for inductive heating. In recent years this concept was explored by adding ferromagnetic particles to thermoplastic matrices [28-30]. However, within these studies the thermoplastic material was simply melted and restored to its initial shape, losing all mechanical stability throughout the process. Recently, Hohlbein et al. demonstrated the concept of inductive healing in a new family of ionomers [8]. Although this study showed the great potential of inductive heating for intrinsically self-healing polymers, their experimental ionomers still had rather low tensile properties.
In most studies on self-healing polymer composites, the research focused on the healing of damage after cutting or static overloading [2,7]. However, when a self-healing polymer is incorporated into a structural composite it is crucial to understand how the material behaves under dynamic fatigue loading and what types of damage are formed during the early stages of this process, when the likelihood of complete healing is highest. Multiple studies describe the self-healing of fatigue-induced mechanical damage in extrinsic healing composites [31-35]. A recent study focused on the partial restoration of the functional piezoelectric properties in a lead zirconate titanate (PZT) ionomer composite [9]. Nevertheless, to the best of our knowledge, the restoration of mechanical properties after fatigue in intrinsically self-healing polymers has not been investigated.
This study is the first investigation of the self-repair of mechanical properties of intrinsically self-healing polymer particulate composites after fatigue loading. In this work, poly(ethylene-co-methacrylic acid) zinc ionomer/Fe3O4 nanoparticle composites were developed and subjected to different levels of fatigue loading. The initiated early stage fatigue damage was then healed upon localized heating of the particles by exposure of the composites to an alternating magnetic field. For a proper understanding of the mechanisms involved in the healing process, a detailed thermo-mechanical investigation was performed on a set of poly(ethylene-co-methacrylic acid) based polymer blends with varying amounts of ionic clusters. Such an approach allowed the identification and separation of the two stages involved in the healing process: (i) the residual strain and network restoration; and (ii) the macroscopic crack sealing. A temperature window for the different stages of early stage damage healing in ionomer composites was thereby identified.
Materials
In order to evaluate the effect of cluster content, four different poly(ethylene-co-methacrylic acid) (EMAA) zinc ionomer blends were prepared based on a previous study that investigated the role of free carboxylic acid content and cluster state on the healing of surface scratches [18]. The four chosen blends resulted in polymer systems with a high (Zn-EMAA), medium (Zn-EMAA/EMAA) or zero (EMAA) content of ionic groups, and a blend in which a relatively high amount of ionic clusters is neutralized (Zn-EMAA/AA). In order to make the blends susceptible to inductive heating, Fe3O4 particles (10 vol %, 50-100 nm, Sigma Aldrich, Zwijndrecht, The Netherlands) were added to the polymers based on previous healing studies. More information about the nature of the polymers and particles and their full characterization can be found elsewhere [8,14,20,21,29]. In the Zn-EMAA/AA blend, the ionic clusters are destroyed by the adipic acid, as described by Varley et al. [36].
Polymer composites were prepared by mixing all components (polymer pellets, particles and additives) using a twin-screw mini-extruder. The extruder volume was 15 mL, and a temperature of 200 °C and a screw speed of 50 rpm were applied. The residence time in the extruder was 5 min. After extrusion, the resulting products were compression moulded at 150 °C with a pressure of 4 MPa using a hot press, resulting in 100 ± 5 µm freestanding films. Teflon films were used to separate the polymer films from the pressing plates. After moulding, the films were given a 15 min heat treatment at 80 °C in a preheated convection oven to equilibrate the thermal effects induced by the rapid cooling after moulding. Films were stored at room temperature for at least 21 days to equilibrate the polymer microstructure prior to further testing. Dog-bone shaped specimens (ASTM D1708) were pressed from the prepared films.
Mechanical Testing
To study the deformation before and after free-shrinkage, different levels of quasi-static strain (25%-100%) were applied to deform the polymer composites using an Instron Model 3365 universal testing system equipped with a 1 kN load cell. Dog-bone micro-tensile specimens were stretched at 1 mm/s at room temperature. The average value of three experiments is reported.
Fatigue experiments were conducted on dog-bone shaped specimens at room temperature on an MTS 831 Elastomer test system equipped with a 1 kN load cell. The specimens were fatigue tested at pre-strain levels of 25% and 50%, around which a sinusoidal waveform with an amplitude of 2.3% and a frequency of 1 Hz was employed. The number of applied strain cycles ranged from 500 to 50,000. Full-fracture tensile tests were performed on different Zn-EMAA specimens at different stages of the fatigue restoration process using the same equipment and conditions as for the deformation experiments. True stress and true strain were calculated from these tests via:

σ_T = (P / A_0) × (1 + ε/100)

ε_T = ln(1 + ε/100) × 100

where σ_T = true stress in MPa; P = measured load in N; A_0 = area of the cross-section of the dog-bone in mm²; ε = engineering strain in percent; and ε_T = true strain in percent. The tensile tests were performed 7 days after the fatigue and healing treatments, which allows the polymer crystalline phases to fully recover prior to further testing.
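For illustration, the conversion above can be written as a short Python helper; the argument conventions (load in N, engineering strain in percent, cross-section in mm²) follow the relations given above, and the incompressibility assumption is implicit in the standard formulas.

```python
# A minimal sketch of converting measured load and engineering strain to
# true stress and true strain, assuming uniform, volume-conserving
# deformation of the gauge section.
import numpy as np

def true_stress_strain(load_N, eng_strain_pct, area_mm2):
    """Return (true stress in MPa, true strain in %)."""
    lam = 1.0 + np.asarray(eng_strain_pct, dtype=float) / 100.0  # stretch ratio
    sigma_true = np.asarray(load_N, dtype=float) / area_mm2 * lam  # N/mm^2 = MPa
    eps_true = 100.0 * np.log(lam)                                 # percent
    return sigma_true, eps_true
```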
Thermomechanical Testing
The effect of cluster content on the deformation that occurs during fatigue and on the thermal contraction upon heating was investigated using a Perkin-Elmer Sapphire differential scanning calorimeter (DSC). Samples were heated and cooled between −50 °C and 150 °C at a rate of 20 °C/min under a nitrogen atmosphere.
To obtain a deeper understanding of the effect of the clusters on the self-healing mechanism, the macroscale network mobility of the non-particulate polymer blends was investigated by oscillatory shear rheology. Experiments were performed with a Haake Mars III rheometer. An 8 mm diameter (stainless steel) parallel-plate geometry was used throughout. For all samples, the polymer thickness was between 0.9 and 1.2 mm, and a constant shear strain γ of 1%, which was within the linear viscoelastic regime of the materials, was applied. Frequency sweep experiments between 10^2 and 10^-2 Hz were performed at temperatures of 80 and 110 °C, with an isothermal hold for 20 min prior to each temperature step. The supramolecular bond lifetime (τ_b) at each temperature was then calculated as the inverse of the frequency at which the storage and loss moduli cross over in a frequency sweep experiment.
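The crossover analysis can be sketched in Python as follows: the bond lifetime is taken as the inverse of the frequency where G' and G'' intersect, located by log-log interpolation between the two data points that bracket the crossover. The function returns None when the moduli do not cross within the sweep, as observed here at 80 °C.

```python
# A minimal sketch of extracting the supramolecular bond lifetime tau_b from
# a frequency sweep as the inverse of the G'/G'' crossover frequency.
import numpy as np

def bond_lifetime(freq_hz, g_storage, g_loss):
    """Return tau_b in seconds, or None if the moduli do not cross."""
    ratio = np.log10(np.asarray(g_storage, float) / np.asarray(g_loss, float))
    sign_change = np.where(np.diff(np.sign(ratio)) != 0)[0]
    if sign_change.size == 0:
        return None                        # no crossover within the sweep
    i = sign_change[0]
    lf = np.log10(np.asarray(freq_hz, dtype=float))
    # Linear interpolation of log(ratio) vs log(frequency) to ratio = 0.
    f_cross = 10 ** (lf[i] - ratio[i] * (lf[i + 1] - lf[i])
                     / (ratio[i + 1] - ratio[i]))
    return 1.0 / f_cross
```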
Thermally Induced Healing Process and Evaluation
Induction heating was applied for 15 min using a single-turn hairpin induction coil mounted on an Ambrell Easyheat device. The coil and specimen were separated by Teflon foil and the coupling distance was fixed at 1 mm. A frequency of 350 kHz and currents between 200 and 250 A were applied to reach the intended temperatures. Healing temperatures were selected based on the different thermal transitions of the polymer, as shown in Figures 1 and 2. As such, the selected healing temperatures are located below the secondary thermal transition (50 °C), in between the secondary transition and the overall melting of the polymer (80 °C), and above the overall melting of the polymer (100-110 °C). The specimen temperature upon inductive heating was monitored with a FLIR A655sc infrared camera. Since this method only detects the surface temperature of the ionomer composites, a COMSOL Multiphysics model was used to derive a relation between the measured surface temperature and the desired healing temperature within the bulk of the polymer sample. The model is a stationary heat transfer model that correlates the measured surface temperature to the bulk healing temperature based on the thermal conductivity of the materials used. The model assumes a uniform distribution of particles within a cubic geometry corresponding to the used particle concentration of 10 vol %. Full information on the applied model (geometry, input parameters and calculations) can be found in Supplementary Materials S1.
The closure and sealing of fatigue-induced cracks was monitored with a Keyence VHX2000 digital microscope with a wide-range zoom lens (100×-1000× magnification). For optimal illumination of the black surfaces, the microscope was equipped with an OP-87229 short ring-light. The length of the samples before and after the thermal treatment was measured with a digital caliper, and the residual strain was calculated from these data.
Thermal and Thermomechanical Analysis
DSC thermograms for all composite grades during heating from 25 to 125 °C are shown in Figure 1. This temperature region shows a broad melting range which includes a low-temperature endotherm that typically appears between 50 and 75 °C for all four compositions. Figure 2 shows the thermograms of the Zn-EMAA/Fe3O4 composite in four different stages of the deformation and thermal treatment process: (i) material in its pristine state; (ii) after 100% strain deformation; (iii) after 100% strain and 15 min furnace annealing at 80 °C; and (iv) after 100% straining, 80 °C annealing and 1 week storage at room temperature. Figures 1 and 2 show the effect of the low-temperature endotherm upon ionic cluster concentration and during the process of free-shrinkage, respectively. In recent literature, the endotherm has often been attributed to a declustering of the ionic clusters, which would provide enough mobility within the polymer network to support healing [8,9,37,38]. Other studies claim that the endotherm corresponds to the glass transition temperatures of the various phases within the polymer (matrix phase Tg < 0 °C) that are linked to the ionic cluster concentration [39,40]. Figure 1 shows that this endotherm also exists in the non-ionic EMAA and is only intensified upon the addition of ionic groups within the polymer grade. The addition of adipic acid results in the diminishing of this endotherm, as was reported previously [14]. Figure 2 shows that the endotherm disappears upon straining and returns only after one week of annealing at 80 °C, and is therefore not present during the process of free-shrinkage.

Figure 3 shows the storage (G') and loss (G") moduli of Zn-EMAA in the frequency range of 10²–10⁻² Hz obtained by frequency sweep rheology. Similar curves were obtained for the other polymer grades. For all polymer grades it is found that at 80 °C the storage and loss moduli curves do not intersect, so no values of τb can be determined. G' and G" were found to intersect only at temperatures close to 110 °C, which is the overall melting temperature of the polymer grades. The plateau modulus (GN), which is taken as the high-frequency plateau of the G' curve, was used to compare the mechanical robustness of each sample. In one of our recent publications we showed the connection between the macroscopic network mobility of ionomers with varying amounts of ionic clusters and the supramolecular bond lifetime (τb). It was then proposed that a polymer system with 10 s < τb < 100 s and 10⁵ Pa < GN < 10⁷ Pa would show good healing behavior combined with strong mechanical properties [41]. The values for GN and τb of the four polymer blends are presented in Table 1. Because the final plateau modulus at 110 °C was beyond the high-frequency range of the rheometer, the G' values for the highest measured frequency are reported instead.
Table 1 shows that at 110 °C, the τb and GN values of the Zn-EMAA ionomer meet the demands for good healing (10 s < τb < 100 s) and good mechanical properties (G' > 10⁵ Pa and expected not to exceed 10⁷ Pa) [41]. Experiments at higher temperatures (>130 °C) move the value of τb towards the regime of viscous flow (τb < 10 s), indicating that good healing conditions are not met at temperatures well above the overall melting point of the polymer. The values found for the EMAA polymer at a temperature of 110 °C are also typical for viscous flow of a molten polymer, and therefore the damage recovery cannot be classified as healing. The τb and GN values for Zn-EMAA/EMAA and Zn-EMAA/AA show that the thermomechanical behavior at the measured temperatures is in between that of Zn-EMAA and EMAA, indicating that the difference in viscoelastic behavior is linked to the presence of ionic clusters.
Effect of Temperature Post-Treatment after Static and Dynamic Loading
All prestrained polymer composites were post-treated at different temperatures. As a consequence, a macroscopic shrinkage was observed and quantified. Figure 4 shows the influence of temperature on the free-shrinkage behavior of the Zn-EMAA particulate composite as a function of the applied strain. Different initial quasi-static strain levels were applied, and the residual strain (at room temperature) after annealing at various temperatures was determined as described.
Figure 4 shows that, with certain annealing conditions, the residual strain of the Zn-EMAA polymer grade can become near zero up to an applied strain level of about 50%. This upper limit for full strain recovery turns out to be applicable for all polymer composite grades and was therefore used as the maximum prestrain level for the fatigue experiments. Figure 5 shows the residual strain of all polymer grades after different fatigue treatments before and after heating. This figure shows that the residual strain after fatigue increases when the prestrain and the number of cycles are increased. Upon a healing treatment of 15 min at 80 °C, the residual strain is reduced to levels below 5% for all investigated blends. The EMAA composite without ionic clusters has the lowest levels of residual strain before and after healing. The levels of residual strain for the Zn-EMAA, Zn-EMAA/EMAA, and Zn-EMAA/AA composites are fairly comparable, with the exception of the value for the Zn-EMAA/AA blend after 50,000 strain cycles, for which the level of contraction could not be measured as complete sample failure occurred at this level of cyclic loading.
Figure 6 shows the stress-strain curves of a Zn-EMAA/Fe3O4 composite after several treatments: the quasi-static tensile behavior of a pristine specimen, two fatigued specimens at 1000 and 50,000 strain cycles with a strain amplitude of 2.3% on top of a prior 50% static strain, and two specimens that were subjected to 1000 fatigue cycles and subsequently heated to either 80 or 110 °C. The obtained results indicate that a fatigued ionomer system shows strain hardening and becomes slightly less ductile. The first effect can be explained by an alignment of polymer chains that were originally packed in the secondary clusters. This explanation is supported by the DSC thermograms in Figure 2, which show that this phase disappears upon straining. This effect increases the tensile strength of the polymer composite and can therefore not by itself be seen as a damaging event. However, the second effect is an indication of the loss of mechanical integrity and can be a result of local mechanical damage in the form of random chain scission [42], which could potentially occur upon the application of multiple fatigue cycles. The figure shows that the strain hardening increases when the number of applied fatigue cycles is increased from 1000 to 50,000. Figure 6 also shows that the original stress-strain relation can be restored when a suitable heat treatment is applied. A heat treatment at 80 °C already shows a large reduction of the strain hardening effect, and after a 110 °C treatment the initial tensile behavior is almost completely restored.

Figure 6. Stress-strain curves taken during the several stages of the fatigue damage-recovery process. Strain hardening increases when the number of applied fatigue cycles is increased from 1000 to 50,000, and the original stress-strain relation can be restored when a heat treatment is applied.
Optical microscopy images of the surface of a fatigued Zn-EMAA specimen (1000 strain cycles, 50% prestrain) before and after inductive heating are shown in Figure 7. The analysis showed that some of the nanoparticles formed micron-sized agglomerates rather than being homogeneously distributed, which suggests that the results currently obtained are not fully optimal. The agglomerates promote crack initiation upon straining and fatigue loading, but their presence does not disturb the mechanism of fatigue healing demonstrated in this work. The images show that fatigue loading led to the formation of microcracks close to clusters of Fe3O4 particles, which act as stress concentrators in the composite. A similar fatigue treatment on a Zn-EMAA specimen without particles did not show any microcracks. The images also show that inductive heating at 80 °C closes the cracks but does not seal the crack edges back together. Upon a second fatigue treatment of 1000 cycles, these cracks propagate into larger cracks. On the other hand, inductive annealing at 110 °C shows complete sealing of the crack edges and results in the effective disappearance of the crack. In this case, a follow-up fatigue treatment only leads to reopening of these cracks. This is expected, since the crack locations remain the weak spots of the composite. However, after a 110 °C induction treatment the cracks have not propagated, as is seen for the specimens that are only healed at 80 °C. These observations are in line with the results of the frequency sweep experiments in Figure 3.

Figure 8 shows the decay in maximal stress during 1000 fatigue cycles for Zn-EMAA preloaded to an initial strain of 50%. Figure 8a shows an ionomer composite specimen that was tested twice without any healing treatment in between. In this figure the strain hardening effect that is also visible in Figure 6 can be observed. Figure 8b,c shows a similar set of experiments, but with an inductive heat treatment at 80 or 110 °C, respectively, in between cyclic loading. The figures show that both heat treatments restore the initial fatigue response and eliminate the strain hardening effect as a result of the recovery of the original network properties.
Discussion
The optical microscopy images in Figure 7 show a clear distinction between the closure and the sealing of fatigue-induced cracks at the two healing temperatures. Although there seems to be agreement on the mechanisms that are responsible for the contraction/closure [20,21], there is still an ongoing debate on the mechanism that is responsible for the crack sealing behavior of ionomers. The main discussion revolves around the low-temperature endotherm that is visualized by DSC in Figure 2. The majority of studies on self-healing ionomers attribute this endotherm to a declustering of the ionic multiplets that are formed within the polymer microstructure, as was described by Tadano et al. [38]. It is reported that the declustering of these multiplets would create sufficient mobility for the polymer to heal at temperatures below the melting point [8,9,14,18,37]. Another theory, posed by Eisenberg, describes the clusters of multiplets as a thermally stable phase with its own Tg that is higher than that of the surrounding non-ionic polymer phases. In this framework, the origin of the low-temperature endotherm is attributed to the crystallization of secondary crystals which form in between the primary crystal lattices over time [39,40]. In a recent review by Kalista et al., it is concluded that most experimental evidence points towards the latter explanation for the thermomechanical behavior of ionomers. However, the precise mechanism responsible for the self-healing of ionomers is still under discussion [24].
The results presented in this work support the theory of Eisenberg over that of Tadano. A first indication is the fact that the DSC thermogram in Figure 1 shows a low-temperature endotherm for the EMAA polymer. Since there are no ionic clusters present in this polymer, the endotherm cannot be attributed to ionic multiplet formation within the structure. The presence of ionic clusters does, however, affect the formation of the low-temperature endotherm and the secondary crystalline phase, as described by Loo et al. [40]. In a similar fashion, the addition of adipic acid restricts the formation of this secondary crystalline phase, as the corresponding endotherm peak around 50 °C in Figure 1 flattens out completely. The results depicted in Figure 3 also contradict the declustering concept, since no crossover point between G' and G" can be found at 80 °C [8,9,37].
The fact that the low-temperature endotherm disappears upon straining (Figure 2) indicates that the molecular origin of this endotherm is not the sole explanation of the ionomer healing characteristics. The free-shrinkage behavior that is observed in this temperature range most likely results from the overall melting peak of the polymer, which is very broad and starts at the onset of the low-temperature endotherm. This statement is supported by the temperature dependency of the residual strain shown in Figure 4. The smaller crystals melt at lower temperatures while the larger crystals remain crystallized and serve as a rigid internal structural entity, as is also common in shape memory polymers [20].
The thermal contraction is shown to be independent of the presence of clusters and the low-temperature endotherm, since Figure 5 shows that the strain restoration after fatigue is clearly present for all compositions. As a matter of fact, these diagrams show that the contraction is highest in the non-ionic EMAA material, as only in this material 100% restoration is observed after an applied strain of 50%. This is an indication that the presence of clusters might even restrict the mobility of the reforming secondary crystal phases of the polymer, thereby hindering the free-shrinkage capacity, which is supported by the studies of Loo et al. [40].
The optical microscopy images show that sealing of the fatigue-induced microcracks only occurs when the ionomer is heated to 110 °C. This is in line with the rheological data in Figure 3 and Table 1, which show that the viscous component of the polymer does not become dominant over the elastic component before the overall melting point is reached. However, both thermal treatments lead to a full restoration of the original tensile behavior and fatigue response. This indicates that the polymer network is effectively repaired at temperatures below the melt temperature and that the formation and presence of microcracks does not directly affect the mechanical properties in the early stages of the damage formation. Nevertheless, healing of the early-stage damage will be necessary to extend the lifetime of the ionomer composites, since these unsealed microcracks will eventually propagate into larger cracks, as was shown in Figure 7. These propagated cracks will ultimately induce the destructive failure of the material, as was observed for the 50,000-cycle fatigue treatment of the Zn-EMAA/AA blend.
The addition of the nanoparticles (10 vol %) was found to barely affect the overall tensile properties of the polymer. Full information on the impact of the nanoparticles on the tensile properties can be found in Supplementary Materials S2. Besides a slight increase in yield strength and Young's modulus, the main effect was an increased brittleness, which is considered to have no effect on the applicability of the polymers since a tensile strain of at least 100% can still be achieved for the composites, as depicted in Figure 4. Nevertheless, it was found that the Fe3O4 particles induce microcracks that are not observed in the pure polymer films. Based on this, it could be reasoned that the particles only weaken the material and that no additional mechanical benefits are obtained. However, since the microcracks do not affect the overall tensile properties and fatigue response of the polymer composite and can be fully healed by heating at 110 °C, the net negative effect of the particles on the polymer behavior is zero. On the other hand, the ferromagnetic particles allow the polymer to be healed by inductive heating, which is crucial for larger composite structures that cannot be heated by external contact heating and therefore require internal heating.
Although the thermal behavior below the overall melting temperature is comparable for all investigated blends and therefore independent of cluster content, there is a clear difference in the region above the melt. Table 1 shows different values of τb and GN for the four polymer systems, which can be explained by the presence of the ionic clusters. These create an additional phase in the polymer microstructure which has higher thermomechanical stability than the surrounding polymer phase. As a result, the non-ionic phase can flow in between the ionic clusters and thereby heal cracks and interfaces at a temperature above its melting point, while the overall polymer system maintains its required level of mechanical stability. When the cluster concentration is not high enough, the polymer will show melt flow and is therefore not considered to be a self-healing polymer. Based on the current observations, it is possible to propose an ionomer healing temperature dependency scheme. Figure 9 shows a two-step healing mechanism in which the thermally induced free-shrinkage is independent of cluster content and can be triggered by applying a temperature between the two main melting points of the polymer. At this temperature, the residual strain and the strain hardening that occur upon deformation are fully restored. Early-stage damage in the form of fatigue-induced microcracks can subsequently be healed by melting the polymer, while the ionic clusters act as a stable phase providing sufficient mechanical properties for good healing conditions.
Figure 9. Ionomer healing temperature dependency scheme. In the 1st healing phase, the thermally induced free-shrinkage restores the residual strain and the polymer network at temperatures in between the melting point of the secondary crystal phase (Tm1) and the overall melting point (Tm2). In the 2nd healing phase, microcracks are closed due to localized melting above Tm2, in which the ionic clusters act as a stable phase providing sufficient mechanical properties for good healing conditions.
Conclusions
This work reports on the healing of early-stage fatigue damage in poly(ethylene-co-methacrylic acid)-based nanoparticulate composites upon localized inductive heating. It is found that there are three main damage modes that occur in the early stage of the fatigue process: residual strain, strain hardening, and the formation of microcracks. Although the residual strain and strain hardening are a result of the nature of the polymer phase, the formation of microcracks is only observed upon the addition of the particulate phase.
It is demonstrated that healing of this early-stage fatigue damage occurs in two different steps. Firstly, the deformation is restored by the free-shrinkage of the polymer: at temperatures below the melt temperature, the polymer network is healed and the fatigue-induced strain hardening is reset. Secondly, only at temperatures above the melting point of the polymer phase are microcracks sealed. It is shown that the thermally induced free-shrinkage in these polymers does not depend on the presence of ionic clusters, but that the ability to heal cracks in composite structures is reserved for ionomers that contain a sufficient amount of ionic clusters, which guarantees an acceptable level of mechanical stability during healing. This implies that ionomers need to be thermally treated at above-the-melt temperatures in order to heal all the early-stage damage that is induced upon fatigue loading.
Use of Heavy Metal Content and Modified Water Quality Index to Assess Groundwater Quality in a Semiarid Area
Groundwater is a major source of drinking and agricultural water supply in arid and semiarid regions. Poor groundwater quality can be a threat to human health, especially when combined with hazardous pollutants like heavy metals. In this study, an innovative method involving an entropy-weighted groundwater quality index covering both physicochemical and heavy metal content was applied to a semiarid region. The entropy-weighted index was used to assess the groundwater's suitability for drinking and irrigation purposes. Groundwater from 19 sampling sites was analyzed for physicochemical properties (electrical conductivity (EC), pH, K⁺, Ca²⁺, Na⁺, SO₄²⁻, Cl⁻, HCO₃⁻, TDS, NO₃⁻, F⁻, biochemical oxygen demand (BOD), dissolved oxygen (DO), and chemical oxygen demand (COD)) and heavy metal content (As, Cd, Sb, Se, Zn, Cu, Ba, Mn, and Cr). To evaluate the overall pollution status in the region, heavy metal indices such as the modified heavy metal pollution index (m-HPI), heavy metal evaluation index (HEI), Nemerow index (NeI), and ecological risk index of heavy metals (ERI) were calculated and compared. The results showed that the Cd concentration plays a significant role in negatively affecting the groundwater quality. Thus, three wells were classified as of poor water quality and not acceptable for drinking water supply. The maximum concentrations of heavy metals such as Cd, Se, and Sb were higher than the permissible limits of the World Health Organization (WHO) standards. However, all wells except one were suitable for agricultural purposes. The advantage of the innovative entropy-weighted groundwater quality index for both physicochemical and heavy metal content is that it permits objectivity when selecting the weights and reduces the error that may be caused by subjectivity. Thus, the new index can be used by groundwater managers and policymakers to better decide on the water's suitability for consumption.
Introduction
Groundwater plays a major role in supplying water for drinking, agricultural, and industrial uses [1][2][3]. For arid and semiarid regions, groundwater resources are especially important in terms of both quantity and quality. Under these climatic conditions, groundwater overconsumption has led to decreased quality or contamination that may impose hazards on society [4,5]. Heavy metals, especially, are important to monitor due to their toxicity.
There are various methods for dealing with heavy metal pollution in groundwater resources. The metals can be pumped and treated [6], adsorbed [7] by various kinds of adsorbents [8], captured by nanoparticles [9] in micromixers [10,11], or removed by more natural solutions like wetlands [12]. However, implementing any remediation measure requires sufficient understanding of the situation and a reliable, inclusive assessment of the potential risk. Evaluating water quality is of paramount importance in this sense.
Many methods such as multivariate statistical techniques (e.g., cluster analysis, principal component analysis, and factor analysis) [13,14], hydro-geochemical evaluation [15,16], heavy metal indices (e.g., heavy metal pollution index, degree of contamination, heavy metal evaluation index, contamination factor, and health risk assessment) [17][18][19], and water evaluation indices [13] have been developed for assessing water quality considering physicochemical parameters. Grading water quality indicators largely depends on indicator concentration and relative toxicity. One of the most widely applied methods is the Water Quality Index (WQI), which summarizes the quality of water for drinking and other purposes [3,20,21]. This index provides a single number as a measure of overall water quality at a specific location and time. However, the WQI needs weights for the different chemical elements, and these are usually assigned subjectively by experts [15,22]. Additionally, various water quality indices have been proposed for the evaluation of water quality based on heavy metals [23,24]. One of these indices is the heavy metal pollution index (HPI). This method considers the maximum desirable limit and the maximum permissible limit of each heavy metal for water quality characterization. According to recent regulatory guidelines, a number of heavy metals are now being considered under the non-relaxation category [25]. Hence, HPI cannot be calculated using the latest regulatory guidelines. However, the modified heavy metal pollution index (m-HPI) [26] overcomes this and other limitations of previous methods. This index is based only on the highest desirable concentration (Ii) and does not depend on the maximum permissible concentration (Si). Similar indices are the heavy metal evaluation index (HEI), the Nemerow index (NeI), and the ecological risk index of heavy metals in groundwater [23,[25][26][27].
Although several studies have assessed groundwater quality based on heavy metal pollution for different purposes [4,23,[28][29][30][31], there are only a few studies in arid and semiarid regions [22,24,32]. Considering this, the main objective of the current study is to test an innovative method involving an entropy-weighted groundwater quality index (EWQI) covering both physicochemical and heavy metal content in a semiarid region, which can be used by decision and policymakers for improving water resources management. Thus, the EWQI was used and compared to other pollution indices such as m-HPI, HEI, NeI, and ERI to evaluate the overall pollution level of groundwater in the study area with respect to physicochemical properties (electrical conductivity (EC), pH, K⁺, Ca²⁺, Na⁺, SO₄²⁻, Cl⁻, HCO₃⁻, TDS, NO₃⁻, F⁻, biochemical oxygen demand (BOD), dissolved oxygen (DO), and chemical oxygen demand (COD)) and nine important heavy metals (As, Cd, Sb, Se, Zn, Cu, Ba, Mn, and Cr). The outcomes provide essential information on the suitability of the water source for different uses. The results can be used by decision-makers as a guide for managing the aquifer from both quantitative and qualitative viewpoints.
Study Area
The Imam Zadeh Jafar Aquifer is located in Gachsaran City, southwest Iran, between longitudes 50°50′ and 51°09′ E and latitudes 30°13′ and 30°28′ N (Figure 1). The average elevation of the area is 720 m above mean sea level. The mean annual precipitation and temperature are 395 mm and 23 °C, respectively. The average thickness of the aquifer is approximately 80 m. The geological material is composed of coarse material like cobblestone, sandstone, gravel, and sand in the northern parts, gravel and sand in the central parts, and finer material like silt and clay in the southern parts [33]. There are no clay lenses within this unconfined aquifer.
Generally, groundwater is the only water resource for drinking and irrigation purposes in the study area. However, there are several industrial pollution sources such as slaughterhouses, industrial parks, and beverage and asphalt plants in the study area. Another important contaminant source in the area is agriculture. The intense agricultural activity has led to the overuse of pesticides, herbicides, and fertilizers. Groundwater polluted with heavy metals may have severe effects on public health. Hence, monitoring and studying the potential sources of heavy metal pollution are necessary in the study area.

Figure 1. The Imam Zadeh Jafar Aquifer located in Gachsaran City, southwest Iran.
Sample Collection and Analytical Procedure
In 2009, nineteen existing wells were selected in the region and sampled for different groundwater quality parameters. The wells are used for drinking, irrigation, and industrial purposes depending on their location in the region (Figure 1). Electrical conductivity (EC) and pH were measured on site. Other parameters such as potassium (K⁺), calcium (Ca²⁺), magnesium (Mg²⁺), sodium (Na⁺), sulfate (SO₄²⁻), chloride (Cl⁻), bicarbonate (HCO₃⁻), total dissolved solids (TDS), nitrate (NO₃⁻), fluoride (F⁻), biochemical oxygen demand (BOD), dissolved oxygen (DO), chemical oxygen demand (COD), arsenic (As), cadmium (Cd), antimony (Sb), selenium (Se), zinc (Zn), copper (Cu), barium (Ba), manganese (Mn), and chromium (Cr) were analyzed in the laboratory. Additionally, the concentrations of other heavy metals such as Pb, Ni, Hg, and Fe were analyzed as well; however, their content was insignificant or close to zero, so these elements were not included in the study. The analytical method for each chemical parameter is presented in Table 1. The accuracy of the chemical analysis was validated by calculating charge balance errors (CBE) using

$$\mathrm{CBE} = \frac{\sum \mathrm{cations} - \sum \mathrm{anions}}{\sum \mathrm{cations} + \sum \mathrm{anions}} \times 100$$

where CBE is in percent and the concentrations of all cations and anions are in meq/L. The ion balance error for all groundwater samples ranged from 1% to 6.2%.
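As a quick illustration of this validation step, the snippet below computes the CBE for one sample; the ion concentrations in the usage example are invented for demonstration, not measurements from the study.

```python
def charge_balance_error(cations_meq, anions_meq):
    """Charge balance error (CBE, %) from cation and anion sums in meq/L."""
    total_cat = sum(cations_meq)
    total_an = sum(anions_meq)
    return (total_cat - total_an) / (total_cat + total_an) * 100.0

# Hypothetical sample (meq/L): Ca2+, Mg2+, Na+, K+ vs HCO3-, SO4 2-, Cl-, NO3-
print(charge_balance_error([4.2, 2.1, 3.0, 0.1], [3.9, 2.6, 2.7, 0.3]))
```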
Water Quality Index (WQI) and Entropy Weight Method
The Water Quality Index (WQI) is a useful method that has been widely used for assessing groundwater quality for drinking water use, with reference to hydro-geochemical parameters and heavy metal pollution [34]. The index provides a single number that is considered an overall quality index of a sampled water. This can provide insights for deciding if the water needs to be used with special care or caution or if it needs treatment [28,35]. Herein, World Health Organization (WHO) standards (2011) [36] were used to compute the quality rating of the hydro-geochemical parameters and heavy metals.
An essential step in using the WQI is to assign weights. A common way is to allocate them subjectively based on experience [22] or reference literature [37]. This can lead to over- or underemphasizing some parameters and affect the outcome. In order to avoid subjectivity, the entropy-weighted water quality index (EWQI) is employed in this paper [15,22]. Improving objectivity reduces the errors that may be caused by subjectivity when choosing the weights [38].
The entropy method was first proposed by Shannon [39] for reducing subjectivity in allocating weights to parameters of different nature. Shannon entropy expresses the degree of uncertainty concealed in a probabilistic or uncertain event [22]. When a parameter is precisely predicted and shows little change, its Shannon entropy weight will be small; hence, a large change in the concentration of a parameter will lead to a larger Shannon weight. This is especially important in water quality assessment when sudden changes occur in the water quality; otherwise, aquifer water quality naturally tends to remain almost constant [40,41].
Calculation of the entropy-weighted water quality index (EWQI) follows four steps. The first step is to construct the performance matrix. The initial matrix X summarizes the chemical analysis data when m (i = 1, 2, …, m) wells are monitored to evaluate the water quality and each well has n measured parameters (j = 1, 2, …, n); x_{ij} represents the value of parameter j in the ith well:

$$X = \left( x_{ij} \right)_{m \times n}$$

The second step is to normalize the performance matrix. This step is essential for eliminating errors when the parameters have different units of measurement and different quantity grades [22]. To do so, each entry of the normalized matrix (ν_{ij}) is obtained by dividing by the sum of the entries in the corresponding column:

$$\nu_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}}$$

The third step is to calculate the entropy value. The entropy value of the jth measured parameter is calculated as [42]

$$z_j = -\frac{1}{\ln m} \sum_{i=1}^{m} \nu_{ij} \ln \nu_{ij}$$

where z_j is the entropy value of the jth parameter. The fourth step is to calculate the objective weight of each parameter using

$$W_j = \frac{1 - z_j}{\sum_{j=1}^{n} \left( 1 - z_j \right)}$$

where W_j is the weight of the jth parameter. The quality rating follows [36]:

$$q_j = \frac{C_j - V_{id}}{S_j - V_{id}} \times 100$$

where q_j is the quality rating for the jth water parameter, C_j is the measured value of the jth parameter, S_j is the standard permissible value for the jth parameter assigned by the WHO [43], and V_{id} is the ideal value of the jth parameter in pure water (0 for all parameters except pH, for which it is 7). The overall water quality index is then estimated by combining the quality rating with the entropy weight:

$$\mathrm{EWQI} = \sum_{j=1}^{n} W_j \, q_j$$

Based on the results of the EWQI, water quality can be classified into five classes for drinking water purposes (Table 2).
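The four steps above translate directly into a short routine. The following is a minimal sketch under the definitions just given; the matrix-based interface and the treatment of zero entries in the normalized matrix are our own assumptions, not code from the study.

```python
import numpy as np

def ewqi(X, S, V_id=None):
    """Entropy-weighted water quality index for m wells and n parameters.

    X    -- (m x n) array of measured values x_ij
    S    -- length-n WHO permissible limits S_j
    V_id -- length-n ideal values in pure water (0 everywhere except pH = 7)
    Returns a length-m array of EWQI values, one per well.
    """
    X = np.asarray(X, dtype=float)
    m, n = X.shape
    S = np.asarray(S, dtype=float)
    V_id = np.zeros(n) if V_id is None else np.asarray(V_id, dtype=float)

    # Step 2: column-wise normalization of the performance matrix.
    nu = X / X.sum(axis=0)

    # Step 3: Shannon entropy per parameter (0 * ln 0 is treated as 0).
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(nu > 0, nu * np.log(nu), 0.0)
    z = -plogp.sum(axis=0) / np.log(m)

    # Step 4: entropy weights -- parameters that vary more weigh more.
    w = (1.0 - z) / (1.0 - z).sum()

    # Quality rating against the WHO standard, then weighted aggregation.
    q = (X - V_id) / (S - V_id) * 100.0
    return q @ w
```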
Evaluation of Groundwater Quality for Irrigation Purposes
We examined the irrigation suitability of the sampled groundwater using the Sodium Adsorption Ratio (SAR, in association with electrical conductivity), sodium percentage (Na%), total dissolved solids (TDS), permeability index (PI), total hardness (TH), and magnesium ratio (MR), calculated by the following formulas (ionic concentrations in meq/L unless stated otherwise).

1. The Sodium Adsorption Ratio (SAR) was calculated by [44]:

$$\mathrm{SAR} = \frac{\mathrm{Na^{+}}}{\sqrt{\left( \mathrm{Ca^{2+}} + \mathrm{Mg^{2+}} \right)/2}}$$

2. The sodium percentage is computed with respect to the relative proportions of cations present in the water:

$$\mathrm{Na\%} = \frac{\mathrm{Na^{+}} + \mathrm{K^{+}}}{\mathrm{Ca^{2+}} + \mathrm{Mg^{2+}} + \mathrm{Na^{+}} + \mathrm{K^{+}}} \times 100$$

3. Doneen [45] classified irrigation water based on the permeability index (PI):

$$\mathrm{PI} = \frac{\mathrm{Na^{+}} + \sqrt{\mathrm{HCO_3^{-}}}}{\mathrm{Ca^{2+}} + \mathrm{Mg^{2+}} + \mathrm{Na^{+}}} \times 100$$

4. Total hardness (TH, in mg/L as CaCO3, with Ca²⁺ and Mg²⁺ in mg/L) was calculated by [46]:

$$\mathrm{TH} = 2.497\,\mathrm{Ca^{2+}} + 4.115\,\mathrm{Mg^{2+}}$$

5. The Magnesium Ratio (MR) was calculated by [46]:

$$\mathrm{MR} = \frac{\mathrm{Mg^{2+}}}{\mathrm{Ca^{2+}} + \mathrm{Mg^{2+}}} \times 100$$

We used ArcGIS software (10.3) (Esri, Redlands, CA, US) to demarcate sampling locations and the spatial distribution of groundwater quality indices throughout the study area. Inverse distance weighting was applied for interpolation [47].
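For illustration, the sketch below evaluates the five indices for one sample; the function signature and unit conventions (meq/L for the major ions, plus Ca²⁺ and Mg²⁺ in mg/L for TH) are our own choices, not the authors' code, and the example values are invented.

```python
import math

def irrigation_indices(na, k, ca, mg, hco3, ca_mgl, mg_mgl):
    """Irrigation suitability indices for one groundwater sample.

    na, k, ca, mg, hco3 -- major ions in meq/L
    ca_mgl, mg_mgl      -- Ca2+ and Mg2+ in mg/L (used only for TH)
    """
    sar = na / math.sqrt((ca + mg) / 2.0)                 # sodium adsorption ratio
    na_pct = (na + k) / (ca + mg + na + k) * 100.0        # sodium percentage
    pi = (na + math.sqrt(hco3)) / (ca + mg + na) * 100.0  # Doneen permeability index
    th = 2.497 * ca_mgl + 4.115 * mg_mgl                  # total hardness, mg/L as CaCO3
    mr = mg / (ca + mg) * 100.0                           # magnesium ratio
    return {"SAR": sar, "Na%": na_pct, "PI": pi, "TH": th, "MR": mr}

# Hypothetical sample for demonstration only.
print(irrigation_indices(na=3.0, k=0.1, ca=4.2, mg=2.1, hco3=3.9,
                         ca_mgl=84.0, mg_mgl=25.5))
```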
Modified Heavy Metal Pollution Index (m-HPI)
To address the shortcomings of the heavy metal pollution index (HPI) and heavy metal evaluation index (HEI), Chaturvedi et al. [26] defined a modified heavy metal pollution index (m-HPI) for better evaluating water quality for drinking purposes. The m-HPI is calculated as

$$\text{m-HPI} = \sum_{i=1}^{n} \text{m-HPI}_i$$

where n is the number of heavy metals considered in the evaluation and m-HPI_i is the modified heavy metal pollution index for the ith heavy metal ion, defined as

$$\text{m-HPI}_i = \omega_i \, Q_i$$

where ω_i is the relative weightage factor,

$$\omega_i = \frac{W_i}{\sum_{i=1}^{n} W_i},$$

W_i is the unit weighting factor,

$$W_i = \frac{1}{I_i},$$

and I_i is the maximum permissive level of the ith heavy metal concentration (WHO standard). The sub-index Q_i for the ith heavy metal is defined as

$$Q_i = \frac{M_i - I_i}{I_i}$$

where M_i is the observed concentration of the ith heavy metal. The m-HPI can be divided into contributions from metals exceeding and not exceeding the maximum permissible level. The former sum is called the positive index (PI of m-HPI) and the latter the negative index (NI of m-HPI); thus, a pair of indices is computed for each water sample. Based on both indices, each sample's water quality related to heavy metal pollution is classified as follows: excellent (−1 ≤ NI ≤ 0 and PI = 0), very good (−1 < NI ≤ 0 and 0 < PI ≤ UL/2), good (−1 < NI ≤ 0 and UL/2 < PI ≤ UL), and unacceptable (NI ≤ 0 and PI > UL), where UL is the upper limit of the positive index.
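A minimal sketch of the positive/negative split, following the definitions reconstructed above; the list-based interface is an assumption for illustration.

```python
def m_hpi(M, I):
    """Positive (PI) and negative (NI) parts of the modified heavy metal
    pollution index for one sample.

    M -- observed concentrations of the target heavy metals
    I -- WHO maximum permissible concentrations, same order and units
    """
    W = [1.0 / i for i in I]                   # unit weighting factors W_i
    omega = [w / sum(W) for w in W]            # relative weightage factors
    Q = [(m - i) / i for m, i in zip(M, I)]    # sub-indices Q_i, in [-1, inf)
    pi = sum(o * q for o, q in zip(omega, Q) if q > 0)   # metals above limits
    ni = sum(o * q for o, q in zip(omega, Q) if q <= 0)  # metals below limits
    return pi, ni
```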
Heavy Metal Evaluation Index (HEI)
As with the m-HPI, the HEI provides an overview of the water quality with respect to heavy metals. The HEI is calculated based on the maximum permissible concentration (MAC) of each target heavy metal:

$$\mathrm{HEI} = \sum_{i=1}^{n} \mathrm{HEI}_i$$

where HEI_i is the pollution index corresponding to the ith heavy metal, calculated as

$$\mathrm{HEI}_i = \frac{M_i}{H_i^{mac}}$$

and H_i^mac is the maximum permissible concentration of the ith heavy metal. This method divides water quality into three classes demarcating different levels of contamination: low (HEI < 10), medium (HEI = 10-20), and high (HEI > 20).
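This index is a single sum over the concentration/MAC ratios; the short sketch below also maps the result onto the three classes named in the text.

```python
def hei(M, mac):
    """Heavy metal evaluation index: sum of concentration/MAC ratios."""
    return sum(m / h for m, h in zip(M, mac))

def hei_class(value):
    """Contamination class according to the thresholds in the text."""
    if value < 10:
        return "low"
    return "medium" if value <= 20 else "high"
```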
Nemerow Index (NeI)
This method is a multifactorial, integrated assessment approach where the index is calculated using [48,49]

$$\mathrm{NeI} = \sqrt{\frac{\left( M_i/I_i \right)_{mean}^{2} + \left( M_i/I_i \right)_{max}^{2}}{2}}$$

where (M_i/I_i)_mean is the average value of M_i/I_i over all target heavy metals of a water sample and (M_i/I_i)_max is the maximum value of M_i/I_i among all target heavy metals detected in the water sample. This method classifies the water quality into four categories: insignificant (NeI < 1), slightly (1 ≤ NeI < 2.5), moderately (2.5 ≤ NeI < 7), and heavily (NeI ≥ 7) contaminated.
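The index reduces to the root mean square of the mean and maximum contamination ratios, as in this minimal sketch:

```python
import math

def nemerow_index(M, I):
    """Nemerow index from the mean and maximum of the M_i/I_i ratios."""
    ratios = [m / i for m, i in zip(M, I)]
    mean_r = sum(ratios) / len(ratios)
    return math.sqrt((mean_r ** 2 + max(ratios) ** 2) / 2.0)
```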
Ecological Risks of Heavy Metals in Groundwater
We used the ecological risk index (ERI) [50,51] to evaluate the potential ecological hazards associated with heavy metals in groundwater. The ecological risk index was calculated as

$$\mathrm{ERI} = \sum_{i=1}^{n} T_i \, \frac{M_i}{I_i}$$

where T_i is the biological toxicity factor of the ith target heavy metal and M_i/I_i is its contamination factor relative to the reference (permissible) concentration. The toxic-response factors of the heavy metals are: As = 10; Cd = 30; Sb = 7; Cu = 5; Cr = 2; and Zn and Mn = 1 [52,53]. The index classifies the groundwater quality into four groups: low (ERI < 110), moderate (110 ≤ ERI < 200), considerable (200 ≤ ERI < 400), and very high (ERI ≥ 400) risk.
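A minimal sketch combining the quoted toxic-response factors with per-metal contamination factors; using the permissible limits as the reference concentrations follows the reconstruction above and is our assumption.

```python
# Toxic-response factors quoted in the text [52,53].
T_FACTORS = {"As": 10, "Cd": 30, "Sb": 7, "Cu": 5, "Cr": 2, "Zn": 1, "Mn": 1}

def eri(M, ref, T=T_FACTORS):
    """Ecological risk index: toxicity-weighted sum of contamination factors.

    M   -- {metal: observed concentration}
    ref -- {metal: reference (permissible) concentration, same units}
    """
    return sum(T[metal] * (conc / ref[metal])
               for metal, conc in M.items() if metal in T)
```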
Results and Discussion
Below, we group the results into three main parts following a general statistical analysis. The first and second parts demonstrate the suitability of groundwater for drinking and irrigation purposes, respectively. In the third, heavy metal pollution indices are used to identify the overall pollution level of groundwater resources in the study area.
Statistical Analysis
Descriptive statistics of the observed water quality data are presented in Table 3. The table presents the minimum, maximum, mean, standard deviation, permissible limits for drinking water set by the World Health Organization, and entropy weight for each parameter used in the EWQI assessment. Groundwater in this area is slightly alkaline to neutral, as the recorded pH ranges from 7.1 to 8. Table 3 shows that the mean concentrations of the heavy metals As, Cd, Sb, Se, Zn, Cu, Ba, Mn, and Cr were 10, 3, 20, 10, 500, 50, 700, 100, and 50 µg/L, respectively. The maximum values of Cd, Sb, and Se were above the permissible limits for drinking water purposes.
Entropy Weighted Water Quality Index (EWQI)
To assess groundwater quality, several researchers have used the EWQI [54][55][56]. The EWQI is an innovative tool that tests multivariable water quality data against specified water quality standards determined by the user [56]. Entropy weighting improves the WQI since it does not rely on subjective judgement in assigning weights. The entropy weights of the hydro-chemical parameters show that the Cd concentration plays the leading role in affecting the groundwater quality based on the WQI in the study area. The EWQI range and water type are presented in Table 4 and Figure 2. In total, 24 water quality variables were used for the EWQI. The calculated EWQI ranged between 13 and 198. Table 4 illustrates that most of the samples were classified as excellent water (68%). Three wells were classified in the category of good water for drinking purposes. However, as shown in Table 4 and Figure 2, the EWQI of wells 8, 11, and 19 is classified as poor water; these wells lie in the eastern part of the study area (Figure 2). In contrast, values of chemical parameters such as Mg²⁺, Ca²⁺, HCO₃⁻, Cl⁻, SO₄²⁻, NO₃⁻, F⁻, BOD, and COD greater than the permissible limits for drinking purposes belonged to other samples. This indicates that heavy metals such as Cd, Sb, and Se play a major role in the groundwater quality assessment.
The US Salinity Laboratory's Diagram and Sodium Percentage (Na%)
The SAR characterizes sodium hazard and is an important parameter for determining the suitability of groundwater for irrigation purposes [1,57]. The rating of groundwater samples in relation to salinity hazard and sodium hazard can be explained by plotting the chemical data in a U.S. Salinity Laboratory (USSL) diagram. The plot of conductivity versus SAR in the Wilcox log diagram shows that, of the 19 samples, five fall in the medium salinity and low alkalinity (C2S1) category, which is suitable for irrigation purposes (Figure 3). Thirteen samples belong to the high salinity and low alkalinity (C3S1) category, which moderately fits irrigation purposes. This indicates that when using this water, attention should be paid to having a proper drainage system and selecting crops that can tolerate salt; otherwise, crops and soil may be damaged (Figure 3). Only one sample is categorized as very high salinity and low alkalinity (C4S1). Hence, this water is not suitable for irrigation.
The sodium percentage is an indicator of the sodium hazard for irrigation purposes. Irrigation with a high Na content may deteriorate the soil structure and reduce its aeration and permeability, causing adverse impacts on crop growth. As shown in Figure 4, most of the groundwater samples fall in the excellent to good category, while well 13 (as in the USSL classification) belongs to the unsuitable category for irrigation purposes.
Total Dissolved Solids (TDS)
A salinity problem exists if salt accumulates in the crop root zone to a concentration that causes a loss in yield. In irrigated areas, these salts often originate from a saline soil, a high water table, or from salts in the applied water [58]. Yield reductions occur when salts accumulate in the root zone to such an extent that the crop is no longer able to extract sufficient water from the salty soil solution, resulting in water stress for a significant period. If water uptake is reduced, the plant slows its rate of growth [59].
Groundwater in the study area shows a variation of TDS from 165 to 3440 mg/L. The spatial classification of TDS based on irrigation purposes is shown in Figure 5. Wells 12 and 13 are classified as unsuitable and questionable for irrigation, respectively. Wells 1, 2, 5, 6, 10, and 17 are classified as good to excellent. Other wells are classified as permissible for irrigation. Figure 5. Spatial distribution of total dissolved solids (TDS) in the study area.
Permeability Index (PI)
Permeability index (PI) is an important parameter for groundwater use in agriculture. Sodium, bicarbonate, calcium, and magnesium concentrations in the soil may influence soil permeability [15]. In the study, the suitability of groundwater for irrigation based on PI was determined. This criterion categorizes the water into three classes. Based on the classification, water with PI > 75% (Class I) is good, 25-75% (Class II) is suitable, and PI < 25% (Class III) is unsuitable for irrigation [15]. As can be seen in Figure 6, all samples fall in Class I. This indicates that all wells are suitable for irrigation based on PI in the study area. Figure 6. USSL Classification of irrigation water in study area, based on permeability index.
Total Hardness (TH)
Total hardness (TH) is caused primarily by the presence of cations such as calcium and magnesium and anions such as carbonate and bicarbonate [60]. The maximum permissible limit of TH for drinking purposes is 500 mg/L and the most desirable limit is 100 mg/L as per the WHO standard (2011). However, for irrigation purposes, up to 1000 mg/L of hardness is accepted [61].
Total hardness is commonly classified in terms of degree of hardness as (1) soft: 0-75 mg/L; (2) moderate: 75-150 mg/L; (3) hard: 150-300 mg/L; and (4) very hard: >300 mg/L. In the groundwater samples, TH varied from 225 to 2245 mg/L with an average of 551 mg/L. Figure 7 shows the spatial distribution of total hardness in the studied aquifer. As seen from the figure, the TH in wells 12 and 13 is classified as unacceptable for irrigation. In addition, the TH in wells 1 and 7 is more than 500 mg/L. A high level of TH in water can cause cardiovascular diseases, stunted growth, reproductive failure, and other diseases due to the prevalence of magnesium and calcium in water [62].
Magnesium Ratio (MR)
The magnesium ratio is important for assessing the suitability of water for irrigation. Magnesium damages soil structure when the water contains high levels of sodium and salinity [63]. If the magnesium ratio exceeds 50, the water is considered harmful to crops and hence unsuitable for irrigation [64]. The residual Mg/Ca ratio [65] and the method established by Szabolcs and Darab (1964) [66] can be used to estimate the magnesium ratio (MR) for irrigation. According to this indicator, water with an MR greater than 50% is not suitable for irrigation [67]. The MR values obtained in this study ranged from 18% to 47%, indicating that all wells are suitable for irrigation.
Heavy Metal Pollution Assessment
The m-HPI, HEI, NeI, and ERI indices were used to evaluate heavy metal pollution in groundwater samples from the study area. The values and spatial distribution of the indices are presented in Table 5 and Figure 8, respectively. Heavy metal pollution evaluated by the m-HPI method indicated more serious contamination than the EWQI method. The m-HPI values were in the range of 0-4.78 in the study area (Figure 8a). Based on the m-HPI water quality scale, nearly 16% of the samples were unacceptable for drinking, whereas approximately 5-52% of the samples were ranked from excellent to very good. The worst pollution status was recorded for wells 8, 11, and 19, which are in the eastern part of the area. The high m-HPI may be due to wastewater from industrial activities and domestic sewage. The m-HPI values of the samples in the western part of the study area were below the critical pollution index (excellent to very good).
Additionally, the HEI, NeI, and ERI indices were used for a better understanding of the pollution status. The HEI and NeI values ranged from 0.58 to 31 and from 0.07 to 3.55, with means of 6.11 and 0.76, respectively (Figure 8b,d). Based on the HEI water quality classification, approximately 79%, 16%, and 5% of the sampling wells were classified as having low, medium, and high heavy metal pollution, respectively. Based on the NeI index, one, three, and fifteen wells were categorized as having slight, moderate, and insignificant heavy metal pollution, respectively.
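For readers who wish to reproduce such indices, the sketch below assumes the commonly used definitions of HEI (the sum of concentration-to-limit ratios) and of the Nemerow index; the example concentrations are hypothetical, and the limits shown are WHO (2011) guideline values.

```python
import numpy as np

def hei(conc, mac):
    """Heavy-metal Evaluation Index: sum of C_i / MAC_i over all metals."""
    return sum(conc[m] / mac[m] for m in conc)

def nemerow(conc, std):
    """Nemerow index: sqrt((mean(CF)^2 + max(CF)^2) / 2), with CF_i = C_i / S_i."""
    cf = np.array([conc[m] / std[m] for m in conc])
    return float(np.sqrt((cf.mean() ** 2 + cf.max() ** 2) / 2.0))

# Hypothetical concentrations (mg/L) against WHO (2011) guideline values.
conc = {"Cd": 0.004, "Pb": 0.008, "Cr": 0.02, "Sb": 0.01}
who = {"Cd": 0.003, "Pb": 0.01, "Cr": 0.05, "Sb": 0.02}
print(f"HEI = {hei(conc, who):.2f}, NeI = {nemerow(conc, who):.2f}")
```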
The potential ecological risk of groundwater in the study area was assessed using the ERI method. The ERI values varied from 4 to 308 with a mean of 59 (Figure 8c). As with the HEI index, about 79% of the sampling wells were found to pose a low ecological risk to the groundwater system, while the other samples fell into the categories of moderate to considerable ecological risk. Owing to their higher occurrence and biological toxicity, Cd and Sb were the dominant contributors to the risk, with average contributions of nearly 91% and 6%, respectively. The spatial distribution of each heavy metal is depicted in Figure 9. Higher concentrations of Cd occurred in wells 8, 11, and 19 (0.024, 0.027, and 0.030 mg/L, respectively), which are used for agricultural, industrial, and drinking purposes, respectively (Figure 9b). The permissible limit of Cd for drinking purposes is 0.003 mg/L based on the WHO (2011) standard. These areas are contaminated with sewage water as well as industrial effluents. Exposure to high concentrations of Cd may cause liver and kidney damage as well as acute health effects [68,69].
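The ERI calculation can be sketched as follows, assuming Hakanson-style toxic-response factors; the factor for Sb varies across studies and is a placeholder here, and the example concentrations and reference values are illustrative rather than this paper's data.

```python
# Hakanson-style toxic-response factors; the Sb value is a placeholder,
# since Sb was not part of Hakanson's original list.
TR = {"Cd": 30, "As": 10, "Cu": 5, "Pb": 5, "Ni": 5, "Cr": 2, "Zn": 1, "Sb": 7}

def ecological_risk_index(conc, ref):
    """ERI = sum_i Tr_i * (C_i / C_ref_i)."""
    return sum(TR[m] * conc[m] / ref[m] for m in conc)

# Hypothetical sample against assumed reference values (mg/L).
conc = {"Cd": 0.024, "Sb": 0.03}
ref = {"Cd": 0.003, "Sb": 0.02}
print(f"ERI = {ecological_risk_index(conc, ref):.0f}")
```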
Furthermore, high concentrations of Sb were found in wells 3 and 4 (agricultural use) and in wells 16 and 18 (drinking water use), far above 0.02 mg/L, the permissible limit of Sb for drinking purposes recommended by WHO (2011). The highest Se concentration (0.013 mg/L) was found in well 7, which is used for agricultural purposes, while the permissible limit for drinking purposes is 0.01 mg/L. Although wells 3, 4, 7, 16, and 18 are classified as excellent, very good, low, insignificant, and low based on the WQI, m-HPI, HEI, NeI, and ERI indices, respectively, the concentrations of Sb and Se were above the permissible limits for drinking purposes. Antimony is a dangerous substance with chronic toxicity and potential carcinogenicity [70]; Sb poisoning can cause liver cirrhosis, muscle necrosis, nephritis, and pancreatitis. Accordingly, monitoring the concentrations of these heavy metals is an essential measure to protect residents who use these groundwater sources.
Conclusions
In this study, an innovative method involving an entropy-weighted groundwater quality index covering both physicochemical parameters and heavy metal content was applied to a semiarid region. Using this index, the suitability of the groundwater for drinking and irrigation purposes was assessed for the Imam Zadeh Jafar Aquifer in southwestern Iran. We used the entropy method for assigning weights to water quality parameters in the WQI calculation to prevent subjectivity, which provides more reliability for the final output of the WQI. The entropy weights showed that the concentration of Cd plays a substantial role in affecting groundwater quality. Based on the EWQI, wells 8, 11, and 19 were classified as poor water, while the other wells were classified as excellent for drinking purposes.
Regarding water quality for irrigation, the EC and SAR results reveal that all samples except well 13 fall in the C3S1 and, secondarily, C2S1 categories, denoting that the water is well suited for irrigation. Likewise, the MR and PI outcomes indicate that all samples are suitable for irrigation purposes. The results for TDS and TH show that wells 12 and 13 are unsuitable for irrigation.
The heavy metal pollution indices m-HPI, HEI, NeI, and ERI showed that the majority of the investigated wells have low to medium levels of pollution, and the values of these four indices in most parts of the area were below the critical levels. Wells 8, 11, and 19 are classified as unsuitable for drinking purposes due to an excessive amount of Cd. Additionally, well 7 was polluted by Se, while the concentrations of Sb in wells 3, 4, 16, and 18 were above the permissible limits. According to the results, the regional groundwater system is most likely impacted by anthropogenic and industrial activities in the area. The heavy metal pollution indices proved reliable in characterizing groundwater pollution with respect to heavy metals, and the entropy weights helped avoid personal judgement in calculating the weights at all stages, leading to more transparency and reliability of the results. Nevertheless, continuous monitoring of groundwater quality with respect to heavy metals is needed; monitoring is especially important here due to the sharp increase in population.
The current study sheds light on a potentially vital problem. Groundwater is under pressure in many parts of the world, especially in arid and semiarid regions, and it is important to develop methods that reduce the complexity of data to clearly understandable numbers that managers and decision makers can readily use. Evaluating the performance of remediation technologies is beyond the scope of this paper; however, this study can help with further planning of potential future remediation measures. Besides remediation measures, regulatory processes are important to develop, especially in the developing world, and this study may be used as a basis for further managerial actions in the field. | 2020-04-16T09:11:49.392Z | 2020-04-14T00:00:00.000 | {
"year": 2020,
"sha1": "f6bbcb98ebd8bc1795b1d798b5b17a0208b8a576",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4441/12/4/1115/pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "51061cdb4cd3736e1c91c9e0bcdbcaf3a6f8c132",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
17410062 | pes2o/s2orc | v3-fos-license | The role of ultrasound in the management and diagnosis of infectious mononucleosis
Currently, infectious mononucleosis (IM) is a clinically diagnosed condition. According to the American Family Physician criteria for IM, splenomegaly is the key factor that distinguishes IM from other causes of sore throat. Though heterophile antibody tests are often ordered to confirm the diagnosis of IM, these tests have a high false-negative rate early in the course of the disease. This case report provides an example of how the use of ultrasound to diagnose splenomegaly, and subsequently mononucleosis, increases diagnostic accuracy.
Background
Bedside ultrasound is a fast, accessible, and cost-effective tool for clinical evaluation of patient symptoms. This case report investigates the use of ultrasound in the evaluation and diagnosis of splenomegaly in mononucleosis.
Typically, mononucleosis is a clinically diagnosed condition [1]. Symptoms that should arouse clinical suspicion of mononucleosis include sore throat, lymph node enlargement, fever, and tonsillar enlargement. Key physical exam findings include splenomegaly, hepatomegaly, pharyngeal inflammation, and palatal petechiae [1]. Heterophile antibody tests may also be used to confirm the diagnosis alongside clinical signs; however, false negative results are relatively common early in the course of the infection [1].
Evaluation of splenomegaly is imperative in the diagnosis and patient management of mononucleosis. Though splenic rupture is a rare complication of splenomegaly, it is a life-threatening one that patients must be made aware of. Patients with splenomegaly are therefore advised to avoid contact sports for 3 to 4 weeks until the splenomegaly has resolved [2].
Among pediatric patients with mononucleosis, up to 50% will present with an enlarged spleen [3]. According to a study conducted by Marco et al., splenomegaly is most accurately diagnosed via ultrasonic measurements of spleen volume [4]. As demonstrated by this case study, ultrasound was a precise and cost-effective method to diagnose splenic enlargement in a patient with mononucleosis.
Case presentation
Mr. F was a 16-year-old previously healthy male with a 4-day history of sore throat. He had a dry cough and nausea, but denied fever, voice changes, or sick contacts.
On initial presentation to the Emergency Department (ED), the patient's vital signs were as follows: pulse 98 beats per minute (bpm), respiratory rate 18 cycles per minute (cpm), temperature 36.7°C, and blood pressure 133/74 mmHg. Physical exam revealed bilateral tonsillar exudate and swelling (R > L). He had tender cervical lymph nodes. The uvula was midline. No palatal petechiae were noted. No splenomegaly was appreciated on physical exam.
Based on presenting symptoms, the patient received two points on the Centor criteria scale (+tonsillar exudate, +cervical adenopathy) [5]. He therefore received a rapid strep test and a throat culture at his initial visit to the ED. Both tests were negative and the patient was discharged.
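A minimal sketch of the classic four-item Centor score, consistent with the two points assigned above (exudate and adenopathy present, no fever, cough present), is shown below; the function is illustrative, not drawn from a clinical library.

```python
def centor_score(tonsillar_exudate, tender_cervical_nodes, fever_over_38, no_cough):
    """Classic Centor criteria: one point per positive finding (0-4)."""
    return sum(map(bool, (tonsillar_exudate, tender_cervical_nodes,
                          fever_over_38, no_cough)))

# The patient above: exudate and adenopathy present, afebrile, cough present.
print(f"Centor score = {centor_score(True, True, False, False)}")  # -> 2
```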
At a follow-up visit to the ED 2 days later, the patient complained of progressively worsening throat pain, which peaked at 10/10 while swallowing. The patient could only tolerate a liquid diet and had significantly decreased his oral intake. He also reported nausea and an intermittent dry cough, and denied chest pain, shortness of breath, vomiting, diarrhea, or abdominal pain.
On second presentation to the ED, the patient was now tachycardic (112 bpm) with blood pressure reduction to 111/64 mmHg. He was afebrile. Physical exam was largely unchanged from previous. Abdomen was soft and nontender. Again, no splenomegaly was appreciated on exam.
Given the negative strep throat culture from the patient's prior visit, mononucleosis was high on the differential. Since splenic enlargement was not detected on physical exam, ultrasound imaging was performed to rule out splenomegaly.
A P21 transducer in the abdominal examination mode was used for measurement. The patient was placed in a supine position, and the probe was placed posteriorly below the 12th rib on the patient's left side and angled anteriorly. The spleen measured 17.8 cm in axial length (normal: 11 to 13 cm).
Based on the patient's significant splenomegaly and 4-day history of sore throat, he was given a presumptive diagnosis of mononucleosis. He was treated symptomatically with intravenous fluids (IVF) and anti-nausea and pain medications; after IVF, the patient's heart rate decreased to 98 bpm. He was advised to avoid contact sports for the next 3 to 4 weeks and was subsequently discharged with no further testing.
Conclusion
Sore throat is a common chief complaint. However, the presence of palatal petechiae, splenomegaly, and posterior cervical adenopathy is highly suggestive of infectious mononucleosis [1]. As seen in the case above, both physical examinations the patient received during his two ED visits were negative for splenomegaly. However, in cases where high suspicion for splenomegaly exists, further imaging should be obtained to rule it out. A recent study conducted by UCSD has shown wide variability in the ability to appreciate an enlarged spleen by physical exam; this finding was not directly correlated with the level of clinical experience [2]. In this scenario, the splenomegaly was significant (17.8 cm), and its lack of detection on physical exam skewed the patient's clinical management.
The delayed diagnosis of mononucleosis had financial repercussions as well. On average, Medicare charges for a single ED visit for sore throat or an upper respiratory tract infection are about $1,101 [4]. At his initial visit, this patient received a throat culture ($11.84) [6] and a strep A Ag test ($16.49) [6]. At his second ED visit, the patient had abdominal real-time ultrasound imaging ($29.59) [5]. His entire diagnostic workup alongside his two ED visits totaled about $2,260.
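The arithmetic behind this total can be checked in a few lines; the line items are those quoted above.

```python
costs = {
    "ED visits (x2)": 2 * 1101.00,   # average Medicare charge per ED visit
    "throat culture": 11.84,
    "strep A Ag test": 16.49,
    "abdominal ultrasound": 29.59,
}
total = sum(costs.values())
print(f"Total = ${total:,.2f}")  # -> $2,259.92, i.e., about $2,260
```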
We hypothesize that an earlier ultrasound of the patient's spleen would have led to an earlier diagnosis of mononucleosis, and the patient could then have been told what to expect from the course of this disease. In UpToDate's 'Patient information: infectious mononucleosis (mono) in adults and adolescents (Beyond the Basics)', patients are reassured that the pain of mononucleosis can last from 2 to 4 weeks [7]. This reassurance, along with adequate pain control measures, may have prevented the second ED visit; the cost reduction in this case would have been close to $1,000.
Given that approximately 50% to 60% of adolescents with mononucleosis have splenomegaly [1], the diagnosis of splenomegaly is a crucial step in management. Mr. F was a 16-year-old athletic male, highly involved in contact sports. At his initial visit, no splenomegaly was detected, so the patient was not told to avoid contact sports at discharge. Though splenic rupture is a rare complication of splenomegaly, the risk increases significantly in patients such as Mr. F who are actively involved in contact sports [8].
It is important to note that ultrasound holds potential not only in diagnosis of mononucleosis but also in monitoring the disease. Though most cases of splenomegaly resolve within 3 to 4 weeks, some do not [9]. In patients actively involved in contact sports, ascertaining that the spleen has regressed to normal size is crucial prior to allowing the patient to return to the sport.
Consent
Phone consent was obtained from the patient for publication of this Case Report and any accompanying images. A document affirming this phone call is available for review by the Editor-in-Chief of the journal.
Competing interests
The authors declare that they have no competing interest.
Authors' contributions
SF performed the ultrasound scan, researched the cost-benefit analysis based on Medicare prices, researched the association of mononucleosis and splenomegaly, and drafted the manuscript. CF supervised and reconfirmed the ultrasound scan, participated in the cost-benefit analysis research, and helped draft the manuscript. Both authors read and approved the final manuscript.
Authors' information
SF is a fourth-year medical student at the University of California, Irvine. She has received extensive training through the course of her medical school career in ultrasound technique. She has also participated in research studies analyzing the cost effectiveness of ultrasound in medical care. JF is an Emergency Medicine Physician at the University of California, Irvine Medical Center with fellowship training in Emergency Ultrasound. He is a world-renowned expert in the field of ultrasound technology with over 40 publications discussing the use of ultrasound in various aspects of clinical management. JF is a professor of ultrasound at UC Irvine School of Medicine as well as the Dean of Academic Affairs. He is a member of the American Institute of Ultrasound in Medicine. | 2016-05-04T20:20:58.661Z | 2014-02-28T00:00:00.000 | {
"year": 2014,
"sha1": "8ffd0f60bd61ce54b6b58f32bf06a40d47dd85ed",
"oa_license": "CCBY",
"oa_url": "https://criticalultrasoundjournal.springeropen.com/track/pdf/10.1186/2036-7902-6-4",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d5dae83d6b1a52061ef75735f41b89896cfc9c85",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
250311043 | pes2o/s2orc | v3-fos-license | Human placental hematopoietic stem cell derived natural killer cells (CYNK-001) mediate protection against influenza a viral infection
ABSTRACT Influenza A virus (IAV) infections are associated with a high healthcare burden around the world, and there is an urgent need to develop more effective therapies. Natural killer (NK) cells have been shown to play a pivotal role in reducing IAV-induced pulmonary infections in preclinical models; however, little is known about the therapeutic potential of adoptively transferred NK cells for IAV infections. Here, we investigated the effects of CYNK-001, human placental hematopoietic stem cell derived NK cells that exhibit strong cytolytic activity against a range of malignant cells and express high levels of activating receptors, against IAV infections. In a severe IAV-induced acute lung injury model, mice treated with CYNK-001 showed milder body weight loss and clinical symptoms, which led to a delayed onset of mortality, thus demonstrating antiviral protection in vivo. Analysis of bronchoalveolar lavage fluid (BALF) revealed that CYNK-001 reduced proinflammatory cytokines and chemokines, highlighting CYNK-001's anti-inflammatory actions in virus-induced lung injury. Furthermore, CYNK-001-treated mice had altered immune responses to IAV, with reduced numbers of neutrophils in BALF yet increased numbers of CD8+ T cells in the BALF and lung compared to vehicle-treated mice. Our results demonstrate that CYNK-001 displays protective functions against IAV via its anti-inflammatory and immunomodulating activities, which leads to alleviation of disease burden and progression in a severe IAV-infection mouse model. The potential of adoptive NK therapy for IAV infections warrants clinical investigation.
Introduction
Influenza A virus (IAV) infections are associated with a high healthcare burden around the world. 1 Globally, IAV epidemics typically occur during the cold season in temperate regions, where low humidity and low ambient temperatures are suggested to prolong virus shedding and transmission, while in subtropical and tropical regions less clearly defined influenza seasons allow recurrent infections throughout the year. Overall, seasonal IAV affects up to 10% of the adult population and 20% of children annually and causes substantial morbidity. 2 Vaccination remains the most effective means to prevent and control IAV infections; however, annual vaccinations are limited in efficacy due to rapid antigenic evolution of the hemagglutinin (HA) glycoprotein. Influenza vaccine effectiveness in the 2018-2019 influenza season in the United States was 47% overall and 46% against IAV (H1N1). 3 Alternative therapies to control emerging IAV are urgently needed.
Natural killer (NK) cells are innate immune cells with an important role in the early host response against various pathogens. Multiple NK cell receptors are involved in the recognition of infected cells, including NKG2D, DNAM-1 and the natural cytotoxicity receptors (NCRs) NKp30, NKp44 and NKp46, which bind common stress ligands or pathogen-associated molecules. 4 Using these immune receptors, NK cells are able to recognize and spontaneously kill 'stressed' cells, such as virally infected or tumor cells, without prior sensitization. In addition, advantages of NK cell-based therapies over T cell-based therapies include a better safety profile, such as absent or minimal cytokine release syndrome (CRS) and graft-versus-host disease (GVHD), the engagement of various mechanisms for stimulating cytotoxic function, and high feasibility for 'off-the-shelf' manufacturing. 5 Abnormal cells trigger NK cell activation either through the loss of self-identifying molecules, such as major histocompatibility complex (MHC) class I, that bind to inhibitory receptors on the NK cells, or by upregulating the expression of ligands for activating receptors on NK cells that can overcome the inhibitory signals. Various viral glycoproteins expressed by enveloped viruses are specifically recognized by NCRs. 6,7 Activated NK cells release cytokines with potent antiviral activity, such as interferon gamma (IFNγ) and tumor necrosis factor alpha (TNFα), as well as cytotoxic granules containing perforin and granzyme B. 8 Recent studies have demonstrated that there is robust activation of NK cells during viral infection, and that the depletion of NK cells aggravates viral pathogenesis. [9][10][11][12][13] Many mouse models of influenza infection also implicate a protective role for NK cells during infection. [11][12][13][14][15] However, in high-dose severe infection models, murine NK cells appeared to play a detrimental role, contributing to influenza pathogenesis. 16,17 In humans, it has been reported that the number of NK cells in peripheral blood decreased upon seasonal IAV infection, 18 and NK cell lymphopenia in the peripheral blood and lung was associated with disease severity of the 2009 pandemic H1N1 infection. 19,20 These findings not only suggest that further studies are required to fully understand the roles of NK cells in IAV infection, but also highlight that adoptive NK cell therapy may provide clinical benefits against IAV infection.
Since little is known about the therapeutic potential of adoptively transferred human NK cells against IAV infections, here we investigated the antiviral function of adoptively transferred CYNK-001, a culture-expanded NK cell population derived from human placental hematopoietic stem cells, and its effects on host immune responses to IAV infection. CYNK-001 is currently being studied in four ongoing clinical trials: a Phase I study in patients with relapsed and/or refractory AML (NCT04310592), a Phase I/II study for multiple myeloma (MM) (NCT04309084), a Phase I study for glioblastoma multiforme (GBM) (NCT04489420) and a Phase I/II study for coronavirus disease 2019 (COVID-19) (NCT04365101). In this study, using an immunocompetent mouse model of severe IAV infection, we found that adoptive transfer of CYNK-001 displays protective functions against IAV by suppressing inflammation and modulating immunity in disease-injured lungs without causing host immunotoxicity.
CYNK-001 cell culture
CYNK-001 was derived by expanding and differentiating placental hematopoietic stem/progenitor CD34+ cells in a 35-day culture process. Placental CD34+ cells were cultivated in the presence of various human cytokines for 35 days to generate CYNK-001 under current good manufacturing practice standards, followed by release testing. A cytokine cocktail containing IL-2, IL-15, SCF and IL-7 was used for placental CD34+ cell expansion and differentiation as described before. 21 Cells were harvested following the 35-day expansion and differentiation process and then frozen as a drug substance.
Murine influenza model
The animal study was conducted by the contract research organization Pharmaseed Ltd and was approved by the institutional IACUC and safety committees. Female Balb/c mice, 6-8 weeks old, were obtained from Envigo RMS (Israel) Ltd and maintained under pathogen-free conditions on a 12-hour light cycle. On day 0, mice were anesthetized using a Ketamine/Xylazine injection (90/10 mg/kg, SC), and a total volume of 50 µL of IAV PR8 suspension containing 2500 PFU was administered intranasally. Body weight was recorded daily, and mice were euthanized either on day 7 or when the euthanasia criterion of more than 20% weight loss from day 0 was met. The animals were observed for clinical symptoms daily, with special attention to piloerection, hunched posture and hindlimb paralysis. Scoring of disease progression was performed according to Table 1.
CYNK-001 preparation and administration
On day 1 and day 3, cryopreserved CYNK-001 cells were thawed in a 37°C water bath. After centrifugation, CYNK-001 cells were resuspended in PBS. Cell viability was determined by trypan blue, with an average viability of approximately 95%. PBS or 1 × 10 7 CYNK-001 cells were intravenously administered into the tail vein at a dose volume of 200 µL on day 1 and day 3. A total of 26 mice were randomly assigned to the PBS control (n = 13) and CYNK-001 (n = 13) groups. Five mice from each group were euthanized on day 3, 4 hours after the second dose of PBS or CYNK-001.
Measurement of cytokines in bronchoalveolar lavage fluid (BALF)
BALF was collected from PBS- and CYNK-001-treated mice. Lungs were flushed multiple times with a total volume of 1.3 mL of ice-cold sterile Hanks' Balanced Salt Solution (HBSS) with 3 mM EDTA, pH 7.2. The obtained fluid was centrifuged at 800 g for 10 min at 4 °C. The cells were cryopreserved in CryoStor 10 cryopreservation media (Sigma, C2874) and the supernatant was stored at −80 °C. Mouse cytokines in BALF were measured using a Milliplex MAP Cytokine/Chemokine Magnetic Bead Panel from Millipore Sigma (MYCTOMAG-70K) and analyzed using Belysa curve-fitting software. The presence of human cytokines was evaluated using a Milliplex MAP Human CD8+ T Cell Magnetic Bead Panel from Millipore Sigma (HCD8MAG-15K).
Cell profiles were acquired on a BD LSR Fortessa X20, with beads used for compensation of spectral overlap. Fluorescence Minus One (FMO) controls were used for gating. Immune cell populations were identified based on previously described cell surface markers 23 with slight modifications (Table 2). Data were analyzed using FlowJo (V10, TreeStar).
Immunohistochemistry
Formaldehyde-fixed, paraffin-embedded lung samples were sectioned at 4-micron thickness and placed on slides for immunohistochemical evaluation. Sectioned lung tissues were stained for the following murine cell markers: CD3, CD4, CD8, and CD68. Alkaline phosphatase (AP)- and 3,3′-diaminobenzidine (DAB)-based methods were used for single and/or dual staining. The number of positive cells in separate fields was quantified.
Statistical analysis
GraphPad Prism 9.3.1 (GraphPad Software, Inc.) was used to perform one-way ANOVA and paired and unpaired t tests. Data are expressed as mean ± SEM. Statistical significance is indicated as *, p < .05; **, p < .01; and ***, p < .001.
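As an illustration of this analysis convention, the sketch below runs an unpaired two-sample t test on simulated data (the group means and spreads are assumptions loosely based on values reported later, not the raw data) and maps the p-value to the significance labels used here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pbs = rng.normal(19.6, 1.4, size=8)    # e.g., neutrophil % in BALF, PBS group
cynk = rng.normal(12.4, 2.1, size=8)   # CYNK-001 group (assumed spread)

t, p = stats.ttest_ind(pbs, cynk)      # unpaired two-sample t test
stars = "***" if p < .001 else "**" if p < .01 else "*" if p < .05 else "ns"
print(f"t = {t:.2f}, p = {p:.4g} ({stars})")
```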
In vitro characterization of CYNK-001 cells
CYNK-001 is a culture-expanded NK cell population derived from human placental hematopoietic stem cells with the nominal NK surface phenotype CD3-CD14-CD19-CD56+ (Figure 1a). The activating receptors of NK cells, such as NKp46, NKp44, and NKp30, have been implicated in functionally arming NK cells following influenza virus infection via binding with influenza virus hemagglutinin (HA), 7 and CYNK-001 exhibited strong cytolytic activity against a wide range of tumor cell lines. 24 Therefore, we further validated high expression levels of the activating receptors NKp30, NKp44, NKp46, DNAM-1 and NKG2D as a basal phenotype of CYNK-001 NK cells (Figure 1b and 1c). Taken together, these data suggest that CYNK-001 cells may recognize virally infected cells through the binding of IAV HA by NK cell receptors such as NKp44 and exert cytotoxic elimination of the source of infection. Further studies are required to explore this hypothesis.
CYNK-001 confers in vivo resistance to severe IAV infection
To evaluate the in vivo effects of CYNK-001 on acute and severe lung injury and inflammation, we chose a high-dose IAV-infection mouse model, in which CYNK-001 (1 × 10 7 cells/mouse) was intravenously infused at 1 and 3 days post infection (dpi), as described in Figure 2a. As early as 3 dpi, infection caused rapid weight loss (Figure 2b) with increased clinical symptoms characterized by hunching and ruffled fur (Figure 2c and Table 1). Notably, mice that received CYNK-001 cells had reduced weight loss and milder clinical symptoms compared to PBS-treated mice. Consistent with these findings, mice treated with CYNK-001 showed a delayed onset of mortality: at 4 dpi, all mice in the CYNK-001-treated group were alive, whereas a 37.5% mortality rate occurred in the PBS-treated group (Figure 2d), although the difference was not statistically significant (p = .0547, χ² test). These results indicate CYNK-001-mediated resistance to the disease progression caused by severe IAV infection in mice.
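The reported p-value can be reproduced under a stated assumption: with 13 mice enrolled per group and 5 per group euthanized on day 3 for sampling (see Methods), 8 evaluable mice remain per group, so 37.5% mortality corresponds to 3/8 deaths versus 0/8. A chi-square test on this 2x2 table, without continuity correction, gives p ≈ 0.0547.

```python
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[3, 5],    # PBS: dead, alive (assumed 8 evaluable mice)
                  [0, 8]])   # CYNK-001: dead, alive
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.3f}, p = {p:.4f}")  # -> chi2 = 3.692, p = 0.0547
```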
The presence of CYNK-001 in Balb/c lungs was only detectable at 4 hours post i.v. infusion by digital PCR with an hTERT primer/probe specific for human gDNA (Fig. S2), while human cytokines secreted by CYNK-001 cells were undetectable in the BALF and plasma samples at 3 and 6 dpi (data not shown).
CYNK-001 reduces lung inflammation induced by IAV infection
Inflammation is one of the essential contributors to the severity of IAV-induced disease. [25][26][27] Therefore, we next examined cytokines and chemokines in the BALF as indications of CYNK-001's impact on lung inflammation (Figure 3a). At 3 dpi, no effect of CYNK-001 on murine cytokines and chemokines was observed (Figure 3b). With progression of the disease, the levels of proinflammatory murine cytokines and chemokines increased sharply by 6 dpi: for example, the IFN-γ level rose from 76 ± 22 pg/ml at 3 dpi to 5633 ± 255 pg/ml at 6 dpi in PBS-treated, IAV-infected mice. Strikingly, CYNK-001-treated mice produced a significantly lower level of IFN-γ (1065 ± 367 pg/ml) at 6 dpi, 5-fold less than the PBS control group. Consistently, CYNK-001 treatment also reduced other proinflammatory cytokines and chemokines, such as MCP-1 (p < .05) and IL-6 (p = .056). In contrast, levels of TNF-α in BALF remained unchanged upon CYNK-001 treatment.
To further elucidate the effects of CYNK-001 on host immune responses, we profiled immune cell populations in BALF at 3 and 6 dpi. First, unchanged murine CD45+ cell populations indicated that CYNK-001 treatment did not alter the total number of immune cell infiltrates in the BALF (Fig. S4). Among murine CD45+ cells, host neutrophils and macrophages were most abundant at 3 dpi, whereas T cells became the largest immune cell population in the lung at 6 dpi (Figure 3c), suggesting a shift from innate to adaptive immune responses. At 3 dpi, CYNK-001-treated mice had 60.6 ± 2.2% neutrophils, significantly higher than the 49.2 ± 2.9% in PBS-treated mice. By 6 dpi, however, significantly fewer neutrophils were observed in CYNK-001-treated mice (12.4 ± 2.1%) compared to the PBS control (19.6 ± 1.4%) (Figure 3d), suggesting a larger reduction of neutrophils in CYNK-001-treated mice along with disease progression. In contrast, CYNK-001 treatment resulted in a significant increase in CD8+ T cells at 6 dpi, suggesting an effect on the adaptive immune response. As a result, the neutrophil-to-CD8+ T cell ratio (N8R), a diagnostic and prognostic marker for severe COVID-19 respiratory disease, [28][29][30] was significantly lower in CYNK-001-treated mice compared to the PBS control (Figure 3e). We also noticed that mouse endogenous NK cells were significantly reduced by CYNK-001 treatment at 3 and 6 dpi (Figure 3d). No significant differences were detected in the numbers of CD4+ T cells and total macrophages, alveolar as well as interstitial (Fig. S5), at 3 and 6 dpi. Collectively, these data demonstrate CYNK-001-altered immune responses to viral infection in immunocompetent mice, which overall alleviated the inflammation induced by IAV challenge.
CYNK-001 alters murine lung immune cell populations
Compared to uninfected lung tissue, histopathological analysis of the IAV-infected lung revealed necrosis and ulceration of the bronchial lining epithelium, the presence of inflammatory exudate in the bronchial lumen, and post-necrotic regenerated epithelium in the bronchi (Fig. S6). No significant difference in microscopic lesions was observed between the groups treated with CYNK-001 and PBS (Table S1). Nevertheless, immunohistochemical analysis of the lungs confirmed a significantly increased infiltration of CD3+/CD8+ T cells, co-stained by the two markers, in CYNK-001-treated lungs compared to PBS treatment, corroborating the findings in the BALF, while the amounts of CD4+/CD8+ T cells were comparable between the two groups (Figure 4a and 4b). Interestingly, murine CD68+ lung macrophages were also significantly increased upon CYNK-001 infusion. These observations suggest that the alteration of murine immune cell profiles in lung tissue by CYNK-001 treatment may contribute to its anti-inflammatory effects after IAV infection.
Discussion
NK cells are critical for innate regulation of the acute phase of IAV infection, through cytolytic activity and production of cytokines to directly eliminate virus-infected cells 6,11,31 and through regulation of adaptive immunity. 32 In humans, most studies investigating NK cell responses to IAV infection have analyzed peripheral blood and lung NK cells from IAV patients or healthy donors in in vitro infection models. [33][34][35] Here, we report, for the first time to our knowledge, that adoptive transfer of human NK cells derived from placental hematopoietic stem cells provides protection against severe IAV infection in mice, presumably via CYNK-001-mediated alleviation of lung inflammation and immunomodulation. It has been reported that NK cell cytolytic activity against influenza virus is triggered by the recognition of viral HA and stress ligands by the NKp44 and NKp46 receptors. 6,31,36 In the current study, the direct antiviral activity of CYNK-001 against IAV infection was not addressed. However, the high expression levels of activating receptors on CYNK-001 cells, such as NKp44, which is unique compared to peripheral blood NK cells with low or undetectable NKp44 expression, 24,37-39 suggest that our cell product may exhibit a cytotoxic response against virus-infected cells through binding of activating receptors to viral HA. This hypothesis will be evaluated in further investigations.
The NK cell response to IAV has been studied largely in mice, in which protective or detrimental effects were reported depending on the influenza strain, dose, and genetic background of the mice. [11][12][13][15][16][17] The mechanisms underlying the different roles that NK cells play in response to IAV infection remain to be elucidated. It is speculated that the secretion of cytokines and chemokines from activated NK cells may be a double-edged sword that can promote an antiviral microenvironment but can also induce an intense inflammatory response. The primary mechanisms of IAV pathophysiology include virus replication-induced damage to the respiratory epithelium, the immune responses recruited to handle the spreading virus, and subsequent inflammation-induced injury. 40 Therefore, one of the keys to combating influenza virus infection is to suppress inflammation without inducing an excessive immune response.
To better understand the effect of CYNK-001 on IAV-induced lung inflammation, an acute severe IAV infection model in Balb/c mice was used in this study. A repeat dosing regimen of CYNK-001 at 1 and 3 dpi was applied to overcome the short persistence of CYNK-001 in immunocompetent mice. As observed, vehicle-treated mice started to die as early as 4 dpi, leaving a limited time window for treatment; therefore, two injections of CYNK-001 within 3 days post infection appeared to be optimal in both dosing frequency and duration in this model. CYNK-001 treatment reduced body weight loss and clinical symptoms and delayed the onset of mortality, demonstrating its protective functions in vivo. Since highly dynamic changes in lung viral load and immune cell influx were recently described in a murine IAV infection model, 41 time-course examinations of lung viral load will be helpful to better understand CYNK-001's in vivo antiviral activity.
Immune cell infiltration is crucial for control of virus replication and resolution of infection; however, this response often contributes to pathogenesis and morbidity. 40 In the case of highly pathogenic IAVs, such as the 1918 pandemic H1N1 strain and the recent avian H5N1 and H7N9 strains, an excessive inflammatory response causes irreparable damage to the lungs, resulting in high mortality rates. [25][26][27] To better understand the mechanism underlying CYNK-001-mediated protection against IAV infection, we further investigated the production of cytokines and chemokines as well as immune cell infiltration in the BALF and the lungs. Mice treated with CYNK-001 had decreased proinflammatory cytokines and chemokines compared to PBS-treated mice, of which the reduction in IFNγ was the most dramatic and significant. In fact, IFNγ-/- mice infected with the H1N1 pandemic virus A/California/04/2009 had decreased immunopathology and enhanced survival, 42 while a lack of IFNγ signaling in IFNγR-/- mice infected with the H1N1 virus A/WSN/33 also resulted in decreased virus replication and reduced disease symptoms. 43 In addition, IFNγ signaling induces the production of other proinflammatory cytokines and chemokines, including TNFα and MCP-1 in macrophages, 43,44 further suggesting IFNγ as a major driver of inflammatory responses. 43,[45][46][47] It has been reported that MCP-1 recruits monocytes to the lung tissue, further worsening the immunopathology. 48 In line with the IFNγ reduction, the MCP-1 level was also significantly decreased in CYNK-001-treated mice. Taken together, the significantly reduced proinflammatory cytokines and chemokines, such as IFNγ and MCP-1, likely contributed to CYNK-001-mediated anti-inflammatory protection.
A high neutrophil-to-lymphocyte ratio (NLR), as well as a high N8R, has been reported as a useful prognostic biomarker correlated with severe disease and fatality in patients infected with IAV 49,50 and the SARS-CoV-2 coronavirus. 28,29 Here, we found that CYNK-001-treated mice had a significantly lower N8R at 6 dpi, with reduced neutrophils and increased CD8+ T cells in BALF compared to the PBS control, and an increased infiltration of CD3+/CD8+ T cells was also observed in lung tissue after CYNK-001 infusion. These findings suggest a dynamic impact of CYNK-001 treatment on both innate and adaptive immune responses.
With its focus on acute lung injury and inflammation induced by IAV infection, the Balb/c mouse model used in our study was limited by its short in-life duration due to severe symptoms post infection, which prevented monitoring of the chronic impacts of CYNK-001 treatment. In a recent chronic model of mouse-adapted IAV infection, disease progression and follow-up symptoms, such as IAV-associated neuroinflammation, were examined up to 120 dpi. 51 In addition, the short persistence of CYNK-001 cells when infused in a xenogeneic setting into fully immunocompetent Balb/c mice was another limitation. A human CD34+ hematopoietic stem cell-engrafted NSG mouse model with transgenic human cytokine expression 52 may address this issue and thus better serve the purpose of analyzing the long-term therapeutic effects of CYNK-001 cells in the future.
In conclusion, we demonstrate that adoptive transfer of CYNK-001 reduces acute lung injury by suppression of inflammation and immunomodulation in a severe IAV-infection model. Our results suggest that CYNK-001 may offer a novel approach to the treatment of viral infections and provide a cohesive scientific rationale for the ongoing Phase I clinical study in COVID-19 patients. | 2022-07-07T05:06:34.323Z | 2022-04-11T00:00:00.000 | {
"year": 2022,
"sha1": "ceec4d11e49e7fd38398e71efd4d352ca2070a2e",
"oa_license": "CCBYNCND",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/21645515.2022.2055945?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "ceec4d11e49e7fd38398e71efd4d352ca2070a2e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": []
} |
238741730 | pes2o/s2orc | v3-fos-license | Mebendazole Mediates Proteasomal Degradation of GLI Transcription Factors in Acute Myeloid Leukemia
The prognosis of elderly AML patients remains poor due to chemotherapy resistance. The Hedgehog (HH) pathway is important for leukemic transformation because of the aberrant activation of GLI transcription factors. Mebendazole (MBZ) is a well-tolerated anthelmintic that exhibits strong antitumor effects. Herein, we show that MBZ induced strong, dose-dependent anti-leukemic effects on AML cells, including the sensitization of AML cells to chemotherapy with cytarabine. MBZ strongly reduced intracellular protein levels of the GLI1/GLI2 transcription factors. Consequently, MBZ reduced GLI promoter activity, as observed in luciferase-based reporter assays in AML cell lines. Further analysis revealed that MBZ mediates its anti-leukemic effects by promoting the proteasomal degradation of GLI transcription factors via inhibition of HSP70/90 chaperone activity. Extensive molecular dynamics simulations were performed on the MBZ-HSP90 complex, showing a stable binding interaction at the ATP binding site. Importantly, two patients with refractory AML were treated with MBZ in an off-label setting; MBZ effectively reduced GLI signaling activity in a modified plasma inhibitory assay, resulting in a decrease in peripheral blood blast counts in one patient. Our data show that MBZ is an effective GLI inhibitor that should be evaluated in combination with conventional chemotherapy in the clinical setting.
Introduction
The Hedgehog (HH) signaling pathway is a highly conserved signaling cascade that plays a critical role during embryogenesis and is strongly involved in many basic cellular functions, including cell differentiation, proliferation and stem cell maintenance [1]. The main receptor for HH ligands is Patched (Ptch), a 12-pass transmembrane protein. Upon ligand binding, Ptch releases SMO, a seven-transmembrane-domain G-protein-coupled receptor-like protein, which then activates the GLI transcription factors, the main effectors of the HH signaling pathway. In addition to this canonical HH pathway, numerous signaling cascades result in non-canonical activation of the GLI transcription factors, including FLT3/STAT5, RTK/RAF/MEK/ERK and PI3K/AKT/mTOR [2,3].
It is well established that aberrant activation of HH signaling is associated with a wide variety of neoplasms [4]. Activated GLI transcription factors drive a transcriptional program that promotes survival, growth, migration and stemness [2,4,5], and GLI1 expression is associated with a poor prognosis in a wide variety of cancers [6,7]. Moreover, GLI transcription factors play a fundamental role in the maintenance of leukemia-initiating cells, which are responsible for therapy failure and tumor relapse due to their chemotherapy resistance [2]. In previous work, we showed that high GLI1 and GLI2 expression represents a negative prognostic marker in AML, and that targeted inhibition of GLI1 and GLI2 mediates anti-leukemic effects in vitro and in vivo [7].
Current treatment strategies aim to inhibit GLI signaling by targeting SMO in cancer cells. SMO inhibitors have been tested in AML, where glasdegib is approved in combination with low-dose cytarabine [8]. However, due to the frequent non-canonical activation of the HH pathway, direct inhibition of the GLI transcription factors may represent a better choice.
For decades, the synthetic benzimidazole mebendazole (MBZ) has been an approved anthelmintic drug, effective against a broad spectrum of intestinal helminthiases with a favorable toxicity profile. Indications include short-term, low-dose treatments as well as high-dose, long-term treatments (e.g., 50 mg/kg body weight for several months) [9,10]. Besides its anthelmintic activity, MBZ exhibits strong anti-tumor effects in different cancer entities [9]. MBZ's mechanisms of action are manifold, including anti-angiogenic properties, inhibition of microtubule polymerization, and inhibition of signaling cascades (e.g., BRAF, MEK) [9]. Walf-Vorderwülbecke et al. proposed that MBZ induces c-MYB degradation by inhibiting protein folding through blockade of HSP70 in AML [11]. Herein, we show that MBZ mediates strong anti-leukemic effects by promoting the degradation of GLI transcription factors through inhibition of HSP70/90 chaperone activity, and that MBZ sensitizes AML cells to chemotherapy. Furthermore, two patients with refractory AML were treated with MBZ in an off-label setting, and clinically achievable MBZ plasma concentrations effectively reduced GLI signaling activity in a modified plasma inhibitory assay. Our data show that MBZ is an effective GLI inhibitor that should be evaluated in combination with conventional chemotherapy in the clinical setting.
MBZ Inhibits SMO Independent Non-Canonical GLI Signaling Predominant in AML
Since the discovery of GLI1 in human glioma cells in 1987 [12], the role of the three family members GLI1, GLI2 and GLI3 in a variety of cancers has become increasingly apparent [4], with GLI1 expression specifically identified as a negative prognostic factor in numerous cancers [6,7]. Previously, we demonstrated that treatment of GLI reporter AML cell lines with the SMO inhibitor cyclopamine did not lead to a reduction in GLI promoter activity [3]. We hypothesized that this might be due to the predominant expression of the GLI2∆N isoform in AML cells. GLI2∆N represents a constitutively active GLI2 isoform that lacks the amino-terminal repressor domain [13] and can induce target genes severalfold more strongly than full-length GLI2 (GLI2FL) [14]. Expression of GLI2∆N results in a constitutively active GLI signaling cascade even in the presence of SMO inhibitors, providing an important mechanism of resistance to SMO inhibitors in cancer [15]. Consequently, we analyzed the expression of GLI2∆N and GLI2FL in samples from 47 newly diagnosed AML patients by qPCR. GLI2 expression was detected in 16 of the 47 samples (34%), and GLI2∆N mRNA expression was on average 29.5-fold higher than GLI2FL mRNA expression (range, 0.8- to 111.5-fold; Figure 1A). Moreover, protein levels of GLI2∆N were considerably higher than those of GLI2FL in the AML cell lines used herein, as determined by western blot (Figures 2C and 3B, Supplementary Materials Figure S1I). This indicates that GLI2∆N, rather than GLI2FL, is the predominantly expressed isoform in AML. The anthelmintic MBZ has been shown to exhibit strong anti-tumor effects in preclinical studies [9]. In order to investigate whether MBZ inhibits the GLI cascade, we treated seven AML GLI reporter cell lines with increasing MBZ concentrations for 48 h. Endogenous expression of GLI transcription factors has been detected in leukemic blasts from a large proportion of AML patients and cell lines [7,16] and was also found in all AML cell lines used herein. Treatment with MBZ led to a strong, dose-dependent reduction in GLI reporter activity in all analyzed AML reporter cell lines (Figure 1C). The MBZ concentration required to inhibit GLI promoter activity was within clinically achievable levels, with IC50 values ranging from 32 ± 20 to 267 ± 71 nM after 48 h in the AML cell lines tested (Figure 1C). In contrast to MBZ, the active metabolite of albendazole (albendazole sulfoxide, ABZ-S), a closely related benzimidazole-derived anthelmintic agent [17], had no effect on GLI reporter activity (Supplementary Materials, Figure S2).
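IC50 values of this kind are typically obtained by fitting a sigmoidal dose-response model to the reporter readings; the sketch below fits a four-parameter logistic curve to hypothetical data (the doses and signals are illustrative, not the paper's measurements).

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(dose, top, bottom, ic50, slope):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (dose / ic50) ** slope)

# Hypothetical GLI-reporter readings (% of untreated control) vs. MBZ (nM).
dose = np.array([10, 30, 100, 300, 1000], dtype=float)
signal = np.array([95, 78, 42, 18, 8], dtype=float)

popt, _ = curve_fit(hill, dose, signal, p0=[100, 0, 100, 1])
print(f"estimated IC50 = {popt[2]:.0f} nM")
```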
MBZ Promotes Proteasomal Degradation of GLI
To analyze the effects of MBZ on GLI expression, the AML cell lines MV4-11, MOLM-13, THP-1 and OCI-AML3 were treated with MBZ at increasing concentrations from 100 nM to 500 nM, followed by western blot and RT-qPCR analyses. We found that GLI1 and GLI2 protein levels were clearly reduced within 24 h of MBZ exposure (Figure 2A-C), whereas GLI1 and GLI2 mRNA levels did not decrease (Supplementary Materials Figure S1A-H). Incubation of AML cells with MBZ for 48 h strongly reduced GLI2∆N protein levels and could thus overcome SMO inhibitor resistance (Supplementary Materials Figure S1I).
We hypothesized that MBZ decreases GLI protein levels by promoting their proteasomal degradation. Therefore, we evaluated the influence of the 26S proteasome inhibitor bortezomib (BTZ) on GLI protein levels and signaling activity after MBZ treatment. The AML GLI reporter cell lines THP-1 and OCI-AML3 were treated with 5 or 10 nM BTZ for 24 h. As anticipated, MBZ inhibited GLI signaling activity in a dose-dependent manner; however, BTZ fully reversed the MBZ-mediated inhibition of GLI promoter activity (Figure 2D,E). Consistent with these results, 10 nM BTZ abolished MBZ's effect on GLI1 and GLI2 protein levels in THP-1 and OCI-AML3 cells in western blot analysis (Figure 2A-C). Taken together, these results strongly suggest that MBZ mediates proteasomal degradation of GLI1 and GLI2.
MBZ Promotes Degradation of GLI Transcription Factors via Inhibition of HSP70/90-Chaperone Activity
The heat shock proteins 70 (HSP70) and 90 (HSP90) cooperate tightly in the stabilization of a wide spectrum of client substrates, including transcription factors. Inhibition of either HSP70 or HSP90 disrupts this chaperone machinery and leaves a client protein prone to misfolding, resulting in its ubiquitination and proteasomal degradation. Walf-Vorderwülbecke et al. reported that MBZ promotes proteasomal degradation of the transcription factor c-MYB by interaction with HSP70 [11]; however, GLI protein stability had not previously been associated with heat shock proteins. Inhibition of either HSP70 or HSP90 with the small-molecule inhibitors VER-155008 and PU-H71, respectively, resulted in a significant reduction in GLI1 and GLI2 protein levels in western blot analysis, and combined inhibition of both HSP70 and HSP90 increased the effect considerably (Figure 3A,B). In accordance with the effects mediated by MBZ, GLI1 and GLI2 mRNA levels did not decrease (Supplementary Materials Figure S3). We also demonstrated by western blot analysis that treatment with 500 nM MBZ for 24 h did not alter HSP70/HSP90 protein expression in MV4-11, MOLM-13, THP-1 and OCI-AML3 cells (Supplementary Materials Figure S4). However, HSF-1, the major transcription factor of the heat shock response genes, was heavily phosphorylated on serine 326 in MBZ-treated MV4-11 and THP-1 cells, reflecting an active state of HSF-1 (Figure 3C). Interestingly, total HSF-1 protein levels were reduced by MBZ treatment compared to control (Figure 3D).
To further investigate whether MBZ directly inhibits the enzymatic activity of the HSP chaperone machinery in AML cells, we generated a MOLM-13 cell line constitutively expressing firefly luciferase (MOLM-13 luc+). Following heat shock, refolding of heat-denatured firefly luciferase depends on the cooperative chaperone activity of both HSP70 and HSP90. We treated MOLM-13 luc+ cells with 10 µM MBZ, 25 µM VER-155008, 1 µM PU-H71 or DMSO as a solvent control. The luciferase signal recovered in MOLM-13 luc+ cells without inhibitor following heat shock, but incubation of AML cells with MBZ or the specific HSP inhibitors significantly impaired recovery of the signal (Figure 3E,F), suggesting direct inhibition of HSP70/HSP90-mediated luciferase refolding.
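Refolding in such assays is commonly quantified as the recovered luminescence relative to the pre-heat-shock baseline; a minimal sketch of this normalization with hypothetical readings follows.

```python
import numpy as np

def percent_refolding(post_recovery, pre_heat_shock):
    """Luciferase refolding as % of the pre-heat-shock signal."""
    return 100.0 * np.asarray(post_recovery) / np.asarray(pre_heat_shock)

# Hypothetical luminescence readings (arbitrary units), 3 wells per condition.
baseline = [1.00, 1.05, 0.98]
dmso     = [0.62, 0.58, 0.65]   # solvent control recovers most activity
mbz      = [0.11, 0.09, 0.13]   # chaperone inhibition blocks refolding

for name, vals in [("DMSO", dmso), ("MBZ 10 uM", mbz)]:
    rec = percent_refolding(vals, baseline)
    print(f"{name}: {rec.mean():.0f} +/- {rec.std(ddof=1):.0f} % refolded")
```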
In Silico Modeling of MBZ Bound to HSP90
Six molecular models were created to explore how MBZ might bind to HSP90, based on the inhibition data shown above. Using three different HSP90 protein crystal structures to diversify the starting coordinates, MBZ was placed into their ATP binding sites. MD simulations were subsequently performed under physiological conditions, allowing MBZ to sample different interactions within the binding pocket (Figure 4). A total of nine poses with distinct orientations and significant populations were identified (Figure 4). Throughout all simulations, MBZ remained in the ATP binding site, forming nonbonded interactions with 10-16 amino acids, with residues Asn51, Ala55, Ile96, Gly97, Met98, Asn106, Leu107, Phe138, Thr184 and Val186 most frequently involved. A free energy analysis revealed the two poses most likely to be observed experimentally. An extensive analysis of the nine binding poses can be found in reference [18] (manuscript in preparation), including an examination of water involvement, of how MBZ binding affects amino acid motion, and a thorough quantum mechanical study of the possible conformations that MBZ can adopt.
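Per-residue contact frequencies of this kind can be extracted from an MD trajectory with a short analysis script. The sketch below uses MDAnalysis; the topology/trajectory file names and the ligand residue name ('MBZ') are assumptions, and the 4 A cutoff is an illustrative choice.

```python
import MDAnalysis as mda
from collections import Counter

# Hypothetical file names for one MBZ-HSP90 simulation.
u = mda.Universe("hsp90_mbz.psf", "hsp90_mbz.dcd")

# Protein residues with any atom within 4 A of the ligand, updated per frame.
shell = u.select_atoms("protein and around 4.0 resname MBZ", updating=True)

contacts = Counter()
for _ in u.trajectory:
    for res in shell.residues:
        contacts[f"{res.resname}{res.resid}"] += 1

n_frames = len(u.trajectory)
for res, n in contacts.most_common(10):
    print(f"{res}: in contact for {100 * n / n_frames:.0f}% of frames")
```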
Mebendazole and the GLI Inhibitor GANT-61 Exhibit Synergistic Anti-Leukemic Effects
We treated AML cell lines and primary AML samples with different MBZ concentrations, which resulted in dose-dependent effects on proliferation, colony formation and apoptosis (Figure 5A-D).
To evaluate the anti-leukemic activity of MBZ upon combined GLI inhibition, we also investigated its use together with the small-molecule GLI inhibitor GANT-61. We treated the AML cell lines MV4-11, MOLM-13, THP-1 and OCI-AML3 with combinations of MBZ and GANT-61 and analyzed cell proliferation and colony formation. In all cell lines tested, MBZ treatment alone already resulted in decreased proliferation and colony-forming capacity in a dose-dependent manner (Figure 5A-C). The combination of MBZ with GANT-61 synergistically increased MBZ's anti-proliferative effects on all three AML cell lines (Figure 6A), and therapeutic synergy between MBZ and GANT-61 was indicated by a combination index < 1, calculated using the Chou-Talalay method (Figure 6A). In colony formation assays, treatment with high MBZ concentrations, in particular, resulted in a significant reduction in colony numbers of MV4-11, MOLM-13 and THP-1 cells, and GLI inhibition by GANT-61 significantly increased the effect of MBZ on colony formation (Figure 6B). Moreover, inhibition of HH signaling using an shRNA targeting GLI1 sensitized THP-1 cells to the anti-proliferative effects of MBZ compared to control cells transduced with a scrambled shRNA (Figure 6C). Freshly isolated primary AML cells from 13 patients were investigated for anti-proliferative effects of MBZ alone and in combination with GANT-61. MBZ mediated a strong and significant inhibitory impact on primary AML cell growth (Figure 5A), which was further increased by combination with GANT-61 (Figure 6D). Additionally, we treated the GLI luciferase reporter AML cell line THP-1 with MBZ or GANT-61 alone and in combination for 24 h and measured GLI promoter activity. As expected, MBZ and GANT-61 each reduced GLI promoter activity compared to the untreated control, and the combination resulted in a more pronounced decrease relative to single-agent treatment (Figure 6E).
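A minimal sketch of the Chou-Talalay combination index follows; it assumes that the median-effect parameters (Dm, m) of each drug have already been fitted from single-agent dose-response data, and the numerical values shown are hypothetical rather than the paper's fitted parameters.

```python
def dose_for_effect(fa, dm, m):
    """Median-effect equation: dose required alone for fraction affected fa."""
    return dm * (fa / (1.0 - fa)) ** (1.0 / m)

def combination_index(fa, d1, d2, dm1, m1, dm2, m2):
    """Chou-Talalay CI: CI < 1 synergy, CI = 1 additivity, CI > 1 antagonism."""
    return d1 / dose_for_effect(fa, dm1, m1) + d2 / dose_for_effect(fa, dm2, m2)

# Hypothetical parameters for an MBZ (nM) + GANT-61 (uM) combination.
ci = combination_index(fa=0.5, d1=100, d2=5, dm1=250, m1=1.2, dm2=12, m2=1.5)
print(f"CI at 50% effect = {ci:.2f}")  # < 1 indicates synergy
```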
MBZ Sensitizes AML Cells to Chemotherapy
Accumulating evidence suggests that active GLI signaling plays a fundamental role in the maintenance of leukemia-initiating cells, which evade chemotherapy due to their high resistance against cytotoxic drugs [19] and are consequently associated with residual disease, relapse and therapy failure. Therefore, we examined the combined effect of cytarabine and MBZ on the growth of the AML cell lines MV4-11, MOLM-13 and OCI-AML3. As shown in Figure 7A, the combination of cytarabine and MBZ induced a significant reduction in cell growth compared to each agent alone. To quantify whether the combination of MBZ and cytarabine represents a favorable drug combination, data were analyzed using CompuSyn to compute the dose-reduction index (DRI) values for the drug combination tested at a constant dose ratio. DRI values represent the fold decrease in the dose of a drug needed in a combination to achieve the same efficacy as the drug alone; DRI values > 1 are considered favorable with regard to the predicted reduction in toxicity of a drug therapy. These parameters are particularly relevant for the analysis of drug combinations in cancer treatment. When combined with MBZ, cytarabine doses could be reduced 288.0-, 2.7- and 4.5-fold in MV4-11, MOLM-13 and OCI-AML3, respectively, to achieve the same anti-proliferative effect (Figure 7B). In all three cell lines, the DRI values increased with rising effect level (fraction affected, Fa), suggesting that the beneficial effect is particularly pronounced at the effect levels relevant to cancer therapy (Figure 7B,C).
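The DRI follows from the same median-effect machinery: it is the dose of a drug required alone, at a given effect level, divided by the dose used in the combination. The sketch below uses hypothetical cytarabine parameters and reproduces the qualitative trend of DRI rising with Fa.

```python
def dose_reduction_index(fa, d_combo, dm, m):
    """DRI = (dose required alone at effect level fa) / (dose in combination)."""
    dx_alone = dm * (fa / (1.0 - fa)) ** (1.0 / m)
    return dx_alone / d_combo

# Hypothetical cytarabine parameters; DRI > 1 means the combination allows
# a dose reduction at the same effect level.
for fa in (0.5, 0.75, 0.9):
    dri = dose_reduction_index(fa, d_combo=50, dm=400, m=1.3)
    print(f"Fa = {fa:.2f}: DRI = {dri:.1f}")
```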
MBZ Effectively Inhibits GLI Signaling in Clinically Achievable Plasma Levels
To further evaluate whether MBZ is a suitable therapeutic agent to inhibit GLI, we transferred these findings into the clinical setting by treating two refractory AML patients with MBZ monotherapy in an off-label setting. Using a modified plasma inhibitory assay (PIA), in which an indicator cell line carrying the GLI luciferase promoter transgene was incubated with the patients' plasma, a reduction in GLI promoter activity was detected for both samples (Figure 8A). Moreover, a 62-year-old male healthy volunteer ingested MBZ at a dose of 50 mg/kg divided over two ingestions at 0 h and 12 h, and blood was drawn at 4 h and at 24 h; the PIA results indicated a biologically active plasma concentration (Figure 8A). Two patients with refractory AML received MBZ monotherapy after informed consent. Patient 1, a 66-year-old female, had a normal karyotype with NPM1, FLT3-TKD and IDH1 mutations (ELN favorable risk); she had received 2 induction cycles of cytarabine and daunorubicin, followed by three consolidation therapies with cytarabine and, in the relapsed setting, mitoxantrone, etoposide and cytarabine (MEC), with no response to treatment. Patient 2, a 74-year-old female, had adverse-risk (ELN criteria) secondary AML after MDS with a complex aberrant karyotype (deletion 5, deletion 8 and monosomy 17, with no additional AML-specific mutations); she had been treated non-intensively with low-dose cytarabine and venetoclax, with no response to therapy prior to MBZ treatment. In patient 1, a clear and continuous decrease in leukemic blasts in peripheral blood and a rapid reduction in GLI2 levels in peripheral leukemic blood cells were observed, whereas patient 2 did not respond (Figure 8B,C).
Figure 8. Isolated plasma samples were incubated with the GLI luciferase reporter cell line OCI-AML3 for 24 h before GLI promoter activity was measured. A healthy volunteer ingested MBZ at a dose of 50 mg/kg divided into two doses at 0 h and 12 h, and blood was drawn at 4 h and at 24 h. In a modified plasma inhibitory assay (PIA), MV4-11 GLI luciferase reporter cells were incubated with the isolated plasma samples and GLI promoter activity was measured after 24 h using the Dual-GLO Luciferase Assay Kit and the Infinite F200 PRO reader. Patient 1 (B) and patient 2 (C) with refractory AML were treated with MBZ monotherapy as described above, and blast counts in the peripheral blood were measured under MBZ therapy. Peripheral blood mononuclear cells (PBMC) of patients 1 and 2 were isolated from several samples throughout the treatment period and GLI2 was analyzed by western blot, with β-actin as a loading control. Error bars represent mean values ± standard deviation.
Discussion
MBZ is a broad-spectrum benzimidazole used for several decades in human and veterinary medicine to treat a variety of parasitic worm infections [17]. Lately, MBZ has gained attention as a promising candidate for drug repurposing in oncology due to multiple studies reporting substantial in vitro and in vivo anticancer effects [9]. Besides neuroblastoma, hematological malignancies, including leukemia, lymphoma and multiple myeloma, were identified as the cancers most sensitive to treatment with the MBZ analogue flubendazole in a screen of 321 cell lines from 26 cancer entities [20].
In this study, the anti-leukemic effects of MBZ were confirmed in AML cell lines and primary blasts from AML patients. Most importantly, we revealed that MBZ's anti-leukemic effects were at least partly due to significantly reduced activity of the HH transcription factors GLI1 and GLI2 in AML. Furthermore, our data strongly indicate that MBZ mediates its anti-leukemic effects by promoting the degradation of GLI transcription factors through inhibition of HSP70/90 chaperone activity. Interestingly, MBZ also sensitized AML cells to chemotherapy.
We previously demonstrated the importance of GLI1 and GLI2 in AML pathophysiology [7], and showed that inhibition of GLI activity resulted in pronounced anti-leukemic effects in vitro and significantly prolonged survival in a leukemic mouse model. We also showed that high expression of GLI represents a negative prognostic factor in two independent AML patient cohorts [7]. The potent anti-cancer effects mediated by inhibition of GLI transcription factors have also been demonstrated in numerous other studies [21].
Previously, Larsen et al. showed that MBZ inhibits canonical HH signaling in Shh Light2 fibroblasts by inhibiting the formation of the primary cilium [22], which is required for SMO-mediated GLI activation [23]. However, in previous work we showed that treatment with the SMO inhibitor cyclopamine had no impact on GLI promoter activity in AML reporter cell lines, leading to the hypothesis that the activation of GLI proteins in AML cells occurs independently of SMO [3]. In line with this theory, Chaudhry et al. also demonstrated that GLI signaling occurs independently of SMO [16]. Another study showed that primary cilia, which are essential for functional SMO signaling, are absent in most AML cells [24]. Mounting evidence implicates SMO-independent, non-canonical modes of GLI activation by alternative oncogenic pathways in AML, including the FLT3-ITD, PI3K/AKT/mTOR and RAS/RAF/MEK/ERK signaling cascades [2,3]. These results support our hypothesis that MBZ mediates its inhibitory effect on the HH pathway in an SMO-independent way, by inhibiting GLI downstream of SMO.
We could show that the decrease in GLI promoter activity resulted from a reduction in GLI1 and GLI2 protein levels. Inhibition of the 26S proteasome by bortezomib abolished the effect of MBZ on GLI protein levels, indicating that the reduced GLI protein levels are caused by proteasomal degradation.
Heat shock proteins act as molecular chaperones involved in the folding, activation and assembly of a variety of proteins. HSP70 and HSP90 are believed to act as the core chaperone system regulating the stability, trafficking and degradation of signaling proteins [25,26], thereby maintaining the activity of a variety of protein kinases, transcription factors and steroid hormone receptors [26]. Based on our hypothesis that MBZ promotes the proteasomal degradation of GLI transcription factors, we investigated the effect of HSP70 and HSP90 inhibition on GLI transcription factors. Treatment of AML cells with small-molecule HSP70 and HSP90 inhibitors resulted in a marked decrease in GLI1 and GLI2 protein levels without reducing mRNA levels. Walf-Vorderwülbecke et al. extensively demonstrated the ability of MBZ to inhibit HSP70 [11]; however, an effect of MBZ on HSP90 had not yet been demonstrated. Based on the strong effects of MBZ on GLI1 and GLI2 protein levels, which were comparable to dual inhibition with an HSP90 and an HSP70 inhibitor in our experiments, we hypothesized that MBZ might be an inhibitor of both HSP70 and HSP90. To further support this hypothesis, in silico molecular models were created with MBZ bound to the ATP-binding site. The modeling predicts that MBZ forms short-range nonbonded interactions with 10-16 amino acid residues within the binding site. Thus, the combined experimental and theoretical data strongly support the idea that MBZ binds to heat shock proteins and inhibits their binding to other proteins.
For binding to HSP90, client proteins require other chaperones and co-chaperones, since they cannot be bound by HSP90 directly. Certain client proteins, such as transcription factors, have to be bound by HSP70 and its co-chaperone HSP40 first before being delivered to HSP90 [26]. Disruption of the HSP70-HSP90 chaperone cascade results in misfolding and degradation of these client proteins [27]. The strong sensitivity to HSP70 and HSP90 inhibition observed in our study suggests that GLI transcription factors rely heavily on the HSP70-HSP90 chaperone cascade for protein folding and stability [28].
Similar to inhibitors of HSP70 or HSP90, MBZ was able to inhibit the refolding of heat-denatured luciferase in MOLM-13 luc+ AML cells. This suggests that MBZ mediates its effect, at least in part, by disrupting the cellular protein-folding machinery. In line with our findings, Walf-Vorderwülbecke et al. demonstrated that MBZ is able to interfere with different members of the HSP70-HSP90 chaperone family in AML cells using nematic protein organization technique (NPOT®) and DAVID analyses [11]. They indeed showed that the association of c-MYB with the HSP70 complex was lost after MBZ exposure, leading to misfolding and subsequent proteasomal degradation of c-MYB [11].
Treatment of AML cell lines with MBZ had no effect on HSP70 or HSP90 protein expression, as shown by western blot analysis. However, HSF-1, the major transcription factor of the heat shock response genes, was heavily phosphorylated at serine 326 in MBZ-treated AML cells, a modification that is critical for stress-induced HSF-1 activation [29]. Triggering of the HSF-1 stress response by MBZ could be due to proteotoxic stress, as is also caused by other HSP inhibitors [29]. On the other hand, MBZ induced a reduction in total HSF-1 protein levels in AML cells. A possible explanation might be the depletion of factors that stabilize HSF-1 protein levels, such as Bcl-2 interacting cell death suppressor (BIS) [27,30-32].
It should be noted that, in addition to the inhibition of HSP70 and HSP90 and the subsequent degradation of transcription factors (e.g., GLI1, GLI2, MYB) and other HSP client proteins (e.g., FLT3), a variety of other mechanisms have been identified by which MBZ can both inhibit GLI signaling and generate anticancer effects. Observed MBZ-mediated anticancer effects include induction of an anti-tumor immune response, sensitization to radiation and chemotherapy, inhibition of angiogenesis, induction of apoptosis and inhibition of proliferation [9]. For instance, MBZ was shown to inhibit several important signaling kinases, including VEGFR2, FAK, the GTPases Rho-A and Rac1 [9], ABL, JNK3 and KIT [33]. It was also revealed that MBZ inhibits BRAF and MEK by blocking their ATP-binding pockets [34]. Furthermore, MBZ inhibits tubulin polymerization in several tumor models, including non-small cell lung cancer and glioblastoma [35,36]. In AML, however, MBZ did not induce microtubule depolymerization at concentrations of up to 10 µM [11].
We could show that in AML cells, the combination of MBZ with the small-molecule GLI inhibitor GANT-61 leads to synergistic anti-leukemic effects. This could be used as a treatment strategy in the future to enhance the efficacy of a pharmacological HH blockade. Several HH pathway inhibitors are in development for AML treatment and are being considered as a new class of therapeutics [37]; MBZ represents a promising candidate to potentiate the effect of these agents and maximize their therapeutic success. Moreover, MBZ sensitized the AML cell lines MV4-11, MOLM-13 and OCI-AML3 to cytarabine, as indicated by large positive DRI values. This possible dose reduction could lower the therapy's toxicity while maintaining the same anti-leukemic effect. Consequently, this would be a suitable approach to improve treatment for elderly patients who cannot tolerate higher doses [38,39]. The prognosis for elderly, unfit patients is still bleak with current treatments, although in recent years it has improved with the introduction of new targeted agents such as Bcl-2, FLT3 and IDH1/2 inhibitors. In particular, the combination of azacitidine and venetoclax has become a widespread regimen for elderly, unfit AML patients, resulting in increased overall survival compared with hypomethylating agents alone [40]. However, not all new treatment modalities are curative, and treatment options for patients who become refractory to these regimens are sparse. Therefore, MBZ may represent a valuable therapeutic option in this setting because of its very favorable toxicity profile, which is well suited to the treatment of elderly patients [38,39].
Long-term, repeated administration of mebendazole results in significantly higher plasma levels than a single dose, possibly due to enterohepatic circulation [41]. There is large interindividual variation in the plasma levels achieved, with one study showing maximum plasma concentrations ranging from 0.017 to 0.134 µM after a single 1.5 g dose and up to 0.5 µM after repeated administrations of 1 g [42]. In another study, 12 patients with cystic disease were treated with a single or repeated dose of 10 mg/kg. Single-dose administration resulted in a maximum plasma concentration of 0.24 µM on average (range 0.06-1.69 µM), while repeated administration resulted in a maximum concentration of 0.47 µM and an area under the curve five times higher than after a single dose [41]. We demonstrated that clinically achievable plasma concentrations of MBZ effectively inhibit GLI signaling in all three subjects (two AML patients and one healthy volunteer). Most notably, one patient with refractory AML treated with MBZ monotherapy in an off-label setting responded with a clear and continuous decrease in leukemic blasts in peripheral blood and a fast reduction in GLI2 levels in peripheral leukemic blood cells.
In agreement with previous studies on MBZ plasma levels, repeated administration of the drug resulted in a stronger inhibitory effect than a single dose. Furthermore, and most importantly, MBZ treatment led to strong anti-leukemic effects in one patient, accompanied by a reduction in GLI2 protein levels in blast lysates. This suggests that oral administration of MBZ is suitable for achieving noticeable therapeutic effects in clinical use.
In summary, our work highlights the exceptional potential of MBZ as a future therapeutic option for the treatment of diverse cancers. Based on the results herein, we are currently pursuing a clinical trial of MBZ plus low-dose cytarabine for the treatment of elderly, refractory AML patients.
Discrimination between GLI2 and GLI2∆N was performed according to the method of Londoño et al. [47]. The expression of the full-length transcript (GLI2-FL) and of the C-terminus (GLI2-C-term) was determined relative to total GLI2 (GLI2-ALL). The expression of GLI2∆N was then calculated by subtracting the expression of GLI2-FL from that of GLI2-C-term; the difference corresponds to GLI2∆N.
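A minimal sketch of this isoform arithmetic is given below; the relative-expression values are hypothetical placeholders, not measurements from this study.

```python
# Hypothetical relative expression values (normalized to GLI2-ALL).
gli2_cterm = 1.00  # C-terminal amplicon: detects all C-terminus-containing isoforms
gli2_fl = 0.35     # full-length transcript only

# GLI2dN is the C-terminal signal not explained by the full-length transcript.
gli2_dN = gli2_cterm - gli2_fl
print(gli2_dN)  # 0.65
```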
Primers are listed in Appendix A (Table A1).
Protein Isolation and Western Blot
Protein isolation and western blots were performed as described previously [48]. For analysis of phosphorylated proteins, cells were harvested and lysed with radioimmunoprecipitation assay (RIPA) buffer (89900, Thermo Fisher Scientific, Waltham, MA, USA) supplemented with protease and phosphatase inhibitors (cOmplete™ Tablets, Roche).
GLI Reporter Assays
Stable GLI reporter AML cell lines were generated by lentiviral transduction with constructs containing the firefly luciferase gene under the control of GLI transcriptional response elements and, as an internal control, the renilla luciferase gene under the control of CMV promoter elements (Cignal™ Lenti Reporters, S-6030L, Qiagen, Venlo, The Netherlands), followed by puromycin (P7255, Sigma-Aldrich, St. Louis, MO, USA) and hygromycin (1287.1, Carl Roth GmbH, Karlsruhe, Germany) selection. Stable GLI reporter cells were treated with MBZ, GANT-61, PU-H71, VER-155008 or a solvent control, and GLI promoter activity was measured after 24 h using the Dual-GLO Luciferase Assay Kit (E2940, Promega, Madison, WI, USA) and the Infinite F200 PRO reader (Tecan, Männedorf, Switzerland). The firefly luciferase-mediated GLI promoter activity was normalized to the renilla luciferase-mediated CMV promoter activity.
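The normalization step amounts to a simple ratio calculation; a minimal sketch with hypothetical luminescence counts is shown below.

```python
# Dual-luciferase normalization: GLI activity = firefly / renilla, then
# expressed relative to the solvent control. Counts are hypothetical.
def gli_activity(firefly: float, renilla: float) -> float:
    """Firefly (GLI reporter) signal normalized to renilla (CMV control)."""
    return firefly / renilla

treated = gli_activity(firefly=12_000, renilla=48_000)   # 0.25
control = gli_activity(firefly=30_000, renilla=50_000)   # 0.60
print(treated / control)  # ~0.42, i.e., ~58% reduction in GLI promoter activity
```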
Plasma Inhibitory Assay
OCI-AML3 reporter cells were plated at a ratio of 1:9 in serum-free medium plus plasma sample and incubated for 24 h at 37 °C. Before measurement, cells were washed three times with serum-free culture medium, and luciferase-mediated GLI promoter activity was measured in triplicate as described above.
Proliferation Assay
AML cells were incubated with different concentrations of MBZ alone or in combination with either GANT-61 or cytarabine for 3 or 7 days. Viable cell numbers were determined by the trypan blue dye exclusion method using the Vi-Cell™ XR cell viability analyzer (Beckman Coulter, Brea, CA, USA).
Colony Formation Assay
AML cell lines were incubated with different concentrations of MBZ and/or GANT-61 and cultured in methylcellulose-based semi-solid medium with or without growth factor supplementation, as appropriate (04230, MethoCult H4230, Stemcell Technologies, Vancouver, BC, Canada). After seven days, the number of colonies was counted using an inverted microscope (Axiovert 25, Zeiss, Jena, Germany).
Cloning and Lentiviral Transduction
pLKO.1-puro vectors encoding an shRNA against GLI1 (TRCN0000020485, sequence 5′-CCGGCCTGATTATCTTCCTTCAGAACTCGAGTTCTGAAGGAAGATAATCAGGTTTTT-3′) or a scrambled shRNA (SHC002, non-target shRNA vector) were purchased from Sigma-Aldrich (St. Louis, MO, USA). The Lentiviral Gene Ontology Vector (LeGO) system was used for cloning and transfection into the AML cell lines (LeGO-G/Puro) [49]. Lentiviral particle-containing supernatants were generated in HEK293T cells co-transfected with the plasmids LeGO-G/Puro (GLI1 shRNA) or LeGO-G/Puro (scrambled shRNA) in combination with pMD2.G-VSV-G and psPAX2-Gag-Pol using calcium phosphate co-precipitation. THP-1 cells were transduced either with the non-targeting shRNA (negative control) or with the shRNA against GLI1. On day 3 after transduction, transduced cells were selected by addition of puromycin (2 µg/mL; Sigma-Aldrich, St. Louis, MO, USA) for 7 days prior to functional assays. The knock-down efficiency for GLI1 was determined by quantitative PCR after seven days of puromycin selection. To generate the MOLM-13 luc+ cells, MOLM-13 cells were transduced with a LeGO vector encoding firefly luciferase using the same protocols. All work with lentiviral particles was performed in an S2 facility after approval according to German law.
In Silico Modeling
All-atom molecular dynamics (MD) simulations were performed on models in which MBZ was bound to HSP90's ATP-binding site, using the AMBER software package [50]. A total of six independent simulations were performed, differing in the protein crystal structure used for model development (2WI6, 2BT0 and 4W7T [51-53]) and in the force field employed during the simulation. The proteins were modeled using the ff14SB and fb15 force fields [54,55], while MBZ was modeled using the gaff2 force field and RESP partial atomic charges [56]. The models were solvated using TIP3P or force-balanced water models, as appropriate for the protein force field. In total, 600 nanoseconds of simulation data were collected and analyzed. A complete description of the modeling can be found in reference [18].
Statistical Analysis
Data from the in vitro assays were statistically analyzed by Welch's t-test using GraphPad Prism 7 (GraphPad Software, Inc., San Diego, CA, USA). A p value of less than 0.05 was considered statistically significant. The combination index (CI) and dose-reduction index (DRI) were calculated using the CompuSyn program (Version 1.0, ComboSyn Inc., Paramus, NJ, USA) based on the Chou-Talalay method [57].
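For readers who want to reproduce this kind of analysis programmatically, the sketch below shows a Welch's t-test with SciPy and the Chou-Talalay combination index formula; the group values and doses are hypothetical, not data from this study.

```python
from scipy import stats

# Hypothetical replicate measurements (e.g., normalized viability).
control = [1.00, 0.95, 1.08, 1.02, 0.97]
treated = [0.62, 0.70, 0.58, 0.66, 0.71]

# equal_var=False selects Welch's t-test (unequal variances assumed).
t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant

# Chou-Talalay combination index: CI = d1/Dx1 + d2/Dx2, where d1 and d2 are
# the combination doses producing effect x, and Dx1, Dx2 the single-agent
# doses producing the same effect. CI < 1 indicates synergy.
def combination_index(d1: float, dx1: float, d2: float, dx2: float) -> float:
    return d1 / dx1 + d2 / dx2

print(combination_index(d1=0.1, dx1=0.5, d2=1.0, dx2=3.0))  # ~0.53 -> synergy
```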
"year": 2021,
"sha1": "d55297d63a081487552deac86c0f423e4328014a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/22/19/10670/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d55297d63a081487552deac86c0f423e4328014a",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Amino acid metabolism regulated by lncRNAs: the propellant behind cancer metabolic reprogramming
Metabolic reprogramming is one of the main characteristics of cancer cells and plays a pivotal role in their proliferation and survival. Amino acids are key nutrients for cancer cells, and many studies have focused on the regulation of amino acid metabolism, including genetic alteration, epigenetic modification, transcription, translation and post-translational modification of key enzymes in amino acid metabolism. Long non-coding RNAs (lncRNAs) comprise a heterogeneous group of RNAs with transcripts of more than 200 nucleotides in length. LncRNAs can bind to biological molecules such as DNA, RNA and protein, regulating the transcription, translation and post-translational modification of target genes. The functions of lncRNAs in cancer metabolism have now attracted great research interest, and significant progress has been made. This review focuses on how lncRNAs participate in the reprogramming of amino acid metabolism in cancer cells, especially glutamine, serine, arginine, aspartate and cysteine metabolism. This will help us to better understand the regulatory mechanisms of cancer metabolic reprogramming and provide new ideas for the development of anti-cancer drugs.
Background
Metabolic reprogramming is one of the main characteristics of cancer cells and provides substrates and energy for their survival and proliferation [1,2]. Aerobic glycolysis is one of the most representative metabolic alterations: the majority of glucose is metabolized through the glycolytic pathway to form lactate rather than entering oxidative phosphorylation, even in an aerobic environment, a phenomenon also known as the Warburg effect [3]. The dysregulation of amino acid metabolism is another important aspect of cancer metabolism. Amino acid metabolism plays an important role in energy generation, nucleoside synthesis and the maintenance of redox homeostasis in cancer cells (Fig. 1) [4,5]. For example, cancer cells utilize a large amount of glutamine to form α-ketoglutarate (α-KG) to replenish the TCA cycle, which is depleted by aerobic glycolysis. Because of the large demand for glutamine in cancer cells, glutamine is also called a "conditionally essential amino acid". In addition, other amino acids that provide carbon and nitrogen, such as serine, are also essential for cancer cell survival [6]. Elucidating the mechanisms of cancer metabolic reprogramming is of great significance for understanding tumor formation and developing anti-tumor drugs.
Long non-coding RNAs (lncRNAs) are a heterogeneous group of RNAs with transcripts of more than 200 nucleotides in length [7,8]. LncRNAs make up a large portion of the transcriptome but are not translated into proteins, or only into small peptides. They are generally transcribed by RNA polymerase II and undergo post-transcriptional RNA processing including 5' capping, splicing and polyadenylation [9]. LncRNAs are localized both in the cytoplasm and the nucleus, and their function usually depends on their subcellular localization [10]. In addition, the localization of lncRNAs can change upon environmental transitions or infection [11]. There are five types of lncRNAs: (1) intronic lncRNAs; (2) intergenic lncRNAs; (3) antisense lncRNAs; (4) sense lncRNAs; (5) divergent lncRNAs (Fig. 2) [12].
LncRNAs had already been discovered in the last century but were largely ignored. However, with the rapid development of whole-genome and high-throughput sequencing technologies, many lncRNAs have been identified and their functions elucidated [13]. LncRNAs can bind to DNA, RNA and proteins [14-16] and regulate a broad spectrum of biological processes, including the cell cycle, gene regulation, immune response, cell differentiation, post-transcriptional modification and tumor metabolism [17-21]. LncRNAs regulate the function of target genes in the following four ways (Fig. 3) [22]. (1) LncRNAs can act as "scaffolds" in the formation of protein complexes, facilitating the interplay between different signaling pathways, or bind to proteins to regulate their function and stability at the post-translational level [14]. (2) LncRNAs can bind to the promoter region of a gene to form DNA-RNA triplex (three-stranded hybrid) structures, preventing the binding of transcription factors and the initiation of transcription. LncRNAs can also act as "connecting tools" that recruit protein complexes with transcriptional regulatory functions to promoter regions, performing functions such as chromatin remodeling, activation or inhibition of transcriptional initiation, and epigenetic regulation [15]. (3) LncRNAs regulate the shearing, splicing, intracellular distribution and stability of mRNAs by direct binding. The binding between lncRNAs and miRNAs, by contrast, usually acts more like a "sponge", which reduces the binding of miRNAs to their target genes and thus relieves the inhibitory effect of miRNAs on those targets [16]. (4) Some lncRNAs have short ORF regions that can be translated into short peptides that carry out their functions [23].
Fig. 2 According to the proximity of lncRNAs to protein-coding genes, they can be divided into the above five types.
Fig. 3 Mechanisms of lncRNA function. A Some lncRNAs have short ORF regions. B LncRNAs can prevent mRNA degradation by recruiting specific proteins. C LncRNAs bind proteins to prevent or attenuate their binding to mRNAs. D LncRNAs bind to a primary RNA transcript and change its splicing pattern. E LncRNAs recruit the mediator complex to an enhancer region to promote loop formation and transcription of target genes. F LncRNA transcripts evict proteins from chromatin to maintain a DNA methylation-free site for mRNA transcription. G LncRNAs can recruit specific proteins to target sites in the genome. H Some lncRNAs transcribed from an enhancer region can inhibit gene transcription by interfering with enhancer-promoter contact. I LncRNAs act as "scaffolds" in the formation of protein complexes.
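The ceRNA "sponge" mechanism in point (3) can be made concrete with a toy titration model: raising the level of a sponging lncRNA lowers the pool of free miRNA and thereby de-represses the target mRNA. The sketch below is purely illustrative, with made-up numbers and a strong-binding assumption, not a quantitative model from the literature.

```python
# Toy model of ceRNA sponging (arbitrary units, strong-binding limit).

def free_mirna(total_mirna: float, sponge_lncrna: float) -> float:
    """Free miRNA left after titration by the sponging lncRNA."""
    return max(0.0, total_mirna - sponge_lncrna)

def target_mrna(basal: float, repression: float, mirna: float) -> float:
    """Target mRNA under simple saturable repression by free miRNA."""
    return basal / (1.0 + repression * mirna)

for lnc in (0.0, 50.0, 100.0):
    m = free_mirna(total_mirna=100.0, sponge_lncrna=lnc)
    print(f"lncRNA={lnc:5.1f}  free miRNA={m:5.1f}  target mRNA={target_mrna(1.0, 0.05, m):.2f}")
# More sponge lncRNA -> less free miRNA -> higher target mRNA (de-repression).
```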
In recent years, the functions of lncRNAs in cancer cells have become a hot research area. Many studies have shown that lncRNAs can precisely affect the proliferation, differentiation, invasion and metastasis of cancer cells by regulating oncogenes or tumor suppressor genes, and they are considered potential therapeutic targets [5,24]. LncRNAs can also be used as clinical biomarkers for cancer diagnosis and prognosis. Recent studies demonstrated that lncRNAs play an important role in regulating the transcription and translation of metabolism-related genes by acting as decoy molecules, scaffold molecules and competing endogenous RNAs (ceRNAs), ultimately leading to metabolic reprogramming in cancer cells [25-27]. In this review, we focus on the important roles and functions of lncRNAs in the amino acid metabolism of cancer cells. We believe that a better understanding of lncRNAs in amino acid metabolism will lead to breakthroughs in cancer diagnosis and treatment.
LncRNA and glutamine metabolism
Glutamine is the most abundant non-essential amino acid in the human body, mainly concentrated in liver, kidney, skeletal muscle and brain tissue, and it is a precursor for the synthesis of many other amino acids, nucleotides and other important macromolecules [28]. In addition, glutamine can activate mammalian target of rapamycin (mTOR) and maintain reactive oxygen species (ROS) homeostasis in cancer cells [29]. Glutamine is transported into cancer cells by glutamine transporters such as SLC7A8 and SLC7A5, or synthesized de novo. Glutamine is then converted into glutamate and ammonium by glutaminase (GLS). Glutamate can be further converted into α-KG by glutamate dehydrogenase 1 (GLUD1), thus entering the TCA cycle to generate energy and intermediates for the synthesis of biological macromolecules [30]. Meanwhile, glutamate also acts as a substrate for the synthesis of glutathione, a cellular antioxidant that helps stabilize normal immune system function [31,32]. A lack of glutamine leads to an endoplasmic reticulum stress response and protein misfolding, which induces cancer cell death [33].
Glutaminase is the initiating and rate-limiting enzyme of glutamine catabolism. Two genes encode glutaminase in the human genome. The GLS gene, located on chromosome 2, encodes kidney-type glutaminase, which is mainly distributed in kidney, brain, pancreas and muscle tissue; the GLS2 gene, located on chromosome 12, encodes liver-type glutaminase, mainly distributed in liver tissue [34]. Kidney-type glutaminase has two isoforms: exons 1-14 and 16-19 encode a longer form named KGA, while exons 1-15 encode a shorter form, GAC, which has been demonstrated to be the main isoform in cancer cells [35,36].
Several studies have shown that lncRNAs play an important role in glutamine metabolism. Five lncRNAs bind miRNAs like a "sponge", reducing the binding between the miRNA and GLS and thus relieving the inhibitory effect of the miRNA on GLS. Three of these lncRNAs, HOX transcript antisense intergenic RNA (HOTAIR), OIP5-AS1 and nuclear paraspeckle assembly transcript 1 (NEAT1), are highly expressed in different cancers compared to adjacent normal tissues [37-39]. These lncRNAs have similar regulatory mechanisms that upregulate GLS transcript levels, thus promoting the proliferation and migration of cancer cells. For example, lncRNA OIP5-AS1 upregulates GLS mRNA expression in melanoma cells by competitively sponging the GLS-binding miR-217, reducing the inhibitory effect of miR-217 on GLS. These lncRNAs have potential as biomarkers for tumor diagnosis. LncRNAs can affect GLS expression not only in cancer cells but also in skin fibroblasts by competitively sponging microRNAs. Researchers found that M2 macrophages can release lncRNA-ASLNC5088 into fibroblasts via exosomes. ASLNC5088 then upregulates GLS mRNA expression in skin fibroblasts by competitively sponging the GLS-binding miR-200c-3p at no fewer than three sites. GW4869, an exosome secretion inhibitor, can reduce ASLNC5088 delivery to skin fibroblasts; GW4869 therefore inhibits fibroblast over-activation and scar formation through the lncRNA-ASLNC5088/miR-200c-3p/GLS pathway [40]. From these studies, we conclude that lncRNAs regulate the expression of GLS by acting as ceRNAs that competitively sponge GLS-binding miRNAs in a variety of cancer cells and normal cells.
In addition to acting as ceRNAs, lncRNAs can regulate GLS expression in other ways. LncRNA HOTTIP is an oncogene that upregulates GLS expression in hepatocellular carcinoma (HCC). Researchers also found that miR-192 and miR-204, by interfering with lncRNA HOTTIP expression via the Argonaute 2 (AGO2)-mediated RNA interference (RNAi) pathway, significantly suppress GLS expression in HCC cell lines. The miR-192/-204-HOTTIP-GLS axis regulates HCC cell proliferation and tumor formation; lncRNA HOTTIP thus plays a critical role in glutamine metabolism and HCC growth [41]. LncRNAs can also regulate GLS in an allele-specific manner. LncRNA colon cancer associated transcript 2 (CCAT2) regulates GLS mRNA by binding the Cleavage Factor I (CFIm) complex with distinct affinities for its two subunits (CFIm25 and CFIm68). Allele-specific interactions between CCAT2, CFIm and the GLS pre-mRNA appear to result in selection of the poly(A) site within GLS intron 14, leading to preferential splicing to the GAC isoform, which has higher catalytic activity. CCAT2 thereby upregulates GAC expression and increases glutamine metabolism to promote the proliferation and migration of colon cancer cells [42]. In addition, lncRNA loci can regulate glutamine metabolism through the microRNAs they encode. Heat Shock Factor 1 (HSF1) is an oncogene in colorectal cancer that promotes mTOR activation and glutamine metabolism. HSF1 binds the DNA methyltransferase DNMT3a and recruits it to the promoter of the lncRNA MIR137 host gene (MIR137HG) to inhibit the production of primary MIR137; MIR137 in turn inhibits GLS protein expression. HSF1 therefore acts through a DNMT3a/MIR137HG/MIR137/GLS axis to enhance glutaminolysis and mTOR activation and to promote colorectal cancer progression, making it a potential therapeutic target [43]. LncRNAs can also regulate GLS expression at the transcript level. The nuclear-enriched antisense lncRNA of glutaminase (GLS-AS) binds GLS pre-mRNA to form double-stranded RNA and inhibits GLS expression via ADAR/Dicer-dependent RNA interference in pancreatic cancer cell lines; overexpression of GLS-AS suppresses the invasion and proliferation of pancreatic cancer cells by repressing the Myc/GLS pathway [44]. GLS2 can also be regulated by lncRNAs. LncRNA urothelial carcinoma-associated 1 (UCA1) functions as a ceRNA that sequesters miR-16 by binding it. As miR-16 binds GLS2 mRNA and inhibits its expression, UCA1 promotes the proliferation and migration of bladder cancer cells by acting as a ceRNA to upregulate GLS2. In addition, overexpression of UCA1 reduces intracellular ROS levels and protects bladder cancer cells from oxidative toxicity [45].
GLUD1 is another important enzyme in glutaminolysis, converting glutamate to α-KG. GLUD1 has been shown to promote proliferation and maintain redox homeostasis in cancer cells, and it is also a target for lncRNA regulation. Researchers found that lncRNA taurine upregulated gene 1 (TUG1) downregulates miR-145 by acting as a ceRNA. MiR-145 binds SIRT3 mRNA to suppress its expression, and reduced SIRT3 in turn downregulates GLUD1 by regulating its acetylation level. TUG1 thus antagonizes miR-145 and regulates the SIRT3/GLUD1 axis, thereby affecting intrahepatic cholangiocarcinoma proliferation [46]. LncRNAs can also affect GLUD1 function via key transcription factors. c-Myc regulates the transcription of GLUD1 by targeting its promoter. The 5'UTR (nucleotides 1-772) and 3'UTR (nucleotides 3951-4899) of lncRNA XLOC_006390 bind to c-Myc and regulate its ubiquitination level, thereby affecting its degradation via the proteasome pathway. Silencing lncRNA XLOC_006390 inhibited pancreatic cancer (PC) proliferation and migration by downregulating cellular α-KG levels [47].
LncRNA and serine metabolism
Serine is another non-essential amino acid, involved in nucleotide synthesis, the oxidative stress response, the TCA cycle and other metabolic processes. Serine can be obtained through extracellular import or through the serine synthesis pathway, a branch of glycolysis. First, phosphoglycerate dehydrogenase (PHGDH) converts the glycolytic intermediate 3-phosphoglycerate (3-PG) into 3-phosphohydroxypyruvate. 3-Phosphohydroxypyruvate is then converted by phosphoserine aminotransferase 1 (PSAT1) into 3-phosphoserine, followed by dephosphorylation by phosphoserine phosphatase (PSPH) to produce serine. Serine can be further converted to glycine and 5,10-methylenetetrahydrofolate (me-THF) in the cytoplasm or mitochondria by serine hydroxymethyltransferase (SHMT). Methylenetetrahydrofolate dehydrogenase (MTHFD) can convert me-THF into 10-formyltetrahydrofolate, one of the sources of one-carbon units in one-carbon metabolism. Recent studies have shown that serine metabolism and its enzymes are indispensable in tumor initiation and progression, and with the rapid development of whole-genome and high-throughput sequencing technologies, many lncRNAs have been shown to participate in serine metabolism.
PHGDH is the first, rate-limiting enzyme of the serine biosynthetic pathway and has been found to be upregulated in several cancers [48-50]. Several lncRNAs have been reported to function in cancer progression by targeting PHGDH. DDX3X (DEAD-Box Helicase 3 X-Linked) belongs to the Asp-Glu-Ala-Asp (DEAD) box protein family, has functions in splicing and translation initiation, and is a PHGDH mRNA-binding protein. LncRNA RMRP (RNA Component of Mitochondrial RNA Processing Endoribonuclease) was shown to facilitate the recruitment of the RNA-binding protein DDX3X to the 3'UTR of PHGDH mRNA. Overexpression of RMRP or DDX3X promotes cisplatin resistance and spheroid formation in platinum-resistant ovarian cancer by increasing the translation of PHGDH mRNA; the regulation of serine metabolism by the RMRP/DDX3X/PHGDH pathway is therefore a potential therapeutic target in platinum-resistant ovarian cancer [51]. Another lncRNA, PlncRNA-1, can inhibit the proliferation of breast cancer cells by promoting TNF-β protein expression and inhibiting PHGDH protein expression, suggesting that PHGDH functions as an oncogene in breast cancer [52]. Serine metabolism is closely related to glycolysis. The transcription factor ATF4 can interact with linc01564 and induce its expression in response to glucose deprivation. Linc01564 promotes hepatocellular carcinoma (HCC) cell survival under glucose deprivation by regulating PHGDH at the mRNA and protein levels. As a ceRNA, linc01564 interacts with miR-107/miR-103a-3p at two binding sites; miR-107/103a-3p bind PHGDH mRNA and inhibit its expression. Linc01564 thereby facilitates serine metabolism and maintains ROS levels under both glucose deprivation and normal conditions through the miR-107/103a-3p-PHGDH axis [53].
PSAT1, another enzyme of the serine synthesis pathway, is also reported to be regulated by lncRNAs. The expression of PSAT1 is inhibited by miR-15a-5p and miR-15b-5p in non-small cell lung cancer (NSCLC). The expression of lncRNA MEG8, which acts as a ceRNA interacting with miR-15a-5p and miR-15b-5p, is positively related to PSAT1 expression and serine synthesis [54]. LncRNAs targeting PSAT1 have also been identified in GSE datasets (GSE94660 and GSE104310) downloaded from GEO: lncRNA RP4-694A7.2 positively correlates with hepatocellular carcinoma proliferation and migration by binding PSAT1 and promoting its expression [55]. LncRNAs can also inhibit the expression of PSAT1. Epithelial-to-mesenchymal transition (EMT) is positively correlated with the GSK-3β/Snail signaling pathway, which promotes cancer progression. Overexpression of lncRNA maternally expressed gene 3 (MEG3) downregulates PSAT1 and suppresses activation of the GSK-3β/Snail signaling pathway in esophageal squamous cell carcinoma (ESCC). MEG3 therefore inhibits ESCC proliferation through the PSAT1/GSK-3β/Snail axis; in addition, lncRNA MEG3 could be used as a biomarker in ESCC diagnosis [56].
Serine and glycine are two non-essential amino acids that contribute the main sources of one-carbon metabolism [57]. SHMT is the key serine/glycine interconversion enzyme, encoded by two genes: SHMT1, whose protein localizes to the cytoplasm, and SHMT2, whose protein localizes to the mitochondria. Interestingly, SHMT2, but not SHMT1, expression is significantly increased in several cancers [58]. LncRNAs can also regulate serine metabolism by affecting SHMT2 expression. In lung cancer, the targets of miR-615-5p include IGF2, SHMT2 and AKT2. LncRNA Gm15290 interacts with miR-615-5p, and its levels are inversely correlated with miR-615-5p in lung cancer; overexpression of Gm15290 promotes lung cancer proliferation by upregulating SHMT2 [59]. Similarly, lncRNA LINC01234 is significantly upregulated in colon cancer. LINC01234 knockdown suppressed SHMT2 expression at the mRNA and protein levels, since LINC01234 acts as a ceRNA molecular sponge of miR-642a-5p, and serine/glycine metabolism could be inhibited by LINC01234 knockdown. LINC01234 could therefore act as a potential therapeutic target and biomarker for colon cancer [60].
A lack of serine in the diet may promote the conversion of glucose to serine, and cancer cells reprogram serine metabolism to satisfy their survival needs. Genes of the serine synthesis pathway are upregulated in a variety of cancers, such as the increased PHGDH gene copy number in melanoma and triple-negative breast cancer [61]. LncRNAs play an important role in the regulation of the enzymes of the serine synthesis pathway and thereby function in tumor progression; further study of the regulation of serine metabolism by lncRNAs is therefore important.
LncRNA and arginine metabolism
Arginine is synthesized from citrulline in a two-step process catalyzed by argininosuccinate synthase 1 (ASS1) and argininosuccinate lyase (ASL). Arginase (ARG1) then converts arginine to ornithine and urea, and ornithine is converted back to citrulline for recycling in the mitochondria by ornithine transcarbamoylase (OTC).
Although there are still few studies on lncRNAs regulating arginine metabolism, this may become a research hotspot in the future because of the importance of arginine metabolism in cancer.
ASS1 catalyzes the formation of argininosuccinate from aspartate, citrulline and ATP, and together with ASL it is responsible for arginine biosynthesis in most body tissues [62]. ASS1 is the rate-limiting enzyme of the arginine synthesis pathway and is severely reduced or absent in some types of aggressive cancer, which consequently exhibit dependence on exogenous arginine [63]. Two studies have found that lncRNAs affect the expression of ASS1. In renal cell carcinoma (RCC), the expression level of lncRNA00312 is lower than in adjacent normal tissue. Studies have shown that miR-34a-5p can bind to and negatively regulate ASS1 expression [64], and lncRNA00312 inhibited RCC proliferation and invasion by promoting miR-34a-5p expression and inhibiting ASS1. ASS1, described as a tumor suppressor gene, may be a potential therapeutic target for RCC [65]. ASS1 is also a key link between arginine and aspartate metabolism. In 2022, researchers found that lncRNA LINC01234 could promote HCC proliferation, migration and invasion by reprogramming aspartate metabolism. Overexpressed LINC01234 directly forms an RNA-DNA complex with the ASS1 promoter, reduces the enrichment of the transcription factor p53 on ASS1, and inhibits ASS1 expression at the protein and mRNA levels. Increased LINC01234 expression led to the accumulation of cellular aspartate in HCC and increased mTOR activity [66].
Arginine is a non-essential amino acid for normal tissues, but many malignant tumor cells (such as melanoma and liver cancer cells) do not express ASS1 and cannot synthesize arginine, making arginine an essential amino acid for these tumor cells [67,68]. Arginase catalyzes the hydrolysis of arginine, so it does not affect the growth of normal cells but inhibits the growth of such tumor cells [69]. Amino acid deprivation has now become a new approach to cancer therapy, and the differing tolerance of tumor cells and normal cells to arginine deprivation can be exploited to specifically inhibit cancer cell growth. LncRNAs can also affect tumor-associated macrophages (TAMs) by regulating Arg1 expression; Arg1 is a specific marker of M2 macrophages. Several oncogenic lncRNAs promote tumor progression by affecting M2 polarization and Arg1 expression, such as lncRNA AK0363962, lncRNA CRNED, lncRNA X inactive specific transcript (XIST), lncRNA NEAT1 and lncRNA runt-related transcription factor 1 overlapping RNA (RUNXOR) [70-74]. For example, overexpression of lncRNA XIST downregulated the expression of Arg1 and promoted M2 macrophage polarization. Conversely, several tumor-suppressive lncRNAs, such as lncRNA cox-2, can inhibit cancer cell survival [75]: lncRNA cox-2 inhibits immune evasion, proliferation and migration in HCC by inhibiting Arg1 and the polarization of M2 macrophages. These lncRNAs have the potential to improve tumor immunotherapy.
LncRNA and aspartate metabolism
As one of the amino acids with the lowest concentration in blood, aspartate is involved in the synthesis of proteins and nucleotides and therefore plays an important role in cell growth [76]. Because of its low circulating levels compared to other amino acids, cancer cells reprogram aspartate metabolism to support their growth [77]. Under sufficient oxygen, aspartate is synthesized from oxaloacetate (OAA) and L-glutamate by glutamate oxaloacetate transaminase 1 (GOT1). In the hypoxic tumor microenvironment, however, aspartate synthesis is blocked, and aspartate becomes a key factor in tumor proliferation [78].
The effects of lncRNAs on aspartate metabolism mainly concern the regulation of GOT expression. GOT is a key enzyme linking aspartate metabolism and carbohydrate metabolism and is mainly distributed in tissues such as heart, liver, skeletal muscle and kidney. The enzyme not only regulates amino acid metabolism but also promotes cancer cell proliferation by maintaining redox balance. There are two isoforms of GOT in cells: GOT1, located in the cytoplasm, and GOT2, in the mitochondria. In addition, GOT1 participates in ferroptosis. Many studies have shown that lncRNA TPPO-AS1 acts as a tumor promoter in various cancers. TPPO-AS1 acts as a molecular sponge that binds miR-429 and suppresses its expression; miR-429 in turn directly binds GOT1 mRNA and negatively regulates its expression. By regulating the miR-429/GOT1 axis, TPPO-AS1 promotes HCC progression [79]. LncRNA NEAT1 can regulate GOT1 and transferrin receptor (TFRC) expression during ferroptosis. Compared with normal cells, more exosome-packaged NEAT1 crossed the blood-brain barrier (BBB) into cells undergoing sepsis-induced ferroptosis. MiR-9-5p has binding sites in NEAT1 and in TFRC mRNA; NEAT1 acts as a ceRNA, sponging miR-9-5p to upregulate the expression of TFRC and GOT1 [80].
GOT2, the other enzyme of aspartate metabolism, is also involved in processes beyond aspartate metabolism, such as long-chain fatty acid uptake and the TCA cycle [81,82]. The regulation of GOT2 by lncRNAs in cancer cells has not been reported so far. However, researchers have found that a virus-induced lncRNA can affect GOT2-mediated metabolism. Knockdown of lncRNA-ACOD1 significantly reduced vesicular stomatitis virus (VSV), vaccinia virus (VACV) and herpes simplex virus type 1 (HSV-1) infection in macrophages, while overexpression of GOT2 promoted viral infection in macrophages by upregulating lncRNA-ACOD1. Meanwhile, lncRNA-ACOD1 interacts with GOT2 at a 15-residue peptide (residues 54 to 68), stimulating GOT2 activity and the production of its metabolites. This is a novel feedback mechanism during viral infection, and the lncRNA-ACOD1-GOT2 interaction network is a potential therapeutic target for viral infection [83].
LncRNA and cysteine metabolism
Cysteine is a non-essential amino acid that provides a carbon source for the TCA cycle, participates in the synthesis of glutathione (GSH) to maintain redox balance, and, through its sulfhydryl group, generates hydrogen sulfide to promote ATP production [84]. Cysteine thus plays an important role in oxidative stress, energy metabolism, ferroptosis and autophagy. Cysteine in cancer cells is mainly derived from uptake via the cystine transporter (consisting of the light chain SLC7A11 and the heavy chain SLC3A2) and from endogenous synthesis via the transsulfuration pathway. Studying the mechanisms of cysteine metabolism in cancer opens possibilities for developing cancer diagnostic tools and targeted drugs; it is therefore of great significance to study how lncRNA regulation of cysteine metabolism affects cancer cells.
SLC7A11 is known to inhibit ferroptosis by promoting GSH synthesis through the uptake of cystine. Recent studies have found that SLC7A11 is highly expressed in non-small cell lung cancer, oral cancer, prostate cancer, malignant glioma and other cancers, and its expression is closely related to cancer proliferation, invasion, metastasis and drug resistance [85]. Research on the regulation of SLC7A11 by lncRNAs has yielded many important results. LncRNA SLC7A11-AS1 regulates the expression of SLC7A11 at the mRNA and protein levels in a variety of tumors and is a potential therapeutic target. In 2017, Xiao et al. first reported the effect of SLC7A11-AS1 on tumor progression: SLC7A11-AS1 expression was significantly reduced in gastric cancer tissues compared with adjacent non-tumor tissues, and SLC7A11-AS1 may serve as a biomarker in the diagnosis of gastric cancer (GC). Knockdown of SLC7A11-AS1 promoted the expression of SLC7A11 at the transcriptional level through the ASK1-p38MAPK/JNK signaling pathway, thus inhibiting the proliferation of GC cells [86]. SLC7A11-AS1 is also involved in drug resistance. SLC7A11 plays an important role in intracellular redox balance and GSH synthesis, and accumulating evidence suggests that SLC7A11 upregulation can lead to multidrug resistance in colorectal cancer. Increased expression of miR-33a-5p reduces SLC7A11 by interacting with its 3'UTR, and SLC7A11-AS1 regulates SLC7A11 expression by binding miR-33a-5p. Silencing SLC7A11-AS1 reduced intracellular ROS levels and increased intracellular GSH levels by regulating the miR-33a-5p/SLC7A11 axis; SLC7A11-AS1 thus functions in cisplatin resistance in GC [87]. Meanwhile, SLC7A11-AS1 knockdown promoted ovarian cancer proliferation by upregulating SLC7A11 [88], and overexpression of SLC7A11-AS1 promoted TGF-β-mediated HCC invasion and metastasis [89]. Conversely, SLC7A11-AS1 is highly expressed in pancreatic ductal adenocarcinoma (PDAC) compared with adjacent non-tumor tissues. In gemcitabine-resistant PDAC cells, overexpression of SLC7A11-AS1 reduces the intracellular level of ROS. Exon 3 of SLC7A11-AS1 (nucleotides 440-1725) binds the E3 ubiquitin ligase SCFβ-TRCP1 and blocks its function, thereby inhibiting the ubiquitination and proteasomal degradation of NRF2. SLC7A11-AS1 therefore promotes PDAC stemness and chemoresistance through the SCFβ-TRCP1/NRF2 signaling axis and regulation of intracellular ROS levels [90]. In addition, SLC7A11-AS1 acts as a ceRNA that promotes the expression of TRAIP by binding miR-4775 and inhibiting its expression in lung cancer cells, and overexpression of SLC7A11-AS1 promoted lung cancer proliferation, migration and invasion [91]. The regulatory mechanisms of SLC7A11-AS1 therefore differ among cancers.
In addition to SLC7A11-AS1, several other lncRNAs regulate SLC7A11 and affect cysteine metabolism in cancer cells. In prostate cancer, researchers found that lncRNA SNHG3, lncRNA prostate cancer-associated transcript 1 (PCAT1) and lncRNA OIP5-AS1 affect SLC7A11 mRNA expression through different regulatory mechanisms. All three lncRNAs were significantly overexpressed in PC tissues compared with adjacent normal tissues, and they promoted PC proliferation and migration and inhibited apoptosis and ferroptosis by promoting SLC7A11 expression. MiR-128-3p binds SLC7A11 and suppresses its mRNA expression; OIP5-AS1 acts as a ceRNA that upregulates SLC7A11 at the post-transcriptional level by competitively binding miR-128-3p, thus inhibiting Cd-induced ferroptosis in PC cells. In addition, OIP5-AS1 has potential as a biomarker for PC diagnosis. Similarly, overexpression of SNHG3 promotes SLC7A11 by acting as a sponge of miR-152-3p; miR-152-3p, like miR-128-3p, binds SLC7A11 and constrains its expression in PC cells. Knockdown of the transcription factor TFAP2C inhibited PCAT1 mRNA levels in PC cells, and PCAT1 affects SLC7A11 through two mechanisms. On the one hand, PCAT1 (nucleotides 1093-1367) binds c-Myc (amino acids 151-202) and inhibits its proteasomal degradation, thereby upregulating SLC7A11 mRNA expression by promoting c-Myc. On the other hand, PCAT1 functions as a ceRNA regulating SLC7A11 mRNA by sponging miR-25-3p, which constrains SLC7A11 expression in PC cells. These lncRNAs might play an important role in PC therapy [92-94]. LncRNA LINC00618 promotes apoptosis and ferroptosis in leukemia by regulating SLC7A11 at the post-transcriptional level and increasing the expression of BAX and cleaved caspase 3; meanwhile, LINC00618 binds lymphoid-specific helicase (LSH) and inhibits it to regulate SLC7A11 transcription [95]. LncRNAs often act as miRNA sponges to promote SLC7A11 expression at the mRNA and protein levels: the lncRNA UC.339/miR-339/SLC7A11 axis promotes lung adenocarcinoma proliferation, migration and invasion by inhibiting ferroptosis [96], and similar mechanisms have been observed for the lncRNA ADAMTS9-AS1/miR-5887/SLC7A11 axis in epithelial ovarian cancer [97] and the lncRNA SLC16A1-AS1/miR-143-3p/SLC7A11 axis in renal cancer [98].
SLC3A2 is responsible for maintaining the stability and membrane localization of the cystine transporter. In addition, SLC3A2 can form complexes with other amino acid transporters, such as the amino acid-polyamine-organic cation transporter formed with SLC7A5, which catalyzes the transmembrane transport of thyroid hormones, drugs and hormone precursors [99]. The effect of lncRNAs on SLC3A2 plays an important role in regulating cysteine metabolism. LncRNA small nucleolar RNA host gene 1 (SNHG1) is significantly overexpressed in several tumor tissues compared with adjacent normal tissues and promotes the proliferation of a variety of tumors, including lung cancer, colorectal cancer and hepatocellular carcinoma. Mechanistically, SNHG1 promotes SLC3A2 transcription by directly binding the Mediator complex to facilitate enhancer-promoter looping. SLC3A2 contributes to adhesion-induced signaling and activates FAK and phosphoinositide 3-kinase (PI3K); thus, SNHG1 promotes phosphorylation of the PI3K downstream target AKT and activates AKT signaling by promoting SLC3A2 mRNA transcription in H1299 and HCT116 cell lines. In addition, FUBP1 interplays with FIR and TFIIH to affect MYC gene expression, and SNHG1 can reduce the binding of FUBP1 to FIR, thus regulating the transcription of c-Myc [100]. Researchers also found that sorafenib-induced translocation of miR-21 to the nucleus could promote SNHG1 expression in HCC cells, with miR-21 binding SNHG1 and promoting its transcription [101] (Tables 1, 2, 3, 4).
Conclusions
Cancer cell metabolic reprogramming has been studied for more than 100 years, since Otto Warburg discovered the Warburg effect in the 1920s. It is one of the key features of tumorigenesis and tumor progression. Many studies have shown that, in addition to abnormal glucose metabolism, amino acid metabolism is an important aspect of cancer metabolic reprogramming. Studies of metabolism-related lncRNAs can help us better understand the regulatory mechanisms of cancer; these lncRNAs play a key role in tumorigenesis, progression and metabolic reprogramming. With the development of biotechnology, it has become easier to study lncRNA-mediated metabolic reprogramming. In this review, we focused on the effect of lncRNA-mediated amino acid metabolism on tumor progression. We found that lncRNAs regulate amino acid metabolism mainly by affecting the expression of metabolic enzymes and amino acid transporters. LncRNAs are involved in the regulation of gene expression at many levels, including epigenetic modification, transcriptional regulation, RNA splicing, nuclear shuttling, the post-transcriptional level, the translational level and the post-translational level, essentially spanning all currently known regulatory levels in the body. In addition, lncRNAs can act as competing endogenous RNAs, competitively sponging miRNAs and thereby relieving miRNA-mediated repression of target genes. Studies have shown that lncRNAs can be used as biomarkers in a variety of cancers. However, current research on lncRNAs is still in its infancy, and many issues remain to be addressed: (1) the secondary structure, function and molecular mechanisms of lncRNAs are not fully understood; (2) current studies on the effects of lncRNAs on tumor metabolism are not comprehensive, especially regarding the precise mechanisms by which lncRNAs act on key enzymes at multiple levels (transcription, translation and post-translational modification); (3) most current studies on lncRNAs remain at the experimental stage, and there is still a long way to go before clinical application. In conclusion, exploring lncRNA-mediated regulation of amino acid metabolism will contribute to a deeper understanding of tumorigenesis and of the vulnerabilities of tumor cells. This will provide a theoretical basis for developing anticancer drugs and exploring potential biomarkers for cancer diagnosis.
"year": 2023,
"sha1": "8f7215f3457e618cc1ef9edd2bd2db86e84ed8cc",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Springer",
"pdf_hash": "8f7215f3457e618cc1ef9edd2bd2db86e84ed8cc",
"s2fieldsofstudy": [
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Medicine"
]
} |
Divergences in Macrophage Activation Markers Soluble CD163 and Mannose Receptor in Patients With Non-cirrhotic and Cirrhotic Portal Hypertension
Introduction Macrophages are involved in the development and progression of chronic liver disease and portal hypertension. The macrophage activation markers soluble (s)CD163 and soluble mannose receptor (sMR) are associated with portal hypertension in patients with liver cirrhosis but have never been investigated in patients with non-cirrhotic portal hypertension. We hypothesized higher levels in cirrhotic patients with portal hypertension than in patients with non-cirrhotic portal hypertension. We investigated sCD163 and sMR levels in patients with portal hypertension due to idiopathic portal hypertension (IPH) and portal vein thrombosis (PVT) in patients with and without cirrhosis. Methods We studied plasma sCD163 and sMR levels in patients with IPH (n = 26), non-cirrhotic PVT (n = 20), patients with cirrhosis without PVT (n = 31) and with PVT (n = 17), and healthy controls (n = 15). Results Median sCD163 concentration was 1.51 (95% CI: 1.24-1.83) mg/L in healthy controls, 1.96 (95% CI: 1.49-2.56) mg/L in patients with non-cirrhotic PVT and 2.16 (95% CI: 1.75-2.66) mg/L in patients with IPH. There was no difference between non-cirrhotic PVT patients and healthy controls, whereas IPH patients had significantly higher levels than controls (P < 0.05). Median sCD163 was significantly higher in the cirrhotic groups than in the other groups, at 6.31 (95% CI: 5.16-7.73) mg/L in cirrhotic patients without PVT and 5.19 (95% CI: 4.18-6.46) mg/L with PVT (P < 0.01 for all). Similar differences were observed for sMR. Conclusion sCD163 and sMR levels are elevated in patients with IPH and patients with cirrhosis, but normal in patients with non-cirrhotic PVT. This suggests that hepatic macrophage activation is driven more by the underlying liver disease with cirrhosis than by portal hypertension.
INTRODUCTION
Liver macrophages play a significant role in chronic liver disease development and progression, and they have also been suggested to play a role in portal hypertension (Steib, 2011). Macrophages may be activated by the specific liver disease (e.g., virus, alcohol, steatosis, and drugs), where damage-associated molecular patterns (DAMPs) activate macrophages, accompanied by inflammation, fibrosis, and finally cirrhosis. Furthermore, patients with liver cirrhosis and portal hypertension have intestinal edema and a leaky gut wall, resulting in translocation of endotoxins and gut bacteria, i.e., pathogen-associated molecular patterns (PAMPs), which stimulate gastrointestinal and liver macrophages (Wiest and Garcia-Tsao, 2005; Wiest et al., 2014; Seitz et al., 2018) with secretion of inflammatory and vasoactive cytokines (Steib et al., 2007, 2010a). Similar mechanisms of macrophage activation, especially through PAMPs, may be involved in patients with non-cirrhotic portal hypertension, although most often without underlying liver disease. Non-cirrhotic portal hypertension is mainly caused by vascular disorders, especially portal vein thrombosis (PVT). However, a number of other conditions are associated with non-cirrhotic portal hypertension (Wanless, 1990; Strauss and Valla, 2014; Hernandez-Gea et al., 2018), and in some patients a specific cause for the portal hypertension cannot be identified; these patients are classified as having idiopathic portal hypertension (IPH), currently also known as porto-sinusoidal vascular liver disease (Schouten et al., 2012b; Hernandez-Gea et al., 2018). Patients with non-cirrhotic portal hypertension may also display macrophage activation due to portal hypertension and PAMPs; however, it is unknown how macrophage activation differs between patients with non-cirrhotic portal hypertension (e.g., PVT and IPH) and patients with cirrhosis with and without PVT. Divergences may partly explain differences in disease severity and prognosis in patients with non-cirrhotic portal hypertension compared to patients with liver cirrhosis, who may develop acute decompensation with risk of progression toward acute-on-chronic liver failure.
As recently reviewed, the macrophage activation markers soluble (s)CD163 and soluble mannose receptor (sMR) are associated with chronic liver disease severity (Child-Pugh and MELD scores) and portal hypertension (Møller et al., 2016). However, sCD163 and sMR have never been studied in the setting of non-cirrhotic portal hypertension. We therefore aimed to evaluate the role of macrophage activation by sCD163 and sMR in patients with non-cirrhotic portal hypertension (PVT and IPH) and compare this to patients with cirrhotic portal hypertension with and without PVT and to healthy controls. We hypothesized higher levels in patients with cirrhosis and portal hypertension than in patients with non-cirrhotic portal hypertension, which would suggest that the underlying liver disease is the main driver of macrophage activation.

Abbreviations: IPH, idiopathic portal hypertension; PVT, portal vein thrombosis; PAMPs, pathogen associated molecular patterns; TLR, toll like receptors; sCD163, soluble CD163; sMR, soluble mannose receptor; HVPG, hepatic venous pressure gradient.
MATERIALS AND METHODS

Patients and Healthy Controls
Ninety-four patients were included in the study from 2003 to 2015 from the outpatient clinic in Barcelona. The patients were divided into four groups according to their underlying disease: 26 patients had IPH, 20 patients had non-cirrhotic PVT, 31 patients had cirrhosis without PVT and 17 patients had cirrhosis with PVT. All patients with IPH received this diagnosis after other etiologies for portal hypertension had been excluded by CT examination, biochemical screening and liver biopsy. In general, the histological changes were subtle and diverse. The most pronounced histological feature, present in 48% of the IPH patients, was portal tract vascular abnormalities (including vascular multiplication, periportal vascular channels and aberrant vessels). Hepatic sinusoidal dilation, architectural disturbance (irregular distribution of central veins and portal tracts) and regenerative nodules were present in 38%, 21%, and 21%, respectively. Two patients with IPH had histological features of obliterative portal venopathy. In 21% of IPH patients there was mild perisinusoidal fibrosis. None of the patients with IPH showed histological features of inflammation in the liver biopsy. Fifteen healthy human subjects were included at Hvidovre Hospital in Denmark. Liver cirrhosis was diagnosed in patients with underlying chronic liver disease (e.g., alcohol, HCV, and HBV) combined with imaging showing a nodular surface and collaterals, together with clinical complications of portal hypertension (e.g., ascites, varices, and hepatic encephalopathy).
All patients and healthy controls underwent physical examination, measurement of additional biochemical parameters, and hemodynamic investigation with liver vein catheterization for measurement of the hepatic venous pressure gradient (HVPG) (Table 1). HVPG was determined as the difference between the wedged and the free hepatic venous pressure. No patients or healthy subjects had fever or other signs of infection. Blood samples were collected from a peripheral vein for measurement of sCD163 and sMR and frozen at −80 °C until analysis. Informed consent was obtained from all participants according to the Helsinki Declaration.

Table 1 notes: Values are reported as median with 25 and 75% interquartile range in brackets unless otherwise specified. * Significantly different from patients with IPH (P < 0.05). # Significantly different from patients with non-cirrhotic PVT (P < 0.05). € Significantly different from patients with cirrhosis without PVT (P < 0.05). & Significantly different from patients with cirrhosis with PVT (P < 0.05).
Soluble CD163 and Soluble MR
Levels of sCD163 and sMR in plasma samples were measured by an in-house sandwich enzyme-linked immunosorbent assay as previously described (Moller et al., 2002; Rodgaard-Hansen et al., 2014).
Statistics
Normality was assessed visually using quantile-quantile plots and histograms. The values of the biomarkers were not normally distributed; to obtain approximate normality, the data were log-transformed. Accordingly, data are presented as medians with 95% CIs of the median. Multiple linear regression was used to test whether the values of sCD163 and sMR differed between the groups. Age was included as a control variable in the regressions for both sCD163 and sMR, as sCD163 is known to increase with age (Moller, 2012). Model assumptions were checked and fulfilled. For all other parameters, the Mann-Whitney U test was used to test for differences between groups. A p-value < 0.05 was considered to indicate statistical significance. Statistical analyses were performed using STATA software, release 11 (StataCorp, College Station, TX, United States).
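As a minimal illustration of this analysis pipeline, a re-expression in Python is sketched below (the authors used STATA, release 11; the file and column names here are hypothetical):

```python
# Minimal sketch of the group comparison described above, assuming a table
# with hypothetical columns "sCD163", "age", "group" and "HVPG".
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import mannwhitneyu

df = pd.read_csv("biomarkers.csv")         # hypothetical input file
df["log_sCD163"] = np.log(df["sCD163"])    # log-transform to approximate normality

# Multiple linear regression of the log-transformed marker on patient group,
# controlling for age; group dummy variables are generated by the formula.
model = smf.ols("log_sCD163 ~ C(group) + age", data=df).fit()
print(model.summary())

# Mann-Whitney U test for other, non-transformed parameters between two groups.
iph = df.loc[df["group"] == "IPH", "HVPG"]
pvt = df.loc[df["group"] == "non-cirrhotic PVT", "HVPG"]
print(mannwhitneyu(iph, pvt))
```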
RESULTS

Patient Groups
Gender was evenly distributed among the healthy controls and the patients with cirrhosis, but in the groups with IPH and non-cirrhotic PVT there was a preponderance of males (75%). Although HVPG was significantly higher in the patients with IPH than in healthy controls and patients with non-cirrhotic PVT, there was no difference in the clinical signs of portal hypertension, with no significant difference in the degree of varices or the degree of ascites between the groups (P = 0.26 and P = 0.73, Table 1). HVPG was significantly higher in the patient groups with cirrhosis than in the non-cirrhotic patients (P < 0.05), and they also had a significantly higher degree of ascites (P < 0.05 for all), whereas there was no significant difference in the grade of varices (P > 0.05 for all, Table 1). Both HVPG and the degree of ascites were significantly higher in the cirrhotic group with PVT than in the cirrhotic group without PVT (P = 0.02 and P = 0.006, Table 1).
Soluble CD163
In the healthy controls and in patients with non-cirrhotic PVT, in whom the liver can be assumed to be normal or near normal, the plasma concentration of sCD163 was low and within the normal range (0.69-3.86 mg/L). The median plasma concentration was 1.51 mg/L (1.24-1.83) in healthy controls and 1.96 mg/L (1.49-2.56) in patients with non-cirrhotic PVT (Figure 1A), with no significant difference between the two groups (P = 0.09).
In the patients with IPH, sCD163 was slightly but significantly elevated [2.16 mg/L (1.75-2.66)] compared to the healthy controls (P = 0.007) (Figure 1A). The median plasma sCD163 concentration in patients with IPH was slightly but non-significantly higher than in patients with non-cirrhotic PVT (P = 0.35).
In patients with cirrhosis, sCD163 levels were high, with a median of 6.31 mg/L (5.16-7.73) in the patients without PVT and 5.19 mg/L (4.18-6.46) in the patients with PVT (Figure 1A). The two patient groups with cirrhosis had significantly elevated sCD163 compared to healthy controls, patients with IPH and patients with non-cirrhotic PVT (all P < 0.001). There was no difference in sCD163 concentration between the cirrhotic patients with and without PVT (P = 0.23).
Soluble MR
Soluble mannose receptor was not measured in healthy controls, but reference values have been established, with a mean of 0.28 mg/L and a 95% reference interval of 0.10-0.43 mg/L (Rodgaard-Hansen et al., 2014). The median concentration of sMR was 0.27 mg/L (0.22-0.33) in patients with IPH and 0.24 mg/L (0.19-0.31) in the patients with non-cirrhotic PVT (Figure 1B). In both cirrhotic patient groups, median sMR was approximately twice the concentration in the patients without cirrhosis, at 0.58 mg/L (0.49-0.68) in cirrhotic patients without PVT and 0.51 mg/L (0.39-0.66) in cirrhotic patients with PVT (Figure 1B).
Similar to the results for sCD163, the median concentration of sMR was higher but not significantly different in patients with IPH compared to patients with non-cirrhotic PVT (P = 0.46), and there was no difference between the cirrhosis groups (P = 0.41). The concentration of sMR was significantly higher in both patient groups with cirrhosis than in the patients with IPH and non-cirrhotic PVT (P < 0.001 for all cases), see Figure 1B.
Hepatic Venous Pressure Gradient
Median HVPG was 3.0 mmHg in the healthy controls, 6.8 mmHg in patients with IPH, and 4.0 mmHg in patients with non-cirrhotic PVT (Table 1). In the patients with cirrhosis, median HVPG was 18.0 mmHg in patients without PVT and 21.5 mmHg in patients with PVT (Table 1). There was no correlation between HVPG and sCD163. Soluble MR was correlated with HVPG only in patients with cirrhosis without PVT (Rho = 0.46, P = 0.01).
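As a worked illustration of the definition given in the Methods (the split into wedged and free pressures below is hypothetical, although the resulting gradient matches the cirrhotic median reported above):

$$\mathrm{HVPG} = \mathrm{WHVP} - \mathrm{FHVP}, \qquad \text{e.g. } 28\ \mathrm{mmHg} - 10\ \mathrm{mmHg} = 18\ \mathrm{mmHg},$$

where WHVP and FHVP denote the wedged and free hepatic venous pressures, respectively.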
DISCUSSION
To our knowledge, this is the first study to investigate macrophage activation markers in patients with IPH and non-cirrhotic PVT. The main finding of the present study was significantly elevated sCD163 levels in patients with underlying liver disease and cirrhosis, with and without PVT, with lower levels in IPH and non-cirrhotic PVT. This may suggest that the primary driver of hepatic macrophage activation and elevated sCD163 and sMR levels is the underlying liver disease with cirrhosis rather than portal hypertension.
A major strength of the present study is the inclusion of well-characterized patients and healthy controls who all had invasive measurement of HVPG. The study limitations include the relatively small number of patients within each category and the cross-sectional design, which does not permit determination of changes in macrophage activation marker levels with prognosis, or with progression or regression of inflammation and fibrosis. However, this may not affect the rather clear distinction between healthy controls and patients with IPH or cirrhosis. Additionally, while the included patients had stable disease, it is not possible to control for subclinical events, such as minor infections, which could affect macrophage activation; however, none of the patients showed any signs of infection at inclusion or during HVPG measurement.

FIGURE 1 | Plasma concentration and median sCD163 (A) and sMR (B) levels in patients with idiopathic portal hypertension (IPH), non-cirrhotic portal vein thrombosis, cirrhosis without portal vein thrombosis, cirrhosis with portal vein thrombosis and healthy controls. sCD163 was significantly elevated in IPH and the two cirrhosis groups when compared to healthy controls (P < 0.05). Both sCD163 and sMR were significantly higher in the cirrhosis groups compared to the non-cirrhosis groups (P < 0.01). There was no difference between the two cirrhosis groups. *P < 0.05 compared to healthy controls.
CD163 is a monocyte/macrophage lineage-specific scavenger receptor for the hemoglobin-haptoglobin complex (Kristiansen et al., 2001). The soluble form is present in plasma under normal circumstances but substantially increased during macrophage activation (Moller, 2012). Over the past decade, studies have established macrophage activation as an important factor in liver disease development, progression and prognosis. Macrophage activation, as measured by sCD163, is associated with liver fibrosis and cirrhosis, liver disease severity (Child-Pugh and MELD scores), and portal hypertension (Holland-Fischer et al., 2011; Gronbaek et al., 2012; Rode et al., 2013; Kazankov et al., 2014, 2015, 2016; Grønbaek et al., 2020). Furthermore, sCD163 levels are associated with prognosis, predict the risk of variceal bleeding (Rode et al., 2013; Waidmann et al., 2013) and correlate with disease severity and treatment response in patients with non-alcoholic fatty liver disease (Kazankov et al., 2015), alcoholic hepatitis, hepatitis B and C virus infection (Dultz et al., 2015; Laursen et al., 2018, 2019) and autoimmune liver diseases (Gronbaek et al., 2016a; Bossen et al., 2020, 2021). The magnitude of macrophage activation and the corresponding elevation of plasma markers depend on the underlying pathogenesis, being more pronounced in conditions with a high hepatic inflammatory load and fibrosis, such as acute liver failure and advanced cirrhosis (Hiraoka et al., 2005; Moller et al., 2007; Gronbaek et al., 2012). In acute liver failure, macrophage activation is dynamic and resolves with disease regression, in contrast to the stable increase in activation seen in advanced cirrhosis (Hiraoka et al., 2005).
The mannose receptor binds various ligands of microbial and endogenous origin and is involved in antigen presentation and macrophage activation (Martinez-Pomares, 2012). The receptor is expressed on selected inflammatory cells, including subsets of macrophages, dendritic cells and endothelial cells, and is shed during inflammation and subsequently measurable as soluble MR (Martinez-Pomares, 2012; Rodgaard-Hansen et al., 2014). Consequently, sMR is not as specific a marker of macrophage activation as sCD163. However, elevated sMR levels have previously been described in patients with liver disease and have been shown to be associated with disease severity, portal hypertension and mortality (Gronbaek et al., 2016b; Laursen et al., 2017; Sandahl et al., 2017; Grønbaek et al., 2021).
Hepatic venous pressure gradient measures the post-sinusoidal pressure gradient. Consequently, patients without cirrhosis who have elevated pre-sinusoidal pressure and a normal pressure gradient across the liver, e.g., patients with non-cirrhotic PVT or IPH, can still suffer from significant splenic and portal hypertension without it being detectable by standard HVPG measurement (Keiding et al., 2004; Sørensen et al., 2018). The patients with IPH had a significantly higher HVPG compared to healthy controls (Table 1), but no association with macrophage activation as measured by sCD163 and sMR. The same lack of association was seen for cirrhotic patients, except for cirrhotic patients without PVT, in whom HVPG correlated with sMR. This contrasts with previous findings in patients with liver cirrhosis (Holland-Fischer et al., 2011; Gronbaek et al., 2012) and may be due to the small sample size.
As previously observed, macrophage activation was substantially higher in patients with underlying chronic liver disease and cirrhosis compared to controls, and, as a novel finding, this also applied to patients with IPH, where the structural changes are nevertheless less severe. This may suggest that macrophage activation is a pronounced feature of the underlying liver disease per se and not related to vascular or hemodynamic changes. In the cirrhotic patients, there was a tendency toward less macrophage activation in the group with PVT, who had a significantly higher HVPG and Child-Pugh score; this likewise supports the notion that vascular changes are less important for macrophage activation. Additionally, the finding of comparable patterns for both sCD163 and sMR corroborates the observed associations. Furthermore, we suggest that the central mechanisms behind macrophage activation in cirrhosis and IPH are not only driven by translocation of gut-derived PAMPs but represent a constitutive inflammatory upregulation in the liver disease, as also demonstrated in TIPS-treated patients (Holland-Fischer et al., 2011). In addition to the constitutive macrophage activation, this may be further enhanced by a general systemic inflammatory state, as seen in cirrhosis, leading to immune activation and production of pro-inflammatory cytokines such as tumor necrosis factor and interleukin 8, which are known to be involved in the recruitment of inflammatory cells to the liver (Seitz et al., 2018). In the event of acute exacerbation of inflammation or infection, as seen in e.g. ACLF, we have observed even more pronounced macrophage activation by sCD163 and sMR levels (Gronbaek et al., 2016b).
Patients with IPH generally have a better prognosis than patients with cirrhosis, with a 10-year transplant-free survival of 82% (Siramolpiwat et al., 2014). The current treatment of IPH is restricted to the management of portal hypertension and does not prevent disease progression (Schouten et al., 2012a; Hernandez-Gea et al., 2018). Our study shows that macrophages are activated to some degree in IPH patients, and consequently therapeutic strategies aimed at decreasing macrophage activation might become relevant in the treatment of IPH in the future. However, the association between IPH and macrophage activation needs further investigation.
CONCLUSION
Macrophage activation, as measured by elevated sCD163 and sMR, was observed only in patients with cirrhosis with and without PVT and in IPH patients, and not in patients with non-cirrhotic PVT. This suggests that the main determinant of macrophage activation in chronic inflammatory liver diseases is the underlying liver disease with cirrhosis and not portal hypertension.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Institutional Review Board, Hospital Clinic, Barcelona. Informed consent was obtained from all participants according to the Helsinki Declaration. The patients/participants provided their written informed consent to participate in this study.
AUTHOR CONTRIBUTIONS
NØ: data analysis and drafting of the manuscript. MB, AB, JF, VH-G, FT, MM, SM, and HM: patient collection and data acquisition. JG-P and HG: concept and design, data analysis, and finalising the manuscript. All authors approved the final version of the manuscript.
FUNDING
JG-P received support in part through grants from the Spanish Ministry of Education and Science (SAF-2016-75767-R), Fondo Europeo de Desarrollo Regional (FEDER) (PIE15/00027), the "Commissioner for Universities and Research of the Generalitat de Catalunya" (AGAUR SGR 2017), and the Spanish Health Ministry (National Strategic Plan against Hepatitis C). CIBERehd is funded by the Instituto de Salud Carlos III and the Secretariat for Universities and Research of the Department of Economy and Knowledge (SGR17_00517). HG received funding from the NOVO Nordisk Foundation and "Savvaerksejer Jeppe Juhl og hustru Ovita Juhls mindelegat."
What workers can tell us about post-COVID workability
Abstract

Background: The apparent functional impact of post-COVID-19 syndrome has workability implications for large segments of the working-age population.

Aims: To understand obstacles and enablers around self-reported workability of workers following COVID-19, to better guide sustainable workplace accommodations.

Methods: An exploratory online survey comprising quantitative and qualitative questions was disseminated via social media and industry networks between December 2020 and February 2021, yielding usable responses from 145 workers. Qualitative data were subjected to content analysis.

Results: Over half of the sample (64%) were from the health, social care, and education sectors. Just under 15% had returned to work, and 53% and 50% reported their physical and psychological workability, respectively, as moderate at best. Leading workability obstacles were multi-level, comprising fatigue, the interaction between symptoms and job, lack of control over job pressures, inappropriate sickness absence management policies, and lack of COVID-aware organizational cultures. Self-management support, modified work, flexible co-developed graded return-to-work planning, and improved line management competency were advocated as key enablers.

Conclusions: Assuming appropriate medical management of any pathophysiological complications of COVID-19, maintaining or regaining post-COVID workability might reasonably follow a typical biopsychosocial framework enhanced to cater to the fluctuating nature of the symptoms. This should entail flexible, regularly reviewed and longer-term return-to-work planning addressing multi-level workability obstacles, co-developed between workers and line managers, with support from human resources, occupational health professionals (OHPs), and a COVID-aware organizational culture.
Introduction
Just over 1 million people in the UK self-reported 'long-COVID' symptoms at the start of September 2021 [1]. Prevalence was highest amongst 35- to 69-year-olds, corresponding to a broad spectrum of the working-age population. Long-COVID refers to symptoms that continue or develop after acute COVID-19, including both ongoing symptomatic COVID-19 (from 4 to 12 weeks) and post-COVID-19 syndrome (symptoms beyond 12 weeks) [2,3]. A survey of 3300 workers self-reporting as having the syndrome found that 90% experienced fatigue [4]. Other leading symptoms included diminished cognitive capacity ('brain-fog'), shortness of breath, pain and muscle ache. Symptom clusters can be disproportionate to those experienced in the acute infection phase [2]. The enduring, fluctuating multi-system nature of symptoms has implications for workability (WA) [5] and vocational rehabilitation (VR). For example, of 138 health care workers reporting symptoms, 32% described themselves as struggling to cope up to 4 months post-infection [6]. To address this, the NHS introduced several long-COVID clinics for multidisciplinary assessment and rehabilitation [7]. However, service access does not appear universal [8]: there are still relatively few clinics and waiting lists are reported to be long [9]. It seems pertinent to ask whether the limited resources could be supported by existing VR approaches. Current good practice in VR generally recognizes that 'good' work is good for health [10,12,13] and requires a person-centred approach, line manager input and early intervention [11]. Recently developed return-to-work (RTW) guidance on workers recovering from COVID-19, for use by health care professionals [14] and workers [15], seems to follow this approach, advocating: regular contact with affected workers; assessment and regular review of work-relevant health needs; joint identification by manager and employee of work-resumption obstacles; and reasonable temporary workplace accommodations for overcoming obstacles [10,16,17], documented within risk assessments, fit notes or RTW plans. Reflecting the biopsychosocial perspective, obstacles can be health/symptom-related, psychological, occupational, or social/contextual [17], while workplace accommodations/adjustments can encompass phased return, working pattern, workload or job responsibility/task adjustments [17,18].
The relative recency of long-COVID means that such guidance drew on established VR principles rather than direct evidence derived from the work-relevant experiences of workers recovering from COVID-19. To address this gap, this paper reports findings from an online survey to quantitatively establish the WA status of workers recovering from COVID-19, and to qualitatively explore their work-relevant recovery experiences, their views on workplace accommodations necessary for sustained RTW/WA, and the benefits for employers in making accommodations. The findings could clarify whether current RTW guidance for OHPs is sufficiently fit for purpose.
Methods
An exploratory online cross-sectional survey comprising a mixture of quantitative and qualitative open-ended items was developed using Qualtrics XM [19], piloted for usability and disseminated online between mid-December 2020 and February 2021.
Key learning points
What is already known about this subject?
• The prevalence of post-COVID-19 symptoms is greatest amongst the working-age population relative to other age cohorts.
• Their impact on physical, cognitive and psychological functioning implies detrimental effects on sustained workability.
• Vocational rehabilitation approaches based on the biopsychosocial framework can address multi-level obstacles to working with long-term health conditions.

What does this study add?
• The self-reported insights of workers recovering from COVID-19 have identified commonly experienced post-COVID workability obstacles. Their fit within the biopsychosocial framework implies cross-organization and sector applicability.
• Workers' perspectives on workplace accommodations for overcoming post-COVID workability obstacles are unpacked, for use by practitioners, employers and employees, including variations from typical vocational rehabilitation practices.
• Workers' perspectives include perceived benefits that employers could gain from accommodating workers' post-COVID recovery.

What impact this may have on practice and policy?
• Variations from typical vocational rehabilitation appear to be a matter of emphasis. Longer-term, flexible, co-developed and regularly reviewed RTW plans appear to be particularly necessary for accommodating the unpredictable nature of post-COVID recovery and other conditions sharing similar unpredictable symptom characteristics, including chronic fatigue syndrome.
• A biopsychosocial approach provides an appropriate framework for identifying and overcoming work-relevant post-COVID workability obstacles for use by employers, employees and OHPs.
• Case studies of successful vocational rehabilitation for workers experiencing post-COVID-19 syndrome could help shape a narrative that early sustained return-to-work is possible where suitable workplace accommodations are agreed.
Ethical approval for the survey was gained from the ethics committee of the University of Derby's College of Health, Psychology and Social Care. Participants were recruited via weekly social media posts to COVID-19 and long-COVID support groups, and opportunistically via research team contacts with UK-based online construction industry, OH, academic, professional, carer and organizational networks. To allow for limited COVID-19 testing towards the outset of the pandemic, UK workers who had either tested positive for COVID-19 or suspected they had contracted it were considered eligible. A total of 145 responses were received. Data were exported to IBM SPSS Statistics for Windows, version 26, for quantitative analysis (IBM, 2021). Survey items encompassed demographics, health status, RTW/WA status and views of RTW obstacles, enablers and benefits to employers for enabling RTW (see Table 1).
For health status (see Table 2), the presence of pre-existing mental or physical medically diagnosed health conditions was assessed by asking participants to select from a list of generic condition labels [20]. The prevalence of post-viral symptoms was determined by asking participants to rate what proportion of a list of post-viral symptoms [21] they had experienced; the list was refined through team consensus. WA was assessed using the two single-item Workability Assessment Inventory 2 (WAI2) scales, selected for their standardization on generic working populations, construct validity and brevity [5]. Views of anticipated or actual RTW obstacles and enablers were sought via open-ended items. Views about the benefits to employers of accommodating COVID-19 were sought to help create an RTW business case.
A content analysis [22] was conducted to identify the frequency of meaningful 'categories' in the qualitative RTW data from the open-ended items, as an indication of their relative priority or importance to workers. Categories refer to groups of words with similar meanings or connotations [23]. The procedure modelled Bowling's [24].
Two researchers independently re-read the open-ended responses, discussed emerging categories, and agreed on labels. For RTW obstacles and enablers, categories were separated into individual (physical and psychological), job/work support, and organizational and external groupings, as per the biopsychosocial VR model [11]. For each open-ended question, online coding templates were created documenting categories, labels, example quotes, and the frequencies with which they arose. The first researcher then coded all responses for each item, counting category frequency. The second researcher then conducted inter-rater checks on all coding. Disagreements were resolved by the first researcher checking the second researcher's coding decisions, accepting them or discussing and resolving areas of disagreement. Categories receiving more than 10 counts were included in this analysis.
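A minimal sketch of the frequency-counting step is given below in Python; the category labels and codings are hypothetical, and percent agreement is used only as an illustrative stand-in for the qualitative inter-rater checks described above:

```python
# Minimal sketch of the category frequency count from double-coded responses.
from collections import Counter

coder1 = ["fatigue", "job pressures", "fatigue", "line manager", "fatigue"]
coder2 = ["fatigue", "job pressures", "fatigue", "fatigue", "fatigue"]

# Inter-rater check: proportion of responses coded identically by both coders.
agreement = sum(a == b for a, b in zip(coder1, coder2)) / len(coder1)
print(f"Percent agreement: {agreement:.0%}")

# Frequency of each category after disagreements have been resolved;
# only categories with more than 10 counts entered the reported analysis.
resolved = ["fatigue", "job pressures", "fatigue", "fatigue", "fatigue"]
counts = Counter(resolved)
retained = {cat: n for cat, n in counts.items() if n > 10}
print(counts)
print(retained)
```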
Results
Of the 145 usable responses, 88% were female participants, 70% self-reported as key workers and 70% reported occupying predominantly non-managerial roles. Ages ranged from 25 to 65 years. The most frequently represented sectors comprised health and social care (50%), educational (15%) and professional, scientific and technical (10%). Of those that indicated their role, 25 (17%) were nurses, 22 (15%) were medics, 14 (10%) were from the allied health professions, 17 (12%) were teachers, and 9 (6%) were social workers or support workers. For health status, over half (52%) of participants reported pre-existing mental or physical health conditions. Most (see Table 2) reported having contracted COVID-19 more than 6 months previously with their symptoms continuing for longer than 6 months. Nearly 35% said their initial symptoms were mild to moderate, the remainder reporting severe symptoms (at home, 44%, or hospitalized, 12%). Most (91%) self-reported having one or more of the listed post-viral symptoms.
For work status, just under 15% had fully returned to work, and 16% had partially returned. From the open-ended responses, 9 (6%) participants depicted the RTW process as 'straightforward'. Twenty-nine (20%) portrayed it as 'difficult', and a further 17 (12%) reported 'multiple attempts', with 10 (7%) claiming that RTW specifically triggered relapse: 'I have made 3 attempts to come back to work and relapsed every time'.
Just 8% of the full sample rated their physical WA as good or very good, and 10% described their mental WA as good or very good. As reported elsewhere [25], significant relationships were found between WA and COVID-19 duration.
RTW obstacles, enablers, and benefits to employers derived from the qualitative data are listed in Tables 3-5, with category frequencies and supporting quotes indicated. Category labels from these tables are italicized within the following analysis.
Fatigue and poor concentration (see Table 3) represented the symptoms most frequently portrayed as obstacles at the individual level. The relapsing nature of symptoms was also widely described as hampering 'return-to-work planning'. Psychological obstacles comprised concerns over maintaining social distancing (avoiding reinfection), safe practice (with implications for personal as well as patient safety in caring roles), and professional identity.
At the job level, an interaction between symptoms and physical demands, in terms of physical load, including 'heavy lifting', or duration of physical activity, including 'being on your feet all day', posed key obstacles. Similarly, the interaction between symptoms and cognitive demands was described as a leading obstacle. This applied to having to 'concentrate' and 'word find', and to meta-cognitive tasks including having to 'multitask', engage in 'strategic thinking', 'chair meetings', 'teach' or hold sustained 'conversations'. Inadequate control over job pressures due to 'meeting deadlines' featured strongly amongst the obstacles. Difficulties upholding usual working patterns, especially where 'early starts' and 'long days' were involved, also represented leading obstacles. These job-related obstacles reflect situations where people are struggling with their usual job requirements alongside work-relevant symptoms.
For work support, the leading reported obstacles related to line management, peer/colleague behaviour, occupational health (OH) and human resources (HR). Line manager behaviour-related obstacles included: inadequate reactions, including 'not conducting risk assessments'; failing to 'believe what you are saying'; inadequate understanding of 'what long-COVID meant'; failing to implement 'OH recommendations'; and placing pressure on worker recovery on the premise that an employee should 'hit the ground running on return'. Other obstacles included being 'inaccessible' or threatening job loss: 'you risk losing your job if you phone in sick'. Covering for off-sick colleagues for protracted periods was described as contributing to 'uncertainty', 'burnout' and 'resentment' amongst peers due to being in 'harm's way' more often. Negative attributes of OH and HR support comprised being 'physically unavailable' in the case of OH, and being 'slow' and constraining sick pay in the case of HR, potentially compounded by unawareness of national absence guidance for COVID-19: 'My line manager and HR weren't aware of the national guidance regarding COVID absence.' At the organizational level, implementation of the sickness absence policy was portrayed as 'quite rigid', providing only 'limited sick-pay within the first few years', catering for short-term as opposed to 'long-term illness' and leading to the omission of COVID-19 from absence reporting systems: 'if Covid is not named on the fit note it doesn't trigger absence management'.
Fears over job security also emerged as an organisation-wide obstacle, potentially compounding 'the stress of recovery'. Apparently widespread attitudes 'that you must be healthy to be successful', and that being back at work means 'you are back fully or not at all', were implied to make it 'harder to recover properly'. Inadequate knowledge of the nature of recovery was frequently cited (see Table 3), which together with such shared attitudes can be regarded as reflective of organisational culture.
A commonly encountered external obstacle concerned access to suitable health care, attributed to inadequate understanding 'from professionals such as GPs of long-COVID', difficulties in obtaining 'GP appointments', and 'no access to long-COVID clinics'. Transport obstacles encompassed the effects of a 'commute upon fatigue', using 'public transport', and 'walking from carparks to a place of work'.
Leading RTW enablers are listed in Table 4. Self-management was widely supported, yet aspects of it, including pacing, were considered impractical for some jobs: 'you can't take a break when needed, and you can't even sit down most of the time.' Flexible working was also widely considered enabling, encompassing 'work from home', 'rest facilities at work' and 'flexible attitudes to working time'. While graded return-to-work featured strongly, participants who had undergone phased return cautioned that it could take longer than '4 weeks' and should omit any large steps:
I was expected to go from a few weeks of reduced hours … to full time and full duties. This was not graded return, and still being ill, I found this impossible to manage.
Changes to jobs and/or tasks were also extensively supported. In terms of their duration, 46 (32%) participants indicated that adjustments might need to be long-term: 'It takes as long as it takes, which may be permanent if we are permanently disabled.'
For support, various suggestions for improving line manager competencies were made. 'Regular catch-ups' and 'face-to-face meetings' with a 'single point of contact' were suggested as enabling. Close communication between HR and line managers 'according to a shared return-to-work plan' was also proposed as enabling joined-up support. Managing peer expectations, improving OH and health care access and utility, creating more COVID-centric sickness absence policies, and targeting organisation-wide awareness levels and attitudes with respect to COVID-19 were widely supported, so that 'illness is not viewed as an inconvenience or stigma'. Working from home was credited by 31 (21%) participants with supporting their WA by providing 'control' over 'when to work and what to focus on'. Table 5 details the themes participants identified as benefits to employers accruing from making workplace accommodations, along with supporting quotes. In order of frequency, these were the ability to retain specialist skills, fostering commitment, enabling sustained return-to-work, and productivity.
Discussion
This exploratory investigation of the implications of COVID-19 on WA has identified key obstacles to resuming former WA and relevant workplace accommodations.
Findings are based on the actual or anticipated experiences of workers who believed they were recovering from SARS-CoV-2 infection, the majority of whom appeared to have post-COVID-19 syndrome to varying degrees. A small minority had fully returned to work. The majority self-reported their physical and psychological WA as moderate at best.
The obstacles to RTW most frequently highlighted (>25 participants) spanned multiple domains, comprising: fatigue; the interaction between symptoms and physical job demands; inadequate control over job pressures; inappropriate sickness absence management policies; and a lack of COVID-aware organizational cultures. Highlighting the most commonly described obstacles should not obscure the significance of others, given their potential interactions, including between the physical and cognitive demands of a role.
The most commonly described RTW enablers (<25 participants) comprised: self-management of symptoms alongside workplace demands; graded RTW planning where viable; modified job tasks or responsibilities; and improved line-management competency. Since these are participant-generated, they are not necessarily exhaustive of all potential accommodations.
A summary of data-derived potential workplace accommodations that employers can make for workers with post-COVID-19 syndrome is provided in Table 6. In return for making these accommodations, the findings suggest that employers will benefit through the retention of specialist skills, worker commitment, productivity, and sustained WA. While these benefits help advance a business case, it is recognized that the sample is skewed toward essential health, social care and education professionals. Consequently, further research is necessary to generalize the findings to managing the RTW of more diverse occupational groups, especially those classified as key workers, including delivery drivers and care workers, as a way of managing skill shortages within the wider workforce. The industry skew could partly explain the sample's high proportion of female workers, mirroring that within health and social care in the UK and more widely [27]. A higher proportion of female participants also reflects the predominance of women reporting long-COVID [28].
Although the sample size is small due to the study's exploratory nature, the fit of findings to the biopsychosocial rehabilitation framework implies transferability across organizations, industries and public/private sectors [17]. Furthermore, while this study has unpacked challenges that workers experiencing long-COVID have encountered on RTW, there was evidence that some workers found the process straightforward. Accumulating authentic case studies supporting this narrative could strengthen the case for post-COVID-19 WA.
Moreover, the findings indicate that regularly updated guidance with a suite of workplace accommodations is necessary to support individual WA/RTW trajectories, covering the varying job contexts in which these operate and the emerging evidence on potential syndromes underpinning long-COVID. Some demarcation may be necessary according to any lasting pathophysiological damage, degree of cognitive dysfunction, psychological trauma, and the use of physical activity, given the continued debate surrounding its use in chronic-fatigue-syndrome rehabilitation and post-viral fatigue [29]. To optimize VR utility, further exploratory research may be warranted to determine whether RTW obstacles vary according to whether SARS-CoV-2 was contracted at or outside work, or according to the type of health care accessed, such as long-COVID clinics or OH services.
In judging whether contemporary VR guidance derived from evidence on chronic health conditions that preceded the COVID-19 pandemic [14,15] is fit for the purpose of accommodating post-COVID-19 syndrome, these findings highlight some nuances that deserve consideration. Firstly, workplace accommodations are usually advocated as temporary [14,17]. The present findings underscore a need for flexible, longer-term and regularly reviewed accommodations to allow for the potentially protracted, unpredictable multi-system nature of post-viral symptoms.
Secondly, an early RTW is recognized as necessary for mitigating long-term sickness absence and disability [17], and calls are made for facilitating working while recovering, on the premise that this should permit a more rapid resumption of usual WA [18]. The present study highlights that initial RTW planning might need to select tasks that have reduced personal or public safety risks or lower cognitive complexity: this could allow cognitive functioning to re-emerge unhampered by the pressures to perform in safety-critical roles at the point of RTW. Tasks requiring meta-cognitive skills or patient/client interaction may need to be deferred until cognitive functioning is sufficient.
Third, realistic personal and workplace expectations about the ability to work have been highlighted as necessary [18,26]. Expectations that a worker recovering from COVID-19 should be fully productive on RTW might need to be countered to prevent unhelpful pressures on the rehabilitation process. The returning worker, their line manager and peers, leadership and OH practitioners may also need to modify beliefs around the need for full fitness and productivity. Persuasive COVID-awareness raising programmes targeting such attitudes could help create more rehabilitation conducive organizational cultures.
Given the reported variation in recovery experiences, these findings imply that supporting workers' autonomy to self-manage job demands alongside symptoms could provide them with the flexibility to meaningfully fulfil at least some job requirements while recovering [26]. The findings also reinforce the view that line managers should play an active role in co-developing RTW plans with workers, with OH and HR providing specialist input as required [18]. Given the individualistic nature of the work-relevance of post-COVID-19 symptoms and their accommodation needs, line managers and workers are best placed to work out the optimal requirements to regain WA, assuming clinical screening where appropriate.
Finally, these findings indicate a long-term and flexible approach to workplace health management as potentially important for allowing the large number of workers apparently struggling after COVID-19 to sustainably regain WA, including those with pre-existing conditions. Other conditions presenting similarly unpredictable symptom patterns and RTW obstacles, such as chronic fatigue syndrome [13,30], should also benefit from such flexibility. A skew to female participants and essential workers could make this approach particularly pertinent to health and social care. Use of the biopsychosocial framework to overcome multi-level obstacles to WA, coupled with support for working-while-recovering wherever reasonable, should afford a more person-centred approach that can contend with the unpredictable characteristics of post-COVID-19 symptoms.
Surface functionalisation of poly-APO-b-polyol ester cross-linked copolymers as core–shell nanoparticles for targeted breast cancer therapy
Polymeric nanoparticles (NPs) are commonly used as nanocarriers for drug delivery, whereby their sizes can be tuned for more efficient delivery of therapeutic active agents with better efficacy. In this work, cross-linked copolymers acting as core–shell NPs were synthesized from acrylated palm olein (APO) and polyol ester via gamma radiation-induced reversible addition-fragmentation chain transfer (RAFT) polymerisation. The particle diameter of the copolymerised poly(APO-b-polyol ester) core–shell NPs was found to be less than 300 nm, with a low molecular weight (MW) of around 24 kDa and a controlled MW distribution with a narrow polydispersity index (PDI) of 1.01. These properties are particularly crucial for the further design of targeted NPs, with inclusion of a peptide for the targeted delivery of paclitaxel. Moreover, characterisation of the synthesised NPs by Fourier transform infrared (FTIR) and nuclear magnetic resonance (NMR) analyses confirmed the presence of biodegradable hydrolysable ester in their chemical structures. Therefore, it can be concluded that the synthesised NPs may potentially contribute to the development of a nano-structured drug delivery system for breast cancer therapy.
Figure 1. (a) The mechanism of reversible addition-fragmentation chain transfer (RAFT) polymerisation under thermal, UV, or ionising radiation in the chain extension of the macro-CTA agent and in the formation of block copolymer 9; (b) the general chemical structure and products of APO; (c) the general chemical structure and products of polyol ester; (d) FTIR of (i) APO and (ii) polyol ester.
…react in an aqueous solution and at room temperature 12. The toxicity of the reaction is estimated to be weak, as the 1-ethyl-3-(3-dimethylaminopropyl)carbodiimide (EDC) is converted to a non-harmful urea derivative in the coupling reaction 13. Targeting ligands include monoclonal antibodies (mAbs), antibody fragments, nucleic acids (aptamers), proteins, peptides, small molecules (i.e. folic acid, galactose, estradiol, and biotin), and others (i.e. vitamins and carbohydrates). These ligands can be paired with NPs to improve the efficiency of delivery 14. Accordingly, peptides are attractive targeting molecules due to their small size, low immunogenicity, good stability, high specificity, low toxicity, fast manufacturing at low cost, ease of nanoparticle conjugation, and elevated success rates in clinical trials 15. Cell-targeting peptides (CPTs), which interact with cells or tissue directly, e.g. arginylglycylaspartic acid (RGD), have thus received consideration from many research groups for their potential use as surface ligands in breast cancer drug delivery 16. The strategy for applying drugs in targeted anticancer therapy is to allow them to accumulate in NPs, whether externally and/or internally.
A previous work reported that micro/nano-particles created from APO by a dispersed-surfactant method and a typical radical polymerisation technique exhibited a broad particle size distribution, which is an undesirable property 17. Therefore, this work attempts to synthesise poly(APO-b-polyol ester) NPs using APO and polyol ester, as shown in Fig. 1b,c, via low-dose gamma radiation-induced RAFT polymerisation and cross-linking processes for the synthesis of potential drug carriers 18,19. The infrared (IR) spectra of the APO and the polyol ester are shown in Fig. 1d. This work also examines the physicochemical properties of the NPs obtained by gravimetric analysis, dynamic light scattering (DLS), gel permeation chromatography (GPC), and ultraviolet-visible (UV-Vis) spectrophotometry, as well as characterisation by FTIR and NMR. It is expected that the gamma radiation-induced RAFT technique can be a successful method for the formulation and synthesis of core–shell NPs. A mean diameter of less than 300 nm and a small particle size distribution should be obtained in order to modify the surface of the NPs with peptide for the specific delivery of paclitaxel in breast cancer therapy.
Results and discussion
Formation of poly(APO-b-polyol ester) nanoparticles. The macro-CTA, namely the macro-APO-RAFT agent, was synthesised via the RAFT polymerisation method using the APO synthesised from palm olein. Figure 2a shows the schematic reactions of macro-APO-RAFT agent formation. The pre-formed polymer, or macro-APO-RAFT agent, was first prepared by exposing the microemulsion system to 500 Gy of gamma irradiation, followed by gamma radiation energy absorption by the water molecules under the pulse radiolysis technique 20,21.
As a result of exposure to gamma irradiation, the initial reactive intermediates formed by radiolysis of the water molecules (H₂O), such as hydroxyl radicals (•OH), hydrogen atoms (H•), hydrated electrons (e⁻(aq)), hydrogen ions (H⁺), hydrogen peroxide (H₂O₂), and hydrogen (H₂), are used in the preparation of the macro-APO-RAFT agent (Fig. 2b). Among these reactive species, the hydroxyl radical is considered responsible for the transfer of reactivity from the water to the polymer 22,23, since many previous studies have attested to its greater effectiveness in radiation-induced polymerisation compared with hydrogen atoms 24. Furthermore, the reaction of the hydroxyl radical with the carbon-carbon double bonds in the chemical structure results in the formation of radical sites on the APO chains for synthesising the macro-APO-RAFT agent in the presence of DBTTC (Fig. 2a).
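The radiolysis step described above can be summarised schematically; the following simplified scheme lists only the species named in the text, with stoichiometry omitted:

$$\mathrm{H_2O} \;\xrightarrow{\;\gamma\;}\; {}^{\bullet}\mathrm{OH},\; \mathrm{H}^{\bullet},\; e^{-}_{\mathrm{aq}},\; \mathrm{H}^{+},\; \mathrm{H_2O_2},\; \mathrm{H_2}$$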
Next, the cross-linked copolymer NPs, poly(APO-b-polyol ester), are produced by adding the polyol ester (i.e. the second monomer) into the macro-APO-RAFT agent system. Under this circumstance, the initiation reaction is similar to the scheme shown in Fig. 2b, where the water molecules undergo radiolysis to reactive radicals. This is followed by reaction of the hydroxyl radical with the carbon-carbon double bond of the polyol ester, forming the polyol ester radical (Fig. 2c) 25. The polyol ester radical then reacts with the macro-APO-RAFT agent to form cross-linked block copolymers with a thiocarbonylthio end-group, namely the poly(APO-b-polyol ester) NPs.
The use of the radiation-induced RAFT polymerisation approach for synthesising the copolymer nanoparticles in the present study can be considered safe, controllable at room temperature, faster, and catalyst-free, compared with the thermal method and/or other C/LRP methods such as NMP, which requires high reaction temperatures, and ATRP, which requires a metal-based catalyst 26. Furthermore, the application of gamma radiation as an initiation source in RAFT polymerisation is promising for the formation of copolymer nanoparticles with a defined molecular weight and narrow molecular weight distribution (see Table 1) 8,10,27. RAFT polymerisation is applied in the manufacturing of multi-block copolymers and complex polymer architectures due to its living character and controllability throughout the polymerisation process. Therefore, the RAFT polymerisation method can be an alternative and promising technique for the development of polymeric nanoparticles.
Fourier transform infrared (FTIR) spectroscopy. The infrared (IR) spectra of the macro-APO-RAFT agent and the poly(APO-b-polyol ester) nanoparticles are shown in Fig. 2d. Referring to the first irradiation process performed in the formation of the macro-APO-RAFT agent, the occurrence of new IR peaks at 2658, 2139, 2084, 1675, 1632, 1603, 1395, 1399, 1241, 988, 950 and 855 cm⁻¹ in the FTIR spectra corresponded to the thio, sulphate, aromatic-alkene, and benzene functional groups (Fig. 2d(i)). These were found to represent the interaction between the APO and the respective DBTTC molecules. Given the absence of these new IR peaks in the original APO spectra (Fig. 1d(i)), they confirmed the appearance of a pre-formed polymer with a thiocarbonylthio end group; in this case, the chemical structure of a macro-APO-RAFT agent (Fig. 2d(i)). FTIR spectroscopy of the second irradiation process, for the formation of the poly(APO-b-polyol ester) nanoparticles, further verified the appearance of an amine (C-N) functional group at wavenumbers of 1000 and 1028 cm⁻¹. The thio (2689 cm⁻¹), isothiocyanate (2140, 2020 and 1250 cm⁻¹), aromatic-alkene (1588, 1562 and 940 cm⁻¹) and benzene (1380 cm⁻¹) functional groups in Fig. 2d(ii) accordingly signify the formation of the poly(APO-b-polyol ester) nanoparticles 19. Supplementary Table S1 gives the details of the IR peak assignments for the macro-APO-RAFT agent and the poly(APO-b-polyol ester) nanoparticles. Assignments of the four major functional groups, thio, sulphate, aromatic-alkene and benzene, were still present after copolymerisation into poly(APO-b-polyol ester) nanoparticles, signifying an effective RAFT reaction in the formation of NPs between the APO and the polyol ester (see Supplementary Table S1).
Nuclear magnetic resonance (NMR). Detailed properties and locations of the proton-NMR shifts (ppm) for the macro-APO-RAFT agent and the poly(APO-b-polyol ester) nanoparticles are shown in Supplementary Table S2. The ¹H-NMR spectroscopy revealed the formation of a macro-APO-RAFT agent from the thio-sulfonate sulphide and benzene proton signals, which occurred at 1.672 ppm, 1.895-1.944 ppm and 3.426-3.529 ppm for the thio-sulfonate sulphide group and at 7.270-7.265 ppm for the benzene group. The spectra of the macro-APO-RAFT agent exhibited in Fig. 2e(i) further verified the formation of a molecular structure between the macro-APO-RAFT agent and the poly(APO-b-polyol ester) nanoparticles in Fig. 2e 28. Meanwhile, the absence of the C-N peak in the macro-APO-RAFT spectra in Fig. 2e(i) distinguishes it from the copolymer. The amine protons (-CH₂-N) and the carbon protons next to the ester linkages (COO-CH₂-), i.e. the presence of molecular polyol ester, occurring at 2.100-2.200 ppm and 3.700-3.950 ppm in Fig. 2e(ii), thus verified the formation of the poly(APO-b-polyol ester) nanoparticles from copolymerisation between the macro-APO-RAFT agent and the polyol ester. The disappearance of certain sulphide peaks at 3.426-3.529 ppm is also indicative of the sulphide group shift 28.
Further confirmation of the copolymerisation process was shown by the appearance and shifting of the benzene peaks at 7.300-7.500 ppm in the ¹H-NMR spectral region, possibly due to their associated chemical structure in the poly(APO-b-polyol ester) nanoparticle system 28.
Particle size. The NPs increased in size throughout the irradiation process until they received sufficient radiation energy to complete the copolymerisation (see Table 1). The optimum dose for the copolymerised NPs was determined from the change in their sizes throughout the irradiation process: the optimum dose is defined as the irradiation dose immediately before the NPs decrease in size (see Table 1). Their hydrodynamic size was less than 200 nm. The NPs produced possess a particle size typical of polymeric carriers for drug delivery, as previously reported in many studies (see Table 1) 2,29.
The zeta potential results revealed that the surface charge of the NPs at the optimum dose of 700 Gy had a highly stable value of 61.97 mV in the colloidal system. The resulting NPs also possessed a positive surface charge, as indicated by the zeta potential measurement, reflecting the interaction of the CTAB molecules surrounding the NPs with the aqueous phase. As reported in several studies, positively charged or cationic polymeric NPs are well suited for cancer therapy applications due to their advantages over the commonly used liposomes in drug delivery systems; for instance, cationic NPs are more stable and offer more protection during cellular trafficking 30.
Therefore, this study revealed that gamma radiation-induced RAFT polymerisation could generate the desired polymeric NPs suitable for drug delivery applications. The copolymerisation of APO and polyol ester was successfully conducted in a colloidal system to obtain spherical core–shell nano-scale particles (Fig. 3a).
Effect of the radiation dose on the molecular weight and gel fraction of nanoparticles. With reference to Table 1, the increase in gel fraction correlated with the copolymerisation reaction of the designed NPs at a hydrodynamic particle diameter of 142.09 nm. Since the gel fraction percentage of the NPs increased with the radiation dose, this confirmed that copolymerisation and increasing crosslinking density drove the growth in particle size.
As most of the synthesised NPs are formed by the crosslinking polymerisation, they are produced by the combination of two processes, namely the intramolecular crosslinking and intermolecular crosslinking 22,31 . The intermolecular crosslinking causes an increase in the average size of the polymer chains (molecular weight) and can contribute to the creation of macroscopic or branched structures. In contrast, the intramolecular crosslinking leads to the formation of additional bonds between the single macromolecule segments or a closed-loop structure with a smaller dimension 31 .
Similarly, Table 1 shows a larger NP diameter at 700 Gy than at 0 Gy, together with an increase in the number-average molecular weight of the NPs obtained at 700 Gy to 23.75 kDa. This indicates that copolymerisation and intermolecular crosslinking dominated the recombination of molecules during the formation of the poly(APO-b-polyol ester) NPs, leading to the increased particle size 9,31. Moreover, RAFT polymerisation often yields nanoparticles with a well-controlled molecular weight distribution and a narrow polydispersity index (PDI) once the radiation dose needed to crosslink the block copolymers into nanoparticle structures is reached, as seen here at low gamma doses (Table 1).
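To make the molecular-weight terminology above concrete, the following is a minimal sketch (with invented chain masses and abundances, not the paper's GPC data) of how the number-average molecular weight Mn, the weight-average molecular weight Mw, and the polydispersity index PDI = Mw/Mn are computed; a narrow distribution gives a PDI close to 1, as reported here.

```python
# Hypothetical polymer chain masses (Da) and their abundances; the paper
# reports Mn = 23.75 kDa and PDI = 1.01 from GPC, so the values below are
# chosen only to produce numbers of a similar magnitude.
chain_masses = [23000, 23500, 23750, 24000, 24500]
counts = [10, 25, 30, 25, 10]

total_chains = sum(counts)
first_moment = sum(n * m for n, m in zip(counts, chain_masses))

mn = first_moment / total_chains                       # number average
mw = sum(n * m * m for n, m in zip(counts, chain_masses)) / first_moment
pdi = mw / mn                                          # polydispersity index

print(f"Mn  = {mn / 1000:.2f} kDa")
print(f"Mw  = {mw / 1000:.2f} kDa")
print(f"PDI = {pdi:.4f}")  # values near 1 indicate a narrow distribution
```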
In vitro degradation. Next, the biodegradation of the poly(APO-b-polyol ester) nanoparticles was studied in an SBF solution at a temperature of 37 °C. The results are displayed in Fig. 3c; the NPs showed substantial weight loss in the SBF solution. The weight loss calculated (using Eq. 1) for the NPs was found to be 70% and 64% at the respective analysis periods of 60 days and 90 days. This result indicated that the nanoparticles gradually decompose in the SBF solution, which was further supported by FTIR spectroscopy of the degraded samples under dry conditions (refer to Fig. 3d). The FTIR spectra showed specific changes in several peak intensities of the NPs' chemical functional groups before and after the degradation process; around seven peak intensity bands of the functional groups decreased after degradation. This decrease in the FTIR spectral peaks indicated that the weight loss was mainly due to the degradation of the hydrolysable ester bonds 19 and of some of the other functional groups, such as the hydroxyl, alkyl, thio and amine groups, in the SBF solution. Several studies have found that NPs in SBF solution are degraded via their ester bonds, resulting in monomer separation, decomposition of polymer chains, and dissolution of degradation products; this mechanism leads to the formation of small water-soluble fragments with carboxyl end-group chains 32 (Fig. 3e). Meanwhile, the TEM micrographs in Fig. 3b show that the poly(APO-b-polyol ester) nanoparticle structure collapsed slightly after the degradation process compared with its well-defined shape beforehand. The fresh NPs were well-dispersed in the SBF solution, with no aggregation occurring before the storage period (Fig. 3a). The TEM images in Fig. 3b further confirm that the degraded NPs deform into complex agglomerated particles after 90 days of degradation, indicating that the poly(APO-b-polyol ester) NPs with a molecular weight of 23.75 kDa retained their degradability. Furthermore, polymeric nanocarriers with molecular weights below 30-50 kDa are often chosen as the backbone for drug delivery systems because such polymers are unlikely to elicit toxic responses and are suitable for renal clearance [33][34][35]. Hence, this property could be an added advantage for the NPs as potential biodegradable nanocarriers if they were to be applied in a drug delivery system.

Surface functionalisation of nanoparticles with peptide. A succinylation method using succinic anhydride was used to transform the hydroxyl groups of the NPs into carboxyl groups in the presence of DMAP and TEA 36. After the succinylation process, the yellowish solid colour of the poly(APO-b-polyol ester) NPs turned dark yellow. The schematic reaction transforming the hydroxyl groups of the nanoparticles into carboxylated nanoparticles is shown in Fig. 4a. FTIR analysis in Fig. 4b shows peaks of hydroxyl (-OH) at 3494 and 1623 cm−1 and of carboxylic acid (-C=O) at 1709 cm−1, indicating carboxylic absorption in the nanoparticle's molecular structure. In addition, strong ester absorptions, i.e. (C=O) at 1739 and 1709 cm−1 and (C-O-C) at 1246, 1170, 1113, 1097 and 1058 cm−1, confirmed the successful functionalisation of carboxylic molecules onto the NPs (Fig. 4b) 37. Supplementary Table S1 details the transformation of the IR functional groups of the neat NPs into those of the carboxylated NPs.
Meanwhile, an NHS-activation process using NHS and EDC was employed to convert the carboxylic groups of the NPs into NHS ester groups (see Fig. 4c). Amine coupling via EDC/NHS chemistry is an established strategy for peptide conjugation 15,[38][39][40][41][42]. From the FTIR spectral analysis shown in Fig. 4b, several new peaks representing the NHS molecule appear in the NHS-activated nanoparticles: an ester group (C=O) appeared at 1789 cm−1, while amide (N-H) and (C=O) bands were found at 3700-3500, 1715-1585 and 879 cm−1 37. Furthermore, Supplementary Table S1 presents a comparison of the IR peaks and functional groups between the carboxylated nanoparticles and the NHS-activated nanoparticles. In the latter, the carboxylic groups were consumed and substituted by NHS molecules in the nanoparticle structure, demonstrating the efficacy of the EDC-mediated reaction in producing an intermediate NHS ester product, the NHS-activated nanoparticles.

The peptide was conjugated to the NPs in the presence of DMAP and TEA to improve the efficacy of their therapeutic delivery to target cells 43. The schematic process for the peptide functionalisation of the NHS-activated nanoparticles is presented in Fig. 5a. The peptide's amine (-NH2) group acts as a nucleophile and attacks the NPs' activated ester (-C=O-O-), a strong electrophile [Fig. 5a(i)]. The TEA then deprotonates the positively-charged peptide cation (or amine), followed by the NHS removal process [see Fig. 5a(ii)]. DMAP is applied as a catalyst in the reaction mixture to increase the coupling efficiency 43. Following this, the peptide bond develops to form the peptide-functionalised poly(APO-b-polyol ester) nanoparticle, whereby subsequent TEA protonation of the oxygen anion of NHS produces the side products [see Fig. 5a(iii)]. From the analysis of the FTIR spectra in Supplementary Table S1, the NHS ester (C=O) absorption peak at 1789 cm−1 disappears upon functionalisation with the peptide, showing that the peptide ligand was successfully attached to the NHS-activated nanoparticles. A broad, strong alcohol absorption occurred at 3491 cm−1 for the NPf formulation compared with the NHS-activated nanoparticles [Fig. 4b(ii, iii)]. In addition, the alkyl, amide and phenol groups appeared at 2848, 1645 and 1370 cm−1, respectively, confirming the presence of the peptide in sample NPf 44.
The disappearance of the ester and nitro peaks from the spectra indicated that the peptide was conjugated to the NHS group of the NPs (Fig. 4c). Furthermore, the ester peaks (-C-O-C-) between 1230 and 988 cm−1 almost disappeared or shifted, which suggested peptide coupling to the nanoparticle surface 45 [Fig. 4b(iii)]. Supplementary Table S1 compares the IR peaks and functional groups of the peptide-functionalised nanoparticles with those of the NHS-activated nanoparticles, for evaluating the effectiveness of the peptide conjugation to the NPs. Figure 5b,c display the chemical structure and the 1H-NMR proton spectrum of the peptide-functionalised nanoparticles, respectively. The presence of peptide compounds in the NPf nanoparticles is confirmed by the new chemical shifts of carboxylic acid (-C=O-OH) at 2.744 ppm, primary amine (-NH2) at 2.854 ppm, secondary amine (-NH) at 2.998 ppm, amine-N-hydroxyl (-NH-CH2-phenol) at 5.136-5.171 ppm, phenol at 6.325 ppm and 6.716-6.728 ppm, and carboxylic amide (-C=O-NH-) at 7.10 ppm and 8.159-8.181 ppm (Fig. 5c, see Supplementary Table S2) 46. Arginine-glycine-aspartate (Arg-Gly-Asp, or RGD) peptides are ligands for the αvβ3 and αvβ5 integrins and can act at cell adhesion sites 47. Hence, the 1H-NMR signals at 8.159-8.181 ppm and 7.10 ppm showed that the NPf nanoparticles were indeed coupled with RGD peptides carrying the amide of Arg 46. All of these peaks were, however, undetected in the neat NPs. These spectral variations in the 1H-NMR are thus consistent with the development of amide bonds coupling the peptide to the NPs [Fig. 5a(iii)]. Based on the NMR and FTIR analyses, the molecular structure of the NPf nanoparticles was identified and is shown in Fig. 5b. The NPf nanoparticles showed a high degree of peptide functionalisation, at 96.13%, and about 99.60% of the NHS molecules were consumed by peptide conjugation to the NHS-activated nanoparticles. These results demonstrate efficient peptide functionalisation of the nanoparticle surface.

Paclitaxel loaded nanoparticles. When loaded with paclitaxel, the mean NPf nanoparticle size increased from 239.33 nm (± 9.21) to 263.08 nm upon the formation of the NPfTX nanoparticles, as expected 48. For the non-functionalised nanoparticles, the size of NPTX increased to 176.99 nm after paclitaxel loading (Table 2). The TEM images show spherical core-shell nanoparticles for both NPfTX and NPTX (Fig. 6a). Supplementary Table S1 indicates the variations in the IR peaks between the NPf, NPfTX and NPTX nanoparticles after the loading of paclitaxel. The existence of hydroxyl and amide (-C-OH and NH-), carbonyl (C=O), and ester (C-O-C) groups was observed at peaks of 3480, 1748, 1100, and 1070 cm−1 in the NPfTX spectrum, respectively. These peaks showed the presence of paclitaxel molecules via hydrogen bonding and van der Waals interactions (Fig. 6b,c). The peak observed at 1645 cm−1 corresponds to the -C=O- band of paclitaxel and indicates the existence of hydrogen bonding (N-H-O-C) between the NPs and paclitaxel (Fig. 6c, see Supplementary Table S1) 49. Meanwhile, the peaks recorded at 1253 cm−1 and 1100-1001 cm−1 relate to the -C-O-C- ester band of paclitaxel, indicating the drug's presence in NPfTX (Fig. 6b(iii), see Supplementary Table S1). The peak at 922 cm−1 corresponds to the -C=C- of the aromatic rings of paclitaxel 50.
Besides, the -C-H peak at 808 cm−1, present before drug entrapment (e.g. in the NPf nanoparticles), disappeared in the NPfTX nanoparticles owing to alkyl-alkyl bonding via van der Waals interactions during the paclitaxel entrapment process (Fig. 6b(i)).
In vitro drug release, drug release kinetics and mechanism. The in vitro drug release study showed that the NPfTX and NPTX nanoparticles exhibited burst release, from 0 to 75% within 4 h for NPfTX and from 0 to 45% within 6 h for NPTX. This was followed by a continuous release of the paclitaxel localised in the nanoparticle matrix, from 75% (i.e. from 6 h for NPfTX) and 45% (i.e. from 8 h for NPTX) up to 100% by 24 h (Fig. 6d). The initial burst release in NPfTX can arise from heterogeneous drug distribution (see Fig. 6d), whereas in NPTX it may occur through pores and cracks associated with changes in particle morphology, particularly those related to biodegradability (see Fig. 6d) 52. In both NPfTX and NPTX, the paclitaxel was fully released into the PBS solution by the end of 24 h; consequently, the percentage of paclitaxel released remained practically constant between 24 and 72 h.

Table 2 shows the yield, drug loading content, drug entrapment efficiency, in vitro drug release kinetic values, and diffusion exponent values for the NPfTX and NPTX samples. The drug loading content and drug entrapment efficiency of the NPs were measured using the paclitaxel calibration curve in DMSO solution (Eq. 6) and calculated using Eqs. (7), (8) and (9). The results revealed that the NPTX nanoparticles had a high nanoparticle yield but a lower paclitaxel loading capacity of 2.9% compared with the NPfTX nanoparticles, which had a better loading capacity of 7.12%. Furthermore, the NPTX nanoparticles, with a hydrodynamic diameter of 176.99 nm, were smaller than the NPfTX nanoparticles, with a hydrodynamic diameter of 263.08 nm. The peptides on the NPfTX nanoparticles may also mediate non-covalent interactions between paclitaxel and the nanoparticle through functional groups such as hydroxyl (-OH), amide (-NH), carbonyl (-C=O) and alkyl (-CH3) (see Fig. 5b). This provides additional loading sites for paclitaxel on the NPfTX peptide in comparison to non-peptide-functionalised nanoparticles such as NPTX 49.

Table 2 also shows that these NPs yield fair linearity for the zero-order, Korsmeyer-Peppas, and Hixson-Crowell kinetics. The best fit, with the highest correlation (R²), was the first-order model for the NPfTX nanoparticles, while the Higuchi model applied to the NPTX nanoparticles. Based on these findings, the drug release kinetics of the NPfTX nanoparticles corresponded to first-order kinetics, in which the release frequently starts with an initial burst of drug, as seen in the early period of paclitaxel release in Fig. 6d (NPfTX) 53. For the NPTX nanoparticles (Fig. 6d), the paclitaxel release pattern was best fitted by the Higuchi model based on the higher correlation, indicating the drug release kinetics to be diffusion-controlled 54. According to the Korsmeyer-Peppas equation, if the n value is less than 0.45, the drug release follows a quasi-Fickian mechanism.
Therefore, the results indicated that the paclitaxel release mechanism from the investigated core-shell NPs (i.e. NPfTX and NPTX) was through diffusion control 55 .
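As an illustration of how the diffusion exponent n and the model correlations in Table 2 are typically extracted, the sketch below fits the Korsmeyer-Peppas model Mt/M∞ = k·t^n by linear regression in log-log space, using the conventional restriction to the first ~60% of release. The release data here are hypothetical placeholders, not the NPfTX/NPTX curves of Fig. 6d.

```python
import numpy as np

# Hypothetical cumulative release fractions (Mt / Minf) over time (h).
t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])
frac = np.array([0.18, 0.25, 0.33, 0.44, 0.52, 0.58])

# Korsmeyer-Peppas: Mt/Minf = k * t**n, fitted where release <= 60%.
mask = frac <= 0.60
slope, intercept = np.polyfit(np.log(t[mask]), np.log(frac[mask]), 1)
n, k = slope, np.exp(intercept)

# R^2 of the log-log regression, analogous to the correlations in Table 2.
pred = intercept + slope * np.log(t[mask])
resid = np.log(frac[mask]) - pred
r2 = 1.0 - np.sum(resid**2) / np.sum(
    (np.log(frac[mask]) - np.log(frac[mask]).mean())**2)

print(f"n = {n:.2f}, k = {k:.2f}, R^2 = {r2:.3f}")
if n < 0.45:
    print("n < 0.45: quasi-Fickian (diffusion-controlled) release")
```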
In vitro cytotoxicity assay. Figure 7 shows the in vitro viability of MCF-7 cells treated with (a) NPfTX and (b) NPTX, and with NPs (Supplementary Fig. S1), at nanoparticle concentrations of 15.62, 31.25, 62.50, 125, 250, 500, 1000 and 2000 μg/ml. The NPfTX nanoparticles showed both dose- and time-dependent responses (Fig. 7a); the cell viability decreased gradually with increasing nanoparticle dose and incubation time. In particular, NPfTX showed higher cytotoxic efficacy against MCF-7 cells than NPTX (see Fig. 7b) and the neat NPs (Supplementary Fig. S1). The viability of NPfTX-treated MCF-7 cells was, however, above 100% at low concentrations, owing to natural variation in cell metabolism: cell treatment can increase enzymatic activity without directly influencing the cell number or viability. Additionally, Fig. 7a shows that the MCF-7 cells were inhibited by the peptide-functionalised, paclitaxel-loaded nanoparticles (NPfTX) after 24, 48, and 72 h, owing to the presence of peptides that contribute to the therapeutic action of the NPfTX nanoparticles. The combination of targeting of the MCF-7 cells and release of paclitaxel molecules inhibited the MCF-7 cells to as little as 19.01% and 8.81% cell viability at 1000 and 2000 μg/ml of NPfTX, respectively, after 24 h of incubation. At 24 h, the nearly 100% paclitaxel release, equal to 53.33 nM of paclitaxel, suppressed MCF-7 cell viability below 19%. It is further suggested that increased paclitaxel exposure at 48 or 72 h, with 50 nM of paclitaxel in MCF-7 cells, results in increased cytotoxicity 56. Furthermore, Fig. 7a shows that the MCF-7 cell viability for the NPfTX sample was 15.21-16.98% and 5.04-7.97% after 48 and 72 h of incubation at 1000 and 2000 μg/ml, respectively.
No difference in cytotoxicity was observed for NPTX despite the presence of paclitaxel in the assay at 24, 48, and 72 h (Fig. 7b). The NPTX and NPs (Supplementary Fig. S1) samples displayed only moderate cytotoxic efficacy against MCF-7 cells after 72 h, with cell viabilities of 90.89% and 80.13%, respectively. This can be attributed to the paclitaxel released from the NPTX nanoparticles and the degradation products of their matrix, which may inhibit or damage the MCF-7 cells. This finding is consistent with the biodegradability studies conducted on these NPs; once degraded, the NPs lead to drug leaching and the generation of degradation products that can harm MCF-7 cells 57. In the paclitaxel release study, NPTX showed excellent paclitaxel release of about 100%, equivalent to a 40 nM paclitaxel concentration, over 24 and 48 h. However, even though the drug was released from NPTX, the MCF-7 cells remained viable (see Fig. 7b). This finding showed that NPTX acted as a non-targeted carrier: the absence of peptide on the NPTX nanoparticles prevented the NPs from targeting the MCF-7 cells 58.

Cells imaging. In the high-content screening images (Fig. 7c), the MCF-7 cells (blue) can be seen located closely around the NPfTX nanoparticles (green, FITC-stained), suggesting nanoparticle internalisation into the cells 40. As a result, the incubation of MCF-7 cells with NPfTX nanoparticles led to a significant decrease in cell viability, demonstrating their capability to inhibit the cells, as shown in Fig. 7c 40,58.
At 24 h, NPfTX demonstrated a 100% release profile, equal to 52.5 nM of paclitaxel, leaving only 9-19% of the MCF-7 cells viable, in contrast to the NPTX nanoparticles; many of the MCF-7 cells localised around NPfTX were not viable (see Figs. 6d, 7a). This was due to the high affinity of the peptide functionalisation and of the paclitaxel bound to NPfTX, with the high-content screening images revealing the localisation of the MCF-7 cells around the NPfTX 58. Figure 7d displays the high-content screening images of MCF-7 cells after 24 h of incubation with the NPTX dispersion in PBS. Within 24 h, only a few cells (blue) were spread around the NPTX nanoparticles (green, FITC-stained). NPTX also showed a 100% release profile at 24 h, equivalent to 40 nM of paclitaxel, but demonstrated almost 100% viable cells compared with NPfTX (see Figs. 6d and 7b). This result revealed that NPTX behaved as a non-targeted carrier, whereby the NPTX nanoparticles could not enter the MCF-7 cells, as shown in the merged FITC and DAPI channels at 24 h in Fig. 7d. The absence of peptide on the NPTX nanoparticles meant that they did not significantly affect the MCF-7 cells; accordingly, incubation of MCF-7 cells with NPTX nanoparticles did not result in decreased cell viability (Fig. 7b). Any slight decrease was likely due to product degradation and the leaching of drug from the NPTX nanoparticles, which can affect the MCF-7 cells 57.
Conclusions
The production of poly(APO-b-polyol ester) nanoparticles by the gamma radiation-induced RAFT polymerisation technique was found to be promising and well-suited; the absence of initiators and catalysts in the process renders it an environmentally friendly method for producing the targeted NPs. Besides having a hydrodynamic particle diameter of less than 200 nm after a very short gamma irradiation exposure (i.e. 700 Gy), these poly(APO-b-polyol ester) nanoparticles, with their hydrolysable ester bonds, were found to have very good biodegradable properties, an average MW of 24 kDa, a controlled MW distribution, and a narrow PDI of 1.01. The study shows that poly(APO-b-polyol ester) nanoparticles can be modified with peptide and loaded with paclitaxel to develop active-targeting NPs. The localisation of the active-targeted poly(APO-b-polyol ester) nanoparticles in the cytoplasm of MCF-7 cells revealed the efficacy of these NPs at 1000 and 2000 μg/ml concentrations for specific delivery; here, MCF-7 cell viability was reduced to 5-20% over 24-72 h of incubation. Without a ligand as a targeting agent for the NPs, the cells remained viable, owing to the non-specific distribution of paclitaxel to cancer cells. As a result, the NPTX nanoparticles tended to act as passive-targeting NPs, whereas NPfTX acted as active-targeting NPs. Collectively, they exhibited promising properties for binding to and destroying cancer cells.
Experimental
Synthesis of poly(APO-b-polyol ester) nanoparticles. Acrylated palm olein (APO) (molecular weight: 1750.04 g/mol) and polyol ester (molecular weight: 5001.86 g/mol) were synthesised at the laboratory of the Radiation Processing Technology Division, Malaysian Nuclear Agency, Selangor, Malaysia. The macro-APO microemulsion, consisting of APO, S,S-dibenzyl trithiocarbonate (DBTTC) (97%, Aldrich), ethyl acetate (EA) (99.5%, Merck), and cetyltrimethylammonium bromide (CTAB) (98%, Merck), all used without further purification, was exposed to 500 Gy from a gamma radiation source. Then, approximately 20 ml of the macro-APO-RAFT solution was added to 3 mg of polyol ester containing 1 mg of DBTTC to obtain the respective poly(APO-b-polyol ester) NPs. The mixture was first stirred at 300 rpm for 1 h using a magnetic stirrer and then stirred for a further 1 h using a high-speed disperser at 6000 rpm. The mixture was degassed with nitrogen gas before being exposed to a range of gamma radiation doses (i.e. 100, 400, and 700 Gy, and 1, 5, and 10 kGy). All samples were irradiated using gamma radiation from a Cobalt-60 source at a dose rate of 2.16 Gy/s.
Physicochemical properties.
The hydrodynamic mean diameter (nm) of the samples was determined by photon cross-correlation spectroscopy (PCCS) using DLS (Sympatec Nanophox, Germany) with a helium-neon (HeNe) laser at a wavelength of 632 nm. The sample charge was analysed at room temperature with a zeta potential analyser (ZetaPALS, Brookhaven, USA). The palladium electrode cell was fitted and immersed in a 1 cm quartz cell containing the test solution, and the zeta potential of the sample was determined using phase analysis light scattering (PALS) with a HeNe laser at a wavelength of 632 nm. Images of the NPs were captured using a transmission electron microscope (TEM) to assess their morphological properties. Accordingly, the samples were dispersed in ultrapure water and a few droplets were placed on a copper grid to be dehydrated at room temperature. The TEM images were analysed using a Zeiss microscope (Jeol, Japan) at a voltage of 160 kV. The chemical functional groups of the NPs were analysed using a Spectrum 400 Fourier transform near-infrared (FT-NIR) spectrometer (Perkin Elmer, UK) within the wavenumber range of 500 to 4000 cm−1. For the NMR measurements, the samples were dissolved in deuterated chloroform (CDCl3), and 1H-NMR spectroscopy was performed using a Bruker Avance Fourier transform nuclear magnetic resonance (FT-NMR) spectrometer. For gel permeation chromatography (GPC), the sample solution was kept for 24 h at room temperature before measurement; the GPC system was operated at 35 °C with a flow rate of 1 mL/min using THF as the eluent.
In vitro degradation study. The in vitro degradation of the NPs was performed in an incubator at 37 °C with orbital shaking at 150 rpm for set periods of 1, 7, 30, 60, and 90 days. For each degradation period, a sample containing about 1 mg of NPs was weighed and placed in a screw-cap glass test tube. Then, 5 ml of simulated body fluid (SBF) solution (pH 7.4) was added to the test tube; the SBF was replaced every week to maintain the freshness of the fluid. After the specified incubation period, the SBF solution was removed from the test tube, and the NPs were rinsed with ultrapure water before being freeze-dried and weighed. The degree of biodegradation was calculated from the weight loss percentage of the NPs using Eq. (1). The biodegradation of the NPs was also assessed by comparing the FTIR peak profile of the degraded nanoparticle sample with that of the fresh sample on an FTIR/FT-NIR spectrometer (Perkin Elmer, UK) within the wavenumber range of 500-4000 cm−1. Furthermore, images of the same set of NPs were captured by TEM (Jeol, Japan) at a voltage of 160 kV for their morphological properties; the dried nanoparticle samples were dispersed in 1 ml of acetone, and a few droplets were placed on a copper grid and dehydrated at room temperature.
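The explicit form of Eq. (1) is not reproduced above. Assuming it is the standard gravimetric weight-loss expression, (W0 - Wd)/W0 × 100, with W0 the initial dry weight and Wd the dry weight after degradation, the calculation can be sketched as follows.

```python
def weight_loss_percent(w_initial_mg: float, w_degraded_mg: float) -> float:
    """Degree of biodegradation, assuming the standard gravimetric form
    of Eq. (1): weight loss (%) = (W0 - Wd) / W0 * 100."""
    return (w_initial_mg - w_degraded_mg) / w_initial_mg * 100.0

# Consistent with the ~70% weight loss reported at 60 days in SBF:
print(weight_loss_percent(1.00, 0.30))  # -> 70.0
```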
For the surface succinylation of the NPs, succinic anhydride, DMAP, and TEA were added to a solution of NPs. The mixture was then stirred at 250 rpm and heated at 90 °C for 12 h under nitrogen. The nanoparticle derivatives were collected using the gravimetric method; the sample was cooled before being re-dissolved in 10 ml of dimethylformamide (DMF) and precipitated using 10 ml of diethyl ether as the precipitant. The sample was centrifuged for 1 h at 5000 rpm and dried under vacuum at room temperature for 24 h to produce the carboxylated NPs. N-hydroxysuccinimide (NHS) activation of the carboxylic-functionalised NPs was performed using NHS and EDC. Accordingly, 15 mg of carboxylic-functionalised NPs and 1.77 mg of NHS were weighed into a 50 ml flask and dissolved in 10 ml of dichloromethane. The solution was stirred for 30 min at 250 rpm. Next, 4.05 mg of EDC was added to the solution, and the mixture was stirred at 250 rpm at room temperature for 12 h under nitrogen. The nanoparticle derivatives were collected using the gravimetric method, re-dissolved in 5 ml of ethyl acetate, and precipitated using 10 ml of diethyl ether as the precipitant. The sample was centrifuged for 1 h at 5000 rpm and dried under vacuum at room temperature for 24 h to produce the NHS-activated NPs.
About 100 μL of 0.1 mM peptide was dissolved in 5 ml of DMF in a 20 ml flask. Next, 1.12 mg of DMAP and 2.72 μL of TEA were mixed into the peptide solution. Afterwards, 10 mg of NHS-activated nanoparticles was dissolved in 10 ml of DMF in a 50 ml flask. The peptide solution was then added to the NHS-activated nanoparticle solution while stirring at 250 rpm for 12 h at room temperature under nitrogen. The sample was re-dissolved in 5 ml of ethyl acetate and precipitated using 10 ml of diethyl ether as the precipitant, after which it was centrifuged for 1 h at 5000 rpm and dried under vacuum at room temperature for 24 h to produce the peptide-functionalised nanoparticles.
The concentration of the peptide conjugated to the nanoparticles was determined using a UV-Vis spectrophotometer (Shimadzu UV-1800, Japan) at wavelengths between 190 and 500 nm; the wavelength of 275 nm was chosen for obtaining the absorbance peak values. The concentration and the percentage of conjugated peptide were calculated from the peptide calibration curves using Eqs. (2) and (3), respectively, with the calibration line of Eq. (2) given by y = 0.5592x. Meanwhile, the NHS consumption was determined by FTIR at wavenumbers between 4000 and 500 cm−1, with the absorbance peak ratio evaluated at 1789 cm−1; the concentration and the percentage of NHS consumed were calculated from the NHS calibration curves using Eqs. (4) and (5).

Paclitaxel encapsulation of nanoparticles. About 2.5 mg of peptide-functionalised nanoparticles was added to 5 ml of 50 nM paclitaxel in DMSO. The mixture was stirred for 8 h at 500 rpm. After the entrapment process, the sample was precipitated using 2 ml of diethyl ether as the precipitant and centrifuged for 1 h at 5000 rpm, and the resulting supernatant was separated and collected. The sample was purified by dialysis against deionised water in dialysis tubing for 24 h to remove any excess impurities and DMSO. Then, the remaining solution in the dialysis tubing was transferred to a new vial and dried under vacuum at room temperature to produce solid NPs.
The supernatant solution was analysed using a UV-Vis spectrophotometer at wavelengths between 190 and 500 nm; the maximum UV absorption wavelength of 265 nm was chosen to obtain the absorbance peak values 59. The paclitaxel calibration graph was obtained by plotting absorbance against paclitaxel concentration, and the excess (non-entrapped) paclitaxel concentration in the supernatant was calculated from the paclitaxel calibration curve according to Eq. (6). Afterwards, the actual amounts of drug entrapped and loaded can be obtained.
In Eq. (6), x represents the concentration of the paclitaxel in DMSO.
Subsequently, the yield of NPs produced, the drug loading content, and the drug entrapment efficiency were calculated using Eqs. (7), (8) and (9). The above procedure was repeated for the production of drug-loaded nanoparticles using non-functionalised (or neat) nanoparticles.
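Eqs. (7)-(9) are not reproduced above. Assuming they take the conventional forms (yield as recovered NP mass over total material input, loading content as entrapped drug per mass of loaded NPs, and entrapment efficiency as entrapped drug over drug fed), the indirect calculation via the supernatant measurement of Eq. (6) can be sketched as follows; all masses below are hypothetical.

```python
def nanoparticle_yield_pct(np_recovered_mg, total_input_mg):
    """Assumed form of Eq. (7)."""
    return np_recovered_mg / total_input_mg * 100.0

def drug_loading_pct(entrapped_drug_mg, loaded_np_mass_mg):
    """Assumed form of Eq. (8)."""
    return entrapped_drug_mg / loaded_np_mass_mg * 100.0

def entrapment_efficiency_pct(entrapped_drug_mg, drug_fed_mg):
    """Assumed form of Eq. (9)."""
    return entrapped_drug_mg / drug_fed_mg * 100.0

# Hypothetical masses (mg). The entrapped amount is obtained indirectly:
# drug fed minus the free drug measured in the supernatant (Eq. 6).
drug_fed, free_drug, np_mass = 0.50, 0.32, 2.5
entrapped = drug_fed - free_drug

print(f"yield   = {nanoparticle_yield_pct(2.4, 3.0):.1f} %")
print(f"loading = {drug_loading_pct(entrapped, np_mass):.2f} %")  # ~7.2 %
print(f"EE      = {entrapment_efficiency_pct(entrapped, drug_fed):.1f} %")
```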
In vitro drug release. A dialysis membrane method was used for determining the in vitro drug release from the nanoparticulate drug delivery systems. First, 2.5 mg of NPs (i.e. NPfTX or NPTX) in 5 ml of PBS was transferred to a dialysis tubing and closed tightly. The dialysis tubing was placed in a 50 ml beaker with a magnetic stir bar, to which 45 ml of PBS was added. The beaker was then positioned in a flat-bottom beaker containing distilled water, placed on a hotplate magnetic stirrer. The paclitaxel release was monitored at set periods of 15, 30, and 45 min and 1, 2, 3, 4, 5, 6, 8, 24, 48 and 72 h, at a temperature of 37 °C and a stirring speed of 250 rpm. A 2 ml sample solution was taken at each time interval and replaced with 2 ml of fresh PBS solution. The solution was then analysed using a UV-Vis spectrophotometer at wavelengths between 190 and 500 nm, with the wavelength of 265 nm chosen to obtain the absorbance peak values. Next, the concentration of paclitaxel released from the NPs was calculated using the paclitaxel calibration curve in PBS solution as shown in Eq. (10), while the cumulative percentage of paclitaxel released was calculated using Eq. (11). The paclitaxel release study was performed in triplicate, with all data expressed as the mean (± standard deviation). Additionally, the kinetics and drug release mechanism were determined by fitting the in vitro release data to different kinetic models.
In Eq. (10), y is the absorbance value and x is the concentration of paclitaxel released.
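A minimal sketch of applying Eq. (10), assuming it is a linear calibration curve of the same form as Eq. (2); the slope below is a placeholder, since the fitted coefficients of the PBS calibration are not reproduced here.

```python
def paclitaxel_conc(y_abs: float, slope: float, intercept: float = 0.0) -> float:
    """Invert a linear calibration line y = slope * x + intercept
    (the form assumed for Eq. 10) to recover the released concentration x."""
    return (y_abs - intercept) / slope

# Placeholder slope, not the paper's fitted PBS-calibration value.
print(paclitaxel_conc(0.28, slope=0.56))  # -> 0.5 (concentration units)
```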
The cumulative percentage of paclitaxel release (Eq. 11) was obtained as (Wt/Wc) × 100, where Wc is the total paclitaxel concentration in the dialysis membrane and Wt is the paclitaxel concentration in the PBS medium at time t.
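Because 2 ml of medium is withdrawn and replaced at each time point, the drug removed with earlier samples is usually added back when computing Wt. A sketch of the cumulative-release calculation under that assumption, taking Eq. (11) as Wt/Wc × 100:

```python
def cumulative_release_pct(concs_mg_per_ml, w_total_mg,
                           v_medium_ml=45.0, v_sample_ml=2.0):
    """Cumulative paclitaxel release (%) at successive sampling times,
    correcting for the 2 ml withdrawn and replaced at each point.

    concs_mg_per_ml : measured concentrations in the release medium.
    w_total_mg      : total drug in the dialysis tubing (Wc in Eq. 11).
    """
    released, removed = [], 0.0
    for c in concs_mg_per_ml:
        w_t = c * v_medium_ml + removed      # drug in medium + already withdrawn
        released.append(w_t / w_total_mg * 100.0)
        removed += c * v_sample_ml           # drug lost with this sample
    return released

# Hypothetical concentrations; w_total ~ 2.5 mg NPfTX x 7.12 % loading.
print(cumulative_release_pct([0.001, 0.002, 0.003], w_total_mg=0.178))
```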
In-vitro cytotoxicity. Initial stock solutions of the NPfTX and NPTX samples were prepared at 2000 μg/ml in PBS in 15 ml centrifuge tubes and vortexed at room temperature. Serial dilutions of these nanoparticle solutions were prepared at a final volume of 2 ml in a 12-well culture plate using complete medium, covering the concentration range of 15.63, 31.25, 62.5, 125, 250, 500, 1000 and 2000 μg/ml. These nanoparticle serial dilutions were used in the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. First, 100 μl of complete medium was aspirated from MCF-7 cells seeded at 7.5 × 10³ cells per well in a 96-well culture plate. Then, 100 μl of each nanoparticle serial dilution was added to the incubated MCF-7 cells. The 96-well culture plates were next incubated for 24, 48, and 72 h in a 5% carbon dioxide (CO2) incubator system.
Next, 20 μl of 5 mg/ml MTT reagent was added to the MCF-7 cells, and the 96-well culture plates were kept for 4 h in the 5% CO2 incubator system. Afterwards, all of the complete medium was removed from the MCF-7 cells and 100 μl of DMSO solution was pipetted into each well. The culture plates were then shaken for 10 min at room temperature in the dark and placed in a multimode plate reader (EnSpire) to perform the MTT assay. The absorbance of the cells was measured at a wavelength of 540 nm using the EnSpire manager software. The cellular viability at each nanoparticle serial dilution concentration (15.63, 31.25, 62.5, 125, 250, 500, 1000 and 2000 μg/ml) was then calculated following Eq. (12) and the values were plotted accordingly. The MCF-7 cell viability analysis of the effect of paclitaxel release from the NPs was conducted in triplicate, with all data expressed as the mean (± standard deviation).
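A minimal sketch of the viability calculation, assuming Eq. (12) is the usual treated-to-control absorbance ratio for the MTT assay (the explicit equation is not reproduced above):

```python
def cell_viability_pct(abs_treated: float, abs_control: float,
                       abs_blank: float = 0.0) -> float:
    """Assumed form of Eq. (12):
    viability (%) = (A_treated - A_blank) / (A_control - A_blank) * 100,
    with absorbances read at 540 nm."""
    return (abs_treated - abs_blank) / (abs_control - abs_blank) * 100.0

# Hypothetical absorbances; values above 100 % can occur at low doses,
# as the authors note for NPfTX-treated MCF-7 cells.
print(f"{cell_viability_pct(0.35, 0.40, 0.05):.1f} %")  # -> 85.7 %
```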
For cell imaging, the NPs were stained with NHS-fluorescein dye. The stained nanoparticle solutions were vortexed and incubated for 1 h at room temperature. Afterwards, any excess NHS-fluorescent dye in the stained NPs was removed using the dialysis technique; the experimental setup is displayed in Supplementary Fig. S2. The sample solution was transferred to a single Slide-A-Lyzer MINI dialysis unit, which was floated in a dark container with at least 100 ml of PBS to protect the NHS dye from light. The samples were dialysed for 24 h at room temperature. Subsequently, the sample solutions were removed from the dialysis unit and transferred to 2 ml microcentrifuge tubes. These NHS-fluorescein-stained NP solutions were used for cellular imaging with the ImageXpress Micro High Content Screening system (Molecular Devices).
Data availability
All data generated or analysed during this study are included in this published article (and its Supplementary Information files). | 2020-12-12T14:08:02.720Z | 2020-12-01T00:00:00.000 | {
"year": 2020,
"sha1": "aedcb696c86efeaded128bfdc152f4cc93b6d6f1",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-020-78601-x.pdf",
"oa_status": "GOLD",
"pdf_src": "SpringerNature",
"pdf_hash": "8bd68849caf07b3b80ba2e6077213d0210b41e16",
"s2fieldsofstudy": [
"Materials Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
17126581 | pes2o/s2orc | v3-fos-license | Probiotics and Gastrointestinal Infections
Gastrointestinal infections are a major cause of morbidity and mortality worldwide, particularly in developing countries. The use of probiotics to prevent and treat a variety of diarrheal diseases has gained favor in recent years. Examples where probiotics have positively impacted gastroenteritis will be highlighted. However, the overall efficacy of these treatments and the mechanisms by which probiotics ameliorate gastrointestinal infections are mostly unknown. We will discuss possible mechanisms by which probiotics could have a beneficial impact by enhancing the prevention or treatment of diarrheal diseases.
INTRODUCTION
Within the microbiota, individual bacteria containing important genes may benefit the host in different ways. As one considers the vast community of commensal microbes, subsets of these organisms may have important physiologic benefits for the host in the context of human nutrition and host:microbe interactions. Probiotics may stimulate immunity, regulate immune signaling pathways, produce antipathogenic factors, or induce the host to produce antipathogenic factors. Probiotics may produce secreted factors that stimulate or suppress cytokines and cellmediated immunity. These factors may also interfere with key immune signaling pathways such as the NF-κB and MAP kinase cascades. Probiotics may produce factors that inhibit pathogens and other commensal bacteria, effectively enabling these microbes to compete effectively for nutrients in complex communities. Microbes that produce antipathogenic factors may represent sources of novel classes of antimicrobial compounds, and these factors may be regulated by master regulatory genes in particular classes of bacteria. Microbes can also regulate signaling pathways in immune cells that result in the production of antimicrobial factors by mammalian cells, effectively resulting in remodeling of intestinal communities and prevention or treatment of infections.
Gastrointestinal infections are a major cause of morbidity and mortality worldwide. Studies conducted in 2006 found that, globally, severe diarrhea and dehydration are responsible each year for the death of 1,575,000 children under the age of five. This represents 15% of the 10.5 million deaths per year of children in this age group [1]. According to recent estimates, acute gastroenteritis causes as many as 770,000 hospitalizations per year in the United States [2]. Enteric pathogens include viruses (rotaviruses, noroviruses) and bacteria such as different strains of pathogenic Escherichia coli, toxigenic Clostridium difficile, Campylobacter jejuni, and Vibrio cholerae. These pathogens produce different types of toxins that can cause severe or life-threatening dehydration and diarrhea. Despite medical advances in diagnosis and treatment, the percent and number of hospitalized pediatric patients less than 5 years of age with severe rotavirus infection significantly increased when a recent time period (2001-2003) was compared to an earlier time period (1993-1995) [3]. In addition to the typical pattern of acute gastroenteritis, infectious agents such as enteropathogenic E. coli (EPEC) may cause persistent, chronic diarrhea in children lasting longer than 1 week [4]. Such persistent infections may increase the risk of dehydration and long-term morbidities. Importantly, the relative contributions of EPEC and other bacterial pathogens to disease remains controversial to some extent. A recent study highlighted that increased relative risk of gastrointestinal disease in children was only demonstrable for enteric viruses [5].
Recent studies have highlighted long-term morbidities associated with gastroenteritis. Early childhood diarrhea predisposes children to lasting disabilities, including impaired fitness, stunted growth, and impaired cognition and school performance [6]. Along with these data, new research on maternal and child undernutrition reported in The Lancet in January 2008 links poor nutrition with an increased risk for enteric infections in children. Furthermore, irritable bowel syndrome (IBS), a costly and difficult to treat condition that affects 20% of the United States population [7], has medical costs of up to $30 billion per year, excluding prescription and over-the-counter drug costs [8]. IBS is precipitated by an episode of acute gastroenteritis in up to 30% of all cases in prior studies [9]. Therefore, preventing or treating acute gastroenteritis before long-term sequelae develop would drastically reduce hospitalizations, disability-adjusted life years, and both direct and indirect medical costs.
Accurate diagnosis of acute gastroenteritis is an ongoing challenge even in sophisticated academic medical centers. In a pediatric patient population exceeding 4,700 children, less than 50% of stool samples that underwent complete microbiologic evaluation yielded a specific diagnosis [10]. Enteric viruses represented the predominant etiologic agents in acute gastroenteritis in children less than 3 years of age, and bacteria caused the majority of cases of acute gastroenteritis in children older than 3 years of age [10]. The diagnostic challenges with enteric viruses include the relative paucity of stool-based molecular or viral antigen tests and the inability to readily culture most enteric viruses. Bacterial pathogens may be difficult to identify (such as most strains of disease-causing E. coli) because of the lack of specific assays for these infections. The relative insensitivity of stool-based toxin assays for the detection of toxigenic C. difficile precludes accurate diagnosis. In a children's hospital setting, combination toxin antigen testing yielded sensitivity below 40% in pediatric patients (J. Versalovic, unpublished data). The introduction of new molecular assays for real-time PCR detection of toxin genes directly in stool has markedly improved the ability to diagnose antimicrobial-associated diarrhea and colitis due to toxigenic C. difficile [11]. In addition, approximately 15-25% of cases of antimicrobial-associated diarrhea are caused by C. difficile. The prevalence of antimicrobial-associated diarrhea and gastrointestinal disease highlights the importance of alternatives to antibiotic strategies for treatment. Furthermore, antibiotics have limited utility for the treatment of gastroenteritis in general. Antimicrobial agents are not generally recommended as prevention strategies because of the problems of antibiotic resistance and antimicrobial-associated disease. Thus, instead of suppressing bacterial populations with antibiotics, can probiotics be used to remodel or shift microbial communities to a healthy state [12]?
The need for mechanistic details of probiotic action
The use of probiotics to prevent and treat a wide variety of conditions has gained favor in the past decade. This is in part due to the need to find alternatives to traditional therapies such as antibiotics, as well as the lack of good treatments for GI ailments. While there are increasing reports of the efficacy of probiotics in the treatment of diseases such as pouchitis [13,14], diarrhea [15][16][17], and irritable bowel syndrome [18], the scientific basis for the use of probiotics is just beginning to be understood. We will focus on the potential applications for probiotics in the treatment of diarrheal disease. Several examples will highlight how probiotics may be selected for and utilized against pathogens causing gastroenteritis. The concept of using probiotic microorganisms to prevent and treat a variety of human ailments has been around for more than 100 years [19]. With the rise in the number of multidrug-resistant pathogens and the recognition of the role that the human microbiota plays in health and disease, interest in probiotics has recently expanded. This phenomenon is apparent both in the number of probiotic products being marketed to consumers and in the increased amount of scientific research on probiotics. Although many of the mechanisms by which probiotics benefit human beings remain unclear, probiotic bacteria are being utilized more commonly to treat specific diseases.
Several definitions of what constitutes a "probiotic" in the literature have been formulated. For this review, we use the definition derived in 2001 by the Food and Agricultural Organization (FAO) and the World Health Organization (WHO)-"Probiotics are live microorganisms which when administered in adequate amount confer a health benefit on the host." [20]. This definition is the currently accepted definition by the International Scientific Association for Probiotics and Prebiotics (ISAPP) (http://www.isapp.net/).
Perhaps the most important scientific question regarding the use of probiotics in medicine is the identification of mechanisms by which probiotics impact human health. Several mechanisms have been implicated but most have not been experimentally proven ( Figure 1). Here, we discuss possible mechanisms that are relevant for the treatment of diarrheal diseases. We will highlight research examples that support these putative mechanisms whenever possible.
Antipathogenic activities
Many probiotics have been shown to produce antipathogenic compounds ranging from small molecules to bioactive antimicrobial peptides. Most of these studies have focused on the in vitro susceptibility of pathogens to products secreted by probiotic bacteria. In most cases, the ability of an antimicrobial compound secreted by a probiotic organism to inhibit the growth of a pathogen in vivo has not been demonstrated. Conceptually, an antimicrobial compound produced by an organism would need to be produced at a high enough level and in the right location in the intestinal tract to exert a strong effect on a pathogen in vivo.
An elegant proof of principle for direct action of a probiotic-produced antimicrobial against a pathogen was recently reported by Corr et al. who demonstrated that production of the bacteriocin Abp118 by Lactobacillus salivarius was sufficient to protect mice from disease by infection with Listeria monocytogenes [21]. To prove the action of the bacteriocin was directly responsible for the protection of the mice, they generated a L. salivarius strain that was unable to produce Abp118 and showed that this mutant was incapable of protecting against L. monocytogenes infection. Notably, they were able to express a gene that confers immunity to the Abp118 bacteriocin within L. monocytogenes and showed that this strain was now resistant to the probiotic effect of L. salivarius within the mouse. This study provided clear evidence that a probiotic-derived bacteriocin could function directly on a pathogen in vivo.
Stimulation of host antimicrobial defenses
In addition to producing antimicrobial compounds that act directly on pathogens, probiotics may stimulate host antimicrobial defense pathways. The intestinal tract has a number of mechanisms for resisting the effects of pathogens including the production of defensins [22]. Defensins are cationic antimicrobial peptides that are produced in a number of cell types including Paneth cells in the crypts of the small intestine and intestinal epithelial cells. A deficiency in alpha-defensin production has been correlated with ileal Crohn's disease [23,24]. Tissue samples from patients with Crohn's disease showed a lower level of alpha-defensin production and extracts from these samples exhibited a reduced ability to inhibit bacterial growth in vitro. Moreover, some pathogenic bacteria have evolved mechanisms to inhibit the production or mechanism of action of defensins (reviewed in [25]).
Probiotics may act to stimulate defensin activity via at least two mechanisms. First, probiotics may stimulate the synthesis of defensin expression. This has been demonstrated for human beta defensin 2 (hBD-2), whose expression is upregulated by the presence of several probiotic bacteria via the transcription factor NF-κB [26,27]. The implication is that probiotic strains with this capability would strengthen intestinal defenses by increasing defensin levels. This effect is also observed with certain pathogenic bacteria and thus is not a specific property of probiotic bacteria. Second, many defensins are produced in a propeptide form that must be activated via the action of proteases. One well-characterized example is the activation of the murine defensin cryptdin (an alpha-defensin that is produced by Paneth cells) by the action of matrix metalloprotease 7 (MMP-7) [28]. Mice defective for MMP-7 are more susceptible to killing by Salmonella. Evidence indicates that bacteria can stimulate the production of MMP-7 in the intestine [29]. Thus, one mechanism in which probiotics could participate in activating defensins is by stimulating the production of MMPs in the intestinal tract. Alternatively, probiotics could produce proteases that themselves activate defensins in the intestinal lumen. Although there is no evidence yet to support this mechanism, a subset of lactobacilli and streptococci encode MMP-like proteins in their genomes (R. Britton, unpublished observation). These MMPs are not found in any other bacteria and thus it will be interesting to determine what effect they have on host cell function.
Pathogen exclusion via indirect mechanisms
Rather than directly inhibiting the growth or viability of the pathogen, probiotics may compete for an ecological niche or, otherwise, create conditions that are unfavorable for the pathogen to take hold in the intestinal tract. There are many possible mechanisms for how pathogen exclusion may take place. First, several probiotics have been demonstrated to alter the ability of pathogens to adhere to or invade colonic epithelial cells in vitro, for example, see [30,31]. Second, probiotics could sequester essential nutrients from invading pathogens and impair their colonization ability. Third, probiotics may alter the gene expression program of pathogens in such a way as to inhibit the expression of virulence functions [32]. Lastly, probiotics may create an unfavorable environment for pathogen colonization by altering pH, the mucus layer, and other factors in the local surroundings. It is important to note that although many of these possible effects have been demonstrated in vitro, the ability of probiotics to exclude pathogens in vivo remains to be proven.
Immunomodulation
Probiotics may have strain-dependent effects on the immune system. Different strains representing different Lactobacillus species demonstrated contrasting effects with respect to proinflammatory cytokine production by murine bone marrow-derived dendritic cells [33]. Specific probiotic strains counteracted the immunostimulatory effects of other strains so that probiotics have the potential to yield additive or antagonistic results. Interestingly, in this study, the anti-inflammatory cytokine IL-10 was maintained at similar levels [31]. Different probiotic Lactobacillus strains of the same species may also yield contrasting effects with respect to immunomodulation. Human breast milk-derived Lactobacillus reuteri strains either stimulated the key proinflammatory cytokine, human tumor necrosis factor (TNF), or suppressed its production by human myeloid cells [34]. The mechanisms of action may be due, not surprisingly, to contrasting effects on key signaling pathways in mammalian cells. Probiotic strains such as Lactobacillus rhamnosus GG (LGG) may activate NF-κB and the signal transducer and activator of transcription (STAT) signaling pathways in human macrophages [35]. In contrast, probiotic Lactobacillus strains may suppress NF-κB signaling [36,37] or MAP kinase-/c-Jun-mediated signaling [34]. Stimulation of key signaling pathways and enhancement of proinflammatory cytokine production may be important to "prime" the immune system for defense against gastrointestinal infections. Conversely, suppression of immune signaling may be an important mechanism to promote homeostasis and tolerance to microbial communities with many potential antigens, and these immunosuppressive functions may promote healing or resolution of infections.
Enhancing intestinal barrier function
The disruption of epithelial barrier function and loss of tight junction formation in the intestinal epithelium may contribute to pathophysiology and diarrheal symptoms observed during infection with certain pathogens [38,39]. Loss of tight junctions can lead to increased paracellular transport that can result in fluid loss and pathogen invasion of the submucosa. Pathogens may secrete factors such as enterotoxins that may promote excessive apoptosis or necrosis of intestinal epithelial cells, thereby disrupting the intestinal barrier. Enteric pathogens may also cause effacing lesions at the mucosal surface due to direct adherence with intestinal epithelial cells (e.g., EPEC). In contrast, probiotics have been reported to promote tight junction formation and intestinal barrier function [40,41]. Although the mechanisms of promoting barrier integrity are not well understood, probiotics may counteract the disruption of the intestinal epithelial barrier despite the presence of pathogens. Probiotics may also suppress toxin production or interfere with the abilities of specific pathogens to adhere directly to the intestinal surface. As a result, pathogens may have a diminished ability to disrupt intestinal barrier function.
Why understanding mechanisms is important?
An important challenge in the field of probiotics is the identification of genes and mechanisms responsible for the beneficial functions exerted by these microbes. Successful identification of mechanistic details for how probiotics function will have at least three important benefits. First, understanding mechanisms of action will provide a scientific basis for the beneficial effects provided by specific microbes. These breakthrough investigations will help move probiotics from the status of dietary supplements to therapeutics. Second, understanding mechanisms of probiosis and the gene products produced by probiotics will allow for the identification of more potent probiotics or the development of bioengineered therapeutics. As an example, the anti-inflammatory cytokine IL-10 was postulated to be a potential therapeutic for the treatment of inflammatory bowel disease. To test this hypothesis, a strain of Lactococcus lactis engineered to produce and secrete IL-10 was constructed and demonstrated to reduce colitis in a murine model [42].
Early clinical trials in patients with inflammatory bowel disease indicate some relief from symptoms when treated with the IL-10 overproducing strain. Third, the identification of gene products that are responsible for ameliorating disease will allow researchers, industry, and clinicians to follow the production of these products as important biomarkers during probiotic preparation. As discussed below, the physiological state of microbes can be crucial to the functions of probiotics. Thus, it will be important to be able to follow the production of important bioactive molecules when culturing and processing probiotics for applications in animals and humans.
Important considerations for the use of probiotics: strain selection and microbial physiology
Probiotics are considered to be living or viable microorganisms by definition. Unlike small molecules that are stable entities, probiotics are dynamic microorganisms and will change gene expression patterns when exposed to different environmental conditions. This reality has two important implications for those who choose to use these organisms to combat human or animal diseases. First, probiosis is a strain-specific phenomenon. Although defining a bacterial species is challenging even in this age of full genome sequencing, it is clear that probiotic effects observed in vitro and in vivo are strain specific. For example, modulation of TNF production by strains of Lactobacillus reuteri identified strains that were immunostimulatory, immunoneutral, and immunosuppressive for TNF production [34,43]. These findings highlight the strain-specific nature of probiotic effects exerted by bacteria. Thus, it is important for research groups and industry to be cautious with strain handling and tracking so that inclusion of correct strains is verified prior to administration in clinical trials. The second key point is that the physiology of the probiotic strain is an important consideration. Being live microorganisms, probiotics change the proteins and secondary metabolites they produce depending on growth phase. This feature raises a number of important issues for the stability and efficacy of probiotic strains. First, probiotics are subjected to numerous environmental stresses during production and after ingestion by the host. Most notably, probiotics used to treat intestinal ailments, or whose mode of action is thought to be exerted in the intestinal tract, must be able to survive both acid and bile stress during transit through the gut. The physiological state of the microbe is an important characteristic that determines whether cells will be susceptible to different types of environmental stress [44,45]. For example, exponentially growing cells of L. reuteri are much more susceptible to killing by bile salts than cells in stationary phase [45]. Thus, it is important to consider the physiological state of the cells in terms of stress adaptation, not only for survival in the host but also during production. Second, the expression of bioactive molecules, which are most often responsible for the health benefits exerted by probiotics, is often growth phase-dependent. For example, our groups have been investigating the production of immunomodulatory compounds and antimicrobial agents by strains of L. reuteri. In both cases, these compounds are more highly expressed upon entry into and during stationary phase (unpublished observation).
Commensal-derived probiotic bacteria have been implicated as therapy for a range of digestive diseases, including antibiotic-associated colitis, Helicobacter pylori gastritis, and traveler's diarrhea [46]. Probiotic formulations may include single strains or combinations of strains. L. reuteri is indigenous to the human gastrointestinal tract, is widely present in mammals, and has never been shown to cause disease. In human trials, probiotic treatment with L. reuteri in small children with rotaviral gastroenteritis reduced the duration of disease and facilitated patient recovery [15,16], while in another study, it prevented diarrhea in infants [17]. Despite the promising data from clinical trials, the primary molecular mechanisms underlying the antipathogenic properties of L. reuteri remain unknown. Probiotics may be effective for the prevention or treatment of infectious gastroenteritis. In the context of disease prevention, several studies with different probiotic strains have documented that these bacteria may reduce the incidence of acute diarrhea by 15-75% depending on the study [17,47-50]. Although the relative impacts on disease incidence vary depending on the specific probiotic strain and patient population, consistent benefits for disease prevention have been demonstrated in multiple clinical studies. In one disease prevention study [49], supplementation with Bifidobacterium lactis significantly reduced the incidence of acute diarrhea and rotavirus shedding in infants. Studies that examined potential benefits of probiotics for preventing antimicrobial-associated diarrhea have yielded mixed results [51-54]. One prevention study reported a reduction in incidence of antimicrobial-associated diarrhea in infants by 48% [52].
Probiotics may also be incorporated in treatment regimens for infectious gastroenteritis. Several meta-analyses of numerous clinical trials with different probiotics documented reductions in disease course of gastroenteritis that ranged from 17 to 30 hours [49,50,55]. Examined another way, meta-analyses of probiotics used in clinical trials of gastroenteritis noted significant reductions of incidence of diarrhea lasting longer than 3 days (prolonged diarrhea). The incidence of prolonged diarrhea was diminished by 30% and 60%, respectively, depending on the study [50,56] (summarized in [55]). The probiotic agent, LGG, contributed to a significant reduction in rotavirus diarrhea by 3 days of treatment when administered to children as part of oral rehydration therapy [57]. Recent data compilations of a large series of probiotics trials by the Cochrane Database of Systematic Reviews (http://www.cochrane.org/) have yielded promising conclusions. As of 2008, probiotics appear to be effective for preventing acute gastroenteritis in children and may reduce duration of acute disease. Additionally, probiotics are promising agents for preventing and treating antimicrobial-associated diarrhea, although intention-to-treat analyses have not demonstrated benefits.
PROBIOTICS AND THE PREVENTION AND TREATMENT OF GASTROENTERITIS-EXAMPLES
In what follows, we highlight some possible mechanisms by which probiotics can be used to ameliorate gastroenteritis. Because a number of infectious agents cause diarrhea, colitis, and gastroenteritis, we will only focus on a few examples with the idea that many of the mechanisms discussed can be extended to other bacterial or viral causes of diarrhea.
Clostridium difficile and antibiotic-associated diarrhea
An estimated 500,000-3,000,000 cases of Clostridium difficile-associated diarrhea (CDAD) occur annually with related health care costs exceeding $1 billion per year [58-60]. CDAD occurs primarily in patients that have undergone antibiotic therapy in a health care setting, indicating that alterations in the intestinal microbiota are important for the initiation of CDAD. In a small but increasing number of cases, more severe complications will occur including pseudomembranous colitis and toxic megacolon. Moreover, the emergence of metronidazole-resistant strains of C. difficile has diminished the efficacy of metronidazole, and vancomycin- and metronidazole-induced cecitis reinforces the need for new therapies for the treatment and prevention of CDAD [61,62]. Approximately 10-40% of patients treated for an initial bout of CDAD will show recurrent disease, often with multiple episodes [63]. Such recurrences are often refractory to existing therapies including antibiotic therapy. Patients with recurrent CDAD had a marked decrease in the diversity of organisms in their fecal microbiota while patients that were free of recurrent disease had a normal microbiota [64]. Thus, therapies that restore a normal microbiota or suppress C. difficile growth while allowing the repopulation of the intestine with a favorable microbiota may be important to resolve infections and maintain intestinal health.
The potential role of probiotics in treating CDAD
Probiotic organisms have been used to treat recurrent C. difficile in the past and in a few cases have shown a modest effect in ameliorating recurrent disease [63]. This application has been somewhat controversial and at this time the use of probiotics in ameliorating CDAD is not recommended [65]. However, the organisms tested were not specifically isolated for the treatment of CDAD and, therefore, may not have been the appropriate strains to be used to prevent recurrent CDAD. In what follows, we outline potential mechanisms by which carefully selected or engineered probiotics could be used in the treatment of C. difficile and the eradication of this pathogen.
Eradication of C. difficile through the production of antimicrobial compounds
CDAD is currently treated by the use of antimicrobial agents that are effective against C. difficile, most often vancomycin or metronidazole. Because these drugs are broad-spectrum antibiotics, they likely play a role in recurrent disease by suppressing the normal intestinal microbiota. Using antimicrobial compounds that target C. difficile while allowing restoration of resident organisms would be one possible mechanism to prevent recurrent CDAD.
Competitive exclusion of C. difficile using probiotics
As mentioned above, CDAD is usually an infection that is acquired in the hospital or other health care setting. Therefore, a probiotic that could competitively exclude C. difficile could be administered prior to entry into the hospital. Unfortunately, little is known about how and where C. difficile colonizes the intestine. Once this information is known, strategies for blocking colonization with probiotics can be developed.
Nonetheless, a promising probiotic approach using nontoxigenic C. difficile has been described. Using a hamster model of C. difficile infection, Gerding et al. demonstrated a protective effect of populating the hamster with strains of C. difficile that are unable to produce toxin prior to challenge with a virulent toxin-producing strain [66]. Colonization of the intestinal tract by the nontoxigenic strain appeared to be required for protection. Currently, this probiotic approach is under investigation for use in humans (http://www.viropharma.com/).
Probiotics and C. difficile spore germination
A likely contributor to the difficulty in eradicating C. difficile from the intestine is the ability of the organism to develop stress-resistant spores. The identification of probiotic strains that can prevent either spore formation or the germination of spores in the intestinal tract provides a promising avenue to combat CDAD. Recent work on spore germination has provided in vitro assays in which inhibitory activities of probiotics can be tested [67].
Germination of spores in the laboratory requires the presence of bile acids, with taurocholate and cholate demonstrating the best activity [67]. Thus, bile acids could play a role in signaling to C. difficile that spores are in the correct location of the gut to germinate. Sorg and Sonenshein have recently proposed a mechanism by which the reduction in the intestinal microbiota could lead to efficient spore germination and overgrowth of C. difficile [67]. They found that the bile acid deoxycholate (DOC) was able to induce spore germination but that subsequent growth was inhibited due to toxic effects of DOC on vegetative C. difficile. Their work suggests a model in which a reduction in the concentration of DOC in the intestine, due to the disruption of the normal microbiota, removes this key inhibitor of C. difficile growth. DOC is a secondary bile acid produced from dehydroxylation of cholate by the enzyme 7α-dehydroxylase, an activity that is produced by members of the intestinal microbiota. While it is unclear whether or not antibiotic therapy reduces the level of DOC in the intestine, it is tempting to speculate that providing probiotic bacteria capable of producing 7α-dehydroxylase may prevent intestinal overgrowth by C. difficile while the normal microbiota is being reestablished.
Enterohemorrhagic E. coli
Enterohemorrhagic E. coli (EHEC) infections cause sporadic outbreaks of hemorrhagic colitis throughout the world (∼100,000 cases per year in the United States) [68]. Most infections result in the development of bloody diarrhea but a subset (∼5-10%) of EHEC patients (mostly children) will develop the life-threatening condition hemolytic uremic syndrome (HUS) [69,70]. HUS is the leading cause of kidney failure in children. EHEC, which likely evolved from an EPEC strain [71], also produces attaching and effacing lesions on host epithelial cells and reduces intestinal epithelial barrier function. In addition, EHEC strains are characterized by the expression of Shiga toxin (Stx) genes, and thus they can be labeled as Shiga-toxin-producing E. coli (STEC). Currently, only supportive therapy for EHEC infection is available since antibiotic therapy may increase the risk of developing HUS, and therefore, novel therapies must be developed. One promising alternative therapeutic may be the use of probiotics to treat EHEC infections.
Toxin sequestration and removal
Shiga toxins are ribosome-inactivating proteins that inhibit protein synthesis by removing a specific adenine residue from the 28S rRNA of the large ribosomal subunit [72]. Shiga toxin is required for the development of HUS and recent work has indicated that EHEC strains mutated for Shiga toxin production fail to cause disease in a germfree mouse model [73]. Indeed, injection of Shiga toxin with LPS directly into mice is sufficient to generate a HUS-like disease in the kidneys of mice [74]. Therefore, Shiga toxin is an important mediator of HUS and therapies aimed at neutralizing its activity are expected to reduce or eliminate this life-threatening complication, although current attempts at Shiga toxin neutralization have been unsuccessful [75]. As a possible mechanism for treating EHEC disease and reducing the incidence of HUS cases, Paton et al. have generated "designer probiotics" in which the oligosaccharide receptor (Gb3) for Stx is expressed on the cell surface of an E. coli strain [76-78]. This probiotic strain was shown to be capable of neutralizing Stx in vitro. As a proof-of-concept, mice that were challenged with a STEC strain were protected by administration of the probiotic expressing the Gb3 receptor [79]. The protective effect was observed even when the strains were formalin-killed prior to use, supporting the hypothesis that toxin sequestration and removal was the mechanism by which the mice were protected. Similar results have been obtained using bacteria expressing receptors for toxins produced by other diarrheal pathogens including enterotoxigenic E. coli (the most common cause of traveler's diarrhea) and Vibrio cholerae.
Inhibition of toxin production by EHEC-identification of strains that repress the lytic functions of lambda
Stx genes are carried on lambdoid prophages and are usually located in a late transcribed region of the virus, near the lytic genes [80]. Since no mechanism for toxin secretion has been identified, the location of Stx near the lytic genes suggests that phage activation and cell lysis are responsible for Stx production and release. This genetic juxtaposition suggests that therapeutics that suppress the lytic decision of lambda in vivo would greatly reduce or eliminate complications caused by systemic release of Stx.
Inhibition of pathogen adherence and strengthening of intestinal barrier functions
A key interaction of EHEC, as well as EPEC, with the intestinal epithelium is the formation of attaching and effacing lesions on the surface of the epithelium [81]. This interaction is brought about by factors secreted directly from the bacterium into the host cell, where a redistribution of the actin cytoskeleton occurs. EHEC and EPEC infection also induces a loss of tight junction formation and reduction of the intestinal epithelial barrier by inducing the rearrangement of key tight junction proteins including occludin [82,83]. Therapies that would either disrupt this interaction of EHEC/EPEC with the intestinal epithelium or inhibit the loss of barrier function should ameliorate disease. Probiotics have shown some success in inhibiting adhesion and A/E lesion formation, and in enhancing barrier function, in response to EHEC infection in vitro. Johnson-Henry et al. tested the ability of Lactobacillus rhamnosus GG to prevent loss of barrier integrity and formation of A/E lesions induced by EHEC infection of cell culture in vitro [40]. They found that pretreatment of intestinal epithelial cells in vitro with LGG was sufficient to reduce the number of A/E lesions and to prevent loss of barrier function as measured by transepithelial resistance, localization of tight junction proteins, and barrier permeability assays. Importantly, live LGG was required for these effects as heat-killed bacteria were not effective in preventing EHEC effects on epithelial cells.
Rotavirus
Enteric viruses including noroviruses and rotavirus represent major causes of gastroenteritis, especially in young children. Rotavirus infection results in acute gastroenteritis with accompanying dehydration and vomiting mainly in children 3-24 months of age. Human rotavirus primarily infects intestinal epithelial cells of the distal small intestine, resulting in enterotoxin-mediated damage to intestinal barrier function. Recent studies indicate that probiotics may reduce the duration and ameliorate disease due to rotavirus infection ( [84]; G. Preidis and J. Versalovic, unpublished data). Probiotics promoted intestinal immunoglobulin production and appeared to reduce the severity of intestinal lesions due to rotavirus infection in a mouse model. These findings and related investigations suggest that probiotics may diminish the severity and duration of gastrointestinal infections by mechanisms independent of direct pathogen antagonism. Probiotics may also promote healing and homeostasis by modulating cytokine production and facilitating intestinal barrier function.
CONCLUDING REMARKS
Probiotics may provide an important strategy for the prevention and treatment of gastrointestinal infections. Specific bacteria derived from human microbial communities may have key features that establish these microbes as primary candidates for probiotic therapies. These beneficial microbes may have different effects within the host such as prevention of pathogen proliferation and function. Probiotics may also stimulate the host's immune function and mucosal barrier integrity. By working via different mechanisms of probiosis, probiotics may yield effects at different steps in the process. Probiotics may prevent disease from occurring when administered prophylactically. Probiotics may also suppress or diminish severity or duration of disease in the context of treatment. As our knowledge of the human microbiome advances, rational selection of probiotics based on known mechanisms of action and mechanisms of disease will facilitate optimization of strategies in therapeutic microbiology. Ultimately, we expect that probiotics will help to promote stable, diverse, and beneficial microbial communities that enhance human health and prevent disease. | 2017-06-17T19:27:18.580Z | 2009-02-04T00:00:00.000 | {
"year": 2008,
"sha1": "f2795bd13fa90a4c2c9f42c66414d239ae45b1a0",
"oa_license": "CCBY",
"oa_url": "http://downloads.hindawi.com/journals/ipid/2008/290769.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4c0810fd2f523d279870fc478b8f381b74e9b3a9",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
51926130 | pes2o/s2orc | v3-fos-license | Dietary and Physical Activity Behaviours in African Migrant Women Living in High Income Countries: A Systematic Review and Framework Synthesis
Dietary and physical activity behaviours during preconception and in pregnancy are important determinants of maternal and child health. This review synthesised the available evidence on dietary and physical activity behaviours in pregnant women and women of childbearing age who have migrated from African countries to live in high income countries. Searches were conducted on Medline, Embase, PsycInfo, Pubmed, CINAHL, Scopus, Proquest, Web of Science, and the Cochrane library. Searches were restricted to studies conducted in high income countries and published in English. Data extraction and quality assessment were carried out in duplicate. Findings were synthesised using a framework approach, which included both a priori and emergent themes. Fourteen studies were identified; ten quantitative and four qualitative. Four studies included pregnant women. Data on nutrient intakes covered both macro- and micronutrients, and were suggestive of inadequacies in iron, folate, and calcium, and excessive sodium intakes. Dietary patterns were bicultural, including both Westernised and African dietary practices. Findings on physical activity behaviours were conflicting. Dietary and physical activity behaviours were influenced by post-migration environments, culture, religion, and food or physical activity-related beliefs and perceptions. Further studies are required to understand the influence of sociodemographic and other migration-related factors on behaviour changes after migration.
Introduction
Dietary and physical activity (PA) behaviours play a central role in the health and wellbeing of women and their children. This is especially important during preconception and in pregnancy. The preconception period represents a crucial time when dietary and PA behaviours could help prepare the body and ensure the accumulation of sufficient nutrient stores for healthy pregnancies in the future [1]. Maternal nutrition during pregnancy influences foetal growth and development and sets a foundation for long-term health for both mother and child [2]. Pregnant women are particularly vulnerable to inadequate nutrition due to the high nutrient and energy demands of pregnancy [3,4]. Proper foetal growth and development requires an appropriate nutritional intake at each stage of pregnancy, alongside adequate energy levels [5]. The World Health Organization (WHO) has established dietary guidelines and recommendations for intakes of various nutrients. The search strategy was developed in Medline and translated across eight other electronic databases: Embase, PsycInfo, Pubmed, CINAHL, Scopus, Proquest, Web of Science, and the Cochrane library. Searches were restricted to studies published between 1 January 1990 and 26 February 2018; 1990 represents the start of a significant peak period of migration from African countries to HICs [36]. Database searches were completed in February 2018. Reference list and citation searches were completed in March 2018.
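As a hedged illustration of how such a date-restricted search could be reproduced programmatically, the sketch below queries PubMed through Biopython's Entrez interface. The query string, contact email, and retrieval cap are invented placeholders, not the review's actual search strategy.

```python
# Illustrative PubMed search restricted to the review's date window,
# using Biopython's Entrez wrapper around the NCBI E-utilities.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # NCBI requires a contact address

query = (
    '("African migrant" OR "African immigrant") AND '
    '(diet OR "physical activity") AND (pregnan* OR "childbearing age")'
)

handle = Entrez.esearch(
    db="pubmed",
    term=query,
    datetype="pdat",        # filter on publication date
    mindate="1990/01/01",   # 1 January 1990
    maxdate="2018/02/26",   # 26 February 2018
    retmax=10000,
)
record = Entrez.read(handle)
handle.close()
print(f"{record['Count']} citations retrieved")
```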
Inclusion and Exclusion Criteria
Inclusion criteria were: (1) primary research studies (qualitative and quantitative), (2) human participants, (3) pregnant women or women of childbearing age, (4) women who have migrated from an African country to live in a HIC, (5) studies conducted in HICs, (6) studies published in English, and (7) studies reporting on at least one of the following: dietary behaviours, PA behaviours, or determinants of dietary and/or PA behaviours. Studies including "Black" women with no specification of country of origin were excluded, because these may have included women who had not migrated from African countries. Studies including African women living in HICs as refugees or asylum seekers were also excluded because they had been forcibly displaced and are therefore living in HICs under different circumstances from those of African women who migrated.
Study Selection and Screening
All studies resulting from the search were imported into EndNote Version X8. Titles and abstracts of a random 10% sample of all identified papers were independently screened by three authors (LN, JR, and NH), who reached a consensus on study eligibility and inclusion. Titles and abstracts for the remaining 90% of identified papers were then screened by one author (LN) and double screened by two other authors (AO and ZA). The full texts of all studies that met the inclusion criteria for this review were retrieved and screened by one author (LN); and then divided amongst the other five authors for double screening. Additional studies were identified by hand-searching the reference lists of all included studies and performing citation searches on Google Scholar. Any inconsistencies from the screening of full texts were resolved by discussion.
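The two-stage screening allocation described above lends itself to a simple reproducible split. The helper below is a minimal sketch under invented assumptions (citation IDs as integers, a fixed seed); the actual reviewer assignments were made manually.

```python
# Split citations into a random 10% pilot sample (triple-screened to
# calibrate eligibility decisions) and the remaining 90% (single-screened
# with double screening by two further reviewers).
import random

def split_for_screening(citation_ids, pilot_fraction=0.1, seed=2018):
    """Return (pilot_sample, remainder) for the two screening stages."""
    rng = random.Random(seed)            # fixed seed keeps the split reproducible
    ids = list(citation_ids)
    pilot = rng.sample(ids, round(len(ids) * pilot_fraction))
    pilot_set = set(pilot)
    remainder = [i for i in ids if i not in pilot_set]
    return pilot, remainder

pilot, remainder = split_for_screening(range(4343))
print(len(pilot), len(remainder))        # -> 434 3909
```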
Data Extraction and Quality Assessment
A data extraction form was developed and piloted among three authors (LN, NH, and JR), for two studies. The final data extraction form included the title of the study, journal title, publication year, host country, country of origin of African women, and outcomes reported. The quality of quantitative studies was assessed using the "Quality Assessment Tool for Observational Cohort and Cross-Sectional Studies" [37]; the "Critical Appraisal Skills Programme (CASP) checklist" [38] was used for qualitative studies. Quality assessments were carried out in duplicate and any inconsistencies were resolved by discussion. Quality rating was judged as good, fair, or poor. A study was rated "good" if it was deemed to have a minimal risk of bias and its results were considered valid. A "fair" rating meant that the study was susceptible to some bias although not enough to invalidate its results. "Poor" quality referred to studies with major limitations and significant risk of bias.
Data Synthesis
A framework synthesis approach was used, which offers a structured method to organise and analyse data [39], and is particularly useful for integrating qualitative and quantitative evidence in meta-synthesis. This method involves the utilisation of an a priori framework to code data, which can then be modified to reflect the evidence reported in the included studies [40]. The framework developed for this review was informed by background literature, and consisted of a list of a priori themes including dietary behaviours, PA behaviours, and factors that influence dietary and PA behaviours (determinants). The determinants were sociodemographic factors (e.g., maternal age, parity, level of education, socioeconomic status (SES), income, and marital status); migration-related factors (e.g., duration of residence, age at arrival in HICs, and environmental factors); culture and religion; health status (e.g., stress); other health behaviours (e.g., smoking and alcohol consumption); pregnancy-related factors (e.g., pregnancy status and gestational age); and nutrition- or PA-related knowledge, beliefs, and perceptions. The framework was adapted throughout the process of data synthesis to incorporate and reflect themes that emerged from the data. This framework provided a matrix onto which relevant data from the included studies were coded. Relevant data included narratives from results sections, figures, data tables, and supplementary materials.
The framework synthesis process involved two authors (LN and NH) and the following stages: familiarisation with the data, identification of a thematic framework, indexing, charting the data into the framework matrix, and mapping and interpretation. The familiarisation stage was an iterative process where all included studies were read several times, while marking portions of data with relevant information. Codes were then applied to fragments of text with information that was relevant to the review. Coding also involved making notes on questions to consider during the analysis process. The codes were then assessed for similarities and differences, and clustered together around similar concepts. Included studies were checked again to ensure that no new codes could be generated from the data. Any new themes that emerged were assessed to establish whether they were in fact new, or subgroups related to the existing a priori themes. In the final stage, the themes were used to explore patterns and relationships within the data.
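To make the idea of a framework matrix concrete, the toy sketch below indexes coded excerpts by theme and study. The a priori theme names follow the list given above; the study IDs, excerpts, and the emergent theme are invented for illustration.

```python
# A framework matrix as a nested mapping: theme -> study -> coded excerpts.
# Emergent themes discovered during coding are added alongside the a priori ones.
from collections import defaultdict

A_PRIORI_THEMES = [
    "dietary behaviours", "PA behaviours", "sociodemographic factors",
    "migration-related factors", "culture and religion", "health status",
    "other health behaviours", "pregnancy-related factors",
    "knowledge, beliefs and perceptions",
]

framework = {theme: defaultdict(list) for theme in A_PRIORI_THEMES}

def code_fragment(theme, study_id, excerpt):
    """File an excerpt under a theme; unseen themes are treated as emergent."""
    if theme not in framework:
        framework[theme] = defaultdict(list)   # emergent, data-driven theme
    framework[theme][study_id].append(excerpt)

code_fragment("culture and religion", "study_47",
              "Festivals were commonly cited as reasons to cook traditional dishes.")
code_fragment("competing priorities", "study_53",  # emerges during coding
              "Work and childcare left little time for home cooking.")
```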
Results
The search strategy identified a total of 4343 citations (Figure 1). Exclusion of duplicates, initial screening of titles and abstracts and screening of full-texts against inclusion criteria left 55 potentially eligible studies. A further 41 studies were excluded from this review as they did not address dietary and PA behaviours in pregnant women or women of childbearing age and only reported data on weight status of women and children. These studies will inform additional systematic reviews relating to the wider research programme. Fourteen studies were included in this review.
Dietary Behaviours
Nine studies [42,44-50,53] reported dietary behaviours (Table 2). These included dietary intakes, dietary patterns, and food practices. Studies reporting on dietary intakes assessed energy, macronutrients and micronutrients using different methods of assessment including food-frequency questionnaires, 24 h recalls, face-to-face interviews, and researcher-developed questionnaires. Studies reporting dietary patterns assessed whether African women maintained their traditional dietary behaviours or adopted 'Western-style' dietary behaviours after migration. Studies reporting food practices described any factors relating to the production and consumption of food, such as cooking methods, meal planning, diet restrictions, and eating out of home.
Dietary Intakes
Dietary intakes were assessed in four studies [44,45,48,50]. Results were presented as mean (SD) and percentage of total energy (%TE) derived from macronutrients. Two studies [44,45] also reported on nutrient supplement usage and three reported on nutrient inadequacies [44,45,50]. As there were limited comparison groups reported in the studies, this review compared the dietary results reported for African women with WHO recommendations [6,55]. Table 3 shows the reported mean intakes of energy and macronutrients in pregnant women and women of childbearing age, and WHO recommended intakes. Five macronutrients were analysed, including carbohydrate, protein, total fat, dietary fibre, and cholesterol. Only one study [44], in Ireland, reported intakes in pregnant women, which included total energy, dietary fibre, and %TE from carbohydrate, protein, and fat. The study did not compare the women's mean intakes with any recommendations but found that their %TE values were compliant with Irish national guidelines. When compared with WHO reference ranges, mean intakes of total energy and dietary fibre were low, while %TE values were higher for protein and fat.
Energy and Macronutrient Intakes
Carbohydrate intakes in women of childbearing age were higher than WHO recommendations in two studies [45,48], while intakes of dietary fibre were lower. Results for protein, fat, and cholesterol were contradictory: while one study showed higher intakes than recommended by the WHO, the other study found lower intakes. Only one study reported %TE in women of childbearing age, and all values were within the WHO reference ranges. No participants reported taking any nutrient supplements.
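The comparison performed above amounts to checking each reported intake against a reference range. The sketch below illustrates this; the ranges shown are indicative of the WHO population nutrient intake goals (e.g., 55-75 %TE from carbohydrate) but are quoted from memory and should be verified against the guideline documents cited in the review.

```python
# Flag reported intakes that fall outside WHO reference ranges.
# (None means the range is open on that side.)
WHO_RANGES = {
    "carbohydrate_%TE": (55, 75),
    "protein_%TE": (10, 15),
    "total_fat_%TE": (15, 30),
    "dietary_fibre_g": (25, None),     # lower bound only
    "cholesterol_mg": (None, 300),     # upper bound only
}

def classify(nutrient, value):
    low, high = WHO_RANGES[nutrient]
    if low is not None and value < low:
        return "below recommendation"
    if high is not None and value > high:
        return "above recommendation"
    return "within reference range"

print(classify("total_fat_%TE", 33))    # above recommendation
print(classify("dietary_fibre_g", 18))  # below recommendation
```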
Micronutrient Intakes
A total of 15 micronutrients (Table 4) were reported in three studies [44,45,48], one of which included pregnant women [44]. Pregnant women's intakes of vitamins A, B12, C, selenium, iron, and iodine were seen to be compliant with national Irish dietary guidelines, although iron intakes were less compliant than the other micronutrients (86.4% vs. 100%). Compared with WHO guidelines, intakes of vitamins A, B12, C, D, selenium, sodium, and iodine were higher than recommended. The study also reported inadequate intakes of calcium, folate, and vitamin D. Following WHO recommendations, intakes of calcium, folate, and iron were inadequate, but vitamin D was not.
Two studies [45,48] found lower intakes of iron and higher intakes of vitamin C in women of childbearing age, compared to WHO reference ranges. Results for calcium and zinc were contradictory: intakes were higher in one study [48] and lower in the other [45]. Only one study [45] reported on vitamin B12, magnesium, folate, and phosphorus; another [48] reported on vitamin E, sodium, and folic acid in women of childbearing age. Intakes of vitamin E, sodium, magnesium, and phosphorus were higher than recommended, while those of folate and vitamin B12 were lower. Folic acid intakes were within the WHO reference range.
Dietary Patterns
Dietary patterns were explored in eight studies [42,44-46,48-50,53], one of which included pregnant women [44]. The studies described the extent to which the women's usual dietary habits had changed after migration, the food groups they consumed and the kinds of food they had adopted from their host countries. No study reported that the women had completely adopted the dietary patterns of their host countries. African women were seen to preserve some of their traditional dietary behaviours after migration, the extent of which varied across studies. Dietary patterns differed according to how strictly the women adhered to their traditional foods. Three patterns of adherence were identified: strict, flexible, and limited. Adopting foods from the host country was reported for all three dietary patterns, and such foods were usually eaten at breakfast and lunch. Examples of these included processed meats, pizza, cereals, fish, chips, sandwiches, snacks, candies, and soft drinks.
Two studies [44,49] (both in pregnant women) found that African migrant women strictly maintained their traditional dietary patterns. In these groups of women, snacking between meals was uncommon and the consumption of Western-style processed foods such as sausages, sugar-sweetened cereals, cheese, chilled desserts, cakes, biscuits, and pastries was very low [44]. One of the studies [44] described the composition of the women's daily diets, which predominantly consisted of rice and other grains, tubers, fruits, vegetables, fish, and meat.
In contrast, two studies [45,53] observed that African migrant women (non-pregnant) adhered less strictly to their traditional African foods, although these were still consumed quite often. Other Western-style foods such as sandwiches, snacks, candies, and soft drinks were also consumed. One study presented the composition of the women's diets: higher intakes of grains and simple sugars, and lower intakes of vegetables, meat, and fish.
Dietary patterns characterized by a limited consumption of traditional foods were identified in two studies [48,50] (both non-pregnant women). Local African foods were rarely eaten by these groups of women and their dietary patterns were similar to those seen in most HICs. Both studies defined these as "Western" dietary patterns. The women in one of the studies consumed foods high in carbohydrates and fats and had high intakes of alcohol, as opposed to their traditional diets which contained more protein, less fat, and less alcohol [48]. The most commonly consumed food groups were dairy products, meat, fish, raw vegetables, and fruits [48,50].
Two studies [42,46] compared food consumption patterns in African women with those of women from other ethnic groups; one of these included pregnant women [42]. Post-migration consumption of snacks was significantly higher in African women (North Africa 4.5% and SSA 51.5%) compared to Central and Eastern European women (8%) [42]. On the other hand, consumption of fruits and vegetables was significantly lower in African women (North Africa 45.5% and SSA 36.4%) than in Central and Eastern European women (62.5%) [42]. In another study [48], African women were seen to have energy intakes similar to Spanish women, and higher intakes of carbohydrate, protein, cholesterol, dietary fibre, and alcohol.
Food Practices
The food practices reported were mostly "coping mechanisms" used by African women to facilitate their adaptation to their new food environments. These were behaviours related to shopping, cooking, and eating practices [45,47,49,54].
Shopping and Cooking Practices
African women reported searching for African foods in local shops and markets, to allow them to continue with their normal African diets [47]. These foods were usually not available or in short supply. Examples of local foods that were difficult to locate included African vegetables (e.g., sweet potato leaves, cassava leaves, amaranth, and pumpkin leaves), black-eye beans, maize flour, camel milk, and cocoyam products [47]. Women also reported substituting local foods with similar items found in HICs in order to replicate their traditional dishes, such as replacing one type of meat with another [47]. Some unfamiliar host-country food items such as asparagus and figs were tried and rejected due to taste [47]. Although African women adopted some food items from their host countries, they showed a preference for their traditional cooking and seasoning methods. African women pointed out unpleasant differences between host-country and traditional foods [53,54]. These differences related to taste, texture, and cooking methods. HIC foods were commonly described as "tasteless", which usually meant the lack of salt or spices. Three studies [45,47,53] reported that women usually added salt or other local spices to HIC foods, to make them more palatable. One study in Norway [53] reported that African women seasoned fish differently and preferred frying rather than poaching, as was commonly done in Norway.
Eating Practices
Changes in African women's meal plans were reported in two studies [45,47]. While some women reported taking breakfast more frequently after migration, some reported doing so less frequently, and others stopped completely [45,47]. A lack of structure in eating plans was mainly attributed to the nature of the women's work lives. Pregnant women in one study reported restricting themselves from certain HIC foods during pregnancy (e.g., processed meats) and mostly consuming African vegetables [49]. Women reported a higher frequency of eating outside their homes since they migrated [47].
Determinants of Dietary Behaviours
Five studies provided data on the factors influencing dietary behaviours in African immigrant women [45,47,[52][53][54], one included pregnant women [52]. The populations included in four studies were women from North and North East African countries including Somalia, Ethiopia, Morocco, Algeria, Egypt, and Eritrea. Only one study included women from SSA. The studies were conducted in Canada, Australia, Amsterdam, Israel, and Norway.
The determining factors influencing the women's dietary behaviours included five of the seven a priori themes (sociodemographic factors; migration-related factors; culture and religion; pregnancy-related factors, and nutrition-related knowledge, beliefs, and perceptions) and one data-driven theme (competing priorities). No data were reported for the a priori themes health status and other health behaviours. Some of the themes presented played a dual role as both barriers and facilitators to women's behaviours.
Sociodemographic Factors
Very limited data were available for sociodemographic factors. Only the influence of maternal age was reported, in one paper [45], which found a negative correlation with the consumption of dairy products, fats, simple sugars, and soft drinks (p < 0.001). No data were available for the influence of parity, maternal level of education, socioeconomic status (SES), income, or marital status on dietary behaviours.
Migration-Related Factors
These determinants were all related to the new environments that African women lived in after migrating to HICs, specifically the new natural, food, living, and work environments.
Natural Environment
One study reported the influence of the weather in HICs on the women's dietary behaviours [52]. Organic food and homegrown fruits, vegetables, and grains were more readily available throughout the year in their countries of origin compared with HICs.
Food Environment
The food environment in HICs was reported both as a barrier and a facilitator to healthy dietary behaviours. The constant availability of food was a facilitator as this brought a level of food security in the women's households compared with their countries of origin [47]. However, the availability of cheap and unhealthy convenience foods was a barrier to healthy dietary behaviours, which influenced women to eat out more, cook less, and consume more snacks [47,52-54].
Living Environment
African women's food choices were often influenced by the preferences of family members with whom they lived [53,54]. Most women reported that their husbands preferred traditional foods, which influenced the continuity of traditional dietary habits in their households. Other women reported that while their husbands preferred traditional foods, their children preferred foods adopted from the host country. This resulted in the women cooking separate meals for their husbands and children.
Work Environment
Living and working in HICs made the women's schedules very busy and also increased the amount of time they spent away from home, which decreased their frequency of cooking at home and increased their frequency of eating snacks and take-away foods [45,52,53]. The women's work schedules also influenced how often they were able to take breakfast or eat together with their families.
No data were reported for the influence of the women's duration of residence or age at arrival in HICs on their dietary behaviours.
Culture and Religion
African women emphasised the importance of culture to their dietary behaviours and traditional food habits [45,47,53,54]. Food played a key role in showing hospitality and it was normal in their culture to serve food in large quantities. The women expressed pride in their African cuisine, highlighting the importance of using traditional spices to enhance taste. Cultural and religious festivals were commonly cited as reasons to cook traditional dishes. Religious rules also played a role in food choices and shopping patterns, especially for Muslim women who did not eat pork or had to determine whether food was "halal" (adheres to Islamic law) before consumption. Culture and religion were seen to reinforce the women's efforts to maintain their traditional dietary behaviours.
Pregnancy Status
Reasons for eating unhealthy foods during pregnancy included tiredness, long work hours, pregnancy stress, and a lack of support from family and friends in HICs [52]. Pregnant women also reported following their traditional behaviours and refraining from certain food items in order to avoid having a baby that was "too large".
Nutrition-Related Knowledge, Beliefs and Perceptions
In two studies, African women believed that HIC foods were healthier than their traditional foods because they were less dense and contained less oil and sugar [53], while traditional dishes were described as oily, spicy, high in calories, and fattening-all of which they attributed to poor nutrition [54]. Other women felt that some HIC foods were "lacking in nutrients" [53]. An example of this was vegetables-the women believed that boiling vegetables (as was commonly done in their host countries) made them watery and less nutritious; unlike frying, which helps preserve nutritional value and maintain crispiness.
Competing Priorities
Other priorities which typically prevented women from preparing home-made meals included going to work, attending school, looking after children, and managing day-to-day tasks at home [47,53,54]. Foods adopted from HICs were frequently cooked because they were considered less time-consuming to prepare and enabled women to serve warm meals on busy days. These factors also influenced eating outside the home or ordering food online.
PA Behaviours
Five studies [41,43,45,46,50] reported PA behaviours in women of childbearing age (Table 5); there were no studies in pregnant women. Two studies included North and North East African women living in Australia and Israel; two included women from West Africa and SSA living in Spain and Italy; and one did not specify the region of origin of the African women living in England. PA behaviours were assessed and defined differently, as shown in Table 5. All methods of PA assessment were self-reported. Three studies included all-African populations [43,45,50], while two included women from other ethnic groups for comparison [41,46].
Findings from the all-African population studies reported conflicting results. Two studies of North East African women in Spain [50] and West African women in Israel [45] showed that the majority (65.9% and 72%, respectively) did not exercise regularly, and 49% of the women in Israel walked for less than 30 min a day. However, another study reported that 87% of women from SSA living in Italy exercised more than 3 times a week [43].
Studies including comparison groups of non-African women also reported conflicting data. One study [41] reported no significant difference in physical inactivity between North African women and Australian women (OR 1.07, 95% CI 0.60-1.88), while another group of African women (countries of origin not specified) was significantly less inactive (OR 0.69, 95% CI 0.51-0.94). A second study [46] showed that Black African women in England were less likely than White women to exercise more than 3 times in 4 weeks (OR for physical inactivity 2.16, 95% CI 1.71-2.75).
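As a worked example of the effect measure quoted above, the snippet below computes an odds ratio with a Wald-type 95% confidence interval from a 2x2 table. The counts are invented purely for illustration; they are not data from the included studies.

```python
# Odds ratio and 95% CI for physical inactivity in two groups.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a, b = inactive/active in group 1; c, d = inactive/active in group 2."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(120, 80, 90, 110)     # hypothetical counts
print(f"OR {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR 1.83, 95% CI 1.23-2.73
```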
Determinants of PA Behaviours
Four studies explored the determinants of PA in African migrant women [45,51,52,54]; one included pregnant women [52]. The populations in all four studies were women from North and North East African countries, including Somalia, Morocco, Ethiopia, and Eritrea. No studies included women from West Africa or SSA. The studies were conducted in Canada, Sweden, Amsterdam, and Israel.
The same framework was used as described for determinants of dietary behaviours (Section 3.2). Four out of seven a priori themes (sociodemographic factors, migration, culture/religion, and PA-related knowledge, beliefs, and perceptions) were reported in the included studies. No data were reported for health status, other health behaviours, and pregnancy-related factors. Participants' responses were based on their interpretations of PA, which mostly referred to organised PA (e.g., going to the gym, cycling, or attending fitness classes) as opposed to habitual PA like walking and household chores. Only one theme (migration-related factors) included factors related to habitual PA.
Sociodemographic Factors
Findings on sociodemographic factors were limited to income and marital status. No data were available for maternal age, parity, level of education, or measures of SES other than income.
Income
Income was reported as a barrier to PA in one study [52]. The women reported that they lacked financial resources to enrol in gyms or fitness classes that suited their needs.
Marital Status
One study described that married and cohabiting women reported their daily schedules to be very busy, leaving them with no time for extra activities other than everyday chores.
Migration-Related Factors
All four studies reported on migration-related factors and these were centred around the same environmental factors as dietary behaviours.
Natural Environment
The cold weather in HICs was reported as a barrier to PA in three studies [51,52,54]. The warm climate in their home countries enabled women to engage in outdoor activities like walking to the market. Meanwhile, outdoor activities in HICs were avoided due to the cold climate, except in the warmer seasons. The weather also influenced the means of transportation the women used during the winter as they preferred motorised rather than active transportation such as walking. North African women described how they were accustomed to staying indoors and sleeping in on rainy days, and only left the house if they had reason to (e.g., work or taking children to school) [51].
Built Environment
Built environments in HICs presented a barrier to PA in that the streets around the women's neighbourhoods usually either lacked sidewalks or had high volumes of traffic [51,54]. The wide availability of transport facilities (e.g., buses and trains) also significantly reduced the amount of time that the women spent walking [51,54]. The women felt that their environments "back home" were more conducive to being physically active [52], where they usually walked long distances to buy groceries and household supplies [51].
Living Environment
African women described the houses where they lived in HICs as relatively small [54], and they had more household appliances, which reduced their daily PA related to household chores [51,54]. However, women from North and North East Africa also reported that they were used to living with extended family members who all participated in household chores, whereas this was not the case in HICs, which increased the amount of time required to complete chores and limited the spare time they had for exercise [51].
Work Environment
Only one study included findings on African women's work environments in HICs, which tended to promote sedentary activity [45].
No study reported on the influence of the women's duration of stay in HICs or their age at arrival.
Culture and Religion
Cultural and religious factors played a role in women's ability to exercise, especially women from predominantly Muslim African countries, whose culture did not encourage women to mix with men in public spaces; there was also a lack of female-only centres [51,54]. Women described that their traditional outfits were not suitable for PA and the possibility of people watching while they exercised was an additional barrier to PA [51]. Participation in PA relied on the activities meeting their cultural needs, such as having an informal leader from a similar cultural background, being accompanied by other women, or being able to dress in their traditional attire [51].
PA-Related Knowledge, Beliefs and Perceptions
African women showed an understanding of the need for PA and acknowledged that it is important to health and well-being. Participation in leisure-time or health-related PA was a concept women only became familiar with after migration. For example, women described how PA was normally incorporated into their daily lives in their countries of origin, so they were not familiar with the concept of walking for health or just walking for the sake of it [51]. This was also viewed as a facilitator to PA by women who had experienced positive health outcomes from such activities. Some women believed that leisure-time activities were meant for children [51]. A lack of familiarity was also expressed with some organised physical activities such as cycling or swimming, and many had never learned to cycle or swim [54].
Competing Priorities
Other responsibilities competing for African women's time such as family commitments, work, school, and household chores were common barriers to PA [45,51,54]. Living and working in HICs was described as "very busy" and "stressful", unlike in their countries of origin where most women did not have to work.
Discussion
This review aimed to synthesise the evidence on dietary and PA behaviours, and determinants of behaviours, among pregnant women and women of childbearing age who have migrated from African countries to live in HICs. Data available for macronutrient intakes were conflicting. Micronutrient analyses suggest low levels of folate, calcium, and iron in pregnant women, while sodium intake was more than twice as high as recommended levels. Deficiencies in multiple micronutrients are a reflection of poor diets and are associated with pregnancy complications and haematological consequences, which may increase the risk of haemorrhage and death during pregnancy [56,57]. A high sodium intake increases the risk of metabolic disorders and pregnancy complications such as preeclampsia [58]. Future intervention research could prioritise improving diet quality, especially micronutrient intakes, for this population of women.
Findings on dietary patterns showed that no participants completely changed their traditional dietary practices to those in HICs. Rather, dietary patterns were bicultural, with an overlap between HIC and traditional dietary practices. Three overarching patterns were defined according to how much the women adhered to their traditional dietary practices. These were either strict, flexible, or limited. Evidence of adopting host-country behaviours was seen across all three dietary patterns. Examples of adopted "unhealthy" dietary behaviours included increased frequency of snacking, high consumption of processed foods, high intakes of sweets and sweet drinks, cooking less, and eating out more often. Similar behaviours have been observed amongst migrant populations in other reviews [23,59]. A few healthy behaviours were also adopted, such as an increased consumption of fish, fruits, and vegetables; and taking breakfast more regularly [45,53].
Dietary behaviours were shaped by interrelating factors that fell into six main themes: sociodemographic characteristics, migration-related factors, culture and religion, pregnancy, nutrition-related knowledge, beliefs and perceptions, and competing priorities. Post-migration environments accounted for the largest number of factors that shaped the women's dietary behaviours. Religious and cultural beliefs predominantly influenced women from Muslim countries. Major facilitators to maintaining traditional dietary practices included cultural and religious beliefs, having African community and social networks, the availability of ethnic shops in HICs, and participants' perceptions of HIC foods. African women highlighted the importance of spiciness and taste in food and showed a preference for traditional cooking methods. On the other hand, the likelihood of adopting host-country dietary practices was increased by factors such as the abundance of cheap, unhealthy convenience foods in HICs, busy lifestyles, and not having enough time to cook African foods. The findings of this review fit the model of dietary acculturation proposed by Satia-Abouta et al. [60], which shows that dietary changes are governed by sociodemographic, cultural, and environmental factors. Similar influences on dietary behaviours have also been reported in other reviews [23,61]. It is also possible that the process of change to Westernised behaviours could have started before migration, due to the nutrition transition occurring in LMICs. The nutrition transition is characterised by urbanisation and a shift from less energy dense diets and more active lifestyles to a higher consumption of fatty foods and sedentary lifestyles [62]. While the majority of evidence in this review shows that HIC environments contributed to unhealthy dietary practices, other studies have found the contrary. For example, a study conducted in Belgium [63] reported increased consumption of healthy foods following migration. This study suggested that exposure to the host-country culture increased nutrition knowledge through media, friendships, and work environments, which led to the adoption of healthier eating habits.
Findings on PA behaviours were inconsistent, although there was a general sense of low levels of physical activity amongst women who had migrated from African countries. Inconsistency in results was mainly due to the difference in methods used to assess PA as well as different outcomes reported. The extent of behaviour change after migration may also vary according to country of origin and time of migration, so even individuals belonging to the same ethnic group may be at different levels of the acculturation process. This may explain some of the conflicting results presented in the studies. Determinants of PA behaviours were clustered around five main themes: sociodemographic characteristics, migration-related factors, culture and religion, PA-related knowledge, beliefs, and perceptions, and competing priorities. Culture, religion, and post-migration environments played the biggest role in women's PA behaviours in the evidence reported. Cultural and religious beliefs were seen to prohibit participation in public activities, especially amongst women from Muslim countries. Migration to HICs was also shown to encourage a more sedentary lifestyle, unlike the environmental and living conditions in the women's countries of origin, which were more favourable to PA. Although women were not familiar with several concepts of PA, participating in leisure-time activities was regarded as a positive adopted behaviour, which women enjoyed, provided the activities were organised to meet the women's cultural customs.
This systematic review is the first to report robust evidence synthesis on African migrant women's dietary and activity behaviours. Methodological strengths include the thoroughness of the search, which involved multiple databases plus supplementary searches, and duplication of all screening, data extraction, and quality assessments. An in-depth framework synthesis was carried out to ensure that available evidence, as well as gaps in available evidence, were explicitly captured. Many studies excluded from this review either assessed dietary and PA behaviours in both women and men without providing separate analyses for women, or included "Black" women without specifying their countries of origin. These factors partly account for the limited number of relevant studies included in the review. The inclusion of mainly cross-sectional quantitative studies in this review also reflects the types of studies available. Studies predominantly included women from North African countries, with fewer women from West and Sub-Saharan Africa; there may be different determining factors across African countries which could not be explored in this review. A major limitation across studies was the reliance on self-reported responses, which may introduce recall bias associated with inaccuracies in recollecting past events. Dietary intakes or PA levels may have been over- or under-reported, thereby not giving a true representation of the women's behaviours.
There was a paucity of data on dietary and PA behaviours in pregnant women. In addition, data were not available for several a priori factors which were drawn from the evidence base on determinants of dietary and PA behaviours in other populations. These include sociodemographic factors (maternal age, parity, maternal education, and socioeconomic status), migration-related factors (duration of residence, age at time of migration, and immigrant status), health status, other health behaviours (smoking or alcohol consumption) and pregnancy-related factors (gestational age or pregnancy-specific health conditions such as gestational diabetes). Further studies are required to explore the influence of these factors on women's dietary and PA behaviours. Other factors, such as feelings of inclusion or exclusion, or a sense of identity or belonging with either the host country or the participants' countries of origin, were also not explored in the included studies. Further studies are needed to understand the influence of these factors, as they could potentially modify the women's behaviours.
There was also a paucity of data which directly compared migrant and host-country populations. Further research is needed to understand differences and commonalities in factors underlying their dietary and PA behaviours.
Conclusions
The findings of this review highlight the need to understand the role of acculturation in dietary and PA behaviours amongst women who have migrated from African countries to live in HICs. The contradictory evidence on migration-related factors highlights that acculturation is a complex and multifactorial process. As both barriers and facilitators were reported in these data, the process of acculturation could have either a protective or a detrimental effect on health by influencing positive or negative health behaviours. The continuation of some traditional practices is an indication of the value placed on cultural habits and the need for culturally sensitive approaches in understanding post-migration behavioural changes. Future research is needed to explore the evidence gaps identified in this review to gain a deeper understanding of the factors that drive changes in behaviours after migration.
"year": 2018,
"sha1": "726203b711d769f887572749c9b9578fb635db23",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6643/10/8/1017/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "e226ed165d71e42c9860c4ac6666e294bb6775f4",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Psychology"
]
} |
Deep Learning Unveils Hidden Insights: Advancing Brain Tumor Diagnosis
Timely detection and treatment are crucial in managing brain tumors, a severe medical condition. MRI is a commonly used diagnostic tool for detecting brain tumors. However, because of the complex structure of the brain and the wide range of tumor sizes and shapes, MRI scan interpretation can be time-consuming and error-prone. The automated detection and segmentation of brain tumors has shown encouraging results thanks to recent developments in DL techniques. In this paper, we propose a DL technique for brain tumor identification based on CNNs, RNNs, and GANs. The proposed method uses transfer learning and data augmentation to train the CNN on a sizable dataset of MRI images labelled with tumor areas. According to experimental findings, the proposed approach is more accurate than current state-of-the-art techniques for detecting brain tumors, and it has the potential to help radiologists identify brain tumors quickly and reliably, improving patient outcomes.
In addition, data augmentation, which generates extra training data from existing images, has been used to improve the performance of CNNs for brain tumor detection. Kamnitsas et al. (2017) used data augmentation to train a 3D CNN on a large dataset of MRI images, achieving an overall Dice coefficient of 0.76 for tumor segmentation on a dataset of 210 patients. Kamnitsas et al. (2018) conducted a systematic review analyzing 56 studies that employed deep learning (DL) models for brain tumor detection and segmentation; they found that DL models consistently achieved high accuracy and were effective in detecting brain tumors. [4] Soltaninejad et al. (2018) proposed a DL-based approach for tumor detection and segmentation utilizing a combination of CNNs and conditional random fields (CRFs); their method achieved an average Dice similarity coefficient (DSC) of 0.82, indicating accurate segmentation of brain tumors. [5] Farooq et al. (2019) presented an approach based on convolutional neural networks (CNNs) to detect and classify brain tumors from MRI data, achieving an accuracy of 97.78%. [1] Jaffar et al. (2018) utilized recurrent neural networks (RNNs) to detect brain tumors by creating a sequence of frames from MRI images; their approach achieved a high accuracy of 98.5%. [2] El-Sayed et al. (2020) introduced a method for automatic detection and classification of brain tumors using a combination of CNNs and radial basis function networks (RBFNs), achieving an accuracy of 95.4%. [6] To summarize, DL methods, particularly CNNs, have shown remarkable results for brain tumor detection and segmentation, surpassing traditional machine learning methods. Transfer learning and data augmentation techniques have also been demonstrated to enhance the performance of CNNs for this task.
Different DL Approaches for Brain Tumor Detection
Brain tumor detection using DL algorithms such as CNNs, U-Net, ResNets, RNNs, and GANs has shown promising results. However, the availability and quality of data pose one of the biggest difficulties in developing these models. Collecting and labelling brain MRI data is a time-consuming and tedious process that calls for both specialized knowledge and practical experience.
This makes it difficult to obtain large and diverse datasets, especially in low-resource settings.
Additionally, the quality of MRI data can vary, with different contrasts, resolutions, and artifacts affecting the performance of DL models. Brain tumors can also have different shapes, sizes, and locations, with heterogeneous appearances, making it challenging to accurately detect and segment them.
To overcome these obstacles, researchers have used a variety of techniques, including data augmentation, transfer learning, and the use of multi-modal MRI data. Data augmentation involves generating synthetic data from existing images by applying transformations such as rotations, translations, and scaling. Transfer learning entails fine-tuning models previously trained on massive datasets for the task of brain tumor detection. Using multi-modal MRI data, such as combining T1-weighted, T2-weighted, and contrast-enhanced images, can also provide complementary information for more accurate detection and segmentation. [7] Despite these efforts, the scarcity and variability of brain tumor data remain a significant challenge in developing accurate and robust DL models. To address this issue, collaboration between researchers, clinicians, and data providers is necessary to collect and share large and diverse datasets for training and testing DL models. In summary, while CNNs and U-Net have been the most commonly used DL algorithms for brain tumor detection and segmentation, ResNets, RNNs, and GANs have also shown promising results. However, addressing the brain tumor data issue remains crucial for further advancements in this field.
Data Augmentation
Increasing the size and diversity of a training dataset can be accomplished through data augmentation, a technique commonly used in DL-based brain tumor detection to overcome the challenge of limited and imbalanced data. Several methods can be utilized for data augmentation in brain tumor detection, including rotation, flipping, scaling, translation, elastic transformation, intensity shift, and Gaussian noise. [8] These methods can generate a large and diverse dataset for training DL models, which can enhance the model's performance by reducing overfitting and increasing generalization to new data. Nevertheless, it is crucial to apply data augmentation carefully to maintain the original data's properties, and the augmented data should be validated to ensure that they represent real data and do not introduce biases or errors into the training process.
Numerous data augmentation techniques exist for brain tumor detection. Rotation, for example, turns the MRI image by a specific angle to produce new training images; the rotation angle can be either random or fixed. Whatever the technique, augmentation must preserve the properties of the original data, and the augmented data should be validated to ensure that they are representative of the real data and do not introduce biases or errors into training. A minimal sketch of such a pipeline follows.
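As a concrete illustration, the following is a minimal sketch of an augmentation pipeline using TensorFlow/Keras preprocessing layers; the rotation, translation, zoom factors and noise level are illustrative assumptions, not values taken from the studies cited above.

```python
import tensorflow as tf

# Minimal MRI augmentation pipeline (illustrative values, not tuned).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),        # mirror the slice left/right
    tf.keras.layers.RandomRotation(0.05),            # rotate up to about +/-18 degrees
    tf.keras.layers.RandomTranslation(0.05, 0.05),   # shift up to 5% along each axis
    tf.keras.layers.RandomZoom(0.1),                 # scale up to +/-10%
    tf.keras.layers.GaussianNoise(0.01),             # mild additive Gaussian noise
])

# The random layers are only active when training=True, so validation and
# test data remain unaugmented:
# train_ds = train_ds.map(lambda x, y: (augment(x, training=True), y))
```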
Detecting Various Types of Brain Tumors through DL
Deep learning has proven to be an effective tool for detecting various types of brain tumors.
Gliomas are the most common type of malignant brain tumor, and they can be accurately detected and classified using DL techniques. Meningiomas, on the other hand, are usually benign, but still require treatment, and DL can distinguish them from other types of brain tumors and segment them with high accuracy. Pituitary tumors, which affect the gland that regulates hormones, can also be accurately detected and segmented using DL models.
Medulloblastomas, which primarily affect children, are highly aggressive and require prompt treatment. DL can assist in the accurate detection and segmentation of these tumors, allowing for faster diagnosis and treatment. Metastatic brain tumors, which have spread from other parts of the body, can also be detected and segmented accurately using DL techniques. Annotated MRI datasets covering these tumor types can be used to train DL models for brain tumor detection. These models can be designed to perform binary or multi-class classification, depending on the needs of the medical professional. Evaluation of the model's performance can be done using metrics such as sensitivity, specificity, accuracy, and Dice similarity coefficient (DSC).
DL is a powerful tool that can aid medical professionals in accurately detecting and treating various types of brain tumors. By leveraging the capabilities of DL models, medical professionals can make more informed decisions and improve patient outcomes. However, it is important to note that DL models should always be used in conjunction with clinical expertise and should never replace the judgment of medical professionals.
Methodology
There are numerous critical steps involved in the process of employing DL to find brain tumors.
First, it is necessary to compile a dataset of brain MRI scans that includes both tumor and non-tumor cases. Different modalities such as T1-weighted, T2-weighted, and contrast-enhanced MRI may be used, and the data may need to be pre-processed through techniques such as skull stripping and normalization (a normalization sketch is shown below). Data augmentation methods such as rotation, flipping, and scaling can then be applied to create additional training data and improve the model's ability to generalize.
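A minimal sketch of one common pre-processing step, z-score intensity normalization over the brain voxels that remain after skull stripping; treating nonzero voxels as brain tissue is a simplifying assumption made here for illustration.

```python
import numpy as np

def normalize_mri(volume: np.ndarray) -> np.ndarray:
    """Z-score normalize an MRI volume over its nonzero (brain) voxels."""
    brain = volume[volume > 0]            # assumes background voxels are exactly 0
    mean, std = brain.mean(), brain.std()
    return (volume - mean) / (std + 1e-8)  # epsilon guards against std == 0
```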
Next, an appropriate DL model must be selected; popular choices include CNNs, GANs, and U-Nets. The model architecture should be designed to optimize accuracy while remaining computationally efficient. After that, the model is trained on the training dataset to optimize its parameters and minimize its loss function. Training may require several epochs to reach a stable solution, and the model's generalization performance is assessed using a separate validation dataset.
The performance of the trained model is evaluated using a separate test dataset, which includes cases that were not used in the training or validation sets. Performance metrics such as sensitivity, specificity, accuracy, and Dice similarity coefficient (DSC) are used to evaluate the model's predictions, which can be visualized and compared with the ground truth to assess the model's performance. The process of DL-based brain tumor detection thus involves multiple steps, each of which requires careful consideration and optimization to achieve accurate and robust tumor detection and segmentation.
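Of these metrics, the DSC is the one specific to segmentation; a minimal implementation for binary masks might look as follows (the smoothing term eps is a common convention to avoid division by zero, not something specified in this paper).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary segmentation masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)
```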
Most important is to understand in depth the RNN, CNN, and GAN architectures used for brain tumor detection.
The following is the proposed methodology architecture for brain tumor detection. a) In a CNN design, several convolutional layers extract information from the input image, typically followed by fully connected layers that carry out the final classification or segmentation. A straightforward CNN architecture for brain tumor identification takes the input image, applies three sets of convolutional and max-pooling layers, flattens the output, applies a dense layer, and then produces the output; a minimal sketch follows.
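A minimal Keras sketch of this Input -> [Conv -> MaxPool] x 3 -> Flatten -> Dense -> Output layout; the filter counts, 256x256 input size, and binary output are assumptions for illustration, not the exact configuration used in the experiments.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(256, 256, 1)),                 # single-channel MRI slice
    layers.Conv2D(32, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),            # tumor vs. no tumor
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```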
b) RNNs are a class of neural network well suited to analyzing sequential data, such as time-series or video data, and can be used to track the development of brain tumors over time. RNN architectures often start with a recurrent layer, such as an LSTM or GRU, that processes the input sequence, followed by one or more fully connected layers that carry out the final classification or regression. A straightforward RNN architecture for predicting brain tumor progression passes the input sequence through an LSTM layer before producing the final output:
Input -> LSTM -> Dense -> Output
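A minimal Keras sketch of that schematic; the sequence length, feature size, and binary progression label are hypothetical placeholders rather than values from the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(10, 128)),           # 10 time steps, 128 features per scan
    layers.LSTM(64),                        # recurrent layer over the sequence
    layers.Dense(32, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # e.g. progression vs. no progression
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```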
c) GANs can be helpful for supplementing small or unbalanced datasets, or for enhancing images, by creating artificial medical images. GAN designs have two main components: a generator network, which takes a random noise vector as input and outputs a synthetic image, and a discriminator network, which learns to tell the difference between real and synthetic images. The two networks are trained in an adversarial way, with the generator attempting to deceive the discriminator into believing that its output is real and the discriminator attempting to accurately identify the input images as real or synthetic.
A straightforward GAN architecture for creating artificial brain tumor images takes a noise vector as input, processes it through three sets of dense layers, reshapes the result, and then passes it through three sets of deconvolutional layers with batch normalisation and ReLU activation to produce the output image; a sketch follows.
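A minimal Keras sketch of such a generator together with a matching discriminator; the latent size, layer widths, and 256x256 single-channel output are illustrative assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 100  # size of the random noise vector (an assumed value)

# Noise -> three dense layers -> reshape -> three deconvolutions -> image.
generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(32 * 32 * 64, activation="relu"),
    layers.Reshape((32, 32, 64)),
    layers.Conv2DTranspose(64, 4, strides=2, padding="same"),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Conv2DTranspose(32, 4, strides=2, padding="same"),
    layers.BatchNormalization(),
    layers.ReLU(),
    layers.Conv2DTranspose(1, 4, strides=2, padding="same", activation="tanh"),
])  # output: a 256x256x1 synthetic slice

# Image -> convolutions -> real/synthetic score.
discriminator = keras.Sequential([
    keras.Input(shape=(256, 256, 1)),
    layers.Conv2D(32, 4, strides=2, padding="same", activation="relu"),
    layers.Conv2D(64, 4, strides=2, padding="same", activation="relu"),
    layers.Flatten(),
    layers.Dense(1, activation="sigmoid"),
])
```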
It is crucial to remember that these architectures are merely illustrations and can be adapted to the precise task and dataset. The performance of the models can also be strongly affected by the choice of hyperparameters, such as the number of layers, the activation function, and the learning rate. For each application, it is crucial to carefully design and optimize the architecture and hyperparameters.
Implementation
The successful implementation of DL-based brain tumor detection involves a series of meticulous steps that require careful attention. First and foremost, it is essential to set up the development environment by installing the necessary software and libraries, including recent versions of Python, TensorFlow, Keras, and other DL frameworks. If GPU acceleration is utilized, appropriate GPU drivers and libraries should also be installed.
The subsequent step involves the collection and pre-processing of MRI images. It is crucial to pre-process the collected images to ensure high quality and consistency; pre-processing steps may involve skull stripping, normalization, and resizing. The role of padding, pooling, and activation functions in CNNs, RNNs, and GANs is crucial for processing and extracting features from the input data. [10] Padding refers to adding extra pixels around the input image in CNNs to prevent the loss of information at the edges during convolution; it can be applied using different modes such as 'same' or 'valid'. RNNs also employ padding, which involves adding extra time steps to sequences so that all sequences have the same length. Pooling is a technique used to reduce the dimensionality of feature maps while retaining important information. Max pooling, the most commonly used pooling operation, selects the maximum value within a given region of the feature map; this reduces the size of the feature map, making the model more computationally efficient (see the sketch below).
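The following sketch makes the effect of these choices concrete; the shapes in the comments follow directly from the stated kernel and pool sizes, and the 256x256 input is an assumed example.

```python
import tensorflow as tf

x = tf.random.normal((1, 256, 256, 1))  # one single-channel MRI slice

# 'same' padding preserves the spatial size; 'valid' applies no padding.
same = tf.keras.layers.Conv2D(8, 3, padding="same")(x)    # -> (1, 256, 256, 8)
valid = tf.keras.layers.Conv2D(8, 3, padding="valid")(x)  # -> (1, 254, 254, 8)

# Max pooling halves each spatial dimension, keeping the strongest
# response within each 2x2 window.
pooled = tf.keras.layers.MaxPooling2D(pool_size=2)(same)  # -> (1, 128, 128, 8)
```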
Simulation
The process of detecting brain tumors using DL involves multiple steps, and the specific implementation and dataset used can affect the simulation, output, and results. A common workflow begins with dataset preparation: MRI images of both healthy brains and brains with tumors are collected and preprocessed, and the dataset is then split into three subsets: training, validation, and test sets. Here is an example output for the detection of brain tumors through DL:
Fig 2 MRI brain tumor segmentation & Classification
In this example, the DL model was trained to detect brain tumors with a dataset of 256x256 MRI images. The model was trained for 20 epochs and achieved an accuracy of 95.6% on the test set.
The model output is a segmentation of the tumor area, shown in red. The segmentation was compared with the ground truth segmentation, shown in green, to evaluate the model's accuracy.
The model was able to accurately detect the tumor in the MRI image, as shown by the high overlap between the model output and the ground truth segmentation.
Detecting brain tumors using RNNs, CNNs, and GANs involves training these models on a dataset of brain images to identify the presence of a tumor. The simulated outputs differ by model type:
RNN:
The output of an RNN model for brain tumor detection may be a time-series plot that shows the changes in brain activity over time. The plot may highlight the regions of the brain that are affected by the tumor and how they change over time.
GAN:
The output of a GAN model for brain tumor detection may be a synthetic medical image that looks similar to a real medical image. The synthetic image may contain a tumor that is similar in appearance and location to the tumors in the real images. This synthetic image can be used for data augmentation or to train other models.
Keep in mind that the output of these models would depend on several factors such as the model architecture, the dataset, and the specific task. The output may also vary in terms of format, depending on the specific implementation of the model.
The use of AI techniques such as RNNs, CNNs, and GANs in the detection of brain tumors involves classification of images or data into two categories: those that contain tumors and those that do not.
RNNs can be used to classify time-series data from brain scans as abnormal or normal. Abnormal data is indicative of the presence of a tumor, while normal data indicates the absence of a tumor.
CNNs can be used to classify images of the brain as either containing a tumor or not. CNNs analyze the features of the images and learn to differentiate between images that contain tumors and those that do not.
GANs, on the other hand, can be used to generate synthetic images of the brain that contain tumors or not. These synthetic images can be used to train CNNs to improve their accuracy in classifying real images as containing tumors or not.
In all cases, the goal is to classify brain scans as accurately as possible to aid in the detection and treatment of brain tumors.
Fig 6 Metric Evaluation of Detecting Brain Tumor Through DL
The following is general information on the accuracy of these models in medical image analysis.
CNNs have been shown to achieve state-of-the-art results in brain tumor detection and segmentation tasks. Several studies have reported accuracies ranging from 90% to 99% for CNN models on benchmark datasets such as the BraTS (Brain Tumor Segmentation) challenge dataset.
RNNs have also been used for brain tumor detection and classification tasks, but they are more commonly used for analyzing sequential medical data such as EEG signals or fMRI data. The accuracy of RNN models would depend on the specific task and dataset, but they can achieve high accuracy in some cases.
GANs have shown promising results in image synthesis and data augmentation tasks, but their accuracy in tumor detection would depend on the specific implementation of the model and the quality of the generated images. GANs can be useful for generating realistic synthetic images that can be used to augment small or imbalanced datasets, which can improve the accuracy of other models.
In conclusion, the accuracy of these models depends on several factors, such as the model architecture, the dataset, and the specific task. The accuracy may also vary depending on the specific implementation of the model and the hyperparameters used during training.
Observation and Discussion
RNNs are a type of neural network that can handle sequential data such as time series, speech, and text. In the case of brain tumor detection, RNNs can be used to analyze the time-series data from brain scans to detect abnormalities. RNNs have been used in some studies to analyze electroencephalography (EEG) data to detect brain tumors, but their effectiveness in detecting brain tumors in other types of brain scans such as MRI or CT is still being explored.
CNNs are a type of neural network that can extract features from images. CNNs have been used in many studies to detect brain tumors in MRI scans. CNNs can detect features such as shape, texture, and edges, and can use these features to classify images as containing a tumor or not.
GANs are a type of neural network that can generate new data that is similar to the input data. In the case of brain tumor detection, GANs can be used to generate synthetic images of the brain that contain tumors. These synthetic images can be used to train CNNs to improve their accuracy in detecting tumors in real images.
In summary, RNNs, CNNs, and GANs can all be used in the detection of brain tumors, but their effectiveness may vary depending on the type of data being analyzed and the specific task at hand. Further research and development are needed to determine the best approach for detecting brain tumors using AI techniques.
Conclusion
The use of AI techniques such as RNNs, CNNs, and GANs in the detection of brain tumors shows promising results. RNNs can be used to analyze time-series data from brain scans to detect abnormalities, while CNNs are effective in extracting features from images and classifying them as containing tumors or not. GANs can generate synthetic images of the brain that contain tumors, which can be used to train CNNs to improve their accuracy. However, the effectiveness of these AI techniques may vary depending on the type of data being analyzed and the specific task at hand.
Further research and development are needed to determine the best approach for detecting brain tumors using AI techniques.
"year": 2023,
"sha1": "854d77a15c0acecf93bbf9024fefe9063b74ca46",
"oa_license": "CCBY",
"oa_url": "https://journals.icapsr.com/index.php/ijgasr/article/download/45/113",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "df6f6e86d1c4f67741cb0c79775a41484c0d62e6",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": []
} |
A Stronger Multi-observable Uncertainty Relation
Uncertainty relations lie at the heart of quantum mechanics, characterizing the incompatibility of non-commuting observables in the preparation of quantum states. An important question is how to improve the lower bound of an uncertainty relation. Here we present a variance-based sum uncertainty relation for N incompatible observables that is stronger than the simple generalization of an existing uncertainty relation for two observables. Further comparisons of our uncertainty relation with other related ones for spin-1/2 and spin-1 particles indicate that the obtained uncertainty relation gives a better lower bound.
The uncertainty relation is one of the fundamental building blocks of quantum theory, and now plays an important role in quantum mechanics and quantum information [1-4]. It was introduced by Heisenberg [5] to capture how precisely the simultaneous values of conjugate observables can be determined in the microscopic world, i.e., the position X and momentum P of an electron. Kennard [6] and Weyl [7] proved the uncertainty relation

ΔX ΔP ≥ ħ/2,     (1)

where the standard deviation of an operator X is defined by ΔX = (⟨ψ|X²|ψ⟩ − ⟨ψ|X|ψ⟩²)^{1/2}. Later, Robertson proposed the well-known uncertainty relation [8]

ΔA ΔB ≥ (1/2)|⟨[A, B]⟩|,     (2)

which is applicable to arbitrary incompatible observables; here the commutator is defined by [A, B] = AB − BA. The uncertainty relation was further strengthened by Schrödinger [9] with the following form:

(ΔA)²(ΔB)² ≥ |(1/2)⟨{A, B}⟩ − ⟨A⟩⟨B⟩|² + |(1/2)⟨[A, B]⟩|²,     (3)

where the anticommutator is defined as {A, B} ≡ AB + BA. It is realized that the traditional uncertainty relations may not fully capture the concept of incompatible observables, as the lower bound can be trivially zero while the variances are not. An important question about uncertainty relations is how to improve the lower bound and become immune to this triviality problem [10,11]. Various attempts have been made to find stronger uncertainty relations. One typical kind of relation is that of Maccone and Pati, who derived two stronger uncertainty relations,

(ΔA)² + (ΔB)² ≥ ±i⟨[A, B]⟩ + |⟨ψ|A ± iB|ψ⊥⟩|²,     (4)

where |ψ⊥⟩ is a state orthogonal to |ψ⟩ and the sign on the right-hand side of the inequality takes + (−) when i⟨[A, B]⟩ is positive (negative), and

(ΔA)² + (ΔB)² ≥ (1/2)|⟨ψ⊥_{A+B}|(A + B)|ψ⟩|²,     (5)

where |ψ⊥_{A+B}⟩ ∝ (A + B − ⟨A + B⟩)|ψ⟩ is orthogonal to |ψ⟩. The basic idea behind these two relations is to add extra terms to improve the lower bound. Along this line, more terms [12-14] and weighted forms of different terms [15,16] have been put into uncertainty relations. It is worth mentioning that state-independent uncertainty relations are immune to the triviality problem [17-20]. Recent experiments have also been performed to verify the various uncertainty relations [21-24].
Besides the conjugate pair of position and momentum, sets of multiple observables also arise, e.g., the three components of spin or angular momentum. Hence, it is important to find uncertainty relations for multiple incompatible observables. Recently, several uncertainty relations for three observables have been studied, such as the Heisenberg uncertainty relation for three canonical observables [25], uncertainty relations for three angular momentum components [26], and an uncertainty relation for three arbitrary observables [14]. Furthermore, uncertainty relations for multiple observables have been proposed in both the product [27,28] and sum [29,30] forms of variances. It is worth noting that Chen and Fei derived a variance-based uncertainty relation [30], denoted (6) below, for arbitrary N incompatible observables, which is stronger than the one derived from the uncertainty inequality for two observables [10].
In this paper, we investigate variance-based uncertainty relations for multiple incompatible observables. We present a new variance-based sum uncertainty relation for multiple incompatible observables, which is stronger than an uncertainty relation obtained by summing over all the inequalities for pairs of observables [10]. Furthermore, we compare our uncertainty relation with existing ones for a spin-1/2 and a spin-1 particle, which shows that our uncertainty relation gives a tighter bound than the other ones.
Results
Theorem 1. For arbitrary N observables A_1, A_2, …, A_N, the variance-based uncertainty relation (7) holds. The bound becomes nontrivial as long as the state is not a common eigenstate of all the N observables.
Proof: To derive (7), we start from the corresponding equality, from which we obtain the uncertainty relation (7). QED.
When N = 2 we have the following corollary.

Corollary 1.1. For two incompatible observables A and B, relation (7) reduces to a two-observable inequality, which is derived from Theorem 1 for N = 2 and is stronger than uncertainty relation (5).
To show that our relation (7) has a stronger bound, we consider the result in ref. 10: relation (5) is derived from an uncertainty equality. Using that uncertainty equality, one can obtain two inequalities for arbitrary N observables, namely relations (12) and (13). The bound in (6) is tighter than the one in (12) [30]. However, the lower bound in (6) is not always tighter than the one in (13) (see Fig. 1).
Expressed in these terms, the relation (12) takes a new form; simplifying the resulting inequality, we obtain (15), which is equivalent to the relation (12).
Similarly, expressed in the same terms, our relation (7) takes a new form; simplifying the resulting inequality, we obtain (17), which is equivalent to the relation (7). It is easy to see that the right-hand side of (17) is greater than the right-hand side of (15). Hence, the relation (7) is stronger than the relation (12).
Comparison between the lower bound of our uncertainty relation (7) and those of inequalities (6) and (13). Here, we show that the uncertainty relation (7) is stronger than inequalities (13) and (6) for a spin-1/2 particle and measurements of the Pauli spin operators σx, σy, σz; in this case, the uncertainty relation (7) takes the form (18). We consider a qubit state in its Bloch sphere representation, ρ = (1/2)(I + r·σ), where σ = (σx, σy, σz) is the vector of Pauli matrices and r = (x, y, z) is the Bloch vector, with |r| ≤ 1 and |r| = 1 for pure states. The relation (18) then takes the form (22), and the relation (19) becomes (23).
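For orientation, the variances entering these bounds follow from a standard computation (this is textbook material, not one of the paper's omitted numbered equations): for the Bloch-vector state above,

```latex
\Delta\sigma_x^2 = \langle\sigma_x^2\rangle - \langle\sigma_x\rangle^2 = 1 - x^2,
\qquad
\sum_{i \in \{x,y,z\}} \Delta\sigma_i^2 = 3 - (x^2 + y^2 + z^2) \ge 2,
% since |r|^2 = x^2 + y^2 + z^2 <= 1, with equality for pure states.
```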
Let us compare the lower bound of (22) with that of (23). The difference of these two bounds is a polynomial in x, y, and z for all x, y, z ∈ [−1, 1]. When x = y = z = ±1/√3 the inequality becomes an equality, and Eq. (24) attains its minimum value 1/2 > 0. This illustrates that the uncertainty relation (7) is stronger than (13) for a spin-1/2 particle and measurements of the Pauli spin operators σx, σy, σz. Let us now compare the uncertainty relation (18) with (20). The relation (20) takes the form (25), where β denotes a corresponding combination of the Bloch-vector components. The difference between the two bounds of relations (22) and (25) is then positive. This illustrates that the uncertainty relation (7) is stronger than (6) for a spin-1/2 particle and measurements of the Pauli spin operators σx, σy, σz.
Conclusion
We have provided a variance-based sum uncertainty relation for N incompatible observables, which is stronger than the simple generalizations of the uncertainty relation for two observables derived by Maccone and Pati [Phys. Rev. Lett. 113, 260401 (2014)]. Furthermore, our uncertainty relation gives a tighter bound than the others, as shown by the comparison for a spin-1/2 particle with measurements of the spin observables σx, σy, σz. Likewise, in the case of spin-1 with measurements of the angular momentum operators Lx, Ly, Lz, our uncertainty relation predicts a tighter bound than the other ones.
"year": 2017,
"sha1": "a22cf273e10ac3915c7f720b34789df54c8288f0",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep44764.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1335b53d7f7546c169bfde2cde865a9674fd11fa",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
The ubiquitin E3 ligase LOSS OF GDU2 is required for GLUTAMINE DUMPER1-induced amino acid secretion in Arabidopsis.
Amino acids serve as transport forms for organic nitrogen in the plant, and multiple transport steps are involved in cellular import and export. While the nature of the export mechanism is unknown, overexpression of GLUTAMINE DUMPER1 (GDU1) in Arabidopsis (Arabidopsis thaliana) led to increased amino acid export. To gain insight into GDU1's role, we searched for ethyl-methanesulfonate suppressor mutants and performed yeast-two-hybrid screens. Both methods uncovered the same gene, LOSS OF GDU2 (LOG2), which encodes a RING-type E3 ubiquitin ligase. The interaction between LOG2 and GDU1 was confirmed by glutathione S-transferase pull-down, in vitro ubiquitination, and in planta coimmunoprecipitation experiments. Confocal microscopy and subcellular fractionation indicated that LOG2 and GDU1 both localized to membranes and were enriched at the plasma membrane. LOG2 expression overlapped with GDU1 in the xylem and phloem tissues of Arabidopsis. The GDU1 protein encoded by the previously characterized intragenic suppressor mutant log1-1, with an arginine in place of a conserved glycine, failed to interact in the multiple assays, suggesting that the Gdu1D phenotype requires the interaction of GDU1 with LOG2. This hypothesis was supported by suppression of the Gdu1D phenotype after reduction of LOG2 expression using either artificial microRNAs or a LOG2 T-DNA insertion. Altogether, in accordance with the emerging bulk of data showing membrane protein regulation via ubiquitination, these data suggest that the interaction of GDU1 and the ubiquitin ligase LOG2 plays a significant role in the regulation of amino acid export from plant cells.
Amino acids are the main form of organic nitrogen transported in the xylem and the phloem in plants (Peoples and Gifford, 1990). Besides a role in nitrogen transfer between plant organs, amino acids are important to coordinate shoot and root metabolism in response to environmental conditions. Cycling of amino acids between shoots and roots, successively transported by the phloem and the xylem, has been proposed to carry information about the nitrogen status of roots or shoots to the other organ system. In particular, amino acids have been shown to inhibit root nitrate uptake, adjusting inorganic nitrogen uptake to shoot demand (Miller et al., 2007).
Amino acid cycling involves successive transfers across the plasma membrane from symplasm (phloem and cytosol) to apoplasm (xylem and cell wall) and vice versa. Because membranes are fairly impermeable to these solutes, both import and export are catalyzed by integral membrane proteins. Identified in early physiological studies (Bush, 1990, 1992), amino acid importers are proton gradient-dependent transporters with broad amino acid specificities (Tegeder and Rentsch, 2010). In stark contrast to importers, molecular mechanisms of plasma membrane amino acid export remain largely unknown. While export across the plasma membrane has been measured at the physiological level (Secor and Schrader, 1984; De Jong et al., 1997; Lesuffleur and Cliquet, 2010), no transporter has yet been identified that mediates this process (for review, see Okumoto and Pilot, 2011). The first cloned plant amino acid exporter, GAMMA AMINO BUTYRIC ACID PERMEASE (GABP), mediates γ-aminobutyrate transport from the cytosol to the mitochondrion (Dündar and Bush, 2009; Michaeli et al., 2011). Because of its localization at the mitochondrial membrane, GABP is not expected to mediate amino acid export at the plasma membrane.
The first gene suggested to be involved in plasma membrane amino acid export is GLUTAMINE DUMPER1 (GDU1). The gdu1-1D mutant, which overexpresses GDU1, was isolated in an activation-tag screen for plants with altered hydathode function. gdu1-1D exhibited high free amino acid content in the phloem, xylem, and guttation stream, leading to Gln crystallization at the hydathodes (Pilot et al., 2004). The Gdu1D phenotype also entailed plant size reduction, constitutive necrotic lesions, and resistance to toxic concentrations of amino acids (Pilot et al., 2004;Pratelli and Pilot, 2007;Liu et al., 2010). gdu1-1D was later found to constitutively export amino acids from plant cells (Pratelli et al., 2010). GDU1 encodes a small protein with a single transmembrane domain. Paralogs are found in Arabidopsis (Arabidopsis thaliana; named GDU2-GDU7), and homologs are present in higher and lower plant genomes (Pratelli and Pilot, 2006). The physiological function of the GDU family is unknown, but overexpression of any GDU gene causes phenotypes reminiscent of Gdu1D (Pratelli et al., 2010). Apart from the membrane domain, GDU proteins also share a VIMAG domain (representing the amino acids Val-Ile-Met-Ala-Gly). The log1-1 allele of GDU1 (GDU1 G100R , in which the VIMAG Gly is mutated to Arg) was shown to suppress the Gdu1D phenotype (Pratelli and Pilot, 2006), suggesting that the VIMAG domain is essential for GDU1 function. The molecular basis for this suppression was not determined.
In spite of its effect on amino acid export, the predicted structure of GDU1 makes it unlikely to be a transporter (Pilot et al., 2004), and it had been suggested that GDU1 could be a transporter subunit (Pratelli et al., 2010) by analogy to mammalian heteromeric amino acid transporters (Palacín et al., 2005). Small, single transmembrane domain proteins have also been shown to be involved in the organization of large membrane protein complexes (for review, see Zickermann et al., 2010). Finding interacting partners of GDU1, therefore, is a necessary step to gain insight into its role. To this end, we performed yeast two-hybrid and ethyl-methanesulfonate (EMS) suppression screens. We report here that both of these approaches uncovered the same RING-type ubiquitin E3 ligase, subsequently named LOSS OF GDU2 (LOG2). In a survey of Arabidopsis RING finger-containing E3s, LOG2 was shown to exhibit in vitro E3 activity, but the biological function of LOG2 was not investigated in that study.
E3 ligases facilitate the covalent attachment of the small protein ubiquitin to other proteins (ubiquitination). Ubiquitination is an efficient and highly specific mechanism by which intracellular protein activity, localization, and/or stability are governed in eukaryotes. Following ATP-dependent activation by ubiquitin-activating enzyme (UBA or E1) and transthioesterification to an ubiquitin-conjugating enzyme (UBC or E2), substrate ubiquitination is catalyzed by E3 ubiquitin ligases (E3s; Deshaies and Joazeiro, 2009). E3s bind protein substrates and E2 ubiquitin thioesters in a conformation that facilitates ubiquitin transfer to substrates. In the case of RING-type E3s, the RING domain enables interaction with E2s (Joazeiro et al., 1999). In addition to promoting nuclear and cytosolic protein degradation via the 26S proteasome, ubiquitination by E3s can also regulate plasma membrane protein abundance via lysosomal/vacuolar proteolysis (Komander, 2009; Léon and Haguenauer-Tsapis, 2009).
Here, we report that the ubiquitin ligase LOG2 interacts with GDU1 in vitro and in planta and that reduction of LOG2 expression suppresses the Gdu1D phenotype. In addition, the previously described log1-1 mutation abolishes GDU1 interaction with LOG2. Altogether, these data support a model whereby LOG2 and its interaction with GDU1 are required for the increased amino acid export observed upon GDU1 overexpression.
GDU1 Interacts with LOG2, a RING Domain-Containing Protein
The physiological consequences of GDU1 overexpression have been studied extensively, but the mechanism by which GDU1 activates amino acid efflux remains unclear. GDU1 is unlikely to be a transporter; thus, it must interact with other proteins to activate efflux. A screen for interacting proteins was performed using a yeast two-hybrid strategy, in which the region C terminal to the putative transmembrane domain of GDU1 (cGDU1; amino acids 61-158) was used as bait against an Arabidopsis cDNA library. Three clones, whose inserts contained partial open reading frames of genes At3g09770 and At5g03200 (Supplemental Fig. S1, A and C), were shown to restore yeast prototrophy when coexpressed with cGDU1 in the yeast two-hybrid screen. At3g09770 and At5g03200 encode members of the same subfamily of RING finger ubiquitin E3 ligases (Stone et al., 2005). At3g09770 and At5g03200 were named LOG2 (see below) and LOG2-LIKE UBIQUITIN LIGASE1 (LUL1), respectively. The three other paralogs were designated LUL2 (At3g53410), LUL3 (At5g19080), and LUL4 (At3g06140). Full-length LOG2 and LUL1 also interacted with cGDU1 in the yeast two-hybrid assay (Fig. 1A). Yeast coexpressing LUL1 and cGDU1 grew more slowly than yeast coexpressing LOG2 and cGDU1, while the proteins were expressed at similar levels (data not shown). This observation was consistent with the activity of the lacZ reporter gene. β-Galactosidase activity of yeast cells was 0.14 ± 0.13 nmol o-nitrophenol (ONP) h⁻¹ per 10⁷ cells when cGDU1 was coexpressed with LUL1 and 3.02 ± 0.18 nmol ONP h⁻¹ per 10⁷ cells when coexpressed with LOG2, suggesting that the interaction of cGDU1 with LUL1 was weaker than that with LOG2.
In addition to the presence of a C-terminal C3HC4 RING finger domain and a predicted N-terminal myristoylation site (see below), the five LOG2 subfamily members contain a functionally uncharacterized region N terminal to the RING finger that was previously designated Domain Associated with RING2 (DAR2; Supplemental Fig. S1C; Stone et al., 2005). LOG2-like genes are present in the genomes of diverse taxa, including monocots, lower plants, and mammals, but surprisingly not in fungi. While LOG2 function has not been studied in plants, the mammalian homolog MAHOGUNIN RING FINGER1 (MGRN1) plays an unclear role in the down-regulation of melanocortin signaling and the maturation of endosomes (He et al., 2003;Kim et al., 2007;Cooray et al., 2011). MGRN1 contains a large C-terminal domain not found in plant LOG2/LUL proteins (Supplemental Fig. S1C) that seems to play an important role in endosomal trafficking (Kim et al., 2007), suggesting divergence in function from plant LOG2/LULs.
The interaction of cGDU1 with LOG2 and LUL1 was tested by glutathione S-transferase (GST) in vitro pull-down assays. The interaction of GST-LOG2 with cGDU1 was the only one to be detected (Fig. 1B, top panel, lane 2), indicating that cGDU1 binds directly to LOG2 and more strongly to LOG2 than to LUL1 in this assay. The interaction between GDU1 and LOG2 was then tested in planta using proteins transiently expressed in Nicotiana benthamiana leaves. Wild-type LOG2 was expressed at levels too low to be suitable for coimmunoprecipitation experiments (data not shown). Hypothesizing that autoubiquitination may contribute to protein instability, we generated a LOG2 mutant with a catalytically inactive RING domain (LOG2 CC354+357AA, called mLOG2; see below and Fig. 2C), which could successfully be expressed at high levels. Full-length GDU1 and mLOG2 with C-terminal Myc or hemagglutinin (HA) tags were then coexpressed in N. benthamiana leaves, and the Myc-fused protein was immunoprecipitated. Independent of which protein contained the Myc epitope tag, GDU1 and mLOG2 coimmunoprecipitated (Fig. 1C, lanes 1 and 3), showing that the interactions detected by yeast two-hybrid and in vitro assays are observed in planta as well.

Figure 1. Interaction assays between GDU family members and the E3 ubiquitin ligases of the LOG2 family. A, Yeast two-hybrid interaction of the cytosolic domain of the GDU proteins (cGDU) with LOG2 and LULs. The panels show swapping of inserts between the bait (pGBT9) and prey (pACT2) plasmids. Yeast coexpressing the protein pairs were grown for 4 d on medium selecting for protein interaction, lacking Leu, Trp, adenine, and His. All yeast grew on a medium lacking Leu and Trp, selecting for the plasmids only (data not shown). B, GST pull-down (PD) assay of flag-cGDU1 or flag-cGDU1 G100R with GST-LOG2, GST-LUL1, or GST alone. Top, pulled-down samples; middle, cGDU input; bottom, GST protein input. Arrowheads indicate GST (approximately 27 kD) or full-length GST-tagged LOG2 and LUL1. C, Coimmunoprecipitation (IP) assay after the expression of GDU1 or GDU1 G100R and mLOG2 or mLOG2 R12K in transiently infiltrated N. benthamiana leaves. Top, Myc coimmunoprecipitation samples probed with α-HA (H); middle, Myc coimmunoprecipitation samples probed with α-Myc (M); bottom, HA-protein input. Asterisks and diamonds indicate LOG2 and GDU1 proteins, respectively. Numbers on the right in B and C indicate molecular mass in kD. ORF, Open reading frame; WB, western blot.
The GDU1-LOG2 Interaction Is Affected by a Suppressor Mutation in GDU1

It had previously been reported that the EMS-generated log1-1 mutant, corresponding to a G100R substitution in the VIMAG motif of GDU1, suppresses multiple features of the Gdu1D phenotype (Pratelli and Pilot, 2006, 2007). Although the C-terminal domains of the seven Arabidopsis GDUs show limited sequence conservation outside the VIMAG domain, all cGDUs could interact with LOG2 in yeast, and most could interact with LUL1 (Fig. 1A). This suggested that the interaction with LOG2 is mediated through the VIMAG domain. To test the hypothesis that the G100R mutation alters the interaction with LOG2, the interactions of cGDU1 G100R (log1-1) and Δ-cGDU1 (in which the whole VIMAG domain has been deleted) with LOG2 and LULs were tested using the yeast two-hybrid system. While these mutations did not alter protein abundance (data not shown), neither log1-1 nor Δ-cGDU1 could interact with LOG2 or LUL1 (Fig. 1A). Consistent with these results, flag-tagged cGDU1 G100R failed to interact with GST-LOG2 in a GST pull-down assay (Fig. 1B, lane 5), and coimmunoprecipitation of GDU1 G100R and mLOG2 expressed in N. benthamiana was greatly reduced (Fig. 1C, lanes 5 and 6). These data indicate that the GDU1-LOG2 interaction requires the conserved VIMAG domain of GDU1 and that the suppression of the Gdu1D phenotype observed in log1-1 plants (Pratelli and Pilot, 2006) may result from the loss of interaction with LOG2.
cGDUs Interact with LOG2 and LUL Proteins
To systematically characterize interactions between the LOG2/LUL and GDU families, yeast two-hybrid assays were conducted between the C-terminal domains of all Arabidopsis GDUs and LOG2/LUL proteins. In contrast to LOG2, cGDU-LUL interactions were typically specific for a pair of bait-prey combinations (Fig. 1A). LUL2 and LUL3 interactions were observed only for cGDU2 or cGDU3, and LUL4 did not interact with any cGDU. As expected, deletion of the VIMAG motif from cGDU1 abolished all LUL interactions. The interaction between GDU and LOG2-like proteins appears to be facilitated by the VIMAG motif of the GDUs and a conserved domain among LOG2 homologs (most likely DAR2; see below).
LOG2/LULs Exhibit E3 Ligase Activity, but cGDU1 Is Ubiquitinated Exclusively by LOG2 in Vitro

LOG2 had previously been shown to polymerize ubiquitin in vitro. To examine the E3 activities of the LOG2 paralogs, GST fusions of LOG2 and LUL1 to -4 were expressed in Escherichia coli, purified, and assayed with ubiquitin pathway proteins (E2, E1, and ubiquitin). Mirroring previously published results with LOG2 (Stone et al., 2005), LUL1 to -4 were active, forming high-Mr ubiquitinated proteins in the presence of all ubiquitin pathway components, while omission of the E2 prevented their production (Fig. 2A).
Only the proteins demonstrated to interact with GDU1 (i.e. LOG2 and LUL1 [Fig. 1]) were tested for ubiquitination of cGDU1 using in vitro assays. Only LOG2 could ubiquitinate cGDU1 above the background of the E3 reaction (Fig. 2B). Substitution to Ala of two zinc-coordinating Cys residues (mLOG2) or an Ile residue in the RING domain (LOG2 I321A) was previously described for other RING E3s to abolish E3 activity or hinder association with E2s, respectively (Lorick et al., 1999; Brzovic et al., 2003). As predicted, these modifications of LOG2 led to abolished or compromised cGDU1 ubiquitination, respectively (Fig. 2C). Interestingly, truncation of the nonconserved 125-amino acid region N terminal to DAR2 did not impede cGDU1 ubiquitination by LOG2 (Fig. 2C). cGDU1 G100R (log1-1) was then assayed with or without LOG2-V5-His6 in the presence of ubiquitin pathway components. While cGDU1 formed ladders in the presence of LOG2, only weak monoubiquitination of cGDU1 G100R was observed (Fig. 2D). These data show that the decreased interaction of cGDU1 G100R with LOG2 (Fig. 1B) results in decreased in vitro ubiquitination of cGDU1 G100R and suggest that the specificity determinants of the GDU1-LOG2 interaction lie in the DAR2 of LOG2.
Similar to GDU1, LOG2 Is Expressed in the Vasculature
The interaction between GDU1 and LOG2, confirmed so far in vitro and after coexpression in planta, is physiologically relevant only if the two proteins are expressed in the same cell types. To investigate the LOG2 expression pattern, the LOG2 promoter region fused to the coding sequence of the uidA gene from E. coli, encoding GUS, was introduced into the Arabidopsis genome. GUS activity was detected in vascular tissues of roots, leaves, and stems (Fig. 3, A-D), and more precisely in both phloem and xylem parenchyma cells, as revealed by stem cross-sections (Fig. 3C). The LOG2 promoter was active in all shoot cells: a light background staining appeared in leaves and in nonvascular parenchyma cells of the stems (Fig. 3, A and C). Examination of lightly stained plants showed that this background staining did not result from diffusion of the product of the histochemical reaction out of the vascular tissues (data not shown). The presence of staining in roots up to the division zone indicated that the LOG2 promoter is active in the root phloem (Fig. 3D). Interestingly, the LOG2 promoter was only active in cells from the root stele and root tip; even with longer reaction time, no staining could be observed in the cortex and epidermis (Fig. 3, D and E). In reproductive organs, strong GUS activity was detected in the style, connective tissue, and the base of the flower (Fig. 3, F and G), which persisted along the development of the silique (data not shown). Finally, the LOG2 promoter was active in pollen grains (Fig. 3H). The observed expression pattern is in good agreement with quantitative reverse transcription (RT)-PCR results (Supplemental Fig. S2), the AtGenExpress data set (Schmid et al., 2005), and the Arabidopsis Gene Expression Database (Cartwright et al., 2009), where stronger expression was detected in vasculature-rich tissues (stems), pollen, columella, and phloem and xylem cells (data not shown). The expression pattern of LOG2 largely overlaps with that of GDU1, shown to be expressed in the vasculature of roots, stems, and petioles (Pilot et al., 2004).

GDU1 and LOG2 Localize to Microsomes and Are Enriched in the Plasma Membrane

GDU1 is predicted to be a single-pass transmembrane domain protein (Pilot et al., 2004). In order to interact with GDU1, LOG2 needs to be recruited to the same membrane(s). Confocal imaging of Arabidopsis protoplasts transiently expressing GDU1-GFP had suggested that GDU1 was targeted to the plasma membrane (Pilot et al., 2004). We created transgenic Arabidopsis that overexpressed Myc-tagged GDU1 under the control of the cauliflower mosaic virus (CaMV) 35S promoter. One line, designated 35S-GDU1-Myc, showed stable expression of GDU1-Myc over several generations and recapitulated the smaller leaf size and overaccumulation of amino acids seen in the activation-tagged gdu1-1D line (Supplemental Fig. S3; Supplemental Table S1). To examine the subcellular localization of GDU1, rosette leaf proteins were fractionated into cytosolic and total microsome fractions, after which total microsomes were further processed into plasma membrane-depleted vesicles (PDVs) and plasma membrane-enriched vesicles (PEVs). GDU1-Myc was highly enriched in the microsomal fraction compared with the soluble fraction, confirming membrane localization (Fig. 4A, lanes 1 and 2). Similar to the plasma membrane-localized PMA2 used as a control (Dambly and Boutry, 2001; Elmore and Coaker, 2011), GDU1 was enriched in PEVs compared with PDVs (Fig. 4A, lanes 3 and 4), indicating enrichment at the plasma membrane. Consistent with these results, GDU1-GFP transiently expressed in N. benthamiana epidermal cells localized at the plasma membrane and in small dots (Fig. 5A), which could be labeled with the endosome marker FYVE-GFP (Voigt et al., 2005; Fig. 5B). Limited overlap was found with fluorescent markers specific to the Golgi apparatus, and no localization could be detected in mitochondria, chloroplasts, the endoplasmic reticulum, lysosomes, or the cytosol (data not shown).
To explore the membrane association of GDU1, microsomes from 35S-GDU1-Myc leaves were isolated and treated with NaCl, alkaline sodium carbonate, or Triton X-100 detergent (i.e. reagents that extract peripheral, lumenal, or integral membrane proteins, respectively; Santoni et al., 1999;Rolland et al., 2006). GDU1 was retained in the microsomal pellet after incubation with salt or base, whereas it was solubilized by the detergent (Fig. 4B). In accordance with earlier characterizations of protein-membrane interactions (Santoni et al., 1999;Rolland et al., 2006), this result indicates that GDU1 is an integral membrane protein.
LOG2 was found to interact directly with cGDU1 in vitro (Fig. 1B) and to coimmunoprecipitate with GDU1 in planta (Fig. 1C), but unlike GDU1, LOG2 lacks a predicted transmembrane domain. To determine whether LOG2 is membrane associated, microsomes, PDVs, and PEVs were prepared from Arabidopsis stably expressing LOG2-HA under the control of the CaMV 35S promoter (Fig. 4C) or after transient expression in N. benthamiana leaves (Fig. 4D, left). Mirroring the localization profile of GDU1, LOG2 was found primarily in the total microsomal fraction and enriched in PEVs. In accordance with these data, expression in N. benthamiana epidermal cells of LOG2-GFP led to a continuous fluorescence at the cell periphery, suggesting plasma membrane localization (Fig. 5C, left). This pattern was identical to the pattern obtained with BRI1, a known plasma membrane protein (Friedrichsen et al., 2000;Supplemental Fig. S4). Microsomal LOG2-Myc expressed in N. benthamiana was, like GDU1, resistant to NaCl, but some protein was released with alkaline sodium carbonate and Triton X-100 detergent, indicating a weaker association with the membrane than GDU1 (Fig. 4E, left).
Myristoylation of LOG2 Is Important for Its Membrane Localization and the GDU1-LOG2 Interaction
We reasoned that the membrane association of LOG2 could result from covalent lipid modifications such as myristoylation, prenylation, or palmitoylation. Myristoylator (Bologna et al., 2004) and The MYR Predictor (http://mendel.imp.ac.at/myristate/SUPLpredictor.htm) predicted N-myristoylation sites for LOG2 and LUL1-4. Moreover, in vitro myristoylation of LUL1 was recently demonstrated (Yamauchi et al., 2010). To experimentally verify that LOG2 could be myristoylated, LOG2 and the corresponding G2A mutant (bearing a substitution known to inhibit N-terminal myristoylation; Gordon et al., 1991) were expressed in rabbit reticulocyte lysate-coupled transcription-translation systems in the presence of [3H]myristic acid or [3H]Leu. These lysates contain the enzymes necessary for myristoylation (Heuckeroth et al., 1988). Both proteins were expressed at similar levels, and LOG2 incorporated [3H]myristic acid. As expected, LOG2 G2A was not myristoylated (Fig. 4F). Similar to LUL1 and LOG2, LUL3 was also myristoylated in a Gly-2-dependent manner (Supplemental Fig. S5), suggesting that all LOG2-like proteins are myristoylated in planta.
To determine whether myristoylation affects LOG2 localization in planta, plasma membrane vesicles were prepared from N. benthamiana transiently expressing mLOG2-HA or mLOG2 G2A -HA. While both proteins were found primarily in the microsomal fraction, mLOG2 G2A was depleted from PEVs compared with wild-type mLOG2 (Fig. 4D), suggesting that LOG2 myristoylation is important for localization to the plasma membrane. Extraction of mLOG2 G2A with sodium carbonate and Triton X-100 was markedly enhanced compared with mLOG2 (Fig. 4E), showing that the G2A mutation reduced the strength of binding to the membrane. While wild-type LOG2-GFP located exclusively at the plasma membrane in N. benthamiana, LOG2 G2A -GFP also localized in the cytoplasm (Fig. 5C, middle). In accordance with biochemical data (Fig. 4D), the fluorescence pattern of LOG2 G2A -GFP overlapped extensively with cytosolic mCherry (Fig. 5D), suggesting that suppression of myristoylation prevented a fraction of LOG2 from being anchored to the membrane.
To assess the effect of the G2A mutation on the GDU1-LOG2 interaction in planta, GDU1-HA was coexpressed with mLOG2-Myc or mLOG2 G2A-Myc in N. benthamiana, and Myc-tagged proteins were immunoprecipitated as in Figure 1C. GDU1 coimmunoprecipitated less efficiently with mLOG2 G2A than with mLOG2 (Fig. 4G), indicating that myristoylation enhances the LOG2 interaction with GDU1 in planta. The localization of the LOG2-GDU1 interaction was further studied by expressing in N. benthamiana mLOG2 and GDU1 fused to mCherry and GFP, respectively. LOG2-expressing Agrobacterium strains were infiltrated at low density, leading to heterogeneous expression in the epidermis, while GDU1-expressing Agrobacterium was infiltrated at a density enabling expression in all cells. GDU1 localized at the plasma membrane and in endosomes in cells expressing mLOG2 at low levels (Fig. 5E, middle cell) but mainly at the plasma membrane in cells expressing mLOG2 at higher levels (Fig. 5E, lateral cells). This change in localization pattern suggests that the interaction of LOG2 and GDU1 stabilized GDU1 localization at the plasma membrane.
T-DNA Disruption of LOG2 Suppresses the Gdu1D Phenotype Conferred by GDU1 Overexpression
The observation that disruption of the LOG2-GDU1 interaction by the log1-1 mutation suppresses the Gdu1D phenotype (Pratelli and Pilot, 2006) suggested that LOG2 is involved in the pathway altered by GDU1 overexpression. To test this hypothesis, gdu1-1D was crossed with a plant carrying a T-DNA insertion (log2-2) in the first intron of LOG2 (Supplemental Fig. S1B). Reduction of LOG2 wild-type mRNA in the homozygous log2-2 mutant was confirmed by RT-PCR (Supplemental Fig. S6). In log2-2 gdu1-1D double homozygous plants, GDU1 mRNA levels remained unaffected by the log2-2 mutation (Supplemental Figs. S7, A and B, and S8C). Five-week-old log2-2 and wild-type plants had similar rosette sizes, while, as previously observed, gdu1-1D individuals exhibited characteristically smaller rosettes (Fig. 6A). In contrast to gdu1-1D segregants, gdu1-1D log2-2 double mutant plants developed rosettes similar in size to the wild type (Fig. 6A) and grew to normal height (data not shown), indicating suppression of the Gdu1D growth phenotype in the log2-2 background.

Figure 6. Suppression of the Gdu1D phenotype by T-DNA insertion in the LOG2 gene. A, gdu1-1D was crossed to plants harboring a T-DNA in the first intron of LOG2 (log2-2). Five-week-old F3 plants were recovered from F2 parents with the indicated genotypes. B, gdu1-1D, log2-2, and log2-2 gdu1-1D seeds were sown on medium containing 10 mM of the indicated amino acid or no added amino acid (top left plate). Each plate is oriented with quadrants as shown in the model above, clockwise from top left: the wild type (WT), gdu1-1D, log2-2, and the log2-2 gdu1-1D double homozygote. Experiments were repeated three or more times with 25 seeds from each line. Representative images are shown.

gdu1-1D plants have been shown to be more tolerant to toxic concentrations of amino acids compared with the wild type (Pratelli and Pilot, 2007). To determine whether this other aspect of the Gdu1D phenotype is suppressed in the gdu1-1D log2-2 double mutant, the susceptibilities of wild-type, log2-2, gdu1-1D, and gdu1-1D log2-2 seedlings to 10 mM Phe, Met, and Leu were compared. Wild-type, log2-2 and gdu1-1D log2-2 plants did not grow in the presence of these amino acids, while most gdu1-1D plants developed green cotyledons (Fig. 6B). The phenotype of the soil-grown plants and the amino acid tolerance assays indicate that the loss of LOG2 expression suppresses all characterized aspects of the Gdu1D phenotype.
Expression of Artificial MicroRNA Targeting LOG2 Suppressed the Gdu1D Phenotype
Of the available T-DNA insertion lines, only the log2-2 T-DNA was found to repress LOG2 transcript accumulation. To confirm the results obtained with the log2-2 line, four artificial microRNAs (amiRNAa, -b, -c, and -d) directed against the LOG2 transcript were created (Supplemental Fig. S1B). The amiRNAs were expressed in the gdu1-1D line under the control of the cassava vein mosaic virus (CsVMV) promoter (Verdaguer et al., 1998), a viral promoter with activity comparable to that of the CaMV 35S promoter. The CsVMV promoter was used to avoid silencing of GDU1, which is driven in gdu1-1D by the 35S enhancer. Similar results were obtained with amiRNA expression as with the log2-2 mutation in the gdu1-1D background. Transformants created using LOG2-amiRNAa and -b displayed a phenotype similar to that of wild-type plants in the T2 generation, at the expected 75% ratio (Fig. 7A). This phenotypic change occurred while GDU1 transcripts remained at levels similar to those of the progenitor line (Supplemental Fig. S8A). Amino acid tolerance to Leu, Phe, and Met was tested for five lines (three from LOG2-amiRNAa and two from LOG2-amiRNAb). The tolerance of the transformants was intermediate between gdu1-1D and the wild type, with lines a2 and b3 showing almost wild-type susceptibility on Leu and Phe (Fig. 7B; Supplemental Fig. S8B).
While the expression of LOG2-directed amiRNAs did suppress the Gdu1D phenotype, it was not resolved whether the suppression affected GDU1 protein content. The 35S-GDU1-Myc line (see above) was therefore used to probe GDU1 protein accumulation in a parallel experiment in which amiRNAb was expressed in this line. Similar to the above results, expression of LOG2-amiRNAb in 35S-GDU1-Myc suppressed the Gdu1D phenotype in most of the transformants. Four independent transformation lines (lines 249A to -D) were chosen and studied for GDU1-Myc protein accumulation, LOG2 and GDU1 mRNA contents, and amino acid levels. Lines 249A and -D exhibited a mild Gdu1D phenotype, while lines 249B and -C showed wild-type morphology (Fig. 7C). Expression of LOG2-amiRNAb led to an 80% decrease in LOG2 mRNA in the four lines (Fig. 7D). GDU1 mRNA content was not significantly changed compared with the untransformed control, with about 5,000-fold overaccumulation compared with the wild type (Fig. 7D). Amino acid contents were reduced in the suppressed lines but were not equivalent to those in the wild type (Supplemental Table S1). Notably, GDU1-Myc protein content was only slightly increased by the expression of LOG2-amiRNAb (Fig. 7D).
An R12K Substitution in LOG2 Suppresses the Gdu1D Phenotype but Not the GDU1-LOG2 Interaction
A suppressor mutation, log2-1, was isolated in the same screen that led to the isolation of the log1-1 mutation (Pratelli and Pilot, 2006; Fig. 8A; Supplemental Text S1). Analyses of phenotypic segregation after crosses of log2-1 with the wild type and the gdu1-5D parental line validated the hypothesis that log2-1 is a single recessive mutation, independent of the log1-1 mutation (Supplemental Text S1). Positional cloning of log2-1 showed that the mutation was very close to the LOG2 gene (Supplemental Fig. S9). Sequencing of the LOG2 gene in log2-1 revealed the presence of a G-to-A mutation 35 bp after the ATG, leading to an Arg-to-Lys substitution (R12K; Supplemental Fig. S1B). Transformation of the log2-1 gdu1-5D double mutant with a genomic fragment containing the wild-type LOG2 gene complemented the log2-1 mutation. This complementation proved that suppression of the Gdu1D phenotype in the log2-1 gdu1-5D double mutant resulted from the R12K mutation in LOG2 (Fig. 8A).

Figure 8. Analysis of the effect of the log2-1 mutation on LOG2 protein properties. A, Complementation of the log2-1 mutation. A wild-type (WT) genomic fragment (8,511-bp XhoI-PstI from bacterial artificial chromosome F11F8) was cloned into a hygromycin resistance-conferring binary vector and inserted into the genome of the log2-1 gdu1-5D double mutant. GDU1 mRNA content was estimated by quantitative RT-PCR and is given as the double difference between the qPCR cycle threshold (Ct) of GDU1 and Actin2 mRNAs obtained in the wild type and the mutants (ΔΔCt). Error bars are from two biological replicates. B, GST-LOG2 and GST-LOG2 R12K ubiquitination assays without substrate. C, GST pull-down assay using flag-cGDU1 and GST-LOG2 or GST-LOG2 R12K. D, Yeast two-hybrid interaction assay of LOG2 R12K with cGDU1, cGDU1 G100R, or ΔcGDU1. E, In vitro ubiquitination assay with LOG2-V5 or LOG2 R12K -V5 and flag-cGDU1.
The fact that log2-1 is recessive suggested a loss-of-function mutation, but, despite extensive trials, no difference in functional properties has been found to date between LOG2 R12K and LOG2: the R12K mutation did not impair the ubiquitin ligase activity of the protein (Fig. 8B) or its ability to ubiquitinate cGDU1 in vitro (Fig. 8E). In addition, yeast two-hybrid, GST pull-down, and in planta coimmunoprecipitation assays showed that LOG2 R12K interacted with GDU1 very similarly to LOG2 (Figs. 1C and 8, C and D). LOG2 R12K -GFP also localized to the plasma membrane like wild-type LOG2-GFP in transiently transformed N. benthamiana leaves (Fig. 5C, right). The R12K mutation had no effect on GDU1-Myc accumulation when log2-1 was introduced into the 35S-GDU1-Myc line (Fig. 8F). Although log2-1 leads to significant suppression of the Gdu1D phenotype with no detectable effect on the LOG2-GDU1 interaction, the nature of the defect in LOG2 R12K is currently unknown.
LOG2 Is Necessary for the Gdu1D Phenotype
The GDU1 protein has been suggested to be involved directly or indirectly in the control of amino acid export, potentially interacting with an amino acid exporter (Pilot et al., 2004; Pratelli et al., 2010). Parallel attempts at characterizing GDU1 function using EMS suppressor and yeast two-hybrid screens led to the isolation of LOG2, an E3 ubiquitin ligase. Subsequent biochemical analyses showed that LOG2 and GDU1 interacted in in vitro pull-down and in planta coimmunoprecipitation assays (Figs. 1, B and C, and 2B). We also establish here the functional significance of this interaction in multiple assays. Previous characterization of the log1-1 suppressor allele (GDU1 G100R) validated the relevance of the conserved GDU VIMAG motif (Pratelli and Pilot, 2006), but the molecular mechanism of the suppressor effect was not explained. When tested by yeast two-hybrid, GST pull-down, in planta coimmunoprecipitation, and in vitro ubiquitination assays, the log1-1 mutation diminished the interaction between LOG2 and GDU1 (Figs. 1, A-C, and 2D, respectively). This indicated that the phenotypic suppression conferred by the log1-1 allele could result from impaired interaction with LOG2. Genetic interaction assays with plants affected in LOG2 expression further support this conclusion. Reduction of the Gdu1D phenotype was observed in three independent genetic contexts: in gdu1-1D plants homozygous for the log2-2 T-DNA (Fig. 6), upon expression of LOG2-directed amiRNAs in either the gdu1-1D or 35S-GDU1-Myc background (Fig. 7), and in gdu1-5D plants homozygous for the log2-1 loss-of-function allele (LOG2 R12K; Fig. 8). The nature of the phenotypic suppression was similar in all three cases: plant size increased (Figs. 6A, 7, A and C, and 8A), amino acid sensitivity decreased (Figs. 6B and 7B; Supplemental Fig. S8B), and amino acid accumulation decreased (Supplemental Table S1). Notably, the suppression observed in multiple LOG2 mutation/GDU1 overexpressor combinations reinforces the hypothesis that LOG2 is necessary for the development of the Gdu1D phenotype upon GDU1 overexpression. These findings support a role for LOG2 in amino acid homeostasis.
Localization of LOG2 and GDU1 at the Plasma Membrane

LOG2 and GDU1 were found almost exclusively in the microsomal membrane fraction and were particularly enriched at the plasma membrane. GDU1 was previously postulated to be an integral membrane protein (Pratelli and Pilot, 2006), and the sensitivity of GDU1 to a high-ionic-strength buffer, high pH, and detergent, typical of integral membrane proteins, confirmed this hypothesis (Fig. 5B). Although LOG2 is not predicted to contain a membrane-spanning domain, LOG2 could be myristoylated in vitro (Fig. 4F), a lipid modification that can anchor otherwise cytosolic proteins to membranes (Resh, 1999). Inhibition of LOG2 myristoylation did not completely abolish membrane association per se, as most LOG2 G2A still partitioned into microsomes (Fig. 4, D and E). However, LOG2 G2A was significantly depleted from plasma membrane-enriched vesicles (Fig. 4D), its association with microsomes was markedly more sensitive to extraction (Fig. 4E), and the fluorescence signal of C-terminally GFP-tagged LOG2 was no longer confined to the plasma membrane (Fig. 5C, left and center panels). Possible reasons for the partial membrane retention of LOG2 G2A include its ability to bind to integral membrane proteins and/or the presence of an N-terminal poly-Arg tract (Supplemental Fig. S1C). Polybasic regions can facilitate the binding of soluble proteins to the negatively charged intracellular leaflets of lipid bilayers via electrostatic interactions (Murray et al., 1998). However, other lipid modifications or undetected reentrant loops could account for persistent membrane anchoring. The observed weakened in planta interaction between GDU1 and LOG2 G2A (Fig. 4G) hints that myristoylation may potentiate the interaction of LOG2 with GDU1, possibly by shifting LOG2 into the same membrane microdomains as GDU1.
Role of LOG2 with Respect to GDU1
Multiple tagged forms of LOG2 ubiquitinated cGDU1 (Fig. 2, B-D), an observation consistent with the possibility that GDU1 is a substrate of LOG2 in vivo. Polyubiquitination often destines a substrate protein for degradation, either via the 26S proteasome or, in the case of many mammalian and yeast plasma membrane proteins, the vacuole or lysosome upon endocytosis (Jehn et al., 2002; Korolchuk et al., 2010). Such ubiquitin ligases are thus negative regulators of the activity of the substrate protein. Our data do not support the hypothesis that LOG2 negatively regulates GDU1. Mutations that inhibit the association of GDU1 with LOG2 (log1-1), or that eliminate LOG2 (log2-2), would be expected to enhance the Gdu1D phenotype if LOG2 were a negative regulator of GDU1. However, the opposite was observed: in a GDU1 overexpression background, log1-1 and log2-1 homozygotes are phenotypically wild type (Pratelli and Pilot, 2006; Fig. 8A). GDU1 protein overaccumulation does not cause the Gdu1D phenotype in the absence of LOG2 or when GDU1 is unable to interact with LOG2. In conclusion, LOG2 appears to be a required component of the pathway affected by GDU1 overexpression.
Our data are consistent with a model in which either LOG2 activates GDU1 via ubiquitination or GDU1 acts as an adaptor protein for LOG2 to recognize an unidentified substrate (for review, see Woelk et al., 2007; Léon and Haguenauer-Tsapis, 2009). Examples of both hypotheses in other systems have been reported. In mammals, activation of a kinase in the inflammatory response requires its monoubiquitination (Hinz et al., 2010). Plasma membrane-associated E3 ligase adaptors have also been described. For instance, the adaptor Grb2 binds to the RING E3 c-Cbl and facilitates its association with the activated plasma membrane epidermal growth factor receptor in mammals, resulting in ubiquitination of the latter (Huang and Sorkin, 2005). Similar E3 adaptors have also been described in yeast (Léon and Haguenauer-Tsapis, 2009).
LOG2-Associated Proteins Probably Contribute to Amino Acid Export
By analogy to mammalian heteromeric amino acid transporters (Palacín et al., 2005), GDU1 has been hypothesized to be a subunit of an amino acid exporter (Pilot et al., 2004; Pratelli et al., 2010). One viable hypothesis is that GDU1 overexpression alters the localization of an unknown amino acid exporter through a LOG2-dependent pathway, either by promoting the degradation of a yet-to-be-identified negative regulator of the exporter or by directly influencing the movement of the transporter to the cell surface. The log2-1 mutation suppresses the Gdu1D phenotype without discernibly affecting LOG2 subcellular localization, ubiquitin ligase activity, or interaction with GDU1 (Figs. 5C, right panel, and 8, B-D). It is possible that this mutation alters the recognition of, or interaction with, as-yet-undiscovered LOG2 substrate(s). While an Arg-to-Lys mutation is often regarded as conservative, Lys residues are common sites of modifications such as acetylation, methylation, and ubiquitination. Modification of Lys-12 of LOG2 R12K could disrupt interactions between LOG2 R12K and its substrate(s) or other interacting proteins, leading to suppression of the Gdu1D phenotype. This hypothesis suggests that not all of the proteins involved in the GDU1 pathway have been discovered. The identification of these proteins and their roles in membrane protein trafficking or amino acid export will be the subject of further investigations.
Cloning and Constructs
Primer sequences used for cloning and site-directed mutagenesis are listed in Supplemental Table S2. For the yeast two-hybrid screen, the region encoding the C-terminal domain of GDU1 (residues 61-158) was cloned into pGBKT7 or pGBT9 (Clontech) using specific primers and the EcoRI and PstI sites. For the yeast two-hybrid interaction matrix, cDNAs encoding the C-terminal soluble part of the GDUs and the full sequences of LOG2 and the LULs were amplified by PCR with primers containing the attB Gateway sequences, cloned into pDONRZeo (Invitrogen), sequenced, and transferred into the pACT2 and pGBT9 vectors, which had previously been made Gateway compatible by insertion of the Gateway cassette (R. Pratelli and G. Pilot, unpublished data). To obtain GST- and V5-tagged LOG2 and LUL1, the coding sequences were amplified by PCR, cloned into pDONR201 (Invitrogen) using the Gateway technology, and recombined into pDEST15 and pET-DEST42 (Invitrogen), respectively. flag-cGDU1 was produced similarly with the destination vector pEAK2 (Kraft, 2007). Site-directed mutagenesis was performed with the QuikChange kit (Stratagene) to create RING-dead LOG2 (LOG2 C354/C557AA; mLOG2), RING-weak LOG2 (LOG2 I321A), and myristoylation-inhibited LOG2 (LOG2 G2A). For the in vitro myristoylation assay, LOG2 was recombined into pEXP2 and pET-DEST42 (Invitrogen). GFP and mCherry fusion constructs were created by Gateway cloning into pPWGTkan and pPWMTkan, derivatives of pJHA212K (Yoo et al., 2005) in which the Gateway cassette was inserted between the CaMV 35S promoter and the eGFP or mCherry coding sequence (R. Pratelli and G. Pilot, unpublished data). The HA fusion construct used for Arabidopsis transformation was obtained by Gateway cloning into pGWB14 (Nakagawa et al., 2007). HA and Myc fusions used for transient assays in N. benthamiana were obtained using vectors similar to pPWGTkan, in which a double HA tag or a triple Myc tag replaced the eGFP. The GDU1 VIMAG domain was deleted by cloning, next to each other in pBluescript, two PCR fragments corresponding to the regions upstream and downstream of the domain and sharing an EcoRI site at the exact place of the VIMAG domain. The resulting construct was used as a template for PCR for Gateway cloning into pDONR221 (Invitrogen). amiRNAs were designed following the guidelines found in WMD3 (http://wmd3.weigelworld.org/; Schwab et al., 2010). The primers corresponding to pRS300 (Schwab et al., 2010) used for amplification of the miRNAs contained the Gateway attB sites. The final PCR fragment was cloned into pDONRZeo (Invitrogen), sequenced, and transferred into the pSWsNkan binary vector, another derivative of pJHA212K (Yoo et al., 2005; R. Pratelli and G. Pilot, unpublished data), between the CsVMV promoter (Verdaguer et al., 1998) and the terminator of the small subunit of Rubisco from pea (Pisum sativum; accession no. X00806). For log2-1 complementation, a XhoI-PstI 8.5-kb genomic DNA fragment from bacterial artificial chromosome F11F8 containing the wild-type LOG2 gene was cloned into the pTkan binary vector, a derivative of pJHA212K.
Yeast Two-Hybrid Screening and Interaction Assays
For screening, the bait plasmids were cointroduced along with an Arabidopsis cell cDNA library (Németh et al., 1998) into yeast strain AH109 (Clontech) using the TRAFO protocol (Gietz and Schiestl, 2007). About 2 million transformants were selected on SC medium lacking Leu, Trp, His, and adenine (2% Glc, 6.7 g L−1 yeast nitrogen base without amino acids [BD Biosciences], pH 6.3, and dropout amino acid mix). Plasmids were extracted from the colonies able to grow in the absence of the four nutrients and introduced back into yeast together with the cGDU1 construct. The inserts of the eight plasmids restoring yeast growth on selective medium were then sequenced. For interaction matrices, yeast strains AH109 (MATa) and Y187 (MATα) were transformed as described above with the prey and bait vectors, respectively, and selected on medium lacking Trp or Leu. Several colonies were scraped and resuspended together in 250 µL of water. Prey-bait pairs were mixed together (10 µL of each suspension and 50 µL of water), and 5 µL was spotted on YPDA (1% [w/v] yeast extract, 2% [w/v] bacto peptone, 2% [w/v] Glc, 80 mg L−1 adenine, and 1.5% agar) for mating. After overnight growth at 30°C, the cell spots were scraped and resuspended in 100 µL of water, and 5 µL was spotted on SC medium lacking Trp and Leu. After growth at 30°C for 2 d, the spots were scraped and resuspended in 100 µL of water, and 5 µL was spotted on SC medium lacking Trp, Leu, adenine, and His. Growth was assessed after 2, 3, and 6 d to identify positive interactions. β-Galactosidase activity of yeast was measured using a protocol obtained from Clontech (manual no. PT3024-1). Briefly, cells grown to an optical density at 600 nm of 0.5 were harvested by centrifugation, resuspended in 100 mM Na2HPO4, pH 7, 10 mM KCl, 1 mM MgSO4, and 5 mM β-mercaptoethanol (Z buffer), and subjected to three freeze-thaw cycles in liquid N2 and at 37°C. The broken cell suspension (100 µL) was added to 900 µL of 800 µg mL−1 ortho-nitrophenyl-β-galactoside in Z buffer and incubated for several hours at 37°C. The reaction was stopped by the addition of 400 µL of 1 M Na2CO3, and the optical density at 420 nm was measured.
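The raw OD420 readings from the protocol above are conventionally converted into a normalized activity. As a minimal sketch, assuming the standard Miller-unit formula (the exact normalization used by the authors is not stated here, and all numbers in the example are placeholders):

```python
# Hypothetical sketch of the standard Miller-unit calculation for the
# ONPG assay described above; the normalization constant (1000) and the
# omission of the light-scatter correction are conventional choices,
# not details taken from this paper.

def beta_gal_activity(od420, od600, time_min, volume_ml):
    """Beta-galactosidase activity in Miller units.

    od420     -- absorbance of the stopped reaction (o-nitrophenol product)
    od600     -- density of the culture the lysate was made from
    time_min  -- incubation time with ONPG, in minutes
    volume_ml -- volume of cell suspension assayed, in ml
    """
    return 1000.0 * od420 / (time_min * volume_ml * od600)

# Example: OD420 = 0.45 after 120 min using 0.1 ml of an OD600 = 0.5 culture.
print(beta_gal_activity(0.45, 0.5, 120, 0.1))  # -> 75.0 Miller units
```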
Recombinant Protein Expression and Purification
GST-, V5-His6-, and His6-flag fusion proteins were expressed in BL21-pLysS Escherichia coli essentially as described by Kraft et al. (2005) with a few modifications. The lysis buffer comprised 50 mM Tris-HCl, pH 7.5, 500 mM NaCl, 1 mM dithiothreitol (DTT), 0.1% (v/v) Nonidet P-40, and 0.5× Complete Protease Inhibitors (Roche Diagnostics) for GST-tagged proteins and was supplemented with 20 mM imidazole for His-tagged proteins. Cells were lysed by sonication. For GST proteins, glutathione bead slurries were brought to 20% (v/v) glycerol after the final wash, flash frozen in liquid nitrogen, and stored at −80°C. His-tagged proteins were eluted from nickel Sepharose beads, brought to 20% glycerol, and flash frozen. His6-flag-cGDU1 was purified likewise, except that buffers contained 50 mM K2HPO4-KH2PO4, pH 5.75, in place of Tris-HCl. To further purify cGDU1, eluted protein was centrifuged through a 30-kD NMWL Amicon concentrator (Millipore), and the eluate was concentrated and buffer exchanged in a 5-kD NMWL Amicon concentrator with 50 mM K2HPO4-KH2PO4, pH 5.75, 150 mM KCl, 1 mM DTT, 0.1% Nonidet P-40, and 0.5× Complete Protease Inhibitors. The final retentate was brought to 20% glycerol and flash frozen.
GST Pull-Down Assays
In vitro pull-downs were performed as described by Gilkerson et al. (2009) with the following differences: GST proteins on beads were washed once in 50 mM Tris-HCl, pH 7.5, 300 mM NaCl, 1 mM DTT, 1% (v/v) Nonidet P-40, and 0.5× Complete Protease Inhibitors (wash buffer) prior to being mixed with a soluble prey protein in 400 µL of wash buffer and incubated at 4°C for 2 h. Proteins were eluted by suspending beads in 60 µL of elution buffer (wash buffer + 50 mM glutathione) and shaking at 4°C for 15 min.
Ubiquitination Assays
In vitro ubiquitination assays were conducted essentially according to Hsia and Callis (2010) with slight modifications: 4 µg of bovine ubiquitin (Sigma-Aldrich), approximately 2 µg of GST- or V5-His6-LOG2, and approximately 1.5 µg of His6-flag-cGDU1 were used. Reactions were quenched with 10 µL of 5× Laemmli sample buffer (200 mM Tris, pH 6.8, 32% [v/v] glycerol, 6.4% [w/v] SDS, 0.32% [w/v] bromophenol blue, and 200 mM DTT), boiled for 5 min, and separated via SDS-PAGE. Proteins were visualized by western blot using anti-flag-linked horseradish peroxidase (Sigma-Aldrich) and anti-GST (Santa Cruz Biotechnology) according to the manufacturers' recommendations or anti-ubiquitin antibodies. Anti-ubiquitin antibodies were raised against bovine full-length ubiquitin prepared according to Haas and Bright (1985) by Aves Labs, affinity purified, and used at a 1:5,000 dilution.
In Planta Coimmunoprecipitation Assay
Proteins were transiently expressed in N. benthamiana leaves. Three days after infiltration, 500 mg of leaves was ground with 1.5 mL of extraction buffer on ice (50 mM Tris-HCl, pH 7.3, 150 mM NaCl, 10 mM MgCl2, 0.5% Nonidet P-40, 10 mM DTT, and 1× Complete Protease Inhibitors). The homogenate was centrifuged at 10,000g and 4°C for 15 min. The supernatant was filtered through several layers of Miracloth (EMD Biochemicals) and quantitated using the Bradford assay (Fermentas). Proteins were coimmunoprecipitated using the ProFound c-Myc Tag IP/Co-IP Kit (Pierce): 3 mg of protein was placed on a rotary wheel overnight at 4°C in a coimmunoprecipitation spin column with cMyc-agarose; after washing, the proteins were eluted three times with 10 µL of the supplied elution buffer and neutralized with 1.5 µL of 1 M Tris, pH 9.5. One microliter of coimmunoprecipitation eluate and 10 µg of total protein were analyzed by SDS-PAGE (4%-12% polyacrylamide MES gel; Invitrogen) and western blotting. Proteins were transferred onto a nitrocellulose membrane (GE Healthcare) and detected using anti-cMyc (clone A-14; Santa Cruz; 1:10,000) or anti-HA (clone 3F10; Roche Diagnostics; 1:5,000) primary antibodies, anti-rabbit or anti-rat (Thermo Scientific) secondary antibodies, and the ECL-Plus western-blotting detection system (GE Healthcare).
Subcellular Fractionation
Endomembranes were prepared from Arabidopsis and N. benthamiana according to Liu et al. (2009) with the following modifications: 5 to 30 g of 8-d-old Arabidopsis seedlings, 5-week-old Arabidopsis rosette leaves, or N. benthamiana leaves 3 d after infiltration were homogenized with a Waring blender in 60 mL of homogenization buffer (50 mM MOPS-KOH, pH 6.8, 5 mM EDTA, 0.33 M Suc, 2 mM ascorbic acid, 1.5 mM DTT, 0.5 mM phenylmethylsulfonyl fluoride, and 0.2% [w/v] polyvinylpolypyrrolidone). The sensitivity of GDU1 or LOG2 membrane association to detergent, salt, and pH was examined as described by Phan et al. (2008) with a 1-h incubation on ice. PEVs and PDVs were purified from total microsomes according to Larsson et al. (1994): upper phases 4 and 5 were combined to afford PEVs, while lower phase 1 was extracted five times with fresh upper phase to afford PDVs. Plasma membrane enrichment and depletion were qualitatively assessed with the anti-PMA2 antibody (M. Boutry, Université Catholique de Louvain). Protein concentration was assessed by the Bradford or bicinchoninic acid protein assays.
Localization of Expression and Imaging
About 3 kb of the region upstream of the LOG2 ATG was amplified by PCR and cloned using BamHI and PstI into pUTkan (Pratelli et al., 2010). GUS histochemical staining was performed as described (Pratelli et al., 2010). N. benthamiana epidermal cells were visualized with the Zeiss LSM510 META confocal system on an Axio Observer.Z1 microscope using a C-Apochromat 40× water-immersion objective (numerical aperture 1.2; Carl Zeiss), a 488-nm argon multiline gas laser, and a 543-nm helium-neon gas laser, with band-pass 505 to 550 and long-pass 560 emission filters, respectively. Serial images were captured and processed with the Zen 2009 software (Carl Zeiss) using maximal projection.
Nucleic Acid Isolation and PCR
Genomic DNA was extracted from Arabidopsis plants using the cetyl-trimethyl-ammonium bromide method (Murray and Thompson, 1980). Total RNA was extracted either with the RNeasy kit (Qiagen) or with 1 mL of TRI Reagent (Sigma-Aldrich). cDNAs were synthesized using the SuperScript III system (Invitrogen) according to the manufacturer's instructions. For quantitative PCR, the efficiency-calibrated method was implemented (Pfaffl, 2001). Five microliters of primer mix (1 µM each) and 5 µL of the RT product (made from 2 µg of total RNA) diluted 50 times were mixed with 10 µL of 2× SYBR Green PCR Master Mix (Applied Biosystems) and subjected to the following program: 50°C for 2 min and 95°C for 10 min, followed by 40 cycles of 95°C for 15 s, 55°C for 15 s, and 72°C for 1 min (in a 7300 Real Time PCR System; Applied Biosystems).
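As a minimal illustration of the efficiency-calibrated method of Pfaffl (2001) cited above, the relative expression ratio can be computed as follows; the efficiencies and Ct values in the example are invented for illustration and are not measurements from this study:

```python
# Sketch of the Pfaffl (2001) efficiency-calibrated ratio. E is the
# per-cycle amplification factor (2.0 for perfect doubling); Ct values
# below are placeholders, not data from this paper.

def pfaffl_ratio(e_target, ct_target_control, ct_target_sample,
                 e_ref, ct_ref_control, ct_ref_sample):
    """Expression ratio of a target gene (e.g. GDU1) in sample vs.
    control, normalized to a reference gene (e.g. Actin2)."""
    target = e_target ** (ct_target_control - ct_target_sample)
    reference = e_ref ** (ct_ref_control - ct_ref_sample)
    return target / reference

# A target amplifying 12 cycles earlier in the sample, with an unchanged
# reference, corresponds to roughly a 4,000-fold overaccumulation.
print(pfaffl_ratio(2.0, 34.0, 22.0, 2.0, 20.0, 20.0))  # -> 4096.0
```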
Amino Acid Quantitation
Tissues were frozen in liquid nitrogen, freeze dried, weighed, and ground with three 3-mm glass beads in an Ultramat Amalgamator (SDI, Inc.). Amino acids were extracted from the dry powder with 200 µL of 10 mM HCl and 200 µL of chloroform. After vortexing for 2 min, the solution was centrifuged for 5 min at 16,000g, and 150 µL of the supernatant was dried under vacuum. The dried extract was solubilized in 500 µL of 50% acetonitrile in water and 0.05% heptafluorobutyric acid, and the metabolites were separated by ion-pairing liquid chromatography and analyzed by mass spectrometry (Supplemental Text S2).
Supplemental Data
The following materials are available in the online version of this article.
Supplemental Figure S1. Structure and alignment of LOG2 and LUL1 proteins.
Supplemental Figure S2. Accumulation of LOG2 mRNA in the organs of the plant.
Supplemental Figure S4. Co-localization of mLOG2 and BRI1 in N. benthamiana epidermis cells.
Supplemental Figure S5. LUL3 can be myristoylated in vitro.
Supplemental Table S2. Sequence of the oligonucleotides used for this study.
Supplemental Text S1. EMS mutagenesis and positional cloning.
Supplemental Text S2. LC-MS analysis details. | 2018-04-03T04:46:48.356Z | 2012-01-30T00:00:00.000 | {
"year": 2012,
"sha1": "ca30e2c4779e0d933e7345f0171ec3e362e00f71",
"oa_license": "CCBY",
"oa_url": "http://www.plantphysiol.org/content/plantphysiol/158/4/1628.full.pdf",
"oa_status": "HYBRID",
"pdf_src": "Highwire",
"pdf_hash": "eaf11b59e53e423b02e83462b198ebc46d5604ae",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
12488372 | pes2o/s2orc | v3-fos-license | Chemical analysis, inhibition of biofilm formation and biofilm eradication potential of Euphorbia hirta L. against clinical isolates and standard strains
Background The frequent occurrence of antibiotic-resistant, biofilm-forming pathogens has become a global issue, since the various measures taken to curb the situation have led to failure. Euphorbia hirta is a well-known ethnomedicinal plant of Malaysia with diverse biological activities. This plant has been used widely in traditional medicine for the treatment of gastrointestinal, bronchial and respiratory ailments caused by infectious agents. Methods In the present study, the chemical composition of a methanol extract of the E. hirta L. aerial part was analyzed by gas chromatography and gas chromatography coupled to mass spectrometry. A relevant in vitro model was developed to assess the potency of the E. hirta extract to inhibit bacterial biofilm formation as well as to eradicate established biofilms. Besides biofilms, the E. hirta extract was also evaluated for inhibition efficacy against planktonic cells using a tetrazolium microplate assay. For these purposes, a panel of clinically resistant pathogens and American Type Culture Collection (ATCC) strains was used. Results The methanolic extract of the aerial part of E. hirta was predominantly composed of terpenoids (60.5%), which are often regarded as active entities accountable for membrane destruction and biofilm cell detachment. The highest antibacterial effect of the crude E. hirta extract was observed against the clinical isolate of Pseudomonas aeruginosa, with a minimum inhibitory concentration (MIC) value of 0.062 mg/ml. The extract also displayed potent biofilm inhibition and eradication activity against P. aeruginosa, with minimum biofilm inhibition concentration (MBIC) and minimum biofilm eradication concentration (MBEC) values of 0.25 mg/ml and 0.5 mg/ml, respectively. Conclusions The crude methanol extract of E. hirta has proven to have interesting and potential anti-biofilm properties. The findings from this study will also help to establish a very promising anti-infective phytotherapeutic to be exploited in the pharmaceutical industries.
Background
Ever since the advent of humanity on earth, plants have served as an unlimited source of phytotherapeuticals for various diseases [1]. In recent years, the emergence of biofilm infections has generated urgent alarm in the research and development field, prompting the search for novel antimicrobials from ethnomedicinal plants. The National Institutes of Health (NIH) has estimated that 70% of all microbial infections in the world are associated with biofilms [2]. Biofilms develop when planktonic microorganisms aggregate to form a thin layer; with continued growth, a few layers of microorganisms in close contact mature into dense, three-dimensional structures accommodating millions of planktonic cells operating mutually behind a protective shield, the biofilm. To date, the most common medically significant biofilm infections involve the eye, middle ear, urogenital tract, gastrointestinal tract, lung tissue and teeth [3]. Biofilms are also responsible for the majority of contamination of hospital devices and medical implants, such as peritoneal membrane and dialysis catheters, indwelling catheters for chronic administration of chemotherapeutic agents, tracheal and ventilator tubing, prostheses, dental implants and cardiac implants [4]. Biofilms are far more difficult to treat, as the bacteria living within them can be up to 1,000-fold more resistant to potent antibacterial agents used as a last resort, including methicillin and vancomycin [5]. In contrast, the same bacteria grown in planktonic form are susceptible to these antibiotics.
Euphorbia hirta L., a member of the Euphorbiaceae family, is a common weed distributed in the temperate, sub-tropical, and tropical regions of the world. It is alternatively known by its Malay names 'ara tanah' and 'gelang susu'. This plant has been used in traditional Malay medicine for many years. In folk medicine it is used in the treatment of gastrointestinal disorders, particularly intestinal parasitosis, amoebic dysentery, diarrhoea, and ulcer [6,7]. The plant is also used in the treatment of bronchial and respiratory disorders including asthma, bronchitis, and hay fever. E. hirta has well-documented pharmacological activities, including antioxidant [8], antibacterial [9,10], antifungal [11], diuretic [12], anthelmintic [13], antihypertensive [12], anxiolytic [14], antimalarial [15], anti-inflammatory [16], and anticancer [8] effects.
Despite the various studies evaluating its multiple bioactivities, none has investigated the activity of E. hirta against bacterial biofilms. Therefore, the aim of this study was to analyze the potency of E. hirta extracts as potential biofilm-inhibiting and biofilm-eradicating crude drugs, as well as to comprehensively characterize the methanolic extract of the aerial part of E. hirta grown in Malaysia. A modified version of the microdilution assay was developed in this investigation to measure the inhibition of biofilm formation and the biofilm eradication activity of E. hirta L. against clinical isolates and standard strains.
From our literature search, no comprehensive chemical composition data on the methanolic extract of the E. hirta aerial part have been reported or evaluated. Moreover, it is quite possible that the same species from different countries shows different chemical compositions and biological activities. The present study is therefore assumed to be the first to report fingerprinting profiles of the E. hirta aerial part by GC-MS/MS, as well as the initial research to assess the biofilm inhibitory and eradication activity of this ethnomedicinal plant.
Chemicals and reagents
p-Iodonitrotetrazolium violet (INT) was purchased from Sigma Chemical Co. (St. Louis, MO, USA). All other chemicals, namely dimethylsulfoxide (DMSO) and methanol, were of analytical grade and obtained from Merck (Darmstadt, Germany).
Plant collection and authentication
The fresh aerial parts of Euphorbia hirta L. were obtained from Relau, Penang City, Malaysia. The plant was collected during the period of July to August 2012. The plant was authenticated by Mr. Shunmugam, the botanist of the School of Biological Sciences, Universiti Sains Malaysia, where a voucher specimen (No.11254) was deposited in the Herbarium Unit of the school.
Extraction of plant material
Aerial parts of E. hirta were air-dried, ground to mesh size No. 40, and macerated with methanol at a ratio of 10 g of ground plant material per 100 ml of methanol. Extraction was carried out for 6 days with occasional shaking, and the process was repeated three times. The combined extracts obtained were filtered and concentrated to dryness with a rotary evaporator (Rotavapor® R-200, Buchi, Switzerland) under reduced pressure. The extract obtained was eventually freeze-dried (FreeZone®, MO) to remove any residual water, and the yield of the extract was calculated. The extraction procedure was performed in dim lighting, and all the dried extracts were stored at 4°C until use. For GC-MS analysis, the sample was prepared by dissolving in methanol to obtain a concentration of 1 mg/ml. The sample was filtered through 0.22 μm syringe filter devices (Millipore) prior to injection into the chromatography system.
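The yield mentioned above is conventionally reported as a percentage of the dried starting material; a trivial sketch of that arithmetic (with placeholder masses, not values from this study):

```python
# Percentage yield of the dried extract relative to the dried plant
# material; the masses below are placeholders for illustration only.

def extraction_yield_percent(extract_g, plant_material_g):
    return 100.0 * extract_g / plant_material_g

print(extraction_yield_percent(1.2, 10.0))  # -> 12.0 % (w/w)
```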
Gas chromatography (GC) and gas chromatography mass spectrometry (GC/MS) analysis
The methanol extract of E. hirta was analyzed by GC and GC-MS using an Agilent-Technologies 6890N Network GC system equipped with an Agilent-Technologies 5973 MSD mass spectrometer and an Agilent-Technologies 7683B series auto injector. The GC/MS was operated under the following conditions. Separation was performed on an HP-5MS capillary column (30 m × 0.25 mm × film thickness 0.25 μm). The column temperature was programmed from 70°C to 280°C at a rate of 20°C/min, with the lower and upper temperatures being held for 3 and 10 min, respectively. The GC injector and MS transfer line temperatures were set at 250°C and 280°C, respectively. GC was performed in the splitless mode. Helium was used as the carrier gas at a flow rate of 1.1 ml/min. For MS detection, the electron ionization mode with an ionization energy of 70 eV was used, with a mass range of m/z 30-650. An injection volume of 1 μl was used for the methanol extract. The components were identified by their retention times based on the commercially available spectral libraries (the National Institute of Standards and Technology (NIST) mass spectral search program for the NIST/EPA/NIH Mass Spectral Library V2.0) and by mass fragmentation patterns using data of standards in the Wiley 7.0 library. GC and GC-MS analyses were performed in triplicate.
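For orientation, the oven program stated above fixes the nominal run time per injection; a small sketch of that arithmetic, using only the values given in the text:

```python
# Timing of the oven program described above: 70 -> 280 degrees C at
# 20 degrees C/min, with a 3-min initial hold and a 10-min final hold.

initial_hold_min = 3.0
final_hold_min = 10.0
ramp_min = (280.0 - 70.0) / 20.0  # 10.5 min of linear ramping

print(initial_hold_min + ramp_min + final_hold_min)  # -> 23.5 min per run
```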
Bacterial strains
The susceptibility tests comprised a panel of clinically resistant Gram-negative and Gram-positive bacteria. These selected human pathogenic bacteria are capable of forming biofilms and cause severe infections. The panel of clinically resistant pathogens used in this study included Klebsiella pneumoniae, Pseudomonas aeruginosa, Salmonella typhi, Shigella dysenteriae, Enterobacter aerogenes, Escherichia coli, Enterococcus faecalis, Proteus mirabilis, Proteus vulgaris, Bacillus subtilis and Bacillus cereus. The clinical isolates were obtained from the Department of Medical Microbiology and Parasitology (JTMP), School of Medical Sciences, Universiti Sains Malaysia, and stored at −80°C in tryptic soy broth containing 50% glycerol. Their identities were confirmed by biochemical tests using the API (analytical profile index) system. The bioassays were also performed in parallel with well-characterized strains obtained from the American Type Culture Collection (ATCC), including Escherichia coli (ATCC 25922), Staphylococcus aureus (ATCC 25923), Pseudomonas aeruginosa (ATCC 27853), Bacillus subtilis (ATCC 6633), Enterococcus faecalis (ATCC 29212), methicillin-resistant Staphylococcus aureus (MRSA) (ATCC 33591), vancomycin-resistant Enterococcus faecalis (VRE) (ATCC 51299) and vancomycin-resistant Enterococcus faecium (VRE) (ATCC 700221). The ATCC strains used were able to produce biofilms under the given conditions. A stable non-biofilm-producing clinical isolate of Staphylococcus aureus (mutant) was used as a control in the bioassays.
Inocula preparation
All the bacterial strains were recovered on fresh tryptic soy agar (Difco, USA) plates 24 h prior to the antimicrobial tests. To prepare the inoculum, colonies from fresh tryptic soy agar were transferred into sterile Mueller Hinton (MH) liquid growth medium and incubated at 37°C overnight. Aliquots (500 μl) were transferred to 10 ml of fresh MH broth and incubated at 37°C. The optical density at 600 nm (OD600) was monitored until the exponential growth phase was reached. Cells were harvested by centrifugation (3,000g, 5 min at 4°C), washed in 10 mmol/L phosphate-buffered saline (PBS) (pH 7.4), and resuspended in MH broth to an approximate cell density of 1.0 × 10^5 CFU/ml. The final cell concentration was confirmed by viable counts.
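The viable-count confirmation mentioned above comes down to standard plate-count arithmetic; a hedged sketch with illustrative numbers (not data from this study):

```python
# CFU/ml from a dilution plate count: colonies divided by the volume
# plated and the dilution factor. The counts below are placeholders.

def cfu_per_ml(colonies, volume_plated_ml, dilution_fraction):
    """dilution_fraction is the fraction of the original culture plated,
    e.g. 1e-2 for a hundred-fold dilution."""
    return colonies / (volume_plated_ml * dilution_fraction)

# 100 colonies from 0.1 ml of a hundred-fold dilution:
print(cfu_per_ml(100, 0.1, 1e-2))  # -> 1.0e5 CFU/ml, the target inoculum
```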
Minimum inhibitory concentration (MIC) and minimum bactericidal concentration (MBC)
The minimum inhibitory concentrations (MIC) of E. hirta were determined using a tetrazolium microplate assay slightly modified from the serial broth microdilution method described by Eloff [17]. This assay was performed using Corning 35-1172 flat-bottomed polystyrene 96-well clear microtitre plates with the standard plate layout proposed by Cos et al. [18]. Briefly, the methanol extract of E. hirta was dissolved in DMSO, and an identical twofold serial dilution in MH broth was made to give 0.062-2.0 mg/ml. One hundred microlitres of bacterial inoculum was added and mixed thoroughly in all the wells (giving 0.031-1.0 mg/ml, with 1.0 mg/ml as the highest in-test concentration). The microtitre plates were sealed with parafilm tape and incubated overnight at 37°C. An appropriate mixture of the solvent DMSO, medium and inoculum was included as a drug-free control, and the final concentration of DMSO in the wells was ensured to be less than 1% (v/v). A clinically established antibiotic, cefepime, was used in a parallel experiment as a positive drug control. An additional non-infected medium control was included as a sterility check. The MIC of the E. hirta extract was detected following the addition of 50 μl of INT (2-(4-iodophenyl)-3-(4-nitrophenyl)-5-phenyl-2H-tetrazolium chloride) at a final concentration of 0.2 mg/ml to all the wells and incubation for a further 30 min at 37°C. Bacterial growth was determined by observing the color change of INT in the microplate wells. Biologically active bacterial cells reduce the colourless tetrazolium salt, which acts as an electron acceptor, to a red-coloured formazan product [19]. Inhibition of bacterial growth is observed when the solution in the well remains clear after incubation with INT. The MIC was defined as the lowest extract concentration that completely inhibits the growth of the microorganisms.
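The dilution series and the end-point read-out described above (and reused, with different end points, for the MBIC and MBEC assays below) amount to a simple computation; a sketch with an invented growth pattern:

```python
# Sketch of the two-fold dilution series and MIC read-out. Growth flags
# would come from the INT colour change (True = the well turned red,
# i.e. viable cells remained); the pattern below is illustrative only.

def twofold_series(highest_mg_ml, n_wells):
    return [highest_mg_ml / 2 ** i for i in range(n_wells)]

def mic(concentrations, growth_flags):
    """Lowest concentration showing no growth; None if growth occurs at
    every tested concentration."""
    inhibitory = [c for c, grew in zip(concentrations, growth_flags) if not grew]
    return min(inhibitory) if inhibitory else None

concs = twofold_series(1.0, 6)  # 1.0 ... 0.031 mg/ml in-test range
growth = [False, False, False, False, True, True]
print(mic(concs, growth))       # -> 0.125 mg/ml for this example
```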
For the determination of the minimum bactericidal concentration (MBC), 20 μl of culture medium from the microtitre plate wells that showed no change in color was re-inoculated on MH agar plates. After 24 h of incubation at 37°C, MBCs were determined as the lowest concentration that yielded no bacterial growth on the MH agar plates. The MIC and MBC determinations were performed three times in duplicate.
Inhibition of biofilm formation
The ability of the E. hirta extract to inhibit biofilm formation was measured using a microplate-based assay modified from Stepanovic et al. [20]. Briefly, the bacterial cells were grown in tryptic soy broth (TSB) at 37°C overnight. The cultures were then harvested by centrifugation (14,000 rpm, 15 min at 4°C) and rinsed with phosphate-buffered saline (PBS, pH 7.4). The washed bacterial cells were then resuspended in tryptic soy broth to approximately 1 × 10^7 CFU/ml (determined by OD and plate count assay). A two-fold serial dilution of the E. hirta extract was made with TSB to achieve concentrations ranging from 0.062-2.0 mg/ml. An amount of 40 μL of the E. hirta extract solution was then pre-mixed with the bacterial inocula (360 μL) to attain final concentrations ranging from 0.031-1.0 mg/ml. One hundred microliters of this mixture for each concentration was added to three separate wells in the 96-well microplates for replicate testing. Wells containing a mixture of dilute DMSO and inoculum were included as controls, with a DMSO final concentration of 1% (v/v). After 24 h of incubation at 37°C, the supernatant containing TSB and planktonic cells was gently removed from the microplate wells, and the wells were washed twice with 150 μL of PBS. After rinsing, 30 μl of the INT reagent was added, and the suspension was incubated for 4 h at 37°C. The minimum biofilm inhibitory concentration (MBIC), defined as the lowest concentration of an antimicrobial agent required to inhibit the formation of biofilms, was determined by observing the color change of INT in the microplate wells as described above. The non-biofilm-forming clinical isolate of S. aureus was used as a negative control, and wells containing the clinically established drug (cefepime) were used as positive controls. The bioassay was performed in triplicate.
Eradication of biofilms
In order to test the ability of the E. hirta extract to eradicate formed biofilms, a modified microdilution assay described by LaPlante [21] was employed. Microbial biofilms were developed in flat-bottom 96-well clear microtitre plates. A volume of 30 μl of a washed suspension of bacterial cells grown to the mid-exponential phase in tryptic soy broth (1.0 × 10^5 CFU/ml) was inoculated into fresh tryptic soy broth at 150 μL/well. The cultured microtitre plates were incubated at 37°C to permit the microbial cells to form biofilms. After 24 hours of biofilm growth, the wells were rinsed with PBS (pH 7.4) to remove nonadherent cells. The biofilms established for 24 hours in each well were subsequently treated with two-fold serial dilutions of the E. hirta extract (1 mg/ml as the highest in-test concentration). The non-biofilm-forming isolate was used as a negative control, and wells containing clinically established drugs were used as positive controls. The microtitre plates were sealed with parafilm tape and incubated overnight at 37°C. The minimal biofilm eradication concentration (MBEC), defined as the lowest concentration of an antimicrobial agent required to eradicate the biofilm, was determined by observing the color change of INT in the microplate wells, as per the MIC determination method. The bioassay was performed in triplicate.
Statistical analyses
The results were analyzed using the Statistical Package for the Social Sciences (SPSS) software, version 17.0. All values are expressed as means ± standard deviation (SD) from triplicate experiments.
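For completeness, the triplicate summary reduces to the usual mean and sample standard deviation; a one-line sketch with placeholder readings:

```python
# Mean +/- sample SD of triplicate readings (placeholder values).
import statistics

replicates = [0.25, 0.25, 0.5]
print(statistics.mean(replicates), statistics.stdev(replicates))
```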
Growth inhibition of planktonic cells
Initially, the methanol extract of E. hirta was assessed for growth inhibitory ability against planktonic cells of the clinical isolates and standard strains. The extract eradicated the established biofilm of the clinical isolate of P. aeruginosa, with a minimum biofilm eradication concentration (MBEC) value of 0.5 mg/ml. The methanol extract also showed weak anti-biofilm activity against E. faecalis and P. aeruginosa (ATCC 27853), exhibiting MBEC values of 1.0 mg/ml. Nevertheless, the E. hirta extract displayed no capability to disrupt the established biofilms of the rest of the selected pathogens at the highest test concentration of 1 mg/ml. Cefepime was found to be ineffective in eradicating the biofilms of all the tested pathogens except for the clinical isolate of P. aeruginosa, with an MBEC value of 0.5 mg/ml.
Discussion
Biofilm infections represent serious health threats worldwide today, mostly due to the appearance of antibiotic-resistant strains. Contemporary testing of the minimum inhibitory concentration (MIC), which measures only planktonic susceptibility, may be a possible explanation for treatment failures and the development of resistance among bacterial biofilms. In the present study, the results of the MIC, MBC, MBIC and MBEC determinations have highlighted the interesting activity of E. hirta. The phytochemicals present in the crude methanolic extract of E. hirta play an important role in the evident antibacterial and anti-biofilm activity.
Medicinal plants are rich in secondary metabolites, some of which are directly involved in plant defense mechanisms against microorganisms [22]. In the current investigation, GC-MS analysis of the methanol extract of the E. hirta aerial part was conducted with the intention of correlating the phytochemical compounds with the antibacterial and anti-biofilm activity. The GC-MS profile of the methanol extract of E. hirta revealed the presence of nineteen compounds. The most abundant class of phytocompounds was the terpenoids. Terpenoids are the largest group of natural plant products, many of which are well acknowledged as potential antimicrobial agents. It is plausible that the observed broad-spectrum antibacterial activity of the methanolic extract of E. hirta is attributable to the dominant presence of some terpenoids (cycloartenol, squalene, α-amyrin and β-amyrin). Previous studies have shown that α-amyrin and β-amyrin have the potential to inhibit the growth of some microbes [23,24]. These terpenoids are most likely involved in the detachment of planktonic cells from the biofilm. The terpenoids probably influenced membrane integrity in all the organisms and helped to eradicate most biofilm cells. The significant reduction in cell attachment makes terpenoids ideal anti-adhesive compounds. Terpenes have also been frequently reported to be active against bacteria [25]. The mechanism of action of terpenes is thought to involve membrane disruption by the lipophilic compounds, thereby inhibiting respiration and ion transport processes in the bacterial cells [22]. De Carvalho et al. [26] showed that natural compounds such as terpenes can be used to prevent cell aggregation and biofilm formation. Terpenes are believed to influence the fatty acid composition of the cell membrane, and thus cell hydrophobicity, which leads to the eradication of the biofilm. Additionally, synergy, antagonism or additive effects among the major phytocompounds found in the crude extract may also underlie the apparent anti-biofilm activity. The standard antibiotic (cefepime) used in this study showed potent antibacterial and anti-biofilm activity, owing to the fact that it was applied in pure form, in contrast to the crude extract, which is a complex mixture of compounds. The MIC of the standard antibiotic used as a positive control is given in Table 2. GC-MS analysis thus helps in understanding the phytocompounds with remedial value in E. hirta.
To begin our investigation of the antibacterial activities, the growth inhibitory effect of the E. hirta methanolic extract was determined first on planktonic cultures of resistant clinical isolates and standard strains. The E. hirta extract exerted a broad antibacterial spectrum, with substantial potency against both the Gram-positive and Gram-negative bacteria tested. The standard strains were less susceptible to the E. hirta methanol extract than the resistant clinical isolates, possibly because ATCC strains are well-characterized cultures with greater stability towards antimicrobials. The strongest inhibition was recorded against the Gram-negative P. aeruginosa, followed by moderate antibacterial activity against the Gram-positive E. faecalis and B. cereus. The different cell wall susceptibilities amongst bacteria may be the key contributor to the various MIC and MBIC values. According to Fennel et al. [27], Gram-positive bacteria are often found to be more susceptible to plant extracts than Gram-negative bacteria. It is well known that the outer membrane, present only in Gram-negative bacteria, plays an important role as an effective barrier. However, in this study, the prominent sensitivity of P. aeruginosa toward the methanolic extract of E. hirta may possibly be due to membrane permeability. Similar results for the effect of essential oils on outer membrane permeability in Gram-negative bacteria were reported by Helander et al. [28]. Although Gram-positive bacteria lack an outer membrane, their thicker cell wall, consisting of several peptidoglycan layers, can act as a functional barrier and thus hinder the penetration of antimicrobial compounds into the bacterial cell [29]. The E. hirta extract displayed a distinctly bactericidal mode of action against most of the bacteria tested. The bactericidal activity is confirmed by the obtained MBC values, which are usually two to four times higher than the corresponding MICs. The MIC and MBC values reported in this work were lower than those obtained in other studies involving the E. hirta plant [10,30].
In contrast to the growth inhibitory effect on planktonic cells, biofilms of P. aeruginosa were less susceptible to the E. hirta extract, with an MBIC value of 0.25 mg/ml. Bacteria living in biofilms are often more difficult to eradicate than those in the planktonic mode of growth. Planktonic cells form biofilms by adhering strongly to each other via the formation of pili. Apparently, in this study, a pilicide action of the E. hirta methanolic extract might be the reason for the inhibition of P. aeruginosa biofilm growth. Besides pili formation, bacteria also use quorum sensing to coordinate the formation of biofilms. Quorum sensing (QS) is a cell-to-cell signaling mechanism that is often linked to the establishment of complex communities of bacteria; the QS present between the bacterial inhabitants leads to the development of biofilms [31]. The opportunistic pathogen P. aeruginosa uses quorum sensing to coordinate the formation of biofilms, swarming motility, exopolysaccharide production, and cell aggregation [32]. The eradication of the P. aeruginosa biofilm was interpreted to suggest that the methanolic extract of E. hirta displays QS inhibitory activity. MBIC and MBEC values of the E. hirta methanol extract against clinical isolates and standard strains are documented for the first time in this study. This study lays a very sturdy foundation for future investigations in pursuit of new anti-infective agents.
Conclusions
In conclusion, the findings from this study seem to validate the traditional use of the E. hirta plant for the treatment of ailments caused by infectious agents. The interesting biofilm inhibitory and eradication activity found against the P. aeruginosa biofilm makes this ethnomedicinal plant an outstanding candidate for nosocomial infection therapy. Moreover, the E. hirta plant may also benefit hospitals and healthcare facilities as a biofilm control agent for the prevention of contamination of medical devices. The antibacterial and anti-biofilm activities of the crude extract of E. hirta could be further improved through appropriate bioactivity-guided fractionation steps. This approach may help to discover and develop distinctive chemical entities with the above promising biological activities.
"year": 2013,
"sha1": "4af6035a6fb1c3fa6f319ff67e7aeb99fadbe806",
"oa_license": "CCBY",
"oa_url": "https://bmccomplementalternmed.biomedcentral.com/track/pdf/10.1186/1472-6882-13-346",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3b6f9d620ed873e08a3910a2a52f831b8023ed8a",
"s2fieldsofstudy": [
"Chemistry",
"Environmental Science",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
15801946 | pes2o/s2orc | v3-fos-license | Accessibility percolation and first-passage site percolation on the unoriented binary hypercube
Inspired by biological evolution, we consider the following so-called accessibility percolation problem: The vertices of the unoriented $n$-dimensional binary hypercube are assigned independent $U(0, 1)$ weights, referred to as fitnesses. A path is considered accessible if fitnesses are strictly increasing along it. We prove that the probability that the global fitness maximum is accessible from the all zeroes vertex converges to $1-\frac{1}{2}\ln\left(2+\sqrt{5}\right)$ as $n\rightarrow\infty$. Moreover, we prove that if one conditions on the location of the fitness maximum being $v$, then provided $v$ is not too close to the all zeroes vertex in Hamming distance, the probability that $v$ is accessible converges to a function of this distance divided by $n$ as $n\rightarrow\infty$. This resolves a conjecture by Berestycki, Brunet and Shi in almost full generality. As a second result we show that, for any graph, accessibility percolation can equivalently be formulated in terms of first-passage site percolation. This connection is of particular importance for the study of accessibility percolation on trees.
Introduction
A number of recent papers [4-10] have studied a percolation problem known as accessibility percolation, based on ideas of Kauffman and Levin for modeling biological evolution [1]. In its simplest form, accessibility percolation consists of a graph G = (V, E), or more generally a digraph, together with a fitness function ω : V → R generated according to some random distribution. This is thought of as representing the landscape of possible evolutionary trajectories of a species. The vertices in G represent the possible genotypes for an organism, whose fitness is a measure of how successful an individual of that genotype is, and the edges represent the possible ways the genome can change subject to a single mutation. Here it makes sense to consider both directed and undirected edges, depending on whether or not a given mutation is reversible. Of primary concern is the existence or distribution of so-called accessible paths.
Definition. Let G = (V, E) be a graph and ω : V → R a fitness landscape. We say that a path v_0, v_1, …, v_l in G is accessible if the fitnesses are strictly increasing along it, that is, if ω(v_0) < ω(v_1) < … < ω(v_l). For v, w ∈ V we say that w is accessible from v if there exists an accessible path from v to w.
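To make the definition concrete, here is a minimal Python sketch (ours, not part of the paper) that tests accessibility. Since fitnesses must strictly increase along an accessible path, such a path can never revisit a vertex, so a naive recursive search terminates.

```python
def is_accessible(path, fitness):
    """A path is accessible if fitnesses strictly increase along it."""
    return all(fitness[a] < fitness[b] for a, b in zip(path, path[1:]))

def accessible(graph, fitness, v, w):
    """True if w is accessible from v; graph[u] lists the (out-)neighbours
    of u.  Recursion is finite because fitness strictly increases."""
    if v == w:
        return True
    return any(fitness[x] > fitness[v] and accessible(graph, fitness, x, w)
               for x in graph[v])
```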
For the distribution of ω we will in this paper consider two variations of Kingman's House-of-Cards model [3], both of which have previously been considered in accessibility percolation. In fact, all results in [6-10] consider some variation of the House-of-Cards model, whereas [4] and [5] also consider the so-called Rough Mount Fuji model. The first model we will consider here is the original formulation of the House-of-Cards model, in which the ω(v):s are independent and U(0, 1)-distributed for all v ∈ V. Kauffman and Levin refer to this as an uncorrelated landscape. For the second distribution we modify the House-of-Cards model by introducing an a priori global fitness maximum v̂ ∈ V by changing ω(v̂) to one. As accessibility percolation only depends on the relative order of fitnesses, this can be seen as equivalent to conditioning the House-of-Cards model on v̂ being the global fitness maximum. In particular, if v̂ is chosen uniformly at random among V, then this is equivalent to the House-of-Cards model with v̂ denoting the global fitness maximum.
Our first main result considers accessibility percolation on the unoriented n-dimensional binary hypercube. The question of primary concern is whether or not there exists an accessible path from the all-zeroes vertex, 0̄, to the fitness maximum v̂. We prove that, provided v̂ is not too close to 0̄ in Hamming distance, the probability that such a path exists converges to a non-trivial function of the Hamming distance between v̂ and 0̄ divided by n, confirming a conjecture by Berestycki, Brunet and Shi [7] in almost full generality.
As a second result, we show that accessibility percolation for a general graph can be equivalently formulated in terms of first-passage site percolation. This lets us reformulate previous results in the literature in terms of first-passage site percolation. In particular, this relation has important implications for accessibility percolation on trees, as studied in [6, 8-10].
1.1. Notation.
• Whenever talking about a general graph G = (V, E), we allow both undirected and directed edges. For vertices u, v ∈ V, we write u ∼ v if there is either an undirected edge between u and v or a directed edge going from u to v.
• The unoriented n-dimensional binary hypercube, denoted by Q_n, is the graph whose vertices are the binary n-tuples {0, 1}^n and where two vertices share an edge if their Hamming distance is one. The oriented n-dimensional binary hypercube, Q⃗_n, is the directed graph obtained by directing each edge in Q_n towards the vertex with more ones.
• For a vertex v in the hypercube we let |v| denote the number of coordinates of v that are one. Addition and subtraction of vertices in Q_n denote coordinate-wise addition/subtraction modulo two. We let 0̄ and 1̄ denote the all-zeroes and all-ones vertices respectively, and let e_1, …, e_n denote the standard basis.
• Often when considering the House-of-Cards model, it is useful to condition on the fitness of 0̄. Following the convention in [6, 7], for any α ∈ [0, 1] we let P_α(·) and E_α[·] denote conditional probability and expectation respectively, given ω(0̄) = α.
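For computations it is convenient to encode vertices of Q_n as n-bit integers; the following Python sketch (our own encoding, not from the paper) realizes the conventions above, with |v| as the popcount and coordinate-wise addition modulo two as XOR.

```python
def weight(v):                    # |v|, the number of one-coordinates
    return bin(v).count("1")

def neighbours(v, n):             # Hamming distance one <=> flip one bit
    return [v ^ (1 << i) for i in range(n)]

def up_neighbours(v, n):          # edges of the oriented cube point "up"
    return [v | (1 << i) for i in range(n) if not v >> i & 1]

n = 4
zero, one = 0, (1 << n) - 1       # the all-zeroes and all-ones vertices
assert weight(one) == n and all(weight(u) == 1 for u in neighbours(zero, n))
```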
1.2. Recent work.
Let us take a moment to summarize the results for accessibility percolation on the binary hypercube with House-of-Cards fitnesses in [5-7]. We start by considering the simplified version of the problem where we replace Q_n by Q⃗_n. This is equivalent to only considering paths without backwards mutations. As any coordinate where v̂ is zero will be constantly zero along any such path, it suffices to consider the case where v̂ = 1̄.
Let X denote the number of oriented paths from 0̄ to 1̄ which are accessible. As there are n! oriented paths from 0̄ to 1̄, and each path is accessible if and only if the n random fitnesses along the path are in ascending order, we see that EX = 1. At first glance, this may seem to imply a positive probability of accessible paths existing. However, a much clearer picture of what occurs is obtained by conditioning on the fitness of the starting vertex. Indeed, conditioned on the fitness of 0̄ being α ∈ [0, 1], we have

(1.2) E_α X = n(1 − α)^{n−1}.
We see that, for large n, this expression is 1 approximately at α = (ln n)/n, and rapidly decreasing as α increases. Informally, this means that unless the fitness of the starting vertex is below (ln n)/n, accessible paths are highly unlikely. In fact, by considering (1.2) a bit more closely it follows that P(X ≥ 1 ∧ ω(0̄) > (ln n)/n) ≤ 1/n. On the other hand, the regime where α is smaller than (ln n)/n turns out to be more difficult to treat. In [5] it was shown by Hegarty and the author that the probability of accessible paths in this case tends to 1 as n → ∞.
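The identity (1.2) is easy to check numerically. The sketch below (ours) counts the accessible oriented paths exactly for one sampled landscape by dynamic programming over subsets, and compares the Monte Carlo average with n(1 − α)^{n−1}.

```python
import random

def count_accessible_paths(n, alpha, rng):
    """Number of accessible oriented paths from the all-zeroes to the
    all-ones vertex for one House-of-Cards landscape with w(0) = alpha."""
    full = 1 << n
    w = [rng.random() for _ in range(full)]
    w[0] = alpha
    paths = [0] * full
    paths[0] = 1
    for s in range(1, full):              # subsets in increasing order
        paths[s] = sum(paths[s ^ (1 << i)] for i in range(n)
                       if s >> i & 1 and w[s ^ (1 << i)] < w[s])
    return paths[full - 1]

rng = random.Random(0)
n, alpha, trials = 8, 0.1, 4000
est = sum(count_accessible_paths(n, alpha, rng) for _ in range(trials)) / trials
print(est, n * (1 - alpha) ** (n - 1))    # the two values should be close
```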
This theorem was later strengthened by Berestycki, Brunet and Shi in [6] who proved that, in the special case where ω(0̄) = O(1/n), X has a non-trivial limit distribution when scaled appropriately.
Let us now switch back to the unoriented hypercube and see how this analysis changes. Again, let X denote the number of accessible paths from 0̄ to v̂. Here, paths are not as combinatorially well-behaved as for the oriented cube, and first moment estimates are not as easy to come by. Nevertheless, in a recent paper by Berestycki, Brunet and Shi [7] it was shown that E_α X has the following asymptotic behavior:

Theorem 1.2. (Berestycki, Brunet, Shi) Let α ∈ [0, 1] be fixed, and let v̂ = v̂_n ∈ Q_n be such that x := lim_{n→∞} |v̂_n|/n exists. We have that, as n → ∞,

(1.6) (1/n) ln E_α X → x ln sinh(1 − α) + (1 − x) ln cosh(1 − α).

As a consequence, for each x there is a critical value α*(x) = 1 − ϑ(x) for the fitness of 0̄, where ϑ(x) is given by the unique non-negative solution to

(1.7) x ln sinh ϑ + (1 − x) ln cosh ϑ = 0.

We see a similar behavior of E_α X as for the oriented cube. One important difference though is that, unlike for the oriented cube, the critical value has a non-trivial limit as n → ∞. The function ϑ(x) is plotted in Figure 1. This function is continuous and increasing, with ϑ(0) = 0 and ϑ(1) = ln(1 + √2) ≈ 0.88. In particular, it follows that if the a priori global fitness maximum is 1̄, then the critical fitness is 1 − ln(1 + √2) ≈ 0.12, and if it is chosen uniformly at random then |v̂|/n will be tightly concentrated around 1/2 and hence the critical fitness is 1 − (1/2) ln(2 + √5) ≈ 0.28. Berestycki et al. further gave two conjectures stating that (1.6) "tells the truth" in the sense that P_α(X ≥ 1) tends to 1 as n → ∞ for α < 1 − ϑ(x). Conjecture 1 of their paper proposes this in the special case where v̂ = 1̄, and Conjecture 2 in the more general setting of v̂ = v̂_n satisfying |v̂_n|/n → x ∈ [0, 1].
1.3. Results.

The first result of this paper fully resolves Conjecture 1 by Berestycki, Brunet and Shi [7], and Conjecture 2 under the additional condition that x is not too small.

Theorem 1.3. Let v̂ = v̂_n ∈ Q_n be a sequence of vertices such that x := lim_{n→∞} |v̂_n|/n exists. Let X denote the number of accessible paths from 0̄ to v̂. Let ϑ(x) be as defined in Theorem 1.2. Assuming x ≥ 0.002, we have P_α(X ≥ 1) → 1 as n → ∞ for every fixed α < 1 − ϑ(x). In particular, if v̂ = 1̄, then P(X ≥ 1) → 1 − ln(1 + √2) as n → ∞, and if v̂ is chosen uniformly at random, then P(X ≥ 1) → 1 − (1/2) ln(2 + √5).

The value 0.002 deserves some explanation. In the proof of Theorem 1.3, or more accurately the proof of Theorem 1.6 below which is shown to be equivalent to the former, we see that there is a value x* ≈ 0.00167 such that the proof goes through whenever x > x* and breaks down when x < x*, see Remark 4.8. It seems likely however that this is simply an artifact of the technique used in the proof, and that the statement should hold true even for smaller x. Regardless of whether or not this is true, we can note that the two cases of most concern, x = 1 and x = 0.5, are far above x*.
We now turn to the relation between accessibility percolation and first-passage site percolation for a general graph. Let G = (V, E) be a graph with a distinguished vertex 0̄. Note that each edge of G may either be directed or undirected. For each vertex v ∈ G randomly assign a cost, denoted by c(v), according to independent U(0, 1) random variables. For a path u_0, u_1, …, u_l in G we define the site passage time of the path by c(u_1) + c(u_2) + … + c(u_l),

Figure 2. Example of a graph where accessible paths have a different distribution than paths with small passage time. We can for instance note that there can never be exactly three accessible paths from 0̄ to v, whereas there can certainly be exactly three paths with reduced passage time at most 1 − α.
and similarly define its reduced site passage time by c(u_1) + c(u_2) + … + c(u_{l−1}). Note that neither the passage time nor the reduced passage time of a path includes the cost of the first vertex. For each u, v ∈ G we define the site first-passage time from u to v, denoted by T_V(u, v), and the reduced site first-passage time from u to v, denoted by T̃_V(u, v), as the minimum of the respective quantity over all paths from u to v.

Theorem 1.4. Consider accessibility percolation on G, either with or without an a priori global fitness maximum. For any vertex v ∈ V \ {0̄} we have

(1.14) P_α(v accessible from 0̄) = P(T_V(0̄, v) ≤ 1 − α).

Moreover, in the latter case this claim can be significantly strengthened. Conditioned on the fitness of 0̄ being α, the set of vertices accessible from 0̄ has the same distribution as the set of vertices v such that T_V(0̄, v) ≤ 1 − α.
Informally we can think of this theorem as saying that accessibility percolation is equivalent to first-passage site percolation with independent U(0, 1) vertex passage times. We need to be a bit careful here though; the theorem only deals with the question of whether or not a certain vertex is accessible from 0̄ along any path, and it does not for instance say anything about the number of accessible paths. Indeed, it is not true in general that the number of accessible paths from 0̄ to v is distributed as the number of paths from 0̄ to v with reduced passage time at most 1 − α. For graphs containing non-simple paths this is clear, as non-simple paths can have arbitrarily small passage time but cannot be accessible, but it can even be false for directed acyclic graphs, see for instance Figure 2. On the other hand, the connection is more general than just treating which vertices are accessible. For instance, using the proof ideas in Section 2 one can show that the minimal number of times you need to move to a less fit vertex to get from 0̄ to v is distributed as the integer part of T_V(0̄, v) + α.
A problem with using Theorem 1.4 to relate known results from first-passage percolation to accessibility percolation is that the vast majority of the first-passage percolation literature assigns passage times to edges rather than vertices. However, a common property of percolation problems is that it is harder to percolate on vertices than on edges [2]. The following proposition shows that something similar holds for first-passage percolation.

Proposition 1.5. Suppose the edges of G are assigned independent U(0, 1) weights. Let T_E(u, v) denote the minimum total weight of any path from u to v in G. Then it is possible to couple the edge weights and the vertex costs in such a way that T_E(0̄, v) ≤ T_V(0̄, v) for all v ∈ V.

For the special case when G is a rooted tree one can see that this coupling is exact; to go from site to bond percolation we can simply consider the passage time of each vertex to instead be assigned to the edge leading to it. Accessibility percolation on trees has been considered in [6, 8-10]. With the exception of [6], these articles have considered regular rooted trees with degree n and height h, where fitnesses are assigned according to the House-of-Cards model conditioned on the fitness of the root being zero. Of principal concern is how the number of vertices in generation h that are accessible from the root varies as a function of n, and in particular whether this number is non-zero. Using Theorem 1.4 we can see that this is equivalent to assigning independent U(0, 1) passage times to the edges of the tree and considering the number of vertices v in generation h such that T_E(0̄, v) ≤ 1. In particular, the question of whether generation h is accessible from 0̄ is equivalent to asking if the first-passage time from the root to generation h is at most 1. It should be mentioned however that the usual setting in first-passage percolation on regular rooted trees keeps n fixed and considers the first-passage time from the root to generation h as h → ∞. While the author is not aware of any results from this field that have appropriate error bounds to be directly applicable to accessibility percolation, there seems to be a significant overlap of ideas between [9, 10] and the literature on first-passage percolation on trees. See for instance [13].
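On a rooted tree the exact coupling mentioned above is a one-line transformation. A small Python sketch of ours, moving each vertex's cost onto the edge leading into it, as described in the text:

```python
def site_to_bond(children, cost, root):
    """Assign each non-root vertex's cost to the edge from its parent, so
    that bond and site passage times from the root coincide on the tree.
    children[u] lists the children of vertex u."""
    edge_weight = {}
    stack = [root]
    while stack:
        u = stack.pop()
        for v in children[u]:
            edge_weight[(u, v)] = cost[v]   # c(v) paid on the edge (u, v)
            stack.append(v)
    return edge_weight
```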
Let us now consider the implications of Theorem 1.4 for the hypercube. Using this result, we can immediately translate the result from Theorem 1.1 to say that, for the oriented hypercube, T_V(0̄, 1̄) is concentrated around 1 − (ln n)/n with fluctuations of order 1/n. More importantly, we have that the following is equivalent to Theorem 1.3:

Theorem 1.6. Let G = Q_n and let v̂ = v̂_n ∈ Q_n be a sequence of vertices such that x := lim_{n→∞} |v̂_n|/n exists. Assuming x ≥ 0.002, as n → ∞ we have

T̃_V(0̄, v̂_n) → ϑ(x)

in probability.
Note here that the fact that ϑ(x) is an asymptotic lower bound on the reduced passage time is already implied by Theorem 1.2.
It should be mentioned that basically the same result holds true for bond percolation. In [11] it was shown that for the oriented hypercube, we have T_E(0̄, 1̄) → 1 in probability as n → ∞. In a more recent result by the author [12], it was shown that for the unoriented hypercube T_E(0̄, 1̄) → ln(1 + √2) as n → ∞. Strictly speaking these results assume standard exponential edge weights, but it is not too hard to show that the limiting distribution of T_E(0̄, 1̄) only depends on the weight distribution through the right-hand limit of its probability density function at 0, hence it will be the same for U(0, 1) weights.
The remainder of the paper will be structured as follows: In Section 2 we prove Proposition 1.5 and Theorem 1.4. The remaining sections, Sections 3, 4 and 5, are dedicated to the proof of Theorem 1.6.
2. Proof of Proposition 1.5 and Theorem 1.4
We may, without loss of generality, assume that for any vertex v there exists a path from 0̄ to v.
A key idea of the proofs of Proposition 1.5 and Theorem 1.4 is the following procedure for computing T_V(0̄, v). We initially consider T_V(0̄, v) to be unassigned for each v, except 0̄ for which it is set to 0, and we let U = {0̄} denote the set of vertices with assigned first-passage times. Until T_V(0̄, v) is assigned for all v, we do the following operation: (1) find a pair of vertices u ∈ U and v ∉ U with u ∼ v that minimizes T_V(0̄, u) + c(v); (2) assign T_V(0̄, v) this minimal value and add v to U. To see that this assigns first-passage times correctly, suppose that we are in the step where T_V(0̄, v) is assigned. As v is not in U, the passage time of any path from 0̄ to v must include the passage time from 0̄ to some vertex u in U adjacent to some vertex outside U, as well as the cost c(v).
As there is a path from 0̄ to v with passage time T_V(0̄, u) + c(v), this must be optimal. Hence, if all previous assignments are correct, T_V(0̄, v) will be assigned correctly as well.
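The procedure is Dijkstra's algorithm with vertex costs in place of edge weights. A runnable Python sketch of ours, using a binary heap:

```python
import heapq

def site_first_passage(graph, cost, source):
    """Site first-passage times from source; the source's own cost is not
    counted, and c(v) is paid upon entering v.  graph[u] iterates over the
    (out-)neighbours of u."""
    T = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        t, u = heapq.heappop(heap)
        if t > T[u]:
            continue                          # stale heap entry
        for v in graph[u]:
            tv = t + cost[v]
            if tv < T.get(v, float("inf")):
                T[v] = tv
                heapq.heappush(heap, (tv, v))
    return T
```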
Proof of Proposition 1.5. We can modify this algorithm to run on first-passage bond percolation by replacing c(v) by the weight of the edge from u to v. In either case, as no vertex cost or edge weight respectively is accessed more than once, the accessed values form a sequence of independent U(0, 1) random variables. Hence the distribution of T_V(0̄, v) is unaffected. On the other hand, for bond percolation we get that T_V(0̄, v) is the edge passage time of some path from 0̄ to v (but not necessarily the shortest).
We now turn to the proof of Theorem 1.4. The coupling between first-passage site percolation and accessibility percolation we will consider is essentially to let f(v) = {α + T_V(0̄, v)} be the fitness function, where {x} = x − ⌊x⌋ denotes the fractional part of x. We will however modify this slightly by putting f(v) = 1 whenever α + T_V(0̄, v) is a positive integer. It is clear that the probability of such v other than 0̄ existing is 0, so this modification only changes the distribution of f on a null set. It is not too hard to see that, for any vertex v except 0̄, f(v) is U(0, 1)-distributed. The following lemma shows that the f(v):s are also independent, hence showing that f is distributed according to the House-of-Cards model without an a priori global fitness maximum, conditioned on f(0̄) = α.
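A sketch of this coupling in code (ours; it reuses the site_first_passage sketch above and ignores the measure-zero boundary modification):

```python
import random

def coupled_fitness(graph, source, alpha, rng):
    """Draw U(0,1) vertex costs, compute site first-passage times, and set
    f(v) to the fractional part of alpha + T_V(source, v)."""
    cost = {v: rng.random() for v in graph}
    T = site_first_passage(graph, cost, source)
    return {v: (alpha + t) % 1.0 for v, t in T.items()}

# Example: f = coupled_fitness(graph, 0, 0.3, random.Random(1))
```

Along an optimal path the fitness grows by c(v) at each step, so it is strictly increasing exactly as long as α + T_V(source, v) has not wrapped past 1; this is the mechanism behind (1.14).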
Proof. Suppose that we generate vertex costs in the following way: Run the procedure above, but with the modification that whenever the algorithm tries to access c(v), first generate a U(0, 1) random variable f̃(v) and assign c(v) the value {f̃(v) − α − T_V(0̄, u)}, where u is the vertex from which v is reached. It is clear that the c(v):s are independent and U(0, 1)-distributed. The lemma follows by noting that, with this construction, f(v) = f̃(v).

Proof of Theorem 1.4. We begin by considering the case with no a priori global fitness maximum. In this case, we can consider f : V → R to be the fitness function. For simplicity let us assume that no vertex cost is exactly 0.
Hence the path is not accessible. Now for the case where v̂ ∈ V \ {0̄} is the a priori global fitness maximum. We here keep the same coupling as before between f(v) and the vertex costs for v ≠ v̂, and let U denote the set of neighbors of v̂. Then v̂ being accessible from 0̄ is almost surely equivalent to some vertex in U being accessible from 0̄. Note that this last statement does not depend on the value of f(v̂). It follows that v̂ being accessible from 0̄ is almost surely equivalent to T̃_V(0̄, v̂) ≤ 1 − α.
3. The Clustering Translation Process
Before proceeding, we will slightly modify T̃_V(0̄, v̂) by replacing the U(0, 1) vertex costs by independent standard exponential ones. Note that the standard exponential distribution stochastically dominates U(0, 1), and hence this modification will only increase T̃_V(0̄, v̂). As the lower bound in Theorem 1.6 follows from Theorem 1.2, it suffices to show that, with this modification, asymptotically almost surely T̃_V(0̄, v̂) ≤ ϑ(x) + o(1). To do this, we will mimic the argument in [12] for first-passage bond percolation on Q_n.
Let us take a moment to describe some of the underlying machinery for first-passage bond percolation on Q_n. We assume independent standard exponential edge weights. In [11], Durrett introduced the following process, which he called the branching translation process, BTP: At time 0 we place one particle at 0̄ in Q_n. The system then evolves by each existing particle independently generating offspring at each vertex adjacent to its position at rate 1. One can show that for each vertex v ∈ Q_n, the time at which the first particle at v is born is stochastically dominated by T_E(0̄, v). This follows from the fact that the BTP dominates the so-called Richardson's model. The strategy in [12] is basically to show that, with a certain coupling, there is a probability bounded away from zero of these quantities being equal.
In order to translate this approach to first-passage site percolation, we need to find a corresponding process to the BTP for this case. We claim that the following is such a process: We initially have a finite number of particles, each located at a vertex in Q n . For each particle, we assign an independent Poisson clock with unit rate. When a particle's clock goes off, it simultaneously generates one new offspring at each vertex adjacent to its position. The new particles are then assigned new Poisson clocks and the process continues. We will refer to this process as the clustering translation process, CTP.
We see that in both the BTP and the CTP each particle generates offspring at each neighboring vertex at rate 1. A big difference however is that in the BTP this is done independently for each neighboring vertex, whereas in the CTP a particle generates offspring at all neighboring vertices simultaneously. Another difference is that the initial state of the CTP is not fixed.
The most important initial state of the CTP will be one particle at each neighbor of 0̄. We will refer to a CTP initialized in this way as a standard CTP. Particles born due to the same Poisson clock tick will be referred to as identical n-tuplets. To simplify terminology we will also consider the initial n particles in a standard CTP as identical n-tuplets. Below we will use the terms ancestor and descendant of a particle to denote the natural partial order of particles generated by the CTP. For convenience, we say that a particle is both an ancestor and a descendant of itself. The terms parent and child are defined in the natural way. The ancestral line of a particle x is the ordered set of ancestors of x, and we say that the ancestral line of x follows the path 0̄ = v_0, v_1, v_2, …, v_l if the location of the ancestors of x in chronological order is given by v_1, v_2, …, v_l. Note that this path always starts at 0̄ even though the first ancestor is located at a neighbor of 0̄. We say that a particle x originates from a particle y at a time t if y is the last particle in the ancestral line of x that exists at time t.
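The standard CTP is straightforward to simulate directly for small n and small time horizons (the particle count grows exponentially in t, so t_max should be kept small). A Python sketch of ours, using the bitmask encoding from the introduction:

```python
import heapq, random

def simulate_standard_ctp(n, t_max, rng):
    """Run the standard CTP on Q_n up to time t_max and return the list of
    (birth_time, vertex) pairs.  Each particle carries a unit-rate
    exponential clock; at each tick it spawns one child at every vertex
    adjacent to its position (an identical n-tuplet)."""
    born = [(0.0, 1 << i) for i in range(n)]      # one particle at each e_i
    clocks = [(rng.expovariate(1.0), v) for _, v in born]
    heapq.heapify(clocks)
    while clocks:
        t, v = heapq.heappop(clocks)
        if t > t_max:
            break
        for i in range(n):                         # simultaneous offspring
            w = v ^ (1 << i)
            born.append((t, w))
            heapq.heappush(clocks, (t + rng.expovariate(1.0), w))
    return born
```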
We can immediately note some properties of this process. Firstly, it is Markovian. Secondly, let A be a set of vertices in Q_n, and let M_A(v, t) denote the expected number of particles at vertex v at time t ≥ 0 in the CTP initialized by placing one particle at each vertex in A. Then it is easy to see that M_A(v, t) must solve the initial value problem

(3.1) (d/dt) M_A(v, t) = Σ_{u∼v} M_A(u, t),  M_A(v, 0) = 1_{v∈A}.

In particular, if A = {0̄}, then the unique solution to this problem is

m(v, t) := (sinh t)^{|v|} (cosh t)^{n−|v|},

and it follows by linearity that for any A, we have

M_A(v, t) = Σ_{w∈A} m(v − w, t).

Recall that addition/subtraction of vertices in Q_n is interpreted as coordinate-wise addition/subtraction modulo 2. It should be remarked that the exact same analysis holds for the BTP. We now show that the standard CTP indeed has the desired relation to first-passage site percolation. To this end, we partition the particles in this process into two sets, the set of alive particles and the set of ghosts. Each initial particle is alive. Whenever a new particle is born, it is alive if its location does not already contain an alive particle and its parent is alive, and is a ghost otherwise. Note that at most one particle at each vertex can be alive. Furthermore, it is easy to show that each vertex will almost surely eventually contain an alive particle.
Proposition 3.1. Consider first-passage site percolation on Q_n with exponentially distributed costs with unit mean. It is possible to couple this process to the standard CTP in such a way that, for each vertex v except 0̄, the reduced first-passage time T̃_V(0̄, v) equals the birth time of the alive particle at v.
Proof. For each vertex v, we let T̃(v) denote the first time t ≥ 0 when v contains an alive particle, and we let c̃(v) denote the time from the birth of this particle to the first arrival of its clock. Then the c̃(v) for v ∈ Q_n are independent exponentially distributed random variables with unit mean.
From the definitions of the CTP and alive particles, it follows that for any vertex v that is not a neighbor of 0̄, the alive particle at v is born at the first arrival time of a clock of an alive particle at an adjacent vertex. Hence, for any v that is not a neighbor of 0̄, we have

T̃(v) = min_{u∼v} (T̃(u) + c̃(u)),

and trivially T̃(v) = 0 when v is a neighbor of 0̄. It is easy to see that this uniquely defines T̃(v), and that for each vertex v except 0̄, T̃(v) denotes the reduced first-passage time from 0̄ to v with respect to the vertex costs given by c̃(v).
Given this proposition, we are able to proceed analogously to Sections 2 and 3 in [12]. In applying this coupling between the CTP and first-passage site percolation we will consider a stronger and more tractable property than aliveness. For any particle x in the CTP, we let c(x) denote the number of pairs of particles y and z such that
• y and z occupy the same vertex,
• y is an ancestor of x,
• y was born after z.
We furthermore let a(x) denote the number of such pairs where z is either an ancestor of x or an identical n-tuplet of an ancestor of x, and define b(x) = c(x) − a(x). We call a particle x uncontested if c(x) = 0.
It can be noted that a(x) is defined differently for the BTP. This is because the strategy is loosely speaking to let a(x) denote the number of pairs (y, z) that deterministically must exist given x. For the CTP we have additional such pairs, namely those corresponding to identical n-tuplets of ancestors of x.
Lemma 3.2. If a particle is uncontested, then it is alive.
Proof. If a particle x is a ghost, then it must have an earliest ancestor (possibly itself) which is a ghost, y. As y is a ghost but the parent of y is alive, it follows that the location of y must have already been occupied by some (alive) particle z. The pair (y, z) is then counted in c(x).
It is not hard to see that a(x) only depends on the path followed by the ancestral line of x. If we know this path, then we know the locations and order of births of all ancestors and identical n-tuplets of ancestors of x. Let σ be a path represented as a vertex sequence. We say that σ is vertex-minimal if there is no proper subsequence which is a path with the same end points.

Lemma 3.3. Let x be a particle in the CTP. If the ancestral line of x is vertex-minimal, then a(x) = 0. The converse is true unless x is located at 0̄.
Proof. Denote the path followed by the ancestral line of x by v 0 , v 1 , . . . , v l and the ancestors of x by x 1 , x 2 , . . . , x l = x. We have that a(x) > 0 if and only if there exist 1 ≤ i < j ≤ l such that x j occupies the same vertex as either x i or an identical n-tuplet of x i , that is, v i−1 and v j are adjacent. Hence, if a(x) > 0 the path is not vertex-minimal. Conversely, if a(x) = 0 it follows that the only pairs of adjacent vertices are consecutive in the path. It is straight-forward to show that, unless the path starts and stops at the same vertex, this implies vertex-minimality.
What follows are two technical lemmas, corresponding to Lemmas 3.2 and 3.3 in [12]. Before presenting these, we need to specify how to formally describe the CTP. Firstly, by a (potential) particle we mean a word (v_1, z_1, v_2, z_2, …, v_{l−1}, z_{l−1}, v_l) where v_1, …, v_l denote vertices and z_1, …, z_{l−1} positive real numbers. This is interpreted as the particle whose ancestors are located at v_1, v_2, …, v_l and born at times 0, z_1, z_1 + z_2 and so on. The CTP is described by a random set X of potential particles, denoting the set of particles that will ever be born in the CTP. We will use ⊕ to denote concatenation of words. We remark that this representation means that the functions c(x) and b(x) are not functions only of x, and should more correctly be denoted by c(X, x) and b(X, x). On the other hand, a(x) is really a function of x as it only depends on the location of the ancestors of x.
Lemma 3.4. For 0 ≤ i ≤ l − 1 let X_i denote independent CTP:s, where X_i is the CTP obtained by initially placing one particle at each neighbor of v_i. Let f be a function that maps pairs (X, x) to the non-negative real numbers, where X is a realization of a CTP and x is a particle in X. Similarly, let V_σ(X) denote the set of particles in X whose ancestral lines follow σ. Then for a standard CTP X, the expectation of Σ_{x∈V_σ(X)} f(X, x) can be written as an iterated integral over the birth time increments z_1, …, z_{l−1} of the corresponding expectations for the processes X_i. For compactness, we will only sketch a proof. The reader unconvinced by this is referred to the proof of Lemma 3.2 in [12].
Proof sketch. Let us first consider the case when f(X, x) only depends on x. In that case, the identity holds because the original particle at v_1 gives birth to particles at v_2 at rate one whereupon, after its birth, each child at v_2 of this original particle gives birth to particles at v_3 at rate one, and so on. When f also depends on the realization of the CTP, the idea is that we substitute f(X, x) in the left-hand side of this sum by E[f(X, x) | x ∈ X]. Now, formally this conditioning does not really make sense, but its meaning is intuitively clear; it denotes the average value of f(X, x) where the average is taken over all X that include x. Now, x_{z_1,…,z_{l−1}} exists in X if and only if certain Poisson clocks have arrivals at certain times. By the independent increment property, conditioning on these arrivals does not affect the Poisson clocks at any other times. Hence, the conditional distribution of X given the existence of x_{z_1,…,z_{l−1}} is the same as that of a standard CTP, except with added arrivals, corresponding to the births of the ancestors of x_{z_1,…,z_{l−1}}. This is precisely the distribution of X_{z_1,…,z_{l−1}}.
Lemma 3.5. Let X be a CTP, and let φ be an indicator function on the set of potential particles in X. If φ(x) = 0 for all original particles in the CTP, then

P(Σ_{x∈X} φ(x) = 0) ≥ exp(−E[Σ_{x∈X} φ(x)]).

Proof. Let us refer to the set of original particles as generation one, their children as generation two and so on. Let T denote the set of birth times for particles in generation two in X, and let T′ ⊆ T be the subset obtained by including t ∈ T if there exists a particle x ∈ X such that φ(x) = 1 and x is a descendant of a particle in generation two born at time t. It is clear that |T′| ≤ Σ_{x∈X} φ(x) and that |T′| = 0 if and only if Σ_{x∈X} φ(x) = 0. By definition of the CTP, it is clear that T is a Poisson point process. Furthermore, as the event that t ∈ T is included in T′ only depends on descendants of particles in generation two born at time t, this occurs independently for each t ∈ T. Hence, by the random selection property, T′ is also a Poisson point process. This implies that

P(Σ_{x∈X} φ(x) = 0) = P(|T′| = 0) = exp(−E|T′|) ≥ exp(−E[Σ_{x∈X} φ(x)]).

For a vertex v and a time t, let S(v, t) denote the expected number of particles at v at time t with a(x) = 0, and let B(v, t) denote the expected value of the sum of b(x) over all such particles x.

Theorem 3.6. For any vertex v and any t ≥ 0, the probability that v contains an uncontested particle at time t is at least S(v, t) exp(−B(v, t)/S(v, t)).

Proof. Let P(v, t) denote the probability that v contains an uncontested particle at time t. As at most one particle at each vertex can be uncontested, this is the same thing as the expected number of uncontested particles at v at time t. For each path σ from 0̄ to v, let P_σ(v, t), B_σ(v, t) and S_σ(v, t) denote the contribution to P(v, t), B(v, t) and S(v, t) respectively from particles whose ancestral line follows σ.
The idea now is to bound P_σ(v, t) in terms of B_σ(v, t) and S_σ(v, t) for each path σ from 0̄ to v. Recall that a(x) is constant over all particles x whose ancestral line follows a fixed σ. We will denote this constant by a(σ).
We will apply Theorem 3.6 as follows: Let {v̂_n}_{n=1}^∞ be a sequence of vertices such that, for each n, v̂_n ∈ Q_n and x = lim_{n→∞} |v̂_n|/n exists and is non-zero. We may, without loss of generality, assume that v̂_n is never equal to 0̄. For each n, we let ϑ_n denote the unique non-negative solution to

(3.17) m(v̂_n, ϑ_n) = 1/n.
Note that the expected number of particles at v̂_n at time ϑ_n in a standard CTP on Q_n is Θ(1), and that ϑ_n → ϑ(x) as n → ∞. By Theorem 3.6 we have that the probability that there is an uncontested particle at v̂_n at time ϑ_n in the CTP on Q_n is at least S(v̂_n, ϑ_n) exp(−B(v̂_n, ϑ_n)/S(v̂_n, ϑ_n)). Hence by Lemma 3.2 and Proposition 3.1 it follows that

P(T̃_V(0̄, v̂_n) ≤ ϑ_n) ≥ S(v̂_n, ϑ_n) exp(−B(v̂_n, ϑ_n)/S(v̂_n, ϑ_n)).

This means that if we can show that S(v̂_n, ϑ_n) = Θ(1) and B(v̂_n, ϑ_n) = O(1), then we know that T̃_V(0̄, v̂_n) ≤ ϑ_n with probability bounded away from 0 as n → ∞. Section 4 will be dedicated to estimating S(v̂_n, ϑ_n) and B(v̂_n, ϑ_n). The proof of Theorem 1.6 is then completed in Section 5 by showing that if T̃_V(0̄, v̂_n) ≤ ϑ_n with probability bounded away from 0, then a slightly larger upper bound on T̃_V(0̄, v̂_n) must hold asymptotically almost surely.
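Numerically, ϑ_n and its limit ϑ(x) are easy to compute by bisection, since the logarithms of (3.17) and of the defining relation (1.7) are increasing in the time variable. A small numerical sketch of ours (the closed forms for ϑ(1) and ϑ(1/2) serve as sanity checks):

```python
import math

def bisect(g, lo=1e-12, hi=2.0, iters=200):
    for _ in range(iters):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

def theta(x):
    """Limiting value: solve x*ln(sinh t) + (1-x)*ln(cosh t) = 0, cf. (1.7)."""
    return bisect(lambda t: x * math.log(math.sinh(t))
                            + (1 - x) * math.log(math.cosh(t)))

def theta_n(k, n):
    """Finite-n value from (3.17): m(v, t) = 1/n with |v| = k."""
    return bisect(lambda t: k * math.log(math.sinh(t))
                            + (n - k) * math.log(math.cosh(t))
                            + math.log(n))

print(theta(1.0), math.log(1 + math.sqrt(2)))        # both ~0.8814
print(theta(0.5), 0.5 * math.log(2 + math.sqrt(5)))  # both ~0.7219
print(theta_n(500, 1000))                            # slightly below theta(0.5)
```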
4. Calculus
4.1. Estimating S. We will prove that S(v̂_n, ϑ_n) = Θ(1) in two steps. Firstly, we show that most particles at v̂_n at time ϑ_n have ancestral lines which are close to vertex-minimal. Using this, we then give a combinatorial argument that shows that a positive proportion of these particles must have vertex-minimal ancestral lines.
Let us formalize the notion of paths being close to vertex-minimal. Let v, w ∈ Q_n be fixed distinct vertices and let σ = (v = v_0, v_1, …, v_l = w) be a path from v to w. Throughout this section, we will always think of a path as a finite sequence of vertices. In particular, by the length of a path we mean the number of vertices in the path. For any 0 < i ≤ j < l we say that the subsequence v_i, v_{i+1}, …, v_j is a detour of σ if removing these elements from σ results in a valid path. Clearly, for v ≠ w a path is vertex-minimal if and only if it has no detours. Inspired by this, we say that a path is almost vertex-minimal if all detours have length at most 2. Note that as Q_n is bipartite, any detour must have even length. Hence, a path is almost vertex-minimal if it only has the shortest possible detours.
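These notions are easy to test in code. Following the characterization used in the proof of Lemma 3.3 (for distinct endpoints, the only adjacent pairs of vertices in a vertex-minimal path are consecutive ones), a small helper of ours:

```python
def is_vertex_minimal(path, adjacent):
    """For a path with distinct endpoints: vertex-minimal iff no two
    non-consecutive vertices of the path are adjacent (equivalently,
    the path has no detours)."""
    return not any(adjacent(path[i], path[j])
                   for i in range(len(path))
                   for j in range(i + 2, len(path)))

def hypercube_adj(u, v):          # bitmask vertices: XOR has popcount one
    return bin(u ^ v).count("1") == 1
```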
An important property of almost vertex-minimal paths is that any such path from v to w can be constructed by taking a vertex-minimal path with the same end-points and extending it as follows: Between each two adjacent elements in the sequence either do nothing or insert a detour of length 2.

Lemma 4.1. For any vertex w ∈ Q_n and any s, t ≥ 0 we have Σ_{u∈Q_n} m(u, s) m(w − u, t) = m(w, s + t).

Proof. Fix s. Observe that equality holds when t = 0 and that both expressions solve (3.1).

Proposition 4.2. Let {v̂_n}_{n=1}^∞ be a sequence of vertices, v̂_n ∈ Q_n, such that x = lim_{n→∞} |v̂_n|/n exists and is positive. Then, as n → ∞, the expected number of particles in the standard CTP on Q_n which are at v̂_n at time ϑ_n, but that do not have almost vertex-minimal ancestral lines, tends to 0.
Proof. Let X_n denote the number of triples of particles x, y, z in the CTP on Q_n such that
• x is at v̂_n at time ϑ_n,
• y and z are located at adjacent vertices,
• z is an ancestor of y, which is an ancestor of x,
• y and z are neither one nor three generations apart.
We note that if the ancestral line of a given particle x at v̂_n at time ϑ_n can be constructed using some detour of length d > 2, then it is clear that x would have a pair of ancestors at adjacent vertices which are d + 1 generations apart. This means that any such x is counted at least once in X_n. Hence, it suffices to show that EX_n = o(1).
For each triple x, y, z as above there are uniquely defined particles c, the particle after z in the ancestral line of x, and p, the parent of y. Note that the requirement that y is neither the child nor the great-grandchild of z implies that p is a descendant of c, but not a child of c.
Let T = {0 = t_0 < t_1 < ⋯ < t_k = ϑ_n} denote the end-points of a partition of [0, ϑ_n) into left-closed right-open subintervals, and let X_{n,T} denote the number of triples as above where c and y are the only ancestors of x born during their respective time intervals. Pick integers a, b between 0 and k − 1. Consider the number of triples counted in X_{n,T} where c is born during [t_a, t_{a+1}) and y is born during [t_b, t_{b+1}). Note that this is trivially 0 whenever b ≤ a.
Let us count the expected number of corresponding triples for a < b. As z and y are located at adjacent vertices, for each such triple we may denote the locations of z, y, c and p by v, v + e_i, v + e_j and v + e_i − e_k respectively, for some v ∈ Q_n and 1 ≤ i, j, k ≤ n. A particle is a potential z if it is born before time t_a; hence there are on average Σ_{l=1}^n m(v − e_l, t_a) potential z:s at v. For each z, a particle is a potential c if it is a child of z born during [t_a, t_{a+1}). Hence for each potential z at v, there are on average t_{a+1} − t_a potential c:s at v + e_j. For each potential c, a particle is a potential p if it originates from c at time t_{a+1} and is born before t_b, but is not a child of c. Hence for each potential c at v + e_j there are on average m(e_i − e_k − e_j, t_b − t_{a+1}) potential p:s at v + e_i − e_k if v + e_j and v + e_i − e_k are not adjacent, and m(e_i − e_k − e_j, t_b − t_{a+1}) − (t_b − t_{a+1}) if they are. Lastly, for each potential p, a particle is a potential y if it is a child of p born during [t_b, t_{b+1}), and for each potential y a particle is a potential x if it is located at v̂_n, originates from y at time t_{b+1}, and is born before time ϑ_n. Hence for each potential p at v + e_i − e_k the expected number of potential y:s at v + e_i is t_{b+1} − t_b, and for each potential y at v + e_i, the expected number of potential x:s is m(v̂_n − v − e_i, ϑ_n − t_{b+1}). Combining all of these, we obtain an expression (4.2) for EX_{n,T}, in which the sums over i, j, k and l all go from 1 to n. Letting T_1, T_2, … be a sequence of increasingly finer partitions of [0, ϑ_n] such that the length of the longest interval in T_k tends to 0 as k → ∞, it follows by monotone convergence that EX_n = lim_{k→∞} EX_{n,T_k}. Combining this with equation (4.2), and recognizing the right-hand side as a Riemann sum, we obtain an integral expression for EX_n. Using the fact that sinh t ≤ cosh t for all t ∈ R, we have

Σ_{i,j,k,l} m(v̂_n + e_i + e_l, ϑ_n − t) (m(e_i + e_j + e_k, t) − 1_{|e_i+e_j+e_k|=1} t)
  ≤ n (sinh(ϑ_n − t))^{|v̂_n|−2} (cosh(ϑ_n − t))^{n−|v̂_n|+2} Σ_{i,j,k} (m(e_i + e_j + e_k, t) − 1_{|e_i+e_j+e_k|=1} t).
It is straightforward (but messy) to show that

Σ_{i,j,k} (m(e_i + e_j + e_k, t) − 1_{|e_i+e_j+e_k|=1} t) = (cosh t)^n O(n^3 t^3).

As cosh(ϑ_n − t) cosh t ≤ cosh ϑ_n, the integrand can be bounded accordingly. Recall that by the definition of ϑ_n we have m(v̂_n, ϑ_n) = (sinh ϑ_n)^{|v̂_n|} (cosh ϑ_n)^{n−|v̂_n|} = 1/n. Define the function f(t) = ln sinh(ϑ_n − t) + ln cosh t. Note that f′(t) = −coth(ϑ_n − t) + tanh t, and f″(t) = −csch^2(ϑ_n − t) + sech^2 t. As 0 ≤ sech t ≤ 1 and csch t ≥ 1 for all 0 < t ≤ ϑ_n, it follows that f is concave, and thus for any 0 ≤ t ≤ ϑ_n we have f(t) ≤ f(0) + f′(0) t. Plugging this into equation (4.5) and integrating, as |v̂_n| ∼ x · n this implies that EX_n = O(1/n), as desired.

Proposition 4.3. For a sequence {v̂_n}_{n=1}^∞ as above, S(v̂_n, ϑ_n) = Θ(1) as n → ∞.

Proof. Let Γ_n and Γ̃_n denote the sets of vertex-minimal and almost vertex-minimal paths from 0̄ to v̂_n respectively. Using Lemma 3.4 with f(X, x) as the indicator function of x being born by time ϑ_n and having ancestral line in Γ̃_n and Γ_n respectively, we can write the expected number of particles at v̂_n at time ϑ_n in the CTP whose ancestral lines are almost vertex-minimal as a sum (4.9) over the paths in Γ̃_n. As the total expected number of particles at v̂_n at time ϑ_n in the CTP is Θ(1), Proposition 4.2 implies that the sum in (4.9) is also Θ(1).
The idea now is to group the terms of the sum in (4.9) according to which vertex-minimal path σ each term is an extension of; that is, we bound the sum in (4.9) from above by the corresponding double sum over σ ∈ Γ_n and σ̃ ⊇ σ. Here σ̃ ⊇ σ denotes that σ̃ is an extension of σ. Note that the inequality comes from the fact that σ̃ may be an extension of more than one vertex-minimal path. Let us fix a vertex-minimal path σ ∈ Γ_n consisting of l vertices. It is straightforward to show that the number of possible detours of length 2 that can be inserted between each adjacent pair of elements in σ is 3(n − 1). Hence, there are at most 3^k (n − 1)^k (l−1 choose k) ways to extend σ to an almost vertex-minimal path of length l + 2k. This means that the total weight of all extensions of σ in (4.9) is at most a constant multiple of the weight of σ itself. As any path from 0̄ to v̂_n must have length at least |v̂_n| + 1, we conclude that the contribution from vertex-minimal paths is a positive proportion of the sum in (4.9), which proves the proposition.
4.2. Estimating B.
Proposition 4.4. For any v̂ ∈ Q_n and any u > 0 we have the bound (4.13) on B(v̂, u), where the sums over i, j, k and l go from 1 to n.
Proof. We observe that B(v̂, u) is bounded by the expected number of triplets of particles x, y, z in the CTP such that
• x is at v̂ at time u,
• y is an ancestor of x,
• y and z occupy the same vertex,
• z was born before y.
Note the similarity to the quantity X_n in Proposition 4.2. For the sake of compactness, we will be less rigorous here, and refer to the proof of that proposition to see how to formalize this argument.
Let us start by considering the number of such triples x, y and z where z has no ancestors in common with x and y, that is, for some i ≠ j we have that x and y originate from the original particle at e_i whereas z originates from the original particle at e_j. Denote the common location of y and z by v, and pick k such that the parent of y is located at v − e_k. Note that as z is strictly older than y, y cannot be an original particle and hence has a parent. The lineage of x, y, z is illustrated in Graph 1 of Figure 3.
Let us count the expected number of such triples corresponding to a fixed v and where y is born during the time interval [t, t + dt). The potential z:s corresponding to a fixed j are simply the descendants of the original particle at e_j that are at v at time t. Hence the expected number of such particles is m(v − e_j, t). Similarly, for a fixed i the expected number of potential y:s is given by m(v − e_k − e_i, t) dt, and for each potential y the expected number of potential x:s is m(v̂ − v, u − t). As the potential z:s are born independently of the pairs of potential x:s and y:s, we see that the expected number of triples x, y, z that do not have common ancestors, corresponding to a fixed vertex v and a fixed time interval [t, t + dt), is given by the corresponding product. The total expected number of triples x, y, z without common ancestors is hence given by summing this expression over all vertices v ∈ Q_n and integrating over t from 0 to u. This is clearly bounded from above by the first term in the right-hand side of equation (4.13).
We now consider the cases where the three particles x, y, z have common ancestors. Denote the last common ancestor of the particles by l and its location by v. Then there must be a time s when the ancestral lines of x and z split. There are three possible ways in which this can occur, as illustrated by Graphs 2-4 in Figure 3; either a new ancestor of x is born, a new ancestor of z is born, or new ancestors of x and z are identical n-tuplets and therefore born at the same time. Observe that, in all three cases, y must be born strictly after this time. We let w denote the common location of y and z.
We now count the expected number of such triples corresponding to fixed vertices v and w, where the ancestral lines split during the time interval [s, s + ds) and such that y is born during [s + t, s + t + dt). The potential l:s are the particles in the CTP at v at time s, hence the expected number of potential l:s is Σ_{i=1}^n m(v − e_i, s). For each potential l, the probability that it gives birth during [s, s + ds) is ds. Now, for each possibility for the ancestral lines of x and z to split, conditioned on the process at time s + ds, the pairs of potential x:s and y:s originate from a different particle than the potential z:s. Hence these are born independently. By following the ancestral lines as illustrated in Graphs 2-4 in a similar manner as above, we see that the expected numbers of triples with common ancestors corresponding to fixed v and w, fixed time intervals, and each of the three cases for how the ancestral line splits are given by (4.15), (4.16) and (4.17) respectively. The total expected number of triples x, y, z with common ancestors is hence given by summing these three expressions over all pairs of vertices v, w ∈ Q_n and integrating over all s and t such that s, t ≥ 0 and s + t ≤ u.
It only remains to simplify these expressions. We observe that summing (4.15), (4.16) and (4.17) over all v, w ∈ Q_n removes all dependence on s. Consider in particular the sum of (4.15) over all v, w ∈ Q_n. By substituting the sum over w by a sum over ∆ = w − v and applying Lemma 4.1, we obtain an expression involving sums over v, ∆ ∈ Q_n and i, j, k which no longer depends on s. Integrating this expression over all s, t ≥ 0 such that s + t ≤ u, we see that the expected number of triples of particles x, y, z as above corresponding to the case illustrated in Graph 2 in Figure 3 is given by (4.19). Proceeding analogously for (4.16) and (4.17), we see that the expected numbers of triples corresponding to Graphs 3 and 4 in Figure 3 are given by (4.20) and (4.21) respectively. The expressions in (4.19)-(4.21) are clearly bounded from above by terms 2-4 respectively in the right-hand side of equation (4.13).
Consider the sum Σ_{∆∈Q_n} m(∆, a)^2 m(v − ∆, b). For any v ∈ Q_n we let v_i denote the i:th coordinate of v. Define the function m_1 : {0, 1} × R → R by m_1(0, t) = cosh t and m_1(1, t) = sinh t. Using the fact that m(v, t) = Π_{i=1}^n m_1(v_i, t), we can write

(4.23) Σ_{∆∈Q_n} m(∆, a)^2 m(v − ∆, b) = ((cosh a)^2 sinh b + (sinh a)^2 cosh b)^k ((cosh a)^2 cosh b + (sinh a)^2 sinh b)^{n−k} = exp(n G_{k/n}(a, b)),

where k = |v| and G_x(a, b) = x ln((cosh a)^2 sinh b + (sinh a)^2 cosh b) + (1 − x) ln((cosh a)^2 cosh b + (sinh a)^2 sinh b).
Proof. The idea of the proof is to use equation (4.23) to reformulate equation (4.13) in terms of partial derivatives of G_x(a, b). Note that m(v, t) satisfies (3.1), and that all derivatives of m(v, t) are non-negative. Let c denote the minimum of (1/2) cosh 2a − (1/2) e^{−2b} over all a, b ≥ 0 such that ε ≤ a + b ≤ 1. It is clear that c > 0. This means that for any a, b in this range and any 0 ≤ x ≤ 1 we have a corresponding lower bound (4.25). Hence, for sufficiently large C > 0 we have (∂/∂a) G_x(a, b) ≤ C/a whenever 0 ≤ x ≤ 1 and a, b ≥ 0 are such that ε ≤ a + b ≤ 1. Moreover, as G_x(a, b) is smooth wherever it is defined, we know that for C sufficiently large all partial derivatives of order up to 4 of G_x(a, b) are bounded in absolute value by C when the pair (a, b) is in this domain.
By explicitly writing out the partial derivatives of exp(n G_{k/n}(a, b)) above and combining this with Proposition 4.4, we see that (4.24) holds for sufficiently large C, as desired.
For a given sequence v̂ = v̂_n as above, we define

(4.26) f_n(t) = G_{|v̂_n|/n}(t, ϑ_n − t)

and

(4.27) f(t) = G_x(t, ϑ(x) − t).

Note that f depends on x. From the definition of G_x(a, b) we see that f_n(0) = −(ln n)/n and f_n(ϑ_n) = −2(ln n)/n, and that f(0) = f(ϑ(x)) = 0, see (1.7). Suppose that f_n(t) is "asymptotically U-shaped" in the sense that there exists a constant λ > 0 such that for sufficiently large n we have

f_n(t) ≤ max(f_n(0), f_n(ϑ_n)) − λ min(t, ϑ_n − t)

for any 0 ≤ t ≤ ϑ_n. If this holds, then the bound from Proposition 4.5 implies that B(v̂_n, ϑ_n) = O(1), as desired. It remains to show for which sequences of vertices v̂ = v̂_n the function f_n is asymptotically U-shaped. We start by giving a simple sufficient condition on x.
Proposition 4.6. Let {v̂_n}_{n=1}^∞ be as above. If x > 1 − ln(2√2)/ln 3, then f_n(t) is asymptotically U-shaped.

Proof. By some straightforward but tedious calculations we obtain an expression for f_n″(t). Now, if we assume that ϑ_n ≥ ln √2, then the first term in the right-hand side of this expression is non-negative, and so we have that f_n″(t) is at least, say, (n − k)/(100n) for all 0 ≤ t ≤ ϑ_n. It follows that if ϑ(x) > ln √2, then f_n(t) is asymptotically U-shaped. The proposition follows by the easily verified fact that ϑ(1 − ln(2√2)/ln 3) = ln √2.

It is clear from the proof of Proposition 4.6 that the limit 1 − ln(2√2)/ln 3 is not optimal, and can be lowered by considering f_n(t) more closely. It turns out however that there is a limit for x at which the convexity of f_n breaks down, and more importantly, for sufficiently small x the asymptotic U-shape of f_n breaks down. In the remaining part of this section, we will investigate when this occurs.
By some more straightforward but tedious calculations we obtain an expression with the same sign as f_n″(t). We see that, depending on the sign of 1/4 − e^{−4ϑ_n}, this expression is either increasing or concave in t, hence f_n″ changes sign at most twice. Furthermore, if f_n″ changes sign twice it goes from negative to positive to negative. In the same way, one obtains a corresponding expression for f″(t) involving the ratio (cosh 2t − e^{−2ϑ(x)+2t})/(cosh 2t + e^{−2ϑ(x)+2t}).

Figure 4. As ϑ tends to 0, the plotted curve converges to its limit of −4; it intersects the ϑ-axis at ϑ(x) ≈ 0.0898, that is, at x ≈ 0.00167.
Combining this observation with the fact that f_n″(t) is bounded, it follows that a necessary and sufficient condition for f_n being asymptotically U-shaped is that lim_{n→∞} f_n′(0) = f′(0) < 0 and lim_{n→∞} f_n′(ϑ_n) = f′(ϑ(x)) > 0. In fact, the former condition is implied by the latter, as then f_n′(t) changes sign at most once, but f(0) = f(ϑ(x)) = 0.
Hence, we have an explicit expression for (4.32) as a function of ϑ(x). Plugging t = ϑ(x) into the right-hand side of this expression we get (4.34). Note that this has the same sign as f′(ϑ(x)). By Taylor expanding this expression in x and ϑ we see that the dominating term for small ϑ is −4x. Hence f_n is not asymptotically U-shaped for sufficiently small x. To get a picture of what happens when x increases, we divide (4.34) by x and plot it as a function of ϑ, see Figure 4. It is clear that there is a critical value x* slightly less than 0.0017 such that f_n is asymptotically U-shaped if and only if x > x*. This proves the following proposition:

Proposition 4.7. Let {v̂_n}_{n=1}^∞ be a sequence of vertices, v̂_n ∈ Q_n, such that lim_{n→∞} |v̂_n|/n exists and is strictly greater than x*. Then for ϑ_n as defined in (3.17) we have B(v̂_n, ϑ_n) = O(1).
Remark 4.8. Throughout this section we have only really been interested in deriving a tractable upper bound for B(v̂_n, ϑ_n), without discussing sharpness. Nevertheless, it is not too hard to convince oneself that the bound given in Proposition 4.5 is sharp up to, say, a polynomial factor in n. However, for x < x* we know that there exists an interval of positive length for t where f_n(t) is positive, which would then imply that B(v̂_n, ϑ_n) diverges exponentially fast in n.
5. Completing the proof of Theorem 1.6

Let {v̂_n}_{n=1}^∞ be a sequence of vertices, v̂_n ∈ Q_n for each n, such that x = lim_{n→∞} |v̂_n|/n exists and is at least 0.002, and let {ϑ_n}_{n=1}^∞ be as in (3.17). Applying the estimates of S(v̂_n, ϑ_n) and B(v̂_n, ϑ_n) from Propositions 4.3 and 4.7 to Theorem 3.6, it follows by Proposition 3.1 and Lemma 3.2 that there exists a constant c_0 > 0 such that

(5.1) lim inf_{n→∞} P(T̃_V(0̄, v̂_n) ≤ ϑ_n) ≥ c_0.

Since ϑ_n → ϑ(x) as n → ∞, this means in particular that for any ε > 0 we have

(5.2) lim inf_{n→∞} P(T̃_V(0̄, v̂_n) ≤ ϑ(x) + ε) ≥ c_0.

Note that we can assume that c_0 is independent of the choice of sequence.
Proposition 5.1. Let {v̂_n}_{n=1}^∞ be a sequence as above, and let x = lim_{n→∞} |v̂_n|/n. Then, for any ε > 0 we have

lim_{n→∞} P(T̃_V(0̄, v̂_n) ≤ ϑ(x) + ε) = 1.

Proof. Let ε > 0 be arbitrary. Condition on the vertex passage times of all neighbors of 0̄ and v̂_n. Assuming |v̂_n| ≥ 3, it is easy to see that the number of coordinate places 1 ≤ i ≤ n with the property that the i:th coordinate of v̂_n is 1, and the costs of both e_i and v̂_n − e_i are at most ε/3, is distributed as Bin(|v̂_n|, (1 − e^{−ε/3})^2). Hence as n → ∞ it is clear that, with probability 1 − o(1), there are at least two such coordinates. Pick a pair i ≠ j.
Depending on the choice of i and j, we define Q_0 as the induced subgraph of Q_n with vertex set {v ∈ Q_n : v_i = 1, v_j = 0}. We similarly define Q_1 as the induced subgraph of Q_n with vertex set {v ∈ Q_n : v_i = 0, v_j = 1}. Note that Q_0 and Q_1 are vertex-disjoint subgraphs of Q_n, both isomorphic to Q_{n−2}.
In light of Q_0 and Q_1, we have two natural upper bounds for T̃_V(0̄, v̂_n), namely c(e_i) + c(v̂_n − e_j) plus the smallest reduced vertex passage time for any path from e_i to v̂_n − e_j in Q_0, and c(e_j) + c(v̂_n − e_i) plus the smallest reduced vertex passage time for any path from e_j to v̂_n − e_i in Q_1. As the only vertices of Q_0 and Q_1 which are neighbors of 0̄ or v̂_n are e_i, e_j, v̂_n − e_i and v̂_n − e_j, the reduced first-passage times in Q_0 and Q_1 are independent of each other, and each is distributed as the reduced first-passage time between two vertices at distance |v̂_n| − 2 in Q_{n−2}. By applying (5.2) to the first-passage percolation problems in Q_0 and Q_1, we conclude that for any ε > 0 and for any sequence {v̂_n}_{n=1}^∞, where v̂_n ∈ Q_n for each n ≥ 1 and x = lim_{n→∞} |v̂_n|/n exists and is at least 0.002, we have

(5.4) lim inf_{n→∞} P(T̃_V(0̄, v̂_n) ≤ ϑ(x) + ε) ≥ 1 − (1 − c_0)^2.

Note that this is the same expression as (5.2), except that the right-hand side here is strictly larger. Hence, by iteratively applying this argument, we see that we can replace the right-hand side in (5.4) by c_k = 1 − (1 − c_0)^{2^k} for any non-negative integer k. The proposition follows by letting k → ∞. | 2015-01-09T17:34:59.000Z | 2015-01-09T00:00:00.000 | {
"year": 2015,
"sha1": "9343f161813e2667b9e7e0f1ebd54464a0ab3464",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "9343f161813e2667b9e7e0f1ebd54464a0ab3464",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Biology"
]
} |
4886665 | pes2o/s2orc | v3-fos-license | Teaching Human Computer Interaction: First Experiences
Human-computer interaction is a very recent discipline at the Universidad de Costa Rica. In this paper we present the experiences of the first academic year in which the first courses on human-computer interaction, an undergraduate course and a Masters course, were designed and taught. The HCI course introduction strategy consisted of two steps: 1) to initiate a dedicated undergraduate course during the first term, and 2) to initiate a dedicated Masters course during the second term, taught simultaneously with the undergraduate course. Both courses share the same outline. However, due to differences between undergraduate and graduate students and between undergraduate and Masters courses, differences in evaluation methodology were implemented, resulting in more assignments and a higher level of exigency for graduate students. Work in the classroom is different for each of the courses, because graduate students can build their own knowledge based on their previous working experience and on the exchange of ideas with other students. In both the undergraduate and Masters courses, emphasis is placed on practice supported by theory.
INTRODUCTION
Human-computer interaction (HCI) is considered by the Association for Computing Machinery (ACM) one of the fourteen fundamental areas that represent the body of knowledge of computer science [6]. The Escuela de Ciencias de la Computación e Informática (ECCI) of the Universidad de Costa Rica (UCR) has not included a required course in this area in its bachelor degree program. However, the ECCI has considered it necessary to teach an elective course covering this area, because of the importance that HCI has gained and the growth that the software development industry has experienced in Costa Rica. On the other hand, the Masters Program in Computer Science, associated with the ECCI, has created an elective course on HCI. The undergraduate and graduate courses are relatively new and are taught by the author. It is then possible to compare both of them in order to reveal methodological differences that can arise when teaching courses on the same topic at different academic levels. In this paper we describe our experience designing and teaching courses on HCI during the two academic semesters when the first introductory HCI courses were introduced, showing methodological differences due to differences in the characteristics of the students and other factors. The structure of this paper is as follows. Section 2 describes the context of the Bachelor and Masters programs. Section 3 presents the justification for teaching HCI within the two programs and the strategy followed. Section 4 describes the characteristics of undergraduate and graduate students, showing evidence of differences between the two groups. Section 5 presents the curricular design of both HCI courses. Section 6 describes and compares the evaluation schemata of both courses. Section 7 presents the course schedule. Section 8 describes how lessons are developed. Section 9 describes the results of teaching the HCI courses over two semesters. Sections 10 and 11 present future work and conclusions, respectively.
CONTEXT OF THE COMPUTER SCIENCE PROGRAMS
The faculty members of the ECCI at the UCR reviewed and updated the Bachelor in Computer Science program in 1999. This program, with a few changes, is still in use. This undergraduate program is a four-year program which includes 41 courses, reaching a total of 139 credits. Credits are a measurement unit of student academic activity within public universities in Costa Rica [5]. This unit was defined in 1997 with the purpose of unifying the credit definition within the higher education system of Costa Rica. Specifically at the UCR, one credit represents three hours of weekly work during fifteen weeks, applied to an activity supervised, evaluated and approved by a professor [4]. Thirty-seven (37) of the 41 courses within the undergraduate program are required and 4 are elective. Academic semesters are 16 weeks long. Elective courses allow students to take courses oriented towards their professional interests. These are 4-credit courses which provide flexibility in order to maintain an updated program through special topics courses. The course on HCI has been taught twice since the second academic semester of 2006 as a special topics course. The Masters Program in Computer Science at the UCR started in 1995. In the program all courses are elective. Two options are available: academic (thesis option) and professional (non-thesis option). In the professional option, research is oriented towards the practical application of topics covered in courses. In the academic option, research is mostly oriented to the creation of new knowledge [5]. Courses in the Masters program were structured as two paired sub-courses: a four-credit theory course and a two-credit laboratory course. Theory courses are dedicated to theory and are developed during lecturing hours. Laboratory courses are dedicated to applied research, in which students develop a practical project. Students in the professional option have to register in both the theory and the laboratory courses, whereas students in the academic option only take the theory course. Within this context, the author designed a Masters course on HCI and started to teach it during the first academic semester of 2007. A non-academic difference between Masters courses and undergraduate courses is that Masters course fees are much higher, not only due to the higher number of credits, but also because credits are more expensive.
JUSTIFICATION AND STRATEGY FOR A HCI COURSE
According to [6, p. 5], human-computer interaction is "a discipline concerned with the design, evaluation and implementation of interactive computing systems for human use and with the study of major phenomena surrounding them." The area of HCI has experienced enormous growth during the last ten years. However, not all universities include a course on HCI in their programs [10]. Even more, some universities include HCI only as part of programming courses [10]. This happens even though organizations such as the ACM and the Institute of Electrical and Electronics Engineers (IEEE) have emphasized the importance of teaching HCI. In 1991, ACM and IEEE defined nine areas which comprise computer science, one of them being HCI [14]. In 2001, ACM/IEEE published the final report of the Computing Curricula 2001 project [2], in which fourteen fundamental areas representing the body of knowledge of computer science are identified. One of the fourteen areas is HCI. On the other hand, the ACM Special Interest Group on Computer-Human Interaction (ACM/SIGCHI) is an organization formed by people working on the tasks of design, evaluation, implementation, and study of interactive computer systems to be used by human beings [6]. This group and other HCI specialist organizations prepared a document of guides and recommendations on HCI education and published the "Curricula for Human-Computer Interaction." The documents published by ACM and IEEE emphasize the importance of including the HCI area in computer science programs. However, the undergraduate programs of the three public universities in Costa Rica teaching computer science or any related major do not include any course on HCI [7, 8, 15]. Hence, the ECCI has considered it very important to be a pioneer in this field in the country, offering a basic course to create a new generation of software developers aware of the role of human beings in the success of any software application and of the importance of their participation within the software development process. Five strategies to introduce the topic of HCI into a computer science curriculum have been proposed [3]: 1) to cover HCI topics in required and elective courses, for example, programming and software engineering courses; 2) to initiate with a course exclusively dedicated to HCI (for example, a Masters course or an elective undergraduate course); 3) when the first dedicated course proposed in strategy 2 has consolidated, to initiate a dedicated course at the other level; 4) to simultaneously initiate HCI courses at the Bachelor and Masters levels; and 5) to offer several elective courses on HCI. At the ECCI, we decided to follow a combination of strategies 2 and 3, starting with a dedicated undergraduate course and, a semester later, teaching a dedicated Masters course. This allowed the professor to become familiar with the HCI topic and improve the course before teaching it at the Masters level, which is especially important since, in general, graduate students expect professors to have deep knowledge of the topic. Additionally, [3] also recommends having two independent courses, one for undergraduates and another for Masters students.
CHARACTERISTICS OF STUDENTS
Around 90% of the undergraduate students registered in the HCI course are seniors dedicated full time to their studies, 24 years old on average, taking three or four courses simultaneously, with no professional experience, and with basic knowledge of English. Their lack of professional experience means they do not highly value the importance of users for the success of a software system. Undergraduate students know their classmates and have previously worked together. They like teamwork and can easily find classmates to form a working team. Most of them have already taken Software Engineering I and are currently taking Software Engineering II. Topics covered in Software Engineering I include software project planning, cost estimation, requirement analysis, and high-level design. Topics covered in Software Engineering II include detailed design, implementation, testing, and maintenance [9]. HCI topics are not explicitly covered in any software engineering course. On the other hand, graduate students who take the HCI course have two or three years of working experience. Most of them work on software development related tasks. They have a full-time job and take one or two Masters courses. Their job has higher priority than their studies. Masters students are 26 years old on average. They practice English in their workplaces and frequently travel abroad, which makes them miss three or four lessons per semester. Additionally, students do not know most of their classmates, since they belong to different college generations or obtained their undergraduate degrees at other universities. For all these reasons, graduate students find it hard to form a team to work with and prefer individual work. Although the difference in average age between the two groups is not very significant (only 2 years), there are notable differences in the behavior of the two groups. For example, undergraduate students are quiet in class and seldom ask any questions, whereas graduate students share their personal experiences with the group and ask many questions, which promotes more discussion and opinion exchange. Graduate students have higher expectations about the courses they take and are more critical, due to their higher professional maturity and the higher Masters course fees. The author characterized both groups of students after nine years of teaching experience. This information was very valuable when designing the curriculum, the methodology, the evaluation schema, and the class dynamics of the two HCI courses.
COURSE CURRICULAR DESIGN
Topics covered in a course are a key aspect of the learning process. The graduate and undergraduate courses on HCI are designed as introductory courses, and the learning objectives, in terms of what students are able to do after having taken the course, are as follows: 1. Identify the human factors which are pertinent when designing the human-computer interaction of any interactive software system.
2. Design and prototype the human-computer interaction of an interactive software system, in such a way that user needs are satisfied in an effective and efficient manner.
3. Conduct a usability evaluation of a human-computer interface.
Because graduate courses are divided into two sub-courses, the corresponding laboratory course has its own learning objectives: 1. Conduct usability and accessibility requirement analysis and engineering.
2. Develop paper prototypes and storyboards describing the prototype behavior.
3. Analyze tasks executed by users from a usability and accessibility point of view.
4. Design a software prototype based on usability and accessibility requirements and task analysis.
5. Improve an interface based on evaluation results.
Neither graduate students nor undergraduate students have previously taken a course on HCI. Hence, we decided to cover the same topics in both courses, giving different emphasis to practice, research, and theory. HCI is a multidisciplinary area. Apart from knowledge of software engineering, topics such as human factors, sociological and anthropological aspects, and ergonomics, among others, are required [1]. Both courses were designed based on the contents of a book on HCI published by the Association of Human-Computer Interaction (AIPO for its name in Spanish, Asociación de la Interacción Persona Ordenador) [1]. This book presents the topics suggested by ACM/SIGCHI [6] in a complete and easy-to-understand way. Most books on HCI have been written in English. Having a textbook originally written in Spanish is advantageous for students, since this is their mother tongue. Additional didactic materials are necessary. Due to their character as introductory courses, both the undergraduate and the graduate courses cover the same set of introductory topics.
EVALUATION SCHEMATA
A notable difference between the undergraduate course and the graduate course presented in this paper is the evaluation schema. This difference is a consequence of the differences between the two student groups, such as working experience and time availability, and of the number of credits assigned to the courses (4 for undergraduate and 6 for graduate). However, both evaluation schemata emphasize practice. As shown in Table 1, there are common elements in both evaluation schemata, such as the presentation of design patterns, but some significant differences exist too. For example, homework assignments represent 40% of the final grade of the undergraduate course, but only 10% of the graduate course. At the undergraduate level, homework assignments are important evaluation instruments, since they give the professor the opportunity to assign relatively long research, reading, and practical tasks, necessary to complement the concepts presented in the classroom. At the graduate level, homework assignments are evaluation instruments too, but because most students have a full-time job, it is better not to overload them with many long individual assignments. In this case, assignments are short and focused on very specific topics. Research is instead promoted by having students write a research paper, based on a broad literature review, in which a new idea is presented. Students are expected to review at least twelve different bibliographical references, such as books and journal papers, in order to support what they assert in their research papers. More details about some of the evaluation instruments shown in Table 1 are described in the following sections.
Design of an Interface
The assignment of designing an interface is the most important evaluation instrument, since it allows students to put the theory into practice. For most of them, this is the first time in their lives they seriously think about the impact of their interaction design decisions on users' performance. Design activities are followed by usability evaluation activities. Although this is a common element in both evaluation schemata, graduate students are expected to produce a result with a higher degree of usability than undergraduate students. The professional experience of graduate students has given them the opportunity to get in contact with software system users and their daily problems, which is an advantage compared with undergraduate students. Although it would be possible to allow students to choose the software system they will design, we suggest proposing several software systems to be developed. All suggested systems should have the same degree of difficulty and complexity and a relatively limited functionality, so that students focus most of their effort on designing the interaction instead of understanding the functionality. Additionally, when the software systems to be developed are mobile applications or require designing specialized hardware devices, students are challenged to research more.
Heuristic Evaluation
The heuristic evaluation is a common element in both evaluation schemata. It is a group assignment, with three or four students in each group. Students have to plan and conduct a heuristic evaluation of a software system. According to [12], conducting a heuristic evaluation of a badly designed software system makes students become aware of the importance of investing time in designing the interaction. Another suggestion for choosing the software system to be evaluated is to select one that does not comply with the most frequently used standards and guides. Conducting a heuristic evaluation is important because students experience what users feel when using software systems. Most software developers never use the software they create and do not understand what users complain about. Each group should prepare a short plan before conducting the evaluation. This plan must include a description of the heuristic evaluation methodology, a profile of the software system users, the characteristics of the evaluators (the students) and of the evaluation environment (hardware and software), and the templates to be used. If possible, there should be several different evaluation environments. For example, if the software system is a Web application, then different operating systems and browsers (e.g., Opera and Internet Explorer) should be used.
Conducting a heuristic evaluation should include individual and group activities. The professor can provide a guide on conducting the heuristic evaluation to help students understand what they are supposed to do. As an example, we use the following guide: • Individual activity: each student in the group evaluates the software system and identifies problems and positive findings. For future reporting, each problem and positive finding is supported by a screenshot. • Group activity: based on individual evaluations, the group prepares a list of positive findings.
• Group activity: based on the individual evaluations, the group prepares a combined group list of problems, highlighting which problems were found by only one student and which were specific to one particular browser. • Individual activity: each student takes the combined group list of problems and assigns a severity rating to each problem (1 to 4 integer scale). The scale used for rating is as follows: [1] Negligible problem: it does not have to be fixed unless there is enough time.
[2] Low-importance problem: fixing it is not very important. [3] Severe problem: fixing it is important. [4] Catastrophic problem: it must be fixed. Any other scale may be used, as long as students describe it in their final report. • Group activity: based on the individual severity ratings, the group calculates the average severity rating for each problem and sorts the problems in descending order of average severity (a short sketch of this aggregation step is given below).
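The averaging-and-sorting step at the end of the guide is mechanical; the following minimal Python sketch illustrates it, assuming each evaluator's ratings are stored in a dictionary keyed by problem description (the data and helper names are hypothetical, not part of the course materials):

```python
from statistics import mean

# Each evaluator's ratings: problem description -> severity (1-4 integer scale).
evaluator_ratings = [
    {"login button hidden": 4, "inconsistent colors": 2},
    {"login button hidden": 3, "inconsistent colors": 1},
    {"login button hidden": 4, "inconsistent colors": 2},
]

# Average the individual severity ratings for each problem.
problems = {p for ratings in evaluator_ratings for p in ratings}
averages = {p: mean(r[p] for r in evaluator_ratings if p in r) for p in problems}

# Sort problems in descending order of average severity for the final report.
for problem, severity in sorted(averages.items(), key=lambda kv: -kv[1]):
    print(f"{problem}: average severity {severity:.2f}")
```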
The first time the HCI course was taught, we allowed students to choose the software system to be evaluated, whereas the second time we decided to choose it ourselves, in order to guarantee a similar degree of complexity and design quality for all students. We also provided the format to be followed for the final evaluation report. The structure of the final evaluation report used by the authors is as follows: 1. Software system functionality. Students have to describe the software system functionality based on user manuals and the evaluation process itself. 2. Evaluated points and their importance. Students identify the set of heuristic principles used to conduct the evaluation and explain why they are important. 3. Positive findings. Every software system interface has positive aspects; students must find at least three. 4. Problems found in the software system. Describe the severity rating scale used and show the individual and average severity ratings. 5. Problem analysis. Select the five most severe problems, analyze them, and discuss how they could be fixed, using bibliographical references to support the suggestions.
Presentation of a Design Pattern
Masters students present a journal paper. In order to motivate them to research more on a specific topic, students must look for additional bibliographical materials to complement the paper they present. The schema used to evaluate presentations covers aspects such as presentation organization, audiovisual material, and personal performance. Effective oral communication is a very important skill in the area of HCI. The evaluation schema highlights aspects which are important for developing communication skills (Figure 1).
Research Paper
When the undergraduate course evaluation schema is compared with the graduate theory course evaluation schema (Table 1), it can be noticed that a very important difference is the stronger emphasis on research in the graduate course. In fact, at the Masters level, all students write a technical paper, which represents 45% of the final grade of the theory course. Students in the professional option have the possibility of writing the research paper on a topic related to the interface design assignment, such as how design decisions were made and how problems were solved. However, they can also choose a topic in which they have a special interest. Students in the academic option may choose a research topic related to their thesis research. Table 2 shows the proposed schedule for the graduate course, with details about the topics covered weekly in class and the activities. The distribution of topics is the same for the undergraduate course, but activities such as research paper presentations are not considered and assignment due dates vary slightly. In general, one topic is covered in one week. However, due to their importance or complexity, some topics such as human factors, design and prototypes, accessibility, and graphical design are developed over two weeks. (Figure 1 includes rubric items such as: i. pronunciation and intonation (1-5); ii. volume (1-5); iii. avoidance of jargon and words which might cause confusion or offend the audience (1-5).)
LESSON DEVELOPMENT
Undergraduate students receive two 100-minute lessons every week, generally early in the morning. On the other hand, graduate students receive only one 200-minute lesson per week, usually in the evening. Students at both levels attend the same number of hours, but attending and paying attention for almost four hours at a stretch is quite different. Graduate students have worked the whole day before they attend the lesson; they can easily get tired and bored. This represents a great challenge for the professor. Most of the time, the professor lectures undergraduate students. Slides based on the AIPO book [1], other books [11,13], and journal and conference papers are used to support the lecturing process. The professor assigns short practical problems to be solved in the classroom. Students present design patterns complemented with actual examples. The professor can cover the material on the slides because students do not ask many questions. On the contrary, graduate students participate much more in the classroom. Slides are available for them too, but lecturing for 200 minutes is boring. These two factors together suggest that it is better to plan the lesson as a combination of participative activities and short summarizing lectures. If students have previous knowledge about a topic covered in class, activities such as discussions or practical problems related to the topic can be solved by students working in groups. This allows students to generate their own knowledge. Additionally, group activities developed in the classroom allow graduate students to meet classmates and get to know them better, which in turn helps them choose the team they will work with on other assignments. The professor closes the lesson with a short summarizing lecture that provides a theoretical base and reviews the main concepts and results obtained in the class activities.
It is not possible to follow this participative approach for all topics, because some are new to students. For example, human factors, which include psychological and sociological aspects, are relatively unknown to both undergraduate and graduate students. In such a case, the professor lectures in order to create a sound base and a common language to be used throughout the semester.
As shown in Table 2, the schedule of the graduate course includes external speakers, who are professionals with sound knowledge of an HCI topic. Topics such as interfaces for e-learning or for vehicles are new to most students and are useful for creating a broader overview of HCI. Undergraduate students are invited to these presentations. Both undergraduate and graduate students visit the university library, where they receive a talk and a demonstration on software and hardware used by people with special needs, such as blind people (week 8 in Table 2). This visit is important in order to create awareness of the importance of taking accessibility into account when designing software systems.
FINDINGS
Over two semesters, a total of 48 undergraduate students and 22 graduate students took the corresponding HCI course. The passing rate is 100% for the undergraduate course and 95% for the Masters course. At the end of the semester, a 5-question questionnaire is filled out by students in order to assess their level of satisfaction. Figure 2 shows the results for the question about whether the course objectives were reached. Questionnaire answers show that 68% feel the objectives were fully achieved and 32% believe they were partially achieved. Some of the latter expected more emphasis on graphical design and more practice on usability evaluation and interface design.
Figure 2. Students' Opinion about whether the Course Objectives Were Reached
On the other hand, 57% of students believe the knowledge acquired in the course will be very useful for their professional performance, and 43% believe it will be useful to some extent. This suggests that the HCI course has a positive impact on their professional practice.
Students were asked about the most important concept they learned. Since it is an open question, many different answers were received, but 53% of students mentioned in their answers the importance of user participation during the development process and the concepts of usability and accessibility. Overall, we feel comfortable with the level of satisfaction of the students, but we are aware that there are many aspects which need to be improved. Students suggested adding more practice on usability evaluation and interface design, and including architectural models.
FUTURE WORK
Currently we are planning to teach the graduate course once again. Following students' suggestions, this time we will place more emphasis on interaction design and evaluation. The next step is to design a second course to allow students to study some HCI topics in depth. As we expect that in the future many undergraduate students will take the undergraduate HCI course, we plan to change the graduate course curricular design in order to offer students the opportunity to take a more advanced course. However, because students from all universities in the country apply to our Masters program, we cannot ignore this situation and have to design a course that starts from basic HCI concepts.
CONCLUSIONS
In this paper we have described our first experience in designing and teaching two introductory courses on HCI, one for undergraduate students and one for graduate students. The strategy of starting with an elective undergraduate course was very positive, since it allowed the professor to gain a sound grasp of the area. Due to the differences between undergraduate and graduate students and between undergraduate and Masters courses, differences in the evaluation methodology were implemented. Previous knowledge about the characteristics of the students taking the courses is a key factor in identifying those differences. | 2015-06-10T20:56:49.000Z | 2009-04-01T00:00:00.000 | {
"year": 2009,
"sha1": "308592775769d2efcd1950b662ada82d52e9a902",
"oa_license": "CCBY",
"oa_url": "https://www.clei.org/cleiej/index.php/cleiej/article/download/240/164",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "308592775769d2efcd1950b662ada82d52e9a902",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
59147948 | pes2o/s2orc | v3-fos-license | Half-maximal consistent truncations using exceptional field theory
We show how to construct half-maximal consistent truncations of 10- and 11-dimensional supergravity to seven dimensions using exceptional field theory. This procedure gives rise to a seven-dimensional half-maximal gauged supergravity coupled to $n$ vector multiplets, with $n \neq 3$ in general. We also show how these techniques can be used to reduce exceptional field theory to heterotic double field theory.
Introduction
Exceptional field theory [1,2] is an extension of 10- and 11-dimensional supergravity which treats the metric and p-form fields in a d-dimensional "internal space" on an equal footing, and has an extended set of coordinates. These features allow the exceptional field theory to make an E_{d(d)} symmetry of the supergravity manifest.
This E_{d(d)} symmetry is often confused with the U-duality group arising when compactifying 11-dimensional supergravity on a T^d. However, here the E_{d(d)} symmetry arises before any truncation or compactification occurs, or equivalently all modes of the d-dimensional internal space are kept. Instead, the metric and p-form fields at each point combine into E_{d(d)} representations. We will see later how the global structure of the internal space determines the resulting "duality group", which, to be precise, we take in this case to be the global symmetry of the lower-dimensional gauged SUGRA that arises after truncating the theory.
In addition to repackaging the fields into E_{d(d)} representations, the exceptional field theory extends the coordinates of the internal space to form a particular representation of E_{d(d)}. For the special case that the internal space is a torus, these coordinates have a nice interpretation as momentum and wrapping modes of membranes on T^d. For general backgrounds such an interpretation fails, but the extra coordinates allow us to treat 11-dimensional and 10-dimensional type II - and indeed, as we will see, also heterotic - supergravities at the same time. This allows us to easily see when a lower-dimensional theory can have different higher-dimensional origins, i.e. when there is a duality, and it is in this sense that one can think of exceptional field theory as making dualities manifest. Exceptional field theory [1,2], as well as double field theory [3] and generalised geometry [4][5][6], have proven very powerful in finding consistent truncations of 10- and 11-dimensional supergravities [7][8][9][10][11][12][13][14][15][16][17]. This has allowed an uplift of various, previously "orphaned", gauged supergravities. However, so far this has focused on so-called generalised Scherk-Schwarz Ansätze, which yield consistent truncations preserving all supersymmetries.
In [18] and [19] we considered exceptional field theory on backgrounds with non-trivial structure groups. The particular class of backgrounds considered admits half the possible number of spinors and thus has a generalised N = 2 structure. We showed how exceptional field theory naturally describes such backgrounds and developed the technology to find consistent truncations to seven-dimensional half-maximal gauged SUGRA. Using these tools we also showed how exceptional field theory can be reduced to the heterotic double field theory when the internal space has a generalised SU(2)-structure.
In this proceedings article we will present these results. In section 2 we will give a self-contained introduction to the SL(5) exceptional field theory. We will then show in section 3 how the SL(5) exceptional field theory describes half-maximal backgrounds. Section 4 is devoted to the study of half-maximal consistent truncations, while section 5 details how to obtain the heterotic double field theory from exceptional field theory. We conclude in section 6.
Review of SL(5) exceptional field theory
We begin with a brief review of the key features of the SL(5) exceptional field theory [1,2,20] which are necessary for our purposes. We consider 11-dimensional supergravity but break GL(11) −→ GL(7) × GL(4). We will see that having made the SL(5) symmetry manifest we will automatically also have obtained the equivalent reformulation of type IIB supergravity. With respect to this splitting we write x^μ̂ = (x^µ, x^i) with μ̂ = 1, . . . , 11, µ = 1, . . . , 7 and i = 1, . . . , 4. Decomposing the metric and p-form fields at each point, the 14 fully internal components g_ij, C_ijk parameterise the coset space SL(5)/SO(5) and can be combined into the so-called generalised metric M_ab, where a, b = 1, . . . , 5 are fundamental SL(5) indices. Similarly, the 10 components which transform as a vector under GL(7), i.e. g_µi and C_µij, can be combined into a GL(7)-covector A_µ^ab valued in the 10 of SL(5). One can continue and combine the five components C_µνi and C_µνijkl into an object B_µν,a, and so on. Now that the metric and p-form fields have been combined into SL(5) representations, we also extend the four-dimensional internal space to be ten-dimensional, with coordinates Y^ab = Y^[ab]. This will allow us to write fully SL(5)-covariant expressions, even though, as we will see, not all of these coordinates are physical. For example, diffeomorphisms and p-form gauge transformations combine into a local SL(5) transformation. These symmetries are generated by the generalised Lie derivative [21][22][23], which acts on an SL(5) vector V^a with weight w; here Λ^ab = Λ^[ab] is a so-called "generalised vector field" and has weight 1/5. The derivatives ∂_ab are with respect to the extended coordinates Y^ab.
These generalised diffeomorphism symmetries must close into an algebra, and this requires the following condition on all pairs of fields, which we denote abstractly as f and g: ε^abcde ∂_ab f ∂_cd g = 0. This is known as the "section condition" [21] and restricts all fields to depend only on a subset of coordinates. Up to SL(5) transformations there are two solutions: the first clearly breaks SL(5) −→ SL(4), while the second breaks SL(5) −→ SL(3) × SL(2). It should thus not come as a surprise that the first corresponds to 11-dimensional supergravity while the second to type IIB supergravity [2,24]. One way to see this is to act with the generalised Lie derivative on the generalised metric M_ab. With the corresponding solution to the section condition, the generalised Lie derivative reduces to diffeomorphisms and the correct p-form gauge transformations for these two theories. Equipped with the generalised Lie derivative (2.4), one can introduce a generalised connection ∇ such that, for a generalised tensor V^c, the covariant derivative ∇_ab V^c is also a generalised tensor. This implies a certain transformation law for the components Γ_ab,c^d, which means that they are not tensorial themselves. However, as usual, there are certain combinations of the connection which are by themselves tensors. These form the generalised torsion of the generalised connection, given by (2.8): T(Λ) ≡ L^∇_Λ − L_Λ, where Λ is any generalised vector and L^∇_Λ denotes the generalised Lie derivative with all derivatives replaced by the connection ∇. The definition (2.8) makes it manifest that the torsion is a tensor. Later on we will make use of the fact that the torsion takes values in the representations W ≡ 10 ⊕ 15 ⊕ 40 ⊂ 10 ⊗ 24 = 10 ⊕ 15 ⊕ 40 ⊕ 175 . (2.9) We conclude this section by noting that it is possible to rewrite the 11-dimensional and type IIB supergravity actions in terms of M_ab, A_µ^ab and the other fields. This action is uniquely fixed by the requirement that it is invariant under the generalised Lie derivative, up to the section condition, as well as under the external seven-dimensional diffeomorphisms, which we have not discussed here as they are not important for what follows. We will call the part of the action that has only "internal" derivatives with respect to the 10 Y^ab the potential, V. It is given by (2.10), where R depends only on the generalised metric M_ab and ∇_ab g_µν = |e|^{2/7} ∂_ab ( g_µν |e|^{−2/7} ). (2.11) As we will show later, upon performing a truncation this part correctly reproduces the scalar potential of seven-dimensional half-maximal gauged SUGRA, as well as the internal part of the heterotic DFT.
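As a quick consistency check of the representation content in (2.9), one can verify the dimensions of the SL(5) tensor product directly; this short worked count uses only standard dimension formulas:

```latex
% Dimension check of (2.9): the torsion sits inside 10 x 24 of SL(5).
\dim\left(\mathbf{10}\otimes\mathbf{24}\right) = 10 \times 24 = 240,
\qquad
10 + 15 + 40 + 175 = 240 .
% The claimed W = 10 + 15 + 40 accounts for 65 of these 240 components,
% with the remaining 175 dropping out of the torsion.
```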
SU(2)-structures and half-maximal supersymmetry in exceptional field theory
Let us now turn our attention to internal spaces which admit nowhere-vanishing spinors corresponding to 16 supercharges. Because we wish to include fluxes, these spinors transform in the fundamental of USp(4), not of SO(4) as one would expect from the 11-dimensional perspective. An SU(2)_R ⊂ USp(4) doublet of spinors carries the appropriate 16 supercharges, and the existence of these spinors implies that the generalised structure group is reduced to SU(2)_S ⊂ USp(4), which stabilises this SU(2)_R doublet [25]. Here the subscripts R and S are used to distinguish the SU(2) R-symmetry from the structure group. We will call these internal spaces generalised SU(2)-manifolds.
Reformulating the exceptional field theory
We now seek a bosonic description of generalised SU(2)-manifolds. One can show [18] that, as bilinears of the well-defined spinors, one can form a set of generalised tensors: a scalar density κ, a vector A_a, a vector Â^a and a triplet B^u_ab, where a = 1, . . . , 5 is an SL(5) index and u = 1, . . . , 3 labels the triplet of SU(2)_R. We will throughout raise and lower the SU(2)_R triplet indices u by δ_uv. Using Fierz identities one can show that these satisfy a set of algebraic compatibility conditions (3.2), where ε_abcde = ±1. Here B^u_ab is a generalised vector field and thus has weight 1/5 under the generalised Lie derivative (2.4), while Â^a has weight 2/5 and A_a has weight 3/5. These objects should be thought of as the generalisation of the complex and Kähler structures on K3. Readers familiar with exceptional field theory will recognise these objects from the tensor hierarchy of exceptional field theory [26,27]. This is not a coincidence, since sections of the appropriate vector bundles behave in many ways as generalisations of differential forms [28]. Indeed, this observation allows one to generalise this construction to other dimensions [29].
We will now show that the set of nowhere-vanishing κ, A_a, Â^a and B^u_ab defines a generalised SU(2)-structure. We will do so by showing that they are stabilised by an SU(2)_S ⊂ SL(5) × R^+ subgroup. We begin by noting that the scalar density κ breaks the SL(5) × R^+ structure group of the exceptional field theory to SL(5). It is easy to show that A_a and Â^a subject to (3.2) are stabilised by an SL(4) ⊂ SL(5) subgroup, and thus nowhere-vanishing A_a and Â^a define an SL(4) ≃ SO(3,3) structure. Upon performing a consistent truncation, A_a and Â^a will lead to the dilaton scalar field of the seven-dimensional gauged SUGRA, and thus we will also call a set of globally well-defined nowhere-vanishing A_a and Â^a a "dilaton structure".
Furthermore, a set of three globally well-defined nowhere-vanishing B^u_ab subject to (3.2) further reduces the generalised structure group to SU(2)_S ⊂ SL(4) ⊂ SL(5). Because SU(2)_S ⊂ USp(4) and the generalised metric is a generalised USp(4)-structure, the objects A_a, Â^a and B^u_ab together implicitly define a generalised metric. However, there is in general no explicit expression for the generalised metric in terms of A_a, Â^a and B^u_ab. Nonetheless, because they carry the same degrees of freedom, it is possible to express the exceptional field theory action in terms of A_a, Â^a and B^u_ab, as we will demonstrate in the next section.
It is also useful to define further objects using the generalised SU(2)-structure: V_u,ab in the 10 of SL(5), with weight 4/5, and K^uv_a^b in the adjoint of SL(5). It is easy to show that K^uv_a^b satisfies the SU(2)_R algebra and acts accordingly on B^u_ab. From this one can see that K^uv_a^b generates the SU(2)_R ⊂ SL(5) subgroup. Furthermore, in the case of 11-dimensional supergravity, it becomes the hypercomplex structure on the "internal" four-manifold.
Intrinsic torsion
The first step in expressing the exceptional field theory action in terms of A_a, Â^a and B^u_ab is to introduce a generalised connection. The natural choice here is a generalised SU(2) connection, which means that it is compatible with the generalised SU(2)-structure, i.e. ∇_ab κ = ∇_ab A_c = ∇_ab Â^c = ∇_ab B^u_cd = 0. (3.6)
Note that we are not imposing a torsion constraint on this connection and so it will certainly not be unique. However, we will not require the connection explicitly. We will only need to make use of the relations (3.6).
To write the action we want to use generalised tensors which are given by one derivative of the SU(2)-structure. We use the intrinsic torsion of an SU(2) connection. This is the part of the torsion which is independent of the choice of SU(2) connection, which implies that it can be written without an SU(2) connection appearing explicitly. Furthermore, it should only involve the SU(2)-structure. One can easily show that the intrinsic torsion consists of the representations listed in (3.7). We can give explicit expressions for the intrinsic torsion by making use of the generalised Lie derivative. We begin by considering the generalised Lie derivative of the structure along B^u, where we make use of the fact that ∇ is an SU(2)-connection to show that this is an element of the SU(2) torsion. It is clear that it is independent of the choice of SU(2)-connection, because the left-hand side does not make use of an SU(2)-connection; thus it is intrinsic. Similarly, one can show that the generalised Lie derivatives of the remaining structure tensors are also intrinsic. Keeping track of the representations that appear, and with the help of some algebra, one can write [29] L_{B^u} B^u_ab = κ^2 T_ab + κ T^u B^u_ab . (3.10) The generalised tensors T_ab, T^u, T_a, R^uv_ab, R^uvw, S_a, S^u_a, U^u, P_a and P, which for later convenience we have defined to have weight −1/5, correspond to the irreducible representations of the intrinsic torsion. This implies that they fall into irreducible representations of SU(2)_S × SU(2)_R, e.g. T^u ∈ (1, 3), T_ab ∈ (3, 1), T_a ∈ (2, 2).
These are exactly the representations of the intrinsic torsion (3.7). One can show explicitly that any other generalised tensor involving one derivative and constructed from the SU(2)-structure is given by a linear combination of the intrinsic torsion above. For example, one can show that the symmetric part of L_{B^u} B^v_ab is fully determined by its trace, and thus by the intrinsic torsion T_ab, T^u and T_a.
Reformulating the action
One can reformulate the exceptional field theory action in terms of the SU(2)-structure. For example, the kinetic terms of the generalised metric are given in [2] in terms of D_µ M_ab, where D_µ = ∂_µ − L_{A_µ} is the SL(5)-covariant derivative with respect to the external seven dimensions. This term can be written in terms of the SU(2)-structure as in (3.16). The kinetic terms for the field strengths can similarly be rewritten by replacing the generalised metric with the SU(2)-structure. The potential term in the action (2.10) can also be rewritten in terms of the SU(2)-structure. By writing the SU(2)-structure in terms of spinor bilinears, as detailed in [18], one can show that R is given in terms of the intrinsic torsion, up to terms consisting only of the doublets of the intrinsic torsion, which we will not need in the following.
Half-maximal consistent truncations
So far we have discussed how to describe in exceptional field theory a general 10- or 11-dimensional supergravity background which admits half the full number of spinors. We now want to describe a truncation of the theory on such a background in order to obtain a half-maximal gauged SUGRA, and discuss the requirements for consistency. Here, a consistent truncation is one where any solution of the lower-dimensional gauged SUGRA is also a solution of the original higher-dimensional SUGRA.
A key feature in obtaining consistency will be to remove any doublets of SU(2)_S from the truncation. Any such mode would correspond to additional spinors on the background, i.e. the background would admit more than half-maximal supersymmetry. In terms of supergravity fields, removing these modes is equivalent to projecting out the massive gravitino multiplets associated with the broken supersymmetries.
Truncation Ansatz
We make an Ansatz for our truncation by expanding the SU(2)-structure in terms of a set of objects which depend only on the internal coordinates Y^ab and are related to the modes that we keep in the truncation. The coefficients of these objects are scalar functions of the external seven coordinates x^µ and become the scalars of the half-maximal gauged SUGRA. In order to obtain a half-maximal SUGRA we expand the SU(2)-structure in such a way that there are no doublets of SU(2)_S; this defines the Ansatz (4.1). We also make a warped Ansatz for the external seven-dimensional metric. Here the angled brackets denote the truncated objects, ρ(Y) is a scalar density, and ω_M^ab(Y), n^a(Y) and n̄_a(Y) satisfy n^a n̄_a = 1, ω_M^ab n̄_a = 0, and M = 1, . . . , n + 3, with n in principle arbitrary and η_MN of signature (3, n). These objects are the half-maximal analogue of the twist matrices in the generalised Scherk-Schwarz procedure [7-13, 30, 31]. As we will see, n corresponds to the number of vector multiplets of the half-maximal gauged SUGRA that we obtain. Throughout this paper we will use η_MN to raise and lower the M, N indices. The Ansatz imposes six constraints on the 3n + 9 scalars b^u_M. Additionally, we will identify any scalars related by the action of SU(2)_R; this removes another three degrees of freedom. The remaining 3n + 1 scalars b^u_M and d parameterise the coset space R^+ × SO(3, n) / (SO(3) × SO(n)). The coset structure can be made more explicit by writing the scalars in terms of a coset representative; the Ansatz for the remaining fields makes use of the same modes ρ, ω_M^ab, n^a and n̄_a, as determined by their SL(5) structure.
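The counting of 3n + 1 scalars indeed matches the dimension of this coset; the following short check uses only the standard dimension formula for orthogonal groups:

```latex
% Scalars: 3(n+3) components of b^u_M, minus 6 algebraic constraints,
% minus 3 SU(2)_R identifications, plus the single scalar d:
3(n+3) - 6 - 3 + 1 = 3n + 1 .
% Coset dimension:
\dim\frac{SO(3,n)}{SO(3)\times SO(n)}
  = \frac{(n+3)(n+2)}{2} - 3 - \frac{n(n-1)}{2} = 3n ,
% and the extra R^+ factor contributes one more, giving 3n + 1 in total.
```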
Consistency conditions and embedding tensor
Our discussion so far has been purely algebraic. We now need to impose a set of differential constraints in order to obtain a consistent truncation, and we do this via the intrinsic torsion. Firstly, we must make sure that derivatives of the SU(2)-structure do not source doublets of SU(2)_S, since we have removed these from the truncation Ansatz. This means that the doublets of SU(2)_S in the intrinsic torsion must vanish.
Secondly, we need to ensure that the finite set of modes that we do keep, i.e. ρ, ω_M^ab, n̄_a and n^a, do not source other modes. Thus the intrinsic torsion should close on this set of modes. Together these two requirements imply that the intrinsic torsion can be written in terms of the objects f_MNP, f_M, ξ_M and θ, as in (4.9). By the structure of the generalised Lie derivative, f_MNP = f_[MNP] is totally antisymmetric, and the right-hand side of (4.9) is the most general one allowed subject to the conditions discussed above. The objects f_MNP, f_M, ξ_M and θ are in exactly the right representations to form the embedding tensor of half-maximal gauged SUGRA [32,33] coupled to n vector multiplets, that is, they satisfy its linear constraint. Indeed, we will see that this is the correct interpretation, and thus the embedding tensor can be thought of as the intrinsic torsion of the SU(2)-structure background.
The final consistency condition we need to impose is that f_MNP, f_M, ξ_M and θ are constant. This final condition is completely analogous to the case of maximally supersymmetric consistent truncations. It is important to highlight that the construction presented here naturally leads to the full embedding tensor, including the deformation parameter θ.
Truncated intrinsic torsion and scalar potential
Before evaluating the scalar potential with the truncation Ansatz, let us first compute the intrinsic torsion. With the truncation Ansatz (4.1) and the differential constraints (4.9) one finds that the intrinsic torsion is given by the embedding tensor dressed with the scalar fields, with P^±_M^N denoting left- and right-moving projectors. In the language of half-maximal gauged SUGRA one could say that the intrinsic torsion becomes the T-tensor, the "flattened" version of the embedding tensor.
We can now compute the scalar potential of the truncated theory. For this we will take the trombone tensor to vanish, i.e. ξ_M = 0; otherwise one does not obtain a consistent action principle and the gauged SUGRA would only be defined at the level of the equations of motion. We find precisely the scalar potential of half-maximal gauged SUGRA, in particular with the singlet deformation parameter θ and the term f_MNP f^MNP, which vanishes by the section condition but appears here automatically with the right relative coefficient.
We now see why it was crucial that the embedding tensor components f_MNP, f_M, ξ_M and θ are constant. Just as in the maximal case, see e.g. [12], this means that the Y^ab-dependence in the action factorises, and thus any solution of the lower-dimensional half-maximal gauged SUGRA can be uplifted to a solution of the full exceptional field theory, and thus, subject to solving the section condition, to 10- or 11-dimensional SUGRA. Another nice feature of the formulation given here is that if ρ, ω_M^ab, n^a and n̄_a satisfy the section condition, then the quadratic constraint of the half-maximal gauged SUGRA is automatically fulfilled. However, there are also solutions of the quadratic constraint which violate the section condition, again analogously to the maximal case, see for example [9,14].
From exceptional field theory to heterotic double field theory
The construction presented above can also be used to relate exceptional field theory to heterotic double field theory. This relation is reminiscent of the M-theory / heterotic duality [19].
Ansatz and consistency condition
We begin by considering exceptional field theory with an internal space admitting an SU(2)-structure. We now expand this SU(2)-structure in the same way as in our truncation Ansatz (4.1), but we allow the "scalar fields" b^u_M(x,Y) and d(x,Y) to depend both on the external seven-dimensional coordinates x^µ and on the ten Y^ab, although we will soon restrict this dependence in a controlled manner. The fields b^u_M(x,Y) become the left-moving frame fields of the heterotic double field theory and d(x,Y) becomes the generalised dilaton of the theory. One proceeds similarly for the gauge fields A_µ^ab, etc. We will call the theory thus obtained the "gauged" theory, to differentiate it from the exceptional field theory we started off with. The above Ansatz means that we do not obtain a truncated theory in seven dimensions, but still have a higher-dimensional theory. This procedure of making a truncation-like Ansatz while allowing the scalar fields to still depend on (some of) the Y^ab has recently been shown [34] to reproduce the massive IIA theory [35] from exceptional field theory. We will show that the procedure here instead gives rise to the heterotic double field theory [36][37][38], or heterotic supergravity once the section condition is solved.
We begin by imposing on ρ, ω_M^ab, n̄_a and n^a the same differential constraints (4.9) as for consistent truncations. In order to make contact with the heterotic double field theory of [38] we will take f_M = ξ_M = θ = 0, although it is easy to consider a heterotic double field theory with some of these additional gaugings turned on. We will see that f_MNP determines the gauge group of the heterotic double field theory. Note that this implies that the gauge group is necessarily a subgroup of O(3, n).
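To see why the gauge group embeds in O(3, n), note that constant, totally antisymmetric f_MNP with indices raised and lowered by η_MN are structure constants compatible with an invariant metric of signature (3, n); the short sketch below spells this out in standard half-maximal conventions (the generator notation X_M is an assumption here, not taken from the text):

```latex
% Gauge algebra generated by X_M with structure constants f_{MN}{}^P = f_{MNQ}\,\eta^{QP}:
[X_M, X_N] = f_{MN}{}^{P} X_P ,
\qquad
f_{[MN}{}^{Q} f_{P]Q}{}^{R} = 0 \quad \text{(Jacobi identity / quadratic constraint)} .
% Total antisymmetry of f_{MNP} makes eta_{MN} an ad-invariant metric of the
% gauge algebra, so the gauge group lies inside O(3,n), the isometry group of eta.
```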
We must also ensure that derivatives of the scalar fields b^u_M(x,Y) and d(x,Y) do not source doublets of SU(2)_S. Additionally, just as in (4.9), we need to ensure that any excitations that they do source are captured by the truncation. We do this by imposing the conditions (5.3), and similarly for any other field of the gauged theory. In particular, this means that the fields depend only on a subset of the extended coordinates, so that the theory effectively has a six-dimensional internal space.
It is now natural to introduce twisted derivatives, which we wish to identify with the n + 3 derivatives of the heterotic double field theory. To do so, we require that they commute. However, their commutator does not vanish automatically: it vanishes if we impose the section condition as well as the condition (5.7) on any field in the gauged theory. This latter condition is also required from the perspective of the heterotic double field theory [38]. Given that we take these conditions to be fulfilled, we can now write the twisted derivatives as partial derivatives.
Local symmetries of heterotic double field theory
We can now compute the generalised Lie derivative acting on fields in the gauged theory. Consider two generalised vectors with the heterotic Ansatz (5.1), with Λ^M and V^M satisfying the conditions (5.3) and (5.7). It is now straightforward to show that the SL(5) generalised Lie derivative reduces to the heterotic generalised Lie derivative (5.10), while the SL(5) section condition becomes the heterotic double field theory section condition. We thus recover the local symmetries and the section condition of the heterotic double field theory.
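For reference, in standard gauged/heterotic double field theory conventions the generalised Lie derivative acquires an f-dependent twist; the form below is the one commonly used in the literature and is quoted here as an assumption, since the paper's equation (5.10) is not reproduced above:

```latex
% Heterotic (gauged) DFT generalised Lie derivative, indices raised/lowered with eta_{MN}:
(\mathbb{L}_{\Lambda} V)^{M}
  = \Lambda^{N}\partial_{N}V^{M}
  + \left(\partial^{M}\Lambda_{N} - \partial_{N}\Lambda^{M}\right)V^{N}
  - f^{M}{}_{NP}\,\Lambda^{N}V^{P} ,
% supplemented by the section condition \partial_M \otimes \partial^M = 0
% acting on all fields and gauge parameters.
```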
Intrinsic torsion and heterotic action
Let us now evaluate the heterotic action, beginning with the intrinsic torsion. With (5.1), (5.3) and (4.9) one finds that the only non-vanishing components of the intrinsic torsion are those built from f_MNP and the frame fields, defined exactly as in [39]. Here b^ū_M denote the ū = 1, . . . , n right-moving frame fields of the generalised metric, satisfying P^+_M^N b^ū_N = b^ū_M . (5.14) Furthermore, we have used L_Λ V^M to represent the heterotic generalised Lie derivative (5.10). With these results one finds that the scalar kinetic terms (3.16) become those of the heterotic theory, where D_µ = ∂_µ − L_{A_µ} now represents the gauge-covariant derivative of the heterotic double field theory. Furthermore, the scalar potential reduces to that of the heterotic theory. This matches the scalar kinetic terms and scalar potential of the heterotic double field theory in the frame formulation [39,40]. One can similarly obtain the full action from exceptional field theory.
Conclusions
Here we have shown how to use exceptional field theory to describe backgrounds admitting half the maximal number of spinors. Such backgrounds admit a generalised SU(2)-structure, which can be defined by a set of nowhere-vanishing generalised tensor fields that can be thought of as the generalisation of the complex and Kähler structures on K3 surfaces. We showed how to reformulate exceptional field theory in terms of the SU(2)-structure, thus making N = 2 SUSY manifest.
We then showed how one can define consistent truncations on such generalised SU(2)-structure manifolds. The truncation Ansatz is made by expanding the SU(2)-structure in terms of a set of modes which depend only on the internal space, with coefficients corresponding to seven-dimensional scalar fields. The consistency conditions are naturally encoded by the generalised Lie derivative and allow us to identify the embedding tensor of the half-maximal gauged SUGRA, including the singlet deformation parameter θ. With the truncation Ansatz, the action reproduces the action of half-maximal gauged SUGRA.
Finally, we showed how one can use the methods described here to obtain the heterotic double field theory, using an Ansatz for all fields similar to the truncation Ansatz, but allowing the "gauged" fields to still depend on a subset of the 10 coordinates Y^ab. The intrinsic torsion of the SU(2)-structure background now determines the gauge group of the heterotic theory. Furthermore, the generalised Lie derivative of exceptional field theory naturally gives rise to the heterotic double field theory Lie derivative, and the action reproduces that of the heterotic double field theory.
As we stressed in the introduction, dualities appear as ambiguities in exceptional field theory. The framework allows one to easily see when a lower-dimensional theory has two different, dual, higher-dimensional origins. With the results presented here one can also see when a lower-dimensional supergravity has a heterotic uplift in addition to a type II one. This happens, for example, in K3 compactifications of 11-dimensional supergravity, where there is also an uplift to the heterotic SUGRA on T^3 with the gauge group broken to the Cartan subgroup [19].
The construction presented here can be generalised to other dimensions [29] as well as to less SUSY, using the results of [41]. It would be interesting to apply these results to find new half-maximal consistent truncations, which can, amongst other things, be used to find new vacua of 10- and 11-dimensional SUGRA. Another interesting aspect would be to study the moduli space of half-maximal vacua, similar to [42]. | 2017-10-01T05:29:57.000Z | 2017-10-01T00:00:00.000 | {
"year": 2017,
"sha1": "39cead6e6f100fb05de293ca62091356b4618b9b",
"oa_license": "CCBYNCND",
"oa_url": "https://pos.sissa.it/292/125/pdf",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "39cead6e6f100fb05de293ca62091356b4618b9b",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
219751599 | pes2o/s2orc | v3-fos-license | High-Efficient Micro Reacting Pipe with 3D Internal Structure: Design, Flow Simulation, and Metal Additive Manufacturing
: The micro reacting pipe with 3D internal structure, which is a micromixer with the shape of a pipe, has shown great advantages regarding mass transfer and heat transfer. Since the fluid flow is mostly laminar at the micro-scale, which is unfavorable to the diffusion of reactants, it is important to understand the influence of the geometry of the microchannel on the fluid flow in order to improve the diffusion of the reactants and the mixing efficiency. On the other hand, metal additive manufacturing is a convenient method for manufacturing a micro reacting pipe in one piece without many post-processing steps. In this paper, a basis for the design of a micromixer model was provided by combining the metal additive manufacturing process constraints with computational fluid dynamics (CFD) simulation. The effects of microchannel structures on fluid flow and mixing efficiency were studied by CFD simulation, whose results showed that the internal micro-structure had a significantly positive effect on the mixing efficiency. Based on the simulation results, the splitting-collision mechanism was discussed, and several design rules were obtained. Two different materials were selected for manufacturing with the laser powder bed fusion (L-PBF) technology. After applying pressure tests to evaluate the quality of the formed parts and comparing the corrosion resistance of the two materials, one material was picked out for the industrial application. Additionally, a chemical experiment was conducted to evaluate the accuracy of the simulation. The experimental results showed that the mixing efficiency of the micro reacting pipe increased by 56.6%, and the optimal determining size of the micro reacting pipe was 0.2 mm. The study can be widely used in the design and manufacture of micromixers, improving efficiency and reaction stability in this field.
Introduction
The micro reacting pipe is a kind of micromixer with the shape of a pipe; it is a continuous-flow chemical synthesis device with critical dimensions from microns to millimeters. It has many high-quality characteristics. For example, the micromixer has great mass transfer ability and strong heat transfer ability. It is easy to monitor chemical production using this device, and there is no amplification effect. These characteristics greatly improve the mixing efficiency of micro reacting pipes, facilitate the stable control of various chemical reactions in industrial production, and shorten the R&D period of new products in the chemical and pharmaceutical fields. For instance, Luong Jim et al. realized the effective management of the extra-column effect and performed post-column backflushing with a 3D-printed two-stage micromixer [1]. Therefore, the study of the micro reacting pipe is of great significance for the development of high-efficiency, low-consumption, safe and controllable production modes, and for the realization of fine and sustainable development in the fields of medicine and the chemical industry [2,3].
The critical dimension of the micro reacting pipe is on the order of microns to millimeters, which greatly increases the influence of the wall viscous force on the fluid. Hence, the flow pattern inside the channel of the micro reacting pipe is mostly laminar. In laminar flow, the mixing of the components in the fluid depends on the natural diffusion of molecules, which is extremely slow. For example, the diffusion coefficient in liquids is only about 1 × 10^−10 to 1 × 10^−9 m²/s, which is not conducive to the improvement of production efficiency. Therefore, it is necessary to improve the diffusion process in the micro reacting pipe to improve its production efficiency and practical value [2,4,5]. One of the effective methods is to add external instruments that provide energy and produce turbulence directly. For instance, Fan Zhang's team investigated the turbulent mixing of two miscible fluids in a high-pressure (HP) coflow micromixer operated at 100 bar [6].
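To put these diffusion coefficients in perspective, a standard order-of-magnitude estimate of the diffusive mixing time across a channel of width L is t ≈ L²/D; the numbers below use the coefficients quoted above and are illustrative only:

```latex
% Diffusive mixing time t ~ L^2 / D across a channel of width L.
% For L = 1 mm and D = 1e-9 m^2/s (upper end of the quoted range):
t \;\sim\; \frac{L^2}{D} \;=\; \frac{(10^{-3}\,\mathrm{m})^2}{10^{-9}\,\mathrm{m^2/s}} \;=\; 10^{3}\,\mathrm{s},
% while for L = 100 micron the same estimate gives
t \;\sim\; \frac{(10^{-4}\,\mathrm{m})^2}{10^{-9}\,\mathrm{m^2/s}} \;=\; 10\,\mathrm{s}.
% Either way, pure diffusion is far too slow for continuous-flow production,
% which is why the channel geometry must generate convective mixing.
```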
Because traditional manufacturing methods usually act only on the outer surfaces of raw materials, it is inconvenient to use them to fabricate internal structures. For example, square-waveform-shaped or "Z"-shaped micromixers [2] need to be made up of two metal plates, each of which is machined on its outer surface, and then bolted or welded together. Another example is Jeon WJ, who manufactured micromixers using molds [7]. There are some other designs using different manufacturing methods, for instance, using lithography, micro-molding, and bonding techniques to build a polydimethylsiloxane micromixer [8], or using a standard wet etching technique to fabricate the designed channel and glass-glass chip bonding to produce the designed micromixer [9]. But these methods can only make 2D shapes. There are other methods to manufacture a 3D-shaped micromixer, such as forming a three-dimensional (3D) micromixer with propeller blades by two-photon polymerization (TPP) [10], which needs a long time to fix the printed 3D object, so the process is cumbersome. In addition, the 3D microfluidic reactor in low temperature cofired ceramic (LTCC) [11] has poor thermal conductivity, and its sintering shrinkage is difficult to control. There is an easy method, which is bending straight copper tubes into a coil [12]; however, the resulting internal structure is too simple to produce the turbulence needed to mix fluids efficiently. By contrast, additive manufacturing (AM) can be used to form micro reacting pipes with a 3D inner structure directly, without additional processes such as welding or mold making.
Several researchers have redesigned the micromixer. For example, Sumsun Naher et al. [13], Mubashshir Ahmad Ansari et al. [14], Zeyang Wu et al. [15], and Xueye Chen et al. [16] have designed different internal structures for the micro reacting pipe through simulation, which might improve the mixing efficiency of the micro reacting pipe theoretically, but their results were difficult to fabricate by traditional manufacturing methods because of the complex geometry; thus, the practicality of these results was limited. Based on AM technology, the optimized internal structure can be formed directly, giving more optimization space for the micro reacting pipe. In addition, AM technology can break through the conventional thinking of deforming and removing materials, realize the new manufacturing concept of "Net Shape Forming" [17,18], and conform to the new trend of "Green Manufacturing" [19].
The fabrication of micromixers by AM has been studied. For example, Obinna Okafor's team made small continuous-flow oscillating baffle reactors with SLA (stereolithography) technology [20], and Elisenda Fornells' team studied a multi-material 3D-printed microfluidic reactor with integrated heating [21]. The evaluation results of these studies showed that the micro reacting pipe could be manufactured by AM technology with a satisfactory mixing performance, which reflects the potential of AM in the field of micro reacting pipe manufacturing.
The micromixers mentioned above are all made of non-metallic materials. But there is also research based on metal processing, such as a 316L stainless steel micromixer made by micro powder injection molding [22]. Jie Liu et al. proposed using the L-PBF method to fabricate porous metals as catalyst supports to improve the hydrogen production performance of micromixers [23]. But this method also requires several parts to be assembled to make a complete micromixer, so the process is relatively complex. Kathryn L. et al. numerically investigated optimized wavy microchannels which were manufactured using L-PBF [24]. Their microchannels had a relatively gentle, rolling inner structure, but such a structure cannot generate as much turbulence as possible.
In this paper, based on L-PBF technology, a type of internal structure for the micro reacting pipe was designed and fabricated, and its optimal critical dimension was discussed. In addition, the properties of the micro reacting pipe in this paper were compared with those of the former ordinary micro reacting pipe. To investigate the content above, both CFD simulation and chemical experiments were employed. To study the inner workings of the micromixer, Harrson S. et al. discussed a computational methodology for the development of microdevices and micromixers with ANSYS CFX [25]. At the same time, the chemical experiment was based on the Villermaux-Dushman parallel competing reaction system, which used the concentration of I in the outflowed solution as a probe to evaluate mixing efficiency. On the other hand, Jacob C. et al. researched building direction effects on microchannel tolerance and surface roughness through X-ray computed tomography (CT scan) [26]. Therefore, this paper will not focus on dimensional deviation and surface roughness.
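For the CFD side, mixing efficiency is commonly quantified from the spread of concentration values sampled over an outlet cross-section, for example via a mixing index of the form M = 1 − σ/σ_max. The exact metric used in this paper's post-processing is not specified in the text above, so the following Python sketch illustrates that common choice, not the authors' implementation, and all sample data are hypothetical:

```python
import math

def mixing_index(samples, c_mean_ideal=0.5):
    """Mixing index M = 1 - sigma/sigma_max over cross-section mass fractions.

    samples: mass-fraction values (0..1) sampled on an outlet cross-section.
    c_mean_ideal: mass fraction of a perfectly mixed stream (0.5 for two
    equal feeds). sigma_max = sqrt(c*(1-c)) is the standard deviation of a
    fully segregated state. M = 1 means perfectly mixed, M = 0 segregated.
    """
    n = len(samples)
    sigma = math.sqrt(sum((c - c_mean_ideal) ** 2 for c in samples) / n)
    sigma_max = math.sqrt(c_mean_ideal * (1.0 - c_mean_ideal))
    return 1.0 - sigma / sigma_max

# Hypothetical sampled mass fractions at the outlets of two designs:
plain_pipe = [0.05, 0.10, 0.85, 0.95, 0.15, 0.90]
structured_pipe = [0.45, 0.52, 0.48, 0.55, 0.50, 0.47]
print(f"plain:      M = {mixing_index(plain_pipe):.2f}")
print(f"structured: M = {mixing_index(structured_pipe):.2f}")
```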
Equipment and Materials
To conduct the chemical experiments, a 2PB-10005 advection pump was used to inject the reactant solutions, and an Agilent Cary 60 UV spectrophotometer was used to determine the absorbance of the outflowing solution. To fabricate the micro reacting pipe, two metal materials were compared and the better one was selected for forming with the L-PBF machine Dimetal-280, manufactured by Laseradd Technology (Guangzhou, China) Co., Ltd. The main technical parameters of the L-PBF machine are as follows: theoretical spot diameter ≥ 40 μm, layer thickness 0.02-0.1 mm, laser power (10%-100%) × 500 W, build rate 10-20 cm³/h, and forming accuracy ±0.02 mm. The principle of the L-PBF process is illustrated in Figure 1a. With the help of a high-energy laser, metal parts with extremely complicated internal structures can be fabricated directly from powder material without post-processing treatment [27].
The two metal materials were IN718 alloy and 316L stainless steel, with average particle sizes of 25 and 33 μm, respectively. Their compositions are shown in Table 1, and micromorphology images of the two powders are shown in Figure 1b,c.
Research Procedure
The research procedure of this study is shown in Figure 2. First, the internal structure of the micro reacting pipe was designed by combining the principles for improving mixing efficiency with the constraints of the L-PBF process. Second, the optimal critical dimension was determined, and the designed micro reacting pipe was compared with an ordinary one, by CFD simulation. Third, corrosion resistance and compression resistance tests were performed to select a suitable material and forming parameters. Finally, several micro reacting pipes were fabricated and chemical experiments were conducted to verify the simulation settings.
Design of Micro Reacting Pipe
As shown in Figure 3, the internal structure designed in this study is composed of obstacles shaped as quadrilateral pyramid frustums. The outer circle diameter is 10 mm. Fluid is split into several strands by the front edges and the gaps between obstacles as it passes through them; the split streams then converge along the slopes, so the fluid collides and forms swirls behind the obstacles. This process breaks down the laminar pattern, turns the flow into turbulence, and promotes convective diffusion, thereby improving the mixing efficiency. For manufacturing, given the process constraints of L-PBF, the axis of the micro reacting pipe should be vertical to the substrate. At the same time, the angle between any overhanging structure and the channel wall should be no smaller than 45° to avoid building support inside the channel, which could not be removed after forming [28]. This gives the first design principle: the inclination angle of the internal structure must be greater than or equal to 45°. The flow field model used in the simulation is shown in Figure 4a. The entrance part was modeled from actual measurements of the tee coupling joint used in the chemical experiments. The mixing area consisted of 10 internal structure units with a total length of 40 mm. The characteristic cross-section of the flow field is shown in Figure 4b. The size denoted x in the diagram is called the determining size and is written JXxx (indicating a gap length of xx × 0.1 mm). By geometrical analysis, the area A, the circumference C, and the critical dimension of the cross-section (the hydraulic diameter, d = 4A/C) can all be expressed in terms of the determining size. In this study, θ was 45° and a was 2 mm. The simulation analysis and experimental verification were conducted with determining sizes of 0.2, 0.4, 0.6, 0.8, and 1.0 mm, corresponding to critical dimensions of 0.85, 0.89, 0.95, 1.03, and 1.13 mm.
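As a hedged illustration (the paper's explicit expressions for A(x) and C(x) were lost in extraction and are not reproduced here), the sketch below records the hydraulic-diameter relation and the stated correspondence between determining size and critical dimension as data; the function and constant names are ours.

```python
def hydraulic_diameter(area_mm2, circumference_mm):
    """Critical dimension of a cross-section, d = 4A/C (mm)."""
    return 4.0 * area_mm2 / circumference_mm

# Stated correspondence for theta = 45 deg and a = 2 mm:
# determining size x (mm) -> critical dimension d (mm)
CRITICAL_DIMENSION_MM = {0.2: 0.85, 0.4: 0.89, 0.6: 0.95, 0.8: 1.03, 1.0: 1.13}
```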
Parameter Settings and Characterization Method in CFD
In this paper, CFD simulation was conducted with the commercial software ANSYS 17.0. First, the mesh of the model was constructed; the final number of mesh elements ranged from 6.5 × 10⁵ to 8.0 × 10⁵. The standard k-ε model was used to describe the flow [29]. Water and ethanol were used as the simulation media, and the species transport equation was used to simulate the diffusion process. The density of the mixture was defined by the volume-weighted mixing law [30], the viscosity of the mixture was set to 0.001 kg/(m·s), and the mass diffusion coefficient was 1.2 × 10⁻⁹ m²/s. A velocity inlet and a pressure outlet were used as boundary conditions: the volume flow rate at each of the two inlets was 100 mL/min, the temperature was 293 K, and the gauge pressure was 20,000 Pa at the inlets and 0 Pa at the outlet. The solution scheme was SIMPLE, pressure was discretized with the PRESTO! scheme, and the other terms were discretized with the second-order upwind method.
In the field of characterizing the mixing efficiency in CFD, Ansari et al. [31] used the mixing index to analyze the mixing efficiency in simulations. The mixing index M is defined as

M = 1 − √(σ²/σ²max), (5)

σ² = (1/N) Σi (ci − c̄)². (6)

In the equations above, σ² is the variance of the tracer's mass fraction over the cross-section, σ²max is the maximum value of σ² (that of a completely segregated mixture), N denotes the number of sample points inside the cross-section, ci is the mass fraction of the tracer at sample point i, and c̄ is the average of the ci. In the case of complete mixing, M = 1, while M = 0 means no mixing has occurred [31]. In this study, the mixing index was evaluated at the outlet.
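A minimal Python sketch of this mixing-index calculation follows. It assumes σ²max = c̄(1 − c̄), the variance of a fully segregated binary mixture; the paper's exact definition of σ²max was lost in extraction, and all names here are ours.

```python
import numpy as np

def mixing_index(c: np.ndarray) -> float:
    """Mixing index M = 1 - sqrt(sigma^2 / sigma_max^2), from the mass
    fractions c of the tracer at the N sample points of a cross-section.
    sigma_max^2 is taken as cbar*(1 - cbar) (an assumption)."""
    cbar = c.mean()
    sigma2 = np.mean((c - cbar) ** 2)
    sigma2_max = cbar * (1.0 - cbar)
    return 1.0 - np.sqrt(sigma2 / sigma2_max)

# Example: nearly uniform samples give M close to 1.
c = 0.5 + 0.01 * np.random.default_rng(0).standard_normal(1000)
print(mixing_index(c))  # ~0.98
```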
Experimental Method to Characterize Mixing Efficiency
The chemical experiments used the Villermaux-Dushman parallel competing reaction system, which uses the concentration of I3− in the outflowing solution as a probe of mixing efficiency. The procedure is as follows: dissolve H3BO3, NaOH, KI, and KIO3 powders in deionized water separately, then pour the solutions into a beaker successively and stir to obtain solution a. Next, dilute 98% H2SO4 to obtain solution b with the H+ concentration required by the experiment. The following reactions occur after the two solutions mix [28]:

H2BO3− + H+ → H3BO3, (7)

5I− + IO3− + 6H+ → 3I2 + 3H2O, (8)

I2 + I− ⇌ I3−. (9)

Reaction (7) is always faster than reaction (8). Thus, reaction (8) proceeds only where there is a local excess of H+. Since H+ is supplied in deficit relative to H2BO3−, reaction (8) occurs only when the mixing is insufficient, which causes local excesses of H+. Reaction (9) follows reaction (8), so its product, I3−, can serve as the probe of mixing efficiency after accounting for the equilibrium constant K of reaction (9) [33].
According to the Beer-Lambert law [34], the relationship between the absorbance of the solution and the concentration of I3− is

A = ε L [I3−],

where A denotes the absorbance of the solution, determined with the UV spectrophotometer, ε is the molar absorption coefficient, which depends only on temperature, and L is the optical path length of the cuvette used in the experiment.
To characterize the mixing efficiency, the segregation index Xs is defined as follows [35]:

Xs = Y/YST, Y = 2(Va + Vb)([I2] + [I3−])/(Vb [H+]0), YST = 6[IO3−]0/(6[IO3−]0 + [H2BO3−]0),

where Y is the fraction of injected acid consumed by reaction (8) and YST its value in the totally segregated case, Va and Vb denote the volumes of solutions a and b, [ ] denotes an equilibrium concentration, and [ ]0 denotes an initial concentration.
Since Xs is negatively correlated with the mixing efficiency, the micromixedness ratio α, which is positively correlated with the mixing efficiency, was proposed. The relationship between Xs and α is

α = (1 − Xs)/Xs.
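The sketch below strings these relations together in Python, from measured absorbance to the micromixedness ratio. The iodine-triiodide equilibrium bookkeeping needed to obtain Y from [I3−] is omitted, and all function names are ours.

```python
def triiodide_concentration(absorbance, epsilon, path_length):
    """[I3-] from the Beer-Lambert law, A = epsilon * L * [I3-]."""
    return absorbance / (epsilon * path_length)

def segregation_index(Y, Y_ST):
    """Xs = Y / Y_ST: Y is the fraction of injected acid consumed by the
    Dushman reaction (8), computed from the measured [I3-] and the
    equilibrium of reaction (9); Y_ST is its totally segregated value."""
    return Y / Y_ST

def micromixedness_ratio(Xs):
    """alpha = (1 - Xs) / Xs; larger alpha means better micromixing."""
    return (1.0 - Xs) / Xs

# Example: Xs = 0.01 corresponds to alpha = 99; perfect mixing drives
# Xs toward 0 and alpha toward infinity.
print(micromixedness_ratio(0.01))
```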
Determination of H+ Concentration
The I−/IO3− system is very sensitive to the H+ concentration, because the reaction system is essentially a competition for H+. Too much or too little H+ gives a misleading measure of mixing efficiency, so an appropriate H+ concentration must be selected first. Fournier et al. [35] found that the initial H+ concentration should range from 0.03 to 0.08 mol/L, with the most appropriate value within this range chosen according to the measuring range of the UV spectrophotometer and the results of the experiment.
Metal AM Printing and Compression Resistance Test of Micro Reacting Pipe
In this study, two materials, IN718 and 316L stainless steel, were considered for forming the micro reacting pipe: 316L stainless steel is relatively cheap and widely used in L-PBF, whereas IN718 is expensive but has better corrosion resistance. The optimized L-PBF parameters for IN718 were: inter-layer spiral scanning with spiral angle 30°, scanning distance 0.08 mm, laser power 200 W, and scan speed 1200 mm/s; for 316L stainless steel: inter-layer spiral scanning with spiral angle 30°, scanning distance 0.08 mm, laser power 140 W, and scan speed 800 mm/s. The formed parts made of IN718 and 316L stainless steel are shown in Figure 5. The printed parts were cut from the substrate by wire cutting and polished. The inner and outer diameters of the molded parts were measured three times each with electronic vernier calipers; the maximum dimensional deviation was 0.2 mm. To facilitate the physical mixing-efficiency experiment, the shape of the micro reacting pipe was adjusted: a groove was added to the outer wall according to the determining size, and the experimental sample was formed from IN718 with the forming parameters obtained in Section 2.3. After forming, a tee coupling was welded on to introduce the two reactant solutions; the final sample is shown in Figure 6. Because the channel of the micro reacting pipe is small, the pressure drop along it is large, and in industrial production the pipe is operated at large flow rates to increase output, which places demands on its compression resistance. In this study, the L-PBF parameters were optimized through a pressurization test with a dynamically and gradually increasing test pressure. During the test, a horizontal flow pump injected water into the formed micro reacting pipe while the pump's output flow rate was continuously increased. If no leakage appears on the external wall as the pressure rises to the 5 MPa limit of industrial application, the compression resistance of the formed parts meets the industrial requirements.
Comparison of Corrosion Resistance between Two Materials
Strongly corrosive substances such as hydrochloric acid and phosphorus tribromide are often used in chemical production, so it is necessary to compare the corrosion resistance of the candidate materials by static corrosion testing. The test pieces were placed in 3% hydrochloric acid solution and in 99% phosphorus tribromide for 24 h, respectively. The lower the mass change, the better the corrosion resistance. The test results are shown in Tables 2 and 3; the data show that the corrosion resistance of IN718 is better than that of 316L stainless steel and that it meets the industrial requirements.
[Tables 2 and 3 list the mass of each test piece before and after the corrosion experiment.]
In the mixing-efficiency experiment, 11.2406 g of boric acid powder, 3.6357 g of sodium hydroxide flakes, 1.973 g of potassium iodide powder, and 0.4985 g of potassium iodate powder were each dissolved in sufficient deionized water. The sodium hydroxide solution was then slowly poured into the boric acid solution and stirred to make a buffer, after which the potassium iodide and potassium iodate solutions were poured into the buffer in turn, giving solution a. Then, 98% concentrated sulfuric acid was diluted to 0.025 mol/L and 0.03 mol/L to obtain solution b. The initial concentrations of each component in solutions a and b are shown in Table 4.
Table 4. Initial concentrations (mol/L) of the chemical constituents in the experiment.
Solutions a and b were injected into the two inlets of the micro reacting pipe simultaneously at a volume flow rate of 100 mL/min each by the advection pump; the absorbance of the effluent was measured and converted into the micromixedness ratio. The experimental results are shown in Figure 7. The variation of the micromixedness ratio with the determining size was consistent at H+ concentrations of 0.05 and 0.06 mol/L (corresponding to the 0.025 and 0.03 mol/L sulfuric acid solutions), and it accorded with theory in that the micromixedness ratio at 0.05 mol/L is higher than that at 0.06 mol/L, supporting the validity of the experimental results.
Selection of Model Parameters and the Three Design Principles
The fundamental reason for the high mass-transfer performance of the micro reacting pipe is the small critical dimension of its channel. Reducing the critical dimension shortens the diffusion distance of the components and enhances the mixing efficiency dominated by free diffusion. Formula (15) shows that the micromixing time t is directly proportional to the square of the critical dimension d and inversely proportional to the component diffusion coefficient D, so for the microscopic mixing process the mixing efficiency decreases monotonically as the critical dimension increases [36]. This gives the second design principle: the characteristic size of the channel should be reduced as much as possible to enhance the mixing efficiency dominated by free diffusion:

t = d²/D. (15)

Figure 8 shows the relationship between the intensity of segregation Is and the mixing time t in different flow states [2]. The intensity of segregation decreases more rapidly in turbulent flow than in laminar flow, because turbulence promotes convective motion between fluid micro-clusters and shortens the distance between them, thus enhancing the mixing efficiency dominated by convective diffusion. This gives the third design principle: the fluid should be put into a turbulent state to enhance the mixing efficiency dominated by convective diffusion. Combining the application requirements of the micro reacting pipe with the process constraints of metal additive manufacturing, the three design principles are:
1. The tilt angle of the internal structure must be greater than or equal to 45° to avoid warping during laser melting;
2. The critical dimension of the channel should be reduced as much as possible to enhance the mixing efficiency dominated by free diffusion;
3. The fluid in the channel should be put into a turbulent state to enhance the mixing efficiency dominated by convective diffusion.
The critical Reynolds number Re* depends on the wall roughness and channel shape, and Re is defined by the following equation [2]:

Re = ρvd/μ, (16)

where ρ, v, d, and μ denote the density, average velocity, critical dimension (hydraulic diameter), and dynamic viscosity on a cross-section, respectively.
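A short Python sketch of Equations (15) and (16) follows. The mean velocity of 1 m/s used in the example is an illustrative value, not taken from the paper.

```python
def micromixing_time(d, D):
    """Diffusive micromixing time, t = d**2 / D (Equation (15)); SI units."""
    return d ** 2 / D

def reynolds_number(rho, v, d, mu):
    """Re = rho * v * d / mu (Equation (16)); SI units."""
    return rho * v * d / mu

# Rough numbers for water in a channel of critical dimension 0.85 mm:
print(micromixing_time(0.85e-3, 1.2e-9))           # ~6e2 s by diffusion alone
print(reynolds_number(998.0, 1.0, 0.85e-3, 1e-3))  # ~8.5e2
```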
When the actual Reynolds number (Re) on a cross-section exceeds the critical Reynolds number (Re *), the flow will begin to become unstable and gradually transform to complete turbulence as the difference between the two increases.
As Equation (16) shows, when the critical dimension d of the micro reacting pipe is reduced to a low level, the actual Reynolds number Re becomes so small that the critical Reynolds number Re* cannot be exceeded, or the gap between Re and Re* cannot be enlarged, which makes it difficult for the flow to transition from laminar flow to fully developed turbulence. This is also why the flow state in ordinary micro reacting pipes is usually laminar.
Re can be increased by enlarging the critical dimension; however, by the second design principle, simply increasing the critical dimension would reduce the mixing efficiency. Therefore, fully developed turbulence can only be created by increasing the cross-section's average velocity or by decreasing Re*. Specific ways to increase the average velocity involve supplying more external energy, such as increasing the pump's output pressure and flow rate, installing an ultrasonic oscillator on the micro reacting pipe, using multiple pumps for multi-stage energy supply, or placing the micro reacting pipe in an electric or magnetic field so that charged particles in the fluid are accelerated or rotated by the field forces; these approaches characterize the active micro reacting pipe [37].
However, active micro reacting pipes greatly increase energy consumption and overall device complexity because of the additional external energy sources. Fortunately, former studies have shown that adding obstacles inside the channel can reduce Re* [31,38]. Therefore, this study created fully developed turbulence by reducing Re*: obstacles were designed in the channel of the micro reacting pipe to generate turbulence and improve the mixing efficiency.
Simulation Results of Micro Reacting Pipe
The simulation data were substituted into Equations (5) and (6) to obtain the mixing index for each determining size, as shown in Figure 9. Based on the above theory and experimental results, concrete measures for the design of the micro reacting pipe were put forward, including:
1. Reducing the micromixing distance and improving the mixing efficiency by means of sudden contraction and sudden enlargement of the channel cross-section;
2. Separating the fluid by means of structures with sharp edges;
3. Guiding the fluid confluence by inclined planes set at a certain angle between the tube wall and the obstacle surfaces.
With these measures, the fluid collides in the radial direction, greatly enhancing the radial convective diffusion of the components.
It can be seen from Figure 9a that the mixing index first decreases and then increases with the determining size, reaching an extreme value at a determining size of 0.8 mm before decreasing rapidly. Referring to the foregoing analysis of the design principles, this is the combined result of the second design principle (reducing the size to enhance free diffusion) and the third design principle (creating turbulence to enhance convective diffusion), as sketched in Figure 9b. According to the second design principle, as the determining size increases, the critical dimension of the channel and hence the longest distance between the fluid components also increase; the mixing contribution of free diffusion therefore decreases monotonically with the determining size. At the same time, by Equation (16), the growing critical dimension increases the actual Reynolds number in the channel, but the larger determining size also reduces the volume of the obstacles and their interference with the flow. Because of this pair of competing effects, the mixing contribution of convective diffusion exhibits an inverted-U-shaped dependence on the critical dimension.
According to Figure 9a, the best determining size is 0.2 mm. To explain the disturbance effect of the designed obstacles on the flow, the micro reacting pipe without obstacles and the one with a determining size of 0.2 mm were both simulated, as shown in Figure 10. The mixing index of the obstacle-free micro reacting pipe was calculated to be 0.316. As shown in Figure 10a,c, the flow in the unobstructed channel is laminar: the fluid moves entirely along the axial direction, transverse mass and momentum exchange between fluid micro-clusters is lacking, and the diffusion of components depends on the random motion of molecules. In the channel with obstacles, the fluid is squeezed at the square frame, which reduces the distance between the two fluid streams, shortens the maximum displacement of the two components, and enhances the free diffusion of the components. At the circled location, the velocity vectors wind in layered circles, indicating that a swirl forms there and that the flow has become turbulent. Figure 10d,e shows two velocity vector diagrams of the fluid after passing through one set of orthogonal obstacles. In Figure 10d, four strands of fluid (upper left, upper right, lower left, and lower right) are guided by the obstacles toward the circled region and converge into two strands. The converged streams are divided into four strands again at the next set of obstacles, orthogonal to the former, then move toward the circled frame in Figure 10e and converge into two strands. In this periodic motion the fluid is repeatedly divided and converged, and fluid micro-clusters from all directions collide violently at the confluence, continuously deforming and fragmenting into smaller micro-clusters. This favors the uniform distribution of the components and enhances the mixing efficiency. Figure 11 compares the axial distributions of component mass fraction and of turbulent kinetic energy between the micro reacting pipe without obstacles and the one with the obstacles designed in this paper. The mass fraction distribution of ethanol (left in Figure 11) shows that, in the pipe with obstacles, the ethanol mass fraction converges to the ideal value of 0.447 earlier than in the unobstructed pipe. Therefore, the obstacles designed in this paper enhance the mixing efficiency of the micro reacting pipe.
As shown in the comparison of axial turbulent kinetic energy (right in Figure 11), the turbulent kinetic energy in the mixing region of the micro reacting pipe with obstacles is nearly an order of magnitude higher than that of the unobstructed pipe. The fluid motion is therefore more intense in the pipe with obstacles, and mass and momentum are exchanged between fluid micro-clusters more frequently, which strengthens convective diffusion and benefits the mixing efficiency.
Discussion
Comparing the micromixedness ratio (α) from the physical experiment in Figure 7 with the mixing index (M) from the simulation in Figure 9a, the predicted determining size of maximum mixing efficiency differed from the experimental one. The deviation might arise because the meshes and models were not sufficiently refined and because the dimensional deviations and roughness of the actual internal structure of the formed parts were not considered in the simulation. Although both quantities are positively correlated with the mixing efficiency, they are not dimensionally uniform, so only a qualitative comparison can be made. Nevertheless, the simulation results were consistent with the experimental results: the position of the turning point and the trend of the slope between every two points were predicted accurately, so the simulation retains sufficient credibility. Figure 12 shows how the splitting-collision mechanism works [39]. Two main mixing patterns influence the mixing process: collision and swirl. In the hydrodynamic description, a continuous fluid can be divided into huge numbers of micromasses with different physical or chemical states. When two micromasses collide, the components contained inside them break through their surfaces, mixing the two components, and a bigger micromass breaks into smaller ones. The smaller micromasses have a larger relative surface area, which increases the probability of contacting outer components and enhances the molecular diffusion driven by concentration differences.
In swirls, the fluid revolves around a low-pressure point and runs along a helical line, and the fluid micromasses are torn into many thin, long slices by the normal and tangential forces arising from the pressure difference and viscosity. These slices have a larger contact area between micromasses, which yields stronger molecular diffusion and higher mixing efficiency.
As the colliding process proceeds, high-pressure zones appear at the converging zones in the central region due to the reaction force of the collisions. Meanwhile, once the fluid has flowed past the middle cross-section of the obstacles, the obstacles no longer obstruct it, so low-pressure zones form close to the back slopes. The acceleration vectors then point from the central region outward, opposing the velocity vectors; because not all acceleration vectors are collinear with the velocity vectors, torques appear, and these torques cause swirls. Thus, downstream of the middle cross-section of the obstacles, the mixing behavior consists of the development of swirls together with collisions between the boundaries of different swirls.
Many micro reacting pipes can be designed according to the above design principles, but only one kind of internal structure was discussed in this paper. As the cross-sections of other micro reacting pipes in Figure 13 suggest, there are other directions for developing the internal structure, such as obstacles with curved faces or asymmetrical structures. The internal channel structure greatly influences the mixing efficiency, and AM technology provides a wider optimization space for the design and manufacture of micro reacting pipes, so further study by researchers in various fields is worthwhile. The wall thickness can influence the heat-transfer efficiency, but it can be ignored in the simple chemical experiment conducted here. In addition, the structure designed in this paper requires the micro reacting pipe to be formed vertically to the substrate, and most metal additive manufacturing machines have a relatively small forming height; since the length-to-width ratio of industrial micro reacting pipes is very large, the resulting long molding time should also be considered an important factor in future designs.
Conclusions
Three design principles were proposed for a micro reacting pipe with a 3D inner structure formed in one piece by AM. Through CFD simulation, the micro reacting pipe was optimized under the constraints of laser powder bed fusion technology, and the influence of the internal structure on fluid flow and mixing efficiency, including the splitting-collision mechanism, was discussed. Corrosion resistance and compression resistance tests identified a metal material and forming parameters suitable for industrial application. Finally, the simulation settings were verified experimentally using the Villermaux-Dushman parallel competing reaction system. The results showed that the mixing efficiency of the designed micro reacting pipe increased by 56.6%, the optimal determining size was 0.2 mm, and the simulation and experimental results were consistent with each other.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-06-04T09:05:02.320Z | 2020-05-29T00:00:00.000 | {
"year": 2020,
"sha1": "35b5a64cf3d35784abf057dceef5c4f2e37f34b8",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/10/11/3779/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "7dd824dfbb8aac4e0b46daefa6ab79c3e637f7cd",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
248332078 | pes2o/s2orc | v3-fos-license | X-ray computerized tomography observation of Lycopodium paste incorporating memory of shaking
In a uniform layer consisting of a mixture of granular material and liquid, it is known that desiccation cracks exhibit various anisotropic patterns that depend on the nature of the shaking that the layer experienced before drying. The existence of this effect implies that information regarding the direction of shaking is retained as a kind of memory in the arrangements of granular particles. In this work we make measurements in paste composed of Lycopodium powder using microfocus x-ray computerized tomography ($\mu$CT) in order to investigate the three-dimensional arrangements of particles. We find shaking-induced anisotropic arrangements of neighboring particles and density fluctuations forming interstices mainly in the lower part of the layer. We compare the observed properties of these arrangements with numerical results obtained in the study of a model of non-Brownian particles under shear deformation. In the experimental system, we also observe crack tips in the $\mu$CT images and confirm that these cracks grow along interstices in the direction perpendicular to the initial shaking.
I. INTRODUCTION
Mixtures of fine solid particles and liquid (for example, water) behave as viscous fluids with plasticity. Such mixtures are generally referred to as granular pastes. It has been found that cracks often appear in granular pastes as they transition to semisolid states during desiccation [1,2]. When a uniform layer of granular paste dries evenly on a hard flat surface, cracks usually form an isotropic cellular pattern. However, previous studies have revealed that the crack patterns can be made anisotropic if a mechanical or electromagnetic perturbation is applied to the layer for a short time before drying. This memory effect of paste suggests that we can readily impart anisotropic mechanical properties on pastelike materials and control the process of cracking [2][3][4][5][6][7][8]. However, our understanding of such behavior is yet incomplete, mainly because we do not know how external perturbations generate structures incorporated as memory in paste and how these structures influence cracking processes.
The memory of shaking is the most common type of memory effect, being observed in many types of paste. In situations where paste exhibits this effect, after horizontal unidirectional oscillation is applied to a uniform layer of paste, parallel cracks first appear during desiccation in the direction perpendicular to the initial shaking. This effect was first found in claylike paste consisting of calcium carbonate [3] and then also observed in wet granular materials, such as mixtures of starch powder and water [9]. As these experimental studies revealed, parallel cracks form only when the shaking is sufficiently strong to exert shear stresses larger than the yield stress on the paste. Further, it was found that many types of crack patterns, including rings and spirals, can be produced by changing the manner of shaking [3,5,8]. Recent experiments have also indicated that such memory can be rewritten by adding only a few oscillations in a direction that differs from that of the initial shaking [10]. Nonlinear elastoplastic theories provide predictions that are consistent with these results. The phenomenological models proposed by Otsuki and Ooshida predict that large shear deformation yields anisotropic residual stresses, and this prediction has been confirmed by measurements of stresses in calcium carbonate paste [11-15].
* http://www.noneq.phys.nara-wu.ac.jp/~kitsune; kitsune@kirin.phys.nara-wu.ac.jp
What anisotropic structures are formed microscopically by shaking and how do they influence the fracture properties of paste? In a previous study [9] we observed the arrangements of particles in several samples using x-ray computerized tomography (µCT) and found that shaking induces short-range anisotropy in which the number of neighboring particles increases in the direction perpendicular to the direction of shaking in the shear plane. However, we were not able to investigate the details of the anisotropy and their relation to the resultant cracks. In this work we use a µCT apparatus at the large synchrotron radiation facility SPring-8 (RIKEN, Japan). With this apparatus, we are able to realize a field of view with a diameter of 3.56 mm, which is approximately four times larger than that used in the previous study, while maintaining the same resolution. We carry out measurements in a few dozen samples. As in the previous study, we use paste consisting of Lycopodium powder, i.e., spores of Lycopodium clavatum. These particles have round shapes with diameters of approximately 30 µm. These particles are the largest among the spherical particles whose paste is known to exhibit memory of shaking, and due to this large size, individual particles can be resolved in the µCT observations.
In this study, we confirm that such short-range anisotropy is induced only when the sample is shaken under the conditions that the memory effect appears and that it is pronounced in the lower part of the layer. We also find that shaking induces not only short-range anisotropy but also anisotropic fluctuations in the density of particles forming interstices. We find that this anisotropic density fluctuation is a direct cause of the memory effect. By visualizing the vicinities of growing crack tips, we determine its relation to the crack growth direction.
In the next section we explain the methods of sample preparation and the µCT measurements. In Sec. III we analyze the height dependence of the directional order parameters of the arrangement of particles. In Sec. IV we present results for the properties of interstices and their relation to the direction of crack growth. In Sec. V we discuss numerical simulations of non-Brownian particles under shear deformation. In these simulations, we find anisotropy in the arrangement of particles similar to that seen in the experiments. In Sec. VI we give a summary.
II. METHODS
We prepared a layer consisting of a mixture of Lycopodium powder and cesium chloride (CsCl) solution containing a small amount of agar and applied horizontal shaking. After drying the paste in a high-temperature environment of 45.0 °C ± 1.5 °C for a predetermined period, we covered it to stop desiccation and then allowed it to solidify through gelation of the agar at room temperature. We prepared numerous samples under various conditions in advance and cut out pieces of a layer from each sample for µCT observation at SPring-8. The main parameters characterizing our experiments are the frequency of shaking f, the initial solid volume fraction φ(0), and the solid volume fraction at the time of gelation φg.
We prepared every sample such that the area density of Lycopodium powder (Association of Powder Process Industry and Engineering, Japan) was 0.070 g/cm², and we controlled the initial solid volume fraction to take values between φ(0) = 10% and 23% by adjusting the volume of the solution. The solution contained 140 g of CsCl (Wako Pure Chemical Industries, Osaka, Japan) and 6.00 g of agar (Ina Food Industry, Japan) per liter of ion-exchanged water as radiopaque and gelation agents, respectively. (The solidifying temperature of the agar was approximately 40 °C [16].) The solid volume fraction of the paste was determined from the densities of the CsCl solution (1.10 g/cm³) and a Lycopodium particle (1.05 g/cm³) by assuming the small amount of agar to be negligible. Lycopodium powder was mixed with the solution using a hot magnetic stirrer and then stored in a sealed flask at 70 °C until it was poured into containers.
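A minimal Python sketch of this volume-fraction bookkeeping follows. The solution volume of 40 mL in the example is an illustrative value, not taken from the paper, and the function name is ours.

```python
def solid_volume_fraction(m_powder, rho_particle, v_solution):
    """phi = V_p / (V_p + V_sol), neglecting the small amount of agar.
    Masses in g, densities in g/cm^3, volumes in cm^3."""
    v_p = m_powder / rho_particle
    return v_p / (v_p + v_solution)

# Powder mass for a 98 mm x 98 mm container at 0.070 g/cm^2:
m = 0.070 * 9.8 * 9.8                        # ~6.7 g of Lycopodium powder
print(solid_volume_fraction(m, 1.05, 40.0))  # phi ~ 0.14 for 40 mL of solution
```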
The existence of the memory effect in a paste of Lycopodium powder and agar solution was confirmed by pouring paste into containers and then drying it in the high-temperature environment until crack formation, as shown in Fig. 1. We found that the morphological diagram of crack patterns obtained in this case is qualitatively the same as in the case without agar, although the plastic limit decreases by approximately 3% [9]. We also found that adding CsCl to the agar solution does not change the crack patterns significantly, as seen in the photographs in Fig. 1 [17]. In order to clarify the differences among the conditions, we mainly report the results obtained under the conditions corresponding to the open squares in Fig. 1.
The yield stress curve of Lycopodium paste is plotted in Fig. 1. The yield stresses σY were measured using a rheometer (Physica MCR301, Anton Paar, Austria). These measurements were made on paste maintained at 45 °C inside a coaxial double cylindrical vessel to which a stress of constantly increasing magnitude was applied. The value of σY was determined at the commencement of the first rotation using the least-squares method. For samples without CsCl, the measured values of σY were 0.14, 0.43, 1.5, and 4.6 Pa for φ(0) = 10.6%, 14.5%, 19.2%, and 22.9%, respectively, and σY > 200 Pa for φ(0) = 25.2%, although we found that the values decreased by amounts in the range 0%-30% at the second rotation.
The crosses in Fig. 1 indicate the strengths of shaking that generate the yield stresses on the bottom of a layer and the dotted curve indicates the yield stress curve determined from these data. Anisotropic crack patterns were observed clearly above this curve and near the plastic limit, as expected. The yield stresses of samples containing CsCl were found to be 0.8 and 1.6 Pa for φ(0) = 10.5% and 19.1%, respectively. The corresponding shaking strengths are indicated by the pluses. Adding CsCl to the agar solution did not significantly affect the rheological properties of the paste, although the reproducibility of σ Y deteriorated for small yield stresses.
We applied a horizontal oscillation of amplitude A = 15 mm to a layer of paste using a shaker (FNX-220, TGK, Tokyo, Japan). We refer to the shaking direction as the x axis and to the upward vertical direction during shaking as the z direction in this paper. The applied frequency f was varied from 0 to 80 rpm; in the case f = 0, the paste experienced no shaking after being poured. Shaking was applied for 5 min immediately after the paste was poured into a container positioned on the shaker in the high-temperature environment. The surface temperature of a layer of paste was in the range 50 °C-60 °C immediately after being poured, and then it decreased to approximately the ambient temperature within 30 s. Although we omit the results, in some samples that were covered after being poured, we did not observe the anisotropic structures reported in this paper. We believe that this resulted from the fact that, due to the insulating effect of the cover, these samples did not cool sufficiently, and this resulted in different rheological properties during shaking.
We prepared a sample for the µCT measurements by pouring paste into a square container of side length 98 mm, which was constructed from polystyrene paper attached to a circular plastic Petri dish, as depicted in Fig. 2(a). The use of polystyrene paper made it easy to remove a cut piece of a layer from the bottom of a container. We prepared many solidified samples under various conditions, i.e., various values of (f, φ(0), φ g ), during a period from approximately one week to one day before the µCT measurements. The samples were then transported to SPring-8 and measurements were carried out during a 24-h period assigned in advance by SPring-8. We performed such measurements three times several months apart.
Our measurements were performed using x-rays of 25 keV from the BL20B2 beam line of SPring-8. One pixel in the µCT images corresponds to a region of length l p ≡ 1.74 µm in a sample. The field of view is a cylindrical region with both a diameter and height of 2048l p = 3.56 mm. For the measurements, we cut out a piece of a layer from each sample, avoiding the vicinities of cracks and the boundary of the container, as depicted in Fig. 2(b), except in the cases in which we observed the crack tips, considered in Sec. IV. In order to minimize the disturbance to the samples caused by the cutting process, we constructed a jig from an aluminum rod to which a cylindrical blade (diameter 4 mm) of a biopsy punch (BPP-40F, Kai industries, Gifu, Japan) was attached in such a manner that it could be slid along a single direction, as shown in Fig. 2(c). The jig had a hexagonal base, which allowed us to fix the direction of each cut. We cut a sample vertically with the blade and gently removed the cut layer with the blade just prior to the µCT observation. Because very soft samples often became stuck, despite our use of the polystyrene paper, the jig was prepared with a narrow hole running its entire length, and through this hole we applied negative air pressure with a syringe to gently remove a sample. We fixed the jig upside down on the turntable for µCT scanning, and then, attaching a cap to prevent desiccation, we slid the blade down to place the cut layer on the top of the rod for x-ray irradiation. Each µCT measurement took 20-30 min. We constructed three-dimensional (3D) images from the measurement data using programs provided by SPring-8 [18]. Figure 3(a) displays a typical image of a cross section. In the image, the brightness increases with the x-ray absorption rate and thus, in order of increasing brightness, we have regions consisting of air, Lycopodium powder, and gelated CsCl solution. After applying brightness inversion and noise reduction, we obtained images of particles through binarization [19]. In these figures, it is seen that the density of particles is not uniform and low-density regions form interstices along the direction perpendicular to the applied shaking. We do not find such an anisotropic structure in samples without memory of shaking; Fig. 4(a) displays a typical horizontal cross section of a sample prepared with an initial volume fraction too small for memory of shaking to appear, while Fig. 4(b) corresponds to the unshaken case of Fig. 3.
We investigate the short-range anisotropy in the arrangement of neighboring particles in Sec. III and then features of these anisotropic interstices in Sec. IV. We calculated the center of mass of the particles from the 3D binary images [20]. As a measure of the height within a sample, we introduce the quantity ζ(z), representing the fraction of the total number of particles contained in the 3D image that exist below z, the height above the bottom. Because we took the 3D images such that the bottom of a layer was included and the concentration of particles was nearly constant throughout a sample, ζ(z) is approximately proportional to z. Generally, the position corresponding to ζ = 1 coincided with the top surface of a layer. However, there were a few samples with small φg (smaller than approximately 15%) for which the entire height of the layer was not included in the 3D image due to the large thickness. In such cases, therefore, ζ = 1 did not correspond to the top surface. The total number of Lycopodium particles detected in a 3D image was approximately 5 × 10⁵ when the entire height of the layer was included.
III. HEIGHT DEPENDENCE OF DIRECTIONAL ORDER PARAMETERS
In order to investigate the arrangements of neighboring particles, we calculated the height dependence of the directional order parameters.
We regard two particles as a neighboring pair if the distance between their centers, r^(ij) ≡ |r^(ij)|, is less than 35 µm, where r^(ij) ≡ rj − ri is the vector pointing from the center of the ith particle to the jth particle. Assuming a height interval of thickness 60lp = 104.4 µm on either side of z(ζ), we define the parameters

Sαβ ≡ ⟨ nα^(ij) nβ^(ij) − δαβ/3 ⟩ (α, β = x, y, z),

where n^(ij) ≡ r^(ij)/r^(ij) and the angular brackets represent the average over all pairs of neighboring particles such that the ith particle is located
within this interval. In Ref. [9] we investigated these order parameters for the entire region corresponding to the field of view and found that the anisotropy induced by shaking is reflected by the diagonal components.
In this study, we investigate the quantities

S1 ≡ (Syy − Sxx)/√2, S2 ≡ (Sxx + Syy − 2Szz)/√6,
S3 ≡ √2 Sxy, S4 ≡ √2 Syz, S5 ≡ √2 Szx, Sσ ≡ √[(S3² + S4² + S5²)/3].

We can regard S1 and S2 as anisotropy indices in the horizontal plane and vertical direction, respectively. The quantity S1 increases as the number of neighboring particles increases in the direction perpendicular to that of the initial shaking in the shear plane, and S2 decreases as the number increases in the vertical direction, as seen from the relation Sxx + Syy + Szz = 0. The quantity Sσ decreases as S becomes closer to a diagonal matrix. As explained in the Appendix, the averages of S1 and S2 over samples would vanish and their standard deviations would be equal to Sσ if every n^(ij) was chosen independently from an isotropic uniform distribution. Figure 5 plots the height dependences of S1, S2, and Sσ. Figure 5(a) shows results for the two sets of conditions under which the memory effect appears, (f, φ(0)) = (60 rpm, 18.4%) and (80 rpm, 18.4%).
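A minimal Python sketch of this order-parameter calculation follows; the traceless-tensor normalization above is our assumption (the original display was lost in extraction), and the function name is ours.

```python
import numpy as np
from scipy.spatial import cKDTree

def order_parameters(centers_um, cutoff_um=35.0):
    """S1, S2, S_sigma from particle centers (N x 3 array, micrometres).
    Uses S_ab = <n_a n_b - delta_ab/3> over neighboring pairs; each pair
    is counted once, which leaves the average unchanged since n and -n
    contribute equally to n_a n_b."""
    pairs = cKDTree(centers_um).query_pairs(cutoff_um, output_type="ndarray")
    n = centers_um[pairs[:, 1]] - centers_um[pairs[:, 0]]
    n /= np.linalg.norm(n, axis=1, keepdims=True)
    S = (n[:, :, None] * n[:, None, :]).mean(axis=0) - np.eye(3) / 3.0
    S1 = (S[1, 1] - S[0, 0]) / np.sqrt(2.0)
    S2 = (S[0, 0] + S[1, 1] - 2.0 * S[2, 2]) / np.sqrt(6.0)
    S_sigma = np.sqrt((S[0, 1]**2 + S[1, 2]**2 + S[2, 0]**2) * 2.0 / 3.0)
    return S1, S2, S_sigma
```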
Near the bottom of a layer, S1 takes large positive values, while S2 remains at zero within the experimental uncertainty. This anisotropy is clearer for f = 80 rpm than for f = 60 rpm. Although φg differed among the five samples prepared under each set of conditions, in the range from approximately φ(0) to the value at which cracks appear, there was no significant difference in the degree of anisotropy among the samples. This indicates that the anisotropy is created initially and retained during desiccation.
The observed height dependence is consistent with the fact that the memory effect of shaking is caused by shear stresses larger than the yield stress. Because the lower region of the paste experienced larger shear stresses during shaking, due to the weight of the upper region, the lower region was fluidized repeatedly, while the deformation of the upper region was mainly elastic and small. Although the nonlinear response to shaking is not simple, we infer that the memory of shaking is preserved mainly in the lower region, which experienced large deformation [2]. Figure 5(b) shows results for two sets of conditions under which the memory effect does not appear. Unlike the results displayed in Fig. 5(a), these results do not exhibit anisotropy characterized by S1 > S2 ≈ 0, as expected. However, for both sets of conditions considered here, the arrangement of particles was not isotropic, with S2 < 0. This anisotropy implies that the number of neighboring particles increases in the z direction. In addition, there is a difference in S1 between the two cases: S1 is positive for the shaken samples with a small initial volume fraction of φ(0) = 10.4%, while S1 ≈ 0 for the unshaken samples (f = 0), although a large value of Sσ indicates that the matrix S is not fully diagonalized. Such anisotropy is regarded as a kind of "hidden memory", which is not manifested as an anisotropic crack pattern.
IV. ANISOTROPIC STRUCTURES OF INTERSTICES
We next investigate the interstices depicted in Figs. 3(b) and 3(c). These interstices are retained during desiccation and significantly affect crack growth.
Because the sample considered in Fig. 3 was covered and solidified after several cracks had grown partway through the sample along the direction perpendicular to the initial shaking, these cracks were preserved. We cut out a piece of the layer including a crack tip and observed it with µCT. Figure 6(a) is a 3D image of the air regions constructed from the µCT data [21]. In this image, the growing crack appears as the penetration of air into the paste. The crack tip has a complex shape, which suggests unstable growth. It is also seen that the region of intermediate height leads the growth of the crack, with the lower and upper regions trailing behind. These characteristics of the crack growth are consistent with what we have inferred from plumose patterns left on crack surfaces in desiccation cracks, although we have not observed a sharp plumose structure in Lycopodium paste [22-24]. In this work we studied four crack tips cut from three samples. Two of them had similar properties, while for the other two, the tip shapes deviated to the top or bottom. Figure 6(b) displays a horizontal cross section at the center of the layer considered in Fig. 6(a). Here a running average in the depth direction was carried out in order to make the interstices clearly visible, as in Fig. 3(c). Note that there is an interstice ahead of the crack tip, and the crack runs along this interstice in the direction perpendicular to that of the initial shaking, although the width of the interstice is extended inside the crack. Similar relations between the interstices and crack growth were observed in all four crack tips. We thus infer that the structures of the interstices determine the direction of crack growth, and for this reason, they are directly responsible for the memory effect of shaking exhibited by Lycopodium paste.

In order to ascertain the properties of interstices from a 3D binary image of particles, we constructed a 3D distance map D(r), which represents the distance from each position r to the nearest particle [see Fig. 7(a)], and calculated the correlation function

G(r) = ⟨δD(r′) δD(r′ + r)⟩ / ⟨δD(r′)²⟩, δD(r) ≡ D(r) − ⟨D⟩,

where the angular brackets denote the average over positions r′. We divided both the lower part (0.1 < ζ < 0.4) and the upper part (0.6 < ζ < 0.9) of a sample into cubic regions with side lengths 200lp = 348 µm and calculated G(r) in every region, except several regions that contained large impurities, such as bubbles.
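A minimal Python sketch of this construction follows. The normalized-autocorrelation form of G(r) written above is our reconstruction (the original display was lost in extraction; G(0) = 1 is at least consistent with the amplitude-free fitting functions used below), and the FFT evaluation assumes periodic wrapping of the cubic region, an approximation for a finite sample.

```python
import numpy as np
from scipy import ndimage

def distance_map(binary_particles):
    """D(r): distance from each voxel to the nearest particle voxel,
    from a 3D boolean image (True = particle)."""
    return ndimage.distance_transform_edt(~binary_particles)

def correlation(D):
    """G(r) as the normalized autocorrelation of dD = D - <D>,
    computed with FFTs under periodic wrapping; G(0) = 1."""
    dD = D - D.mean()
    F = np.fft.fftn(dD)
    G = np.fft.ifftn(F * np.conj(F)).real / dD.size
    return G / G.flat[0]
```

The axis cuts G(r, 0, 0), G(0, r, 0), and G(0, 0, r) can then be fitted with, e.g., scipy.optimize.curve_fit using the model described in the next paragraph.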
The function G(r) reflects the statistical properties of the shape of an interstice. For the lower part of the sample considered in Fig. 3, the average of G(x, y, 0) over all regions is not isotropic, as seen in Fig. 7(b). Figure 7(c) plots G(r, 0, 0), G(0, r, 0), and G(0, 0, r) with their fitting functions. It is seen that G(r) decays monotonically in the y and z directions, while it decays faster and has a minimum in the x direction. This difference implies that interstices have anisotropic shapes that are shorter in the x direction. Defining a ≡ 30 µm as a typical diameter of a Lycopodium particle, we fitted G(r, 0, 0), G(0, r, 0), and G(0, 0, r) for r > a/2 with functions proportional to e^(−r/ξi) cos(2πr/λi) (i = x, y, z), respectively, where the fitting parameter 1/λi vanishes for the monotonically decreasing functions. The fitting parameter ξi represents the correlation length in the ith direction, and λi/2 corresponds to a typical distance from the inside of the interstices to the high-density regions in the case that G has a minimum. We note that these fitting functions are adopted as a first approximation, as G(r) does not have such a simple form; specifically, it tends to decay somewhat more slowly than an exponential function in the y direction.
We calculated G(r) for the lower and upper parts of the sets of samples considered in Fig. 5. The averages of a/λi and ξi/a over five samples for each set of conditions are plotted in Figs. 7(d) and 7(e), respectively. For the two sets of conditions under which the memory effect of shaking appears, we find distinct anisotropy in the lower parts of the samples: a/λy and a/λz approximately vanish and a/λx is approximately 0.1 for (f, φ(0)) = (60 rpm, 18.4%) and 0.15 for (80 rpm, 18.4%), while ξy tends to be larger than ξx and ξz. We thus conclude that anisotropic interstices develop perpendicularly to the x direction in the lower part of a sample, with typical widths in the range λx/2 = 3a-5a in the x direction, while they extend mainly in the y direction. Contrastingly, such anisotropy is not found in the upper part under the same conditions, although a/λz tends to be large.
For the two sets of conditions under which the memory effect of shaking does not appear, the forms of G(r) differ significantly. In the unshaken case (0 rpm, 18.4%), we found no anisotropy in the horizontal plane, as expected, and ξx, ξy, and ξz have similar values that are smaller than those in the other cases. This result suggests that interstices develop through repeated shear deformation. For the shaken samples with small initial solid volume fractions (60 rpm, 10.4%), we found anisotropy with a/λx taking large values in both the lower and upper parts. Also in this case, large density fluctuations appeared in the form of voids rather than anisotropic interstices in the 3D images. Because paste is less viscous for small solid volume fractions, we infer that shaking causes this anisotropy with large shear flows, but such anisotropy is not reflected in crack patterns.

FIG. 7. (c) Plots of G(r, 0, 0), G(0, r, 0), and G(0, 0, r) with fitting functions proportional to cos(2πr/λi)e^(−r/ξi) (i = x, y, z). Also shown are the averages of (d) a/λi and (e) ξi/a over five samples prepared under each set of conditions indicated on the horizontal axis; here a = 30 µm is a typical particle diameter. The data are arranged in the order x, y, z for the lower part, followed by x, y, z for the upper part, and the error bars indicate the standard deviations.
These results indicate that anisotropic structures of interstices are created by shaking in paste with a large solid volume fraction, just as for the anisotropic arrangements of neighboring particles reported in Sec. III.
V. NUMERICAL SIMULATIONS
With the experimental conditions used in this work, the Lycopodium particles in paste can be regarded as non-Brownian particles in high-viscosity shear flow. The particle Reynolds number and the Péclet number are estimated as Re_p ≡ γ̇a²ρw/ηw ≈ 10⁻³ ≪ 1 and Pe ≡ 6πηw γ̇ a³/(kB T) ≈ 10⁵ ≫ 1, respectively, for Lycopodium particles of diameter a = 3 × 10⁻⁵ m [25]. Here, kB is the Boltzmann constant, and ηw and ρw are the viscosity and density of water, for which we used the values ηw ≈ 7 × 10⁻⁴ Pa s and ρw ≈ 10³ kg/m³. We used a shear rate of γ̇ ≈ 1 s⁻¹ and an absolute temperature of T ≈ 3 × 10² K as typical experimental conditions. Some previous works investigated rearrangements of non-Brownian particles in Stokes flows [25-28]. It is known that the motion of such particles is not reversible under oscillating shear flow. In particular, it was reported that colloidal particles confined between two parallel plates become arranged in the direction perpendicular to the flow direction under an oscillating shear flow [29].
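These order-of-magnitude estimates can be verified directly, as in the short Python check below; the variable names are ours.

```python
import math

k_B = 1.380649e-23           # Boltzmann constant, J/K
a = 30e-6                    # particle diameter, m (as used in the text)
eta_w, rho_w = 7e-4, 1e3     # water viscosity (Pa s) and density (kg/m^3)
gamma_dot, T = 1.0, 3e2      # shear rate (1/s) and temperature (K)

Re_p = gamma_dot * a**2 * rho_w / eta_w
Pe = 6 * math.pi * eta_w * gamma_dot * a**3 / (k_B * T)
print(f"Re_p ~ {Re_p:.0e}, Pe ~ {Pe:.0e}")   # ~1e-3 and ~1e5
```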
As a first step in investigating how shaking creates anisotropic arrangements, we performed numerical simulations of spherical particles under a given shear flow [30]. We assumed that a uniform shear flow of given shear rate γ̇(t) exerts viscous drag on every particle and that this viscous drag always balances the contact forces exerted by other particles, due to the large viscosity. In these simulations, we found that anisotropic arrangements of neighboring particles appear, similar to those reported in Sec. III, but large density fluctuations, such as those forming interstices, do not.
We numerically integrated the equations

ṙi = γ̇(t) zi ex − (1/(2Ri)) Σ_{j≠i} f(r^(ij)) n^(ij) (4)

to generate the time evolution of the positions of the N particles, ri = (xi(t), yi(t), zi(t)) (i = 1, 2, ..., N), where the time t is merely a parameter used to define the shear deformation γ(t). The first term in Eq. (4) represents the velocity produced by a simple shear flow. We investigated two cases, that of oscillating shear flow, with γ(t) = γm sin(2πt), and that of constant shear flow, with γ(t) = 4γm t, where the magnitudes of the shear strains were set to change by γm per quarter of time in both cases. The second term in Eq. (4) represents short-range elastic interactions between spherical particles in contact. The factor 1/2Ri comes from the size dependence of the mobility of a particle in Stokes flows. We assumed normal forces of Hertzian contacts in the form

f(r^(ij)) = α (Ri + Rj − r^(ij))^(3/2) (5)

for Ri + Rj > r^(ij) and f(r^(ij)) = 0 otherwise, where Ri and Rj are the radii of the ith and jth particles. In other words, particles interact repulsively when they overlap. We determined the value of α to be 100 from numerical simulations in order to maintain an average overlap of less than 5%. Taking the average diameter of a particle as the unit of length, the particle sizes were uniformly distributed over the interval [0.8, 1.2] and the system size was 20. The system was a cubic region with periodic boundary conditions in the x and y directions and Lees-Edwards boundary conditions in the z direction. We distributed the N particles randomly at the initial time t = 0 using the method described in Ref. [31] and used the midpoint method with a time step ∆t = 0.001. Figure 8(a) plots the time evolutions of the order parameters obtained from the numerical simulations for a solid volume fraction of φ = 22.8%, which is similar to the value of φ(0) used in our experiments. Here we chose γm = 2, as the lower region of a sample is inferred to experience shear deformation of order 1 in experiments [2]. The definitions of the order parameters are the same as in Sec. III, where two particles are regarded as a neighboring pair if the center-to-center distance is less than 1.16 ≈ 35 µm/a. As seen in Fig. 8(a), in the case of oscillating shear flow, we find anisotropy, with S1 > 0 and S2 ≈ 0. This is similar to the anisotropy depicted in Fig. 5(a). The anisotropy emerges quickly within a few cycles of shaking, and it is consistent with the recent experimental finding that the memory of shaking can be rewritten through the influence of one or two oscillations in a different direction [10,15]. Figure 8(b) plots the dependence of this anisotropy on the volume fraction φ and the amplitude of shear deformation γm. It is seen that the anisotropy becomes weak as γm decreases or φ increases. We conclude that collisions among loosely packed particles under large oscillating deformation can yield short-range anisotropy. Figure 8(a) also indicates that, in the case of constant shear flow, another type of anisotropy, with S2 < 0 < S1, appears as the shear deformation increases. This anisotropy is similar to that found for the shaken samples with small φ(0) considered in Fig. 5(b). This is reasonable, because paste is fluidized entirely during shaking, due to the very small yield stress realized under such conditions. This should be compared to the case of the unshaken samples considered in Fig. 5(b), for which S2 < 0, S1 ≈ 0, and the matrix S is not fully diagonalized.
We infer that this latter type of anisotropy was caused by uncontrolled shear flows created when the paste was initially poured into the container. In this case, S_1 vanishes on averaging over the five samples, because the samples experienced flows in various directions.
In contrast to the anisotropy in the arrangement of neighboring particles, our numerical simulations did not produce structures of interstices similar to those seen experimentally. This can be attributed to the fact that the model used in this work is too simple to describe the rheological properties of Lycopodium paste, in particular the yield stresses that arise at small solid volume fractions. Lycopodium particles exhibit interactions that are much more complicated than the simple repulsive interactions assumed in the simulations. As Lycopodium particles consist mainly of fatty oil and have porous surfaces, the surfaces become hydrophobic when the fine asperities trap air. However, recent experiments investigating water droplets on a surface composed of Lycopodium particles found that vertical vibration induces a wetting transition through which the surface becomes hydrophilic [32,33]. It is likely that a similar transition removes air from the porous surfaces when a mixture is stirred in the preparation of a paste; the surfaces experience large stresses during stirring, just as in the case of vibration. After such processes, adhesive and frictional forces are likely exerted between the porous surfaces of Lycopodium particles in contact. Studies of jamming transitions have found that adding attractive or frictional forces to rigid particles lowers the jamming point, so that yield stresses emerge at smaller volume fractions [34-36]. From these considerations, we conjecture that there is a sparse network of particles that supports yield stresses, and that shaking could make such a network structure anisotropic. Our experimental results suggest that such a structure develops irreversibly under oscillating shear deformation if the suspension of non-Brownian particles behaves as a plastic fluid.

FIG. 8. (a) Directional order parameters obtained from numerical simulations in which N = 3500 spherical particles were subjected to shear deformation beginning with a random initial arrangement in a cubic system, and a typical snapshot of particles in the system after a sufficiently long period of oscillating shear flow (t = 20). The time evolutions of S_1 and S_2 are plotted in the cases of oscillating shear flow and constant shear flow, respectively. (b) For the cases of oscillating shear flow, we investigated the time averages of the order parameters over the period 19.0 ≤ t < 20.0, S̄_1 and S̄_2, using various values of N and the amplitude γ_m. The averages of S̄_1 and S̄_2 over 16 initial conditions are plotted with respect to the volume fraction φ. The error bars represent the standard deviations.
VI. CONCLUSION
We carried out µCT observations of the 3D arrangements of particles in Lycopodium paste to elucidate the structures responsible for the memory effect of shaking. We found that applying horizontal shaking in one direction induces anisotropic structures mainly in the lower part of a layer of paste; the number of neighboring particles increases in the direction perpendicular to the shaking in the horizontal plane, and density fluctuations also emerge as anisotropic interstices extending in the perpendicular direction. Numerical simulations of non-Brownian particles under a given shear flow indicate that collisions of particles can account for the anisotropic arrangements of neighboring particles. We conclude that the formation of anisotropic interstices is directly responsible for the memory effect of shaking exhibited by Lycopodium paste. Interstices are robust during desiccation, and they act as paths of air penetration causing anisotropic crack growth in the direction parallel to the initial shaking. We do not yet understand the process through which anisotropic interstices are created, nor how yield stresses arise in the case of small solid volume fractions. We leave for future work the investigation of how yield stresses emerge in systems with low particle densities and how shaking creates anisotropic interstices in such systems.

Directional order parameters

The components are normalized so that Tr S² = Σ_{i=1}^{5} S_i², with S_σ² = (1/3)(S_3² + S_4² + S_5²).

Figure 9 plots the number of pairs of neighboring particles used to calculate each data point in Fig. 5. We find that this number is determined approximately by the solid volume fraction at the gelation time, φ_g. Here we counted the pairs (i, j) and (j, i) separately, so M is approximately half of the indicated number of pairs. Because 2M ranges from 2 × 10⁴ to 4 × 10⁵, the standard deviation of S_i is expected to be √(2/(15M)) = (1-4) × 10⁻³ for an isotropic distribution.
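The √(2/(15M)) noise floor quoted above can be checked numerically. The following sketch is our own illustration, not code from the paper: it builds the traceless bond-orientation tensor Q = ⟨n nᵀ⟩ − I/3 from random unit bond vectors and assumes the normalization Tr S² = Σ_i S_i², under which an off-diagonal component such as √2 Q_xy should scatter with standard deviation √(2/(15M)) for an isotropic distribution; the exact parametrization of S_1, ..., S_5 is not reproduced in this excerpt.

```python
import numpy as np

rng = np.random.default_rng(1)

def bond_tensor(n):
    """Traceless symmetric bond-orientation tensor Q = <n n^T> - I/3
    from M unit bond vectors n of shape (M, 3)."""
    return np.einsum("mi,mj->ij", n, n) / len(n) - np.eye(3) / 3.0

def random_unit_vectors(M):
    v = rng.normal(size=(M, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

# For isotropic bonds, a component normalized so that Tr S^2 = sum_i S_i^2
# (e.g., S = sqrt(2) * Q_xy) should have std ~ sqrt(2 / (15 M)).
for M in (10_000, 200_000):                     # the paper's range of M
    samples = [np.sqrt(2.0) * bond_tensor(random_unit_vectors(M))[0, 1]
               for _ in range(100)]
    print(f"M = {M}: measured std = {np.std(samples):.2e}, "
          f"predicted sqrt(2/(15M)) = {np.sqrt(2.0 / (15.0 * M)):.2e}")
```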
"year": 2021,
"sha1": "6929ff0f74054da51949beb328c69f2c3ca955ea",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "6929ff0f74054da51949beb328c69f2c3ca955ea",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Physics"
]
} |
The Insurance Purchase Effect of the Relationship between Customer and Insurance Agent Using Negative Binomial Regression
Abstract
This study analyzes the effect of the relationship between the insurance consumer and the solicitor on the number of contracts for life insurance and property insurance, the main products of insurance companies, using a negative binomial regression model. The variables affecting the number of life insurance contracts were the trading relationship between the insurance customer and the solicitor, followed by annual income, job, sex, marital status, and household. The number of property insurance contracts was largely influenced by marital status, the regular trading relationship between the insurance customer and the solicitor, age, and annual income.

The relationship between the insurance consumer and the solicitor emerged as a major factor affecting the number of both life and property insurance contracts. However, the patterns for life insurance and property insurance differed considerably.

For the number of life insurance contracts, the relationship between the insurance customer and the solicitor was the most influential variable, followed by variables such as income and job. For the number of property insurance contracts, however, the most important variable was marital status, followed by the relationship between the insurance customer and the solicitor. For property insurance, the relationship with an "insurance agent who is highly trustworthy" does not strongly affect the number of contracts; instead, religious/social organizations, family/relative relationships, and business relationships were found to have a stronger influence on the number of contracts.
※ Keywords: insurance planner, the number of insurance contracts, negative binomial regression model, Poisson regression model.
The data come from an online web survey conducted by Hankook Research in March 2016. Among the 1,005 respondents, 732 were life insurance contract holders.
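To make the modeling approach concrete, the following Python sketch shows how a negative binomial regression of contract counts on such covariates might be fit with statsmodels. The file name and column names are hypothetical placeholders for the Hankook Research survey variables, whose actual coding is not given here, and the dispersion parameter alpha is fixed arbitrarily rather than estimated as in the study.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical file and column names standing in for the survey variables.
df = pd.read_csv("insurance_survey.csv")

formula = ("n_life_contracts ~ relationship + income + job"
           " + sex + married + household")

# Negative binomial GLM for the (overdispersed) contract counts;
# alpha is the dispersion parameter (fixed here, for illustration).
nb_fit = smf.glm(formula, data=df,
                 family=sm.families.NegativeBinomial(alpha=1.0)).fit()
print(nb_fit.summary())

# Poisson fit for comparison: if the counts show variance > mean,
# the negative binomial specification is preferred.
pois_fit = smf.glm(formula, data=df,
                   family=sm.families.Poisson()).fit()
print(pois_fit.aic, nb_fit.aic)
```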
"year": 2018,
"sha1": "6f8b208392fe2018d62751cd4452f44729994184",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.23842/jif.2018.29.4.004",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b6bc99f45477211f133366b8f17c4cc03f33d44e",
"s2fieldsofstudy": [
"Business",
"Economics"
],
"extfieldsofstudy": [
"Mathematics"
]
} |
Neonatal hypoxic ischemic encephalopathy increases acute kidney injury urinary biomarkers in a rat model
Abstract

Hypoxic ischemic encephalopathy (HIE) is associated with acute kidney injury (AKI) in neonates with birth asphyxia. This study aimed to utilize urinary biomarkers to characterize AKI in an established neonatal rat model of HIE. Day 7 Sprague-Dawley rat pups underwent HIE using the Rice-Vannucci model (unilateral carotid ligation followed by 120 min of 8% oxygen). Controls included no surgery and sham surgery. Weights and urine for biomarkers (NGAL, osteopontin, KIM-1, albumin) were collected the day prior, daily for 3 days post-intervention, and at sacrifice on day 14. Kidneys and brains were processed for histology. HIE pups displayed histological evidence of kidney injury, including damage to the proximal tubules consistent with resolving acute tubular necrosis, and had significantly elevated urinary levels of NGAL and albumin compared to sham or control pups 1 day post-insult that remained elevated for 3 days. KIM-1 was significantly increased for 2 days post-HIE. HIE did not significantly alter osteopontin levels. Seven days post-start of the experiment, controls were 81.2% above starting weight compared to 52.1% for HIE pups. NGAL and albumin levels inversely correlated with body weight following HIE injury. The AKI produced by the Rice-Vannucci HIE model is detectable by urinary biomarkers, which can be used for future studies of treatments to reduce kidney injury.
INTRODUCTION
Perinatal asphyxia resulting in hypoxic ischemic encephalopathy (HIE) occurs in 2-5 per 1000 live births and is a major cause of morbidity and mortality (Pfister & Soll, 2010). HIE is one of the multiple etiologies that comprise the clinical syndrome of neonatal encephalopathy and is specific to hypoxia-ischemia. Perinatal asphyxia results in multi-organ dysfunction involving the kidneys, as well as the brain. Neonatal acute kidney injury (AKI) occurs in up to 40% of neonates with HIE and is an independent risk factor for increased duration of ventilation, length of stay, poor neurodevelopmental outcome, and mortality (Karlowicz & Adelman, 1995;Kirkley et al., 2019;Sarkar et al., 2014;Selewski et al., 2013). There is growing evidence that an episode of AKI in the neonatal period results in increased risk of chronic kidney disease in later life (Chaturvedi et al., 2017;Harer et al., 2017). Although studies on the use of hypothermia treatment for HIE found reductions in the degree of AKI (Tanigasalam et al., 2016;van Wincoop et al., 2021), many of the observations were short term, and evidence of renal dysfunction remained present and was apparent in later childhood (Robertsson Grossmann et al., 2022), highlighting the continuing need for effective preventative strategies.
The Rice-Vannucci neonatal rat HIE model has been widely used to examine the impacts of different potential treatments to reduce the neurological damage associated with HIE in human neonates. This model initiates HIE at 7 days of age, a time traditionally considered equivalent to a term human infant, based originally on measurements of tissue weight; later work, however, expanded the benchmarks to include cell proliferation and maturation (Semple et al., 2013). This more integrated assessment resulted in a model system that places the human equivalence at post-conception day 260, which is in the late pre-term to term time frame (Workman et al., 2013). In rats, postnatal day 7 is also an age that corresponds to a sensitive window of kidney development that better mimics late pre-term humans, as it is prior to the cessation of nephrogenesis. Nephrotoxicity at this early stage has been linked to kidney dysfunction later in life (Seely, 2017). While two studies have recently examined whether this model results in AKI, it remains less well studied. Wang et al. showed that acetyl-l-carnitine prevented the decrease in renal organic cation/carnitine transporter 2 and pyruvate dehydrogenase levels at 24 h after injury, which would improve energy metabolism in the kidney (Wang et al., 2019). Xu et al. found that melatonin reduced expression of edema-related proteins, including aquaporin-4, zonula occludens-1, and occludin, following hypoxic ischemic insult (Xu et al., 2017). Given the incidence of AKI resulting from perinatal asphyxia, our study therefore aimed to further expand understanding of the renal pathology produced by this model. We hypothesized that urinary biomarkers can be utilized to identify and track the progression of HIE-induced AKI.
Serum creatinine is currently the gold standard for diagnosing AKI; however, it is estimated that >50% of renal function is lost before a rise in creatinine is observed. The sensitivity of the test therefore underestimates the extent of injury. Urinary biomarkers of AKI are being developed that are more sensitive at detecting kidney injury. This sensitivity is critical for diagnosing and determining treatment for early or mild to moderate renal injury and loss of function, and highlights the utility of urinary biomarkers in experimental models of AKI. The urinary biomarkers NGAL (neutrophil gelatinase-associated lipocalin), kidney injury molecule 1 (KIM-1), and osteopontin (OPN) were shown to trend higher in neonates with AKI, as defined by a rise in serum creatinine (Askenazi et al., 2016; Rumpel et al., 2022). The NGAL gene is upregulated in very early kidney injury, and NGAL is a highly induced protein in the kidney after ischemic or nephrotoxic AKI in animal models (Devarajan, 2015; Devarajan et al., 2003; Mishra et al., 2003; Mishra et al., 2004; Supavekin et al., 2003). The expression of albumin is greatly induced above typical levels in the kidney following AKI in both animal models and clinical studies (Ware et al., 2011). Furthermore, the application of biomarkers in this model enables it to be utilized to examine treatments that may reduce AKI, particularly when associated with perinatal HIE.
Animals
This study was approved by the University of Rochester's Institutional Animal Care and Use Committee (IACUC) (102314/2019-30). National Institutes of Health guidelines were complied with in the care and handling of the animals. Timed-pregnant Sprague Dawley rats were obtained from Charles River Laboratories and were housed and cared for in the central animal facility. All dams and pups received identical standard husbandry conditions as provided by our institution's vivarium staff, and the dams received standard chow and water ad libitum. Dams delivered litters containing roughly equal numbers of pups (10-12 pups per litter). To ensure even distribution of pups between experimental conditions, equal numbers of male and female pups were selected from each of three litters and assigned to an experimental group. This process was replicated for each group, and each group contained roughly equal numbers of pups from each litter.
Surgery and hypoxia-ischemia
Seven-day-old rat pups, equivalent to late preterm or term human neonates (Workman et al., 2013), were anesthetized with 2% isoflurane; analgesia was provided with subdermal buprenorphine. The rat pups underwent a modification of the Rice-Vannucci model (Rice et al., 1981; Vannucci & Vannucci, 1997) (Figure 1), which involved ligation of the left carotid artery and recovery for 1 h before the pups were placed in a hypoxia chamber at 8% oxygen for 120 min. Temperature was maintained throughout at 37°C by placing pups on isothermal pads designed to maintain a constant temperature for several hours. Pups were then returned to the dam to recover and feed as normal. Controls included a group that received neither surgery nor anesthesia as well as a group that received sham surgery with anesthesia. Sham surgery comprised skin incision and exposure of the left carotid artery. Pups were weighed daily for the first 3 days post-intervention and on the day of sacrifice.
Urine collection and biomarker analysis
Urine was collected by gently scruffing the pup, grasping gently along the nape of the neck and down the back, and holding the pup upright to induce urination. Freely expelled urine was collected directly into a 1.5-ml microfuge tube. Care was taken to prevent the tube from contacting the pup itself or any fecal matter also expelled during this process, to minimize contamination. Volumes greater than 50 μl were considered adequate for analysis. Urine was collected the day before intervention, daily for 3 days post-intervention, and on the day of sacrifice. Urine was centrifuged at 300g to remove particulates, filtered with a 0.2-micron sterile filter, and frozen at −80°C. A multiplex ELISA kit, the Meso Scale Discovery Rat Kidney Injury Panel V-Plex Assay (catalog no. K15162C; Meso Scale Discovery), measuring NGAL, osteopontin, KIM-1, and albumin, was used to detect these analytes in urine, which was diluted 1:10; the assay was performed according to the manufacturer's instructions. Each plate required 20 μl of urine per sample. Standard duplicates showed good reproducibility, with a low average signal coefficient of variation (CV) (average sample CV = 2.9647%).
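As an illustration of this kind of reproducibility check, the short sketch below computes the per-pair %CV for duplicate standards; the numbers are toy values, not the study's plate readings.

```python
import numpy as np

def duplicate_cv_percent(signals):
    """%CV for each duplicate pair of standards: 100 * sd / mean.
    `signals` has shape (n_standards, 2)."""
    signals = np.asarray(signals, dtype=float)
    return 100.0 * signals.std(axis=1, ddof=1) / signals.mean(axis=1)

# Toy duplicate readings for four standards.
standards = [(1050.0, 1010.0), (520.0, 505.0), (260.0, 252.0), (128.0, 131.0)]
cv = duplicate_cv_percent(standards)
print(cv.round(2), "average %CV:", cv.mean().round(2))
```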
Sample processing and histology
At postnatal day 14, pups were sacrificed with Euthasol (100 mg/kg; Virbac), followed by exposure of the abdominal and chest organs by midline incision. Vasculature was cleared by injection into the right atrium of heparin sodium (1 unit/gram body weight), papaverine hydrochloride (1.2 mg dose), and 0.9% sodium chloride (until the perfusate ran clear). Tissues were then perfusion fixed with 2.5% glutaraldehyde in 0.1 M phosphate buffer.

FIGURE 1. Rice-Vannucci model of HIE and sample collection schedule. HIE: Seven-day-old Sprague-Dawley rat pups underwent surgery for ligation of the left carotid artery, recovered for 1 h with their dam, then were placed in a hypoxia chamber at 8% oxygen for 120 min and returned to their dam to recover. Temperature was maintained throughout at 37°C. Controls: no surgery or anesthesia. Sham: anesthesia and skin incision. Sample collection: daily weights and urine were collected prior to intervention on D0, post-intervention on D1-3, and at sacrifice, D7.
Kidney histology
The right kidney was excised and immersed in 2.5% glutaraldehyde in phosphate buffer for 5 h, transferred to 10% buffered formalin for 48 h, and then placed in 70% EtOH until processed for paraffin embedding, which included bisection along the midline axis through the hilum and placing both cut surfaces face down in the cassette. Kidney sections were stained with hematoxylin and eosin (H&E) and examined by an experienced histopathologist (JED). Full cross sections of each half of the kidney were examined, including the cortex, medulla, and renal pelvis of each pup.
Brain histology
Whole brains were excised and placed in 10% formalin for 10 days for complete fixation, then placed in 70% EtOH until processed. Prior to paraffin embedding, permanent black histology dye was used to identify the left hemisphere post-processing. The cerebrum was separated and cut into three sections: frontal, parietal, and parieto-occipital areas, with the same distance between the cuts maintained in each animal. The brain stem and cerebellum were separated from each other and bisected longitudinally. Brain sections were stained with H&E and examined by a histopathologist (JED). Whole-mount sections were examined to compare both cerebral hemispheres and the hippocampus. The striatum and thalamus were not assessed.
Statistics
GraphPad Prism 9.4 software (GraphPad Software) was used for the statistical analyses of urinary markers. Data are presented as mean ± SEM. Statistical significance was determined by two-way ANOVA followed by Sidak's multiple comparison test when the ANOVA showed significant differences. Reported comparisons include those between the HIE and control or sham groups at each time point. For correlations between biomarker values and body weight, data were fit using least-squares regression, and R-squared values were calculated to quantify goodness of fit. For the significance of comparisons, a sum-of-squares F test was used. A value of p < 0.05 was considered significant.
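A rough Python translation of this analysis pipeline is sketched below. It is illustrative only: the input file and column names are hypothetical, and pairwise t-tests with a Sidak correction are used to approximate GraphPad Prism's Sidak multiple comparison procedure rather than reproduce it exactly.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats
from statsmodels.stats.multitest import multipletests

# Hypothetical long-format data: one row per pup per day, with columns
# 'group' (HIE/sham/control), 'day', 'ngal', and 'weight'.
df = pd.read_csv("biomarkers.csv")

# Two-way ANOVA (treatment x time) on a biomarker.
anova = sm.stats.anova_lm(smf.ols("ngal ~ C(group) * C(day)", data=df).fit(),
                          typ=2)
print(anova)

# HIE vs control and HIE vs sham at each time point, Sidak-corrected.
pvals, labels = [], []
for day in sorted(df["day"].unique()):
    d = df[df["day"] == day]
    for ref in ("control", "sham"):
        _, p = stats.ttest_ind(d.loc[d["group"] == "HIE", "ngal"],
                               d.loc[d["group"] == ref, "ngal"])
        pvals.append(p)
        labels.append((day, ref))
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="sidak")

# Least-squares fit of biomarker vs body weight with R^2, as in Figure 5.
hie = df[(df["group"] == "HIE") & (df["day"].isin([1, 2, 3]))]
fit = stats.linregress(hie["weight"], hie["ngal"])
print(f"slope = {fit.slope:.3g}, R^2 = {fit.rvalue ** 2:.3f}, p = {fit.pvalue:.3g}")
```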
RESULTS
Histopathological analyses of kidneys and brains were performed 7 days following hypoxic-ischemic injury to assess the development of HIE and AKI (Table 1). Kidney injury was observed by light microscopy in 46% of the animals assessed. The changes were consistent with resolving acute tubular necrosis (ATN). The insult resulted in damage to the proximal tubules of the kidney, as evidenced by tubular luminal dilatation (Figure 2a; green arrowheads) and loss of the brush border, as well as a simplified epithelial lining and increased mitotic activity. In general, the changes were mild, and in some cases focal, reflecting the fact that 7 days had elapsed from the time of the insult to harvesting of the organs for histological assessment. There was no obvious loss of nuclei, and no casts were observed in the lumen, as commonly seen in early ATN. No histological changes were evident in the kidneys of the control (Figure 2b) or sham (Figure 2c) pups. Mitoses were present in the tubules of the kidneys of all groups, as expected at this developmental age. The cerebral cortex of the pups exposed to the HIE insult contained pyknotic neurons (Figure 2d; green arrowheads) on the side corresponding to the ligated carotid artery in 70% of the animals assessed, but there were no significant ischemic changes to neurons in the contralateral hemisphere (Figure 2g,h). No liquefactive necrosis, hemorrhage, obvious gliosis, or cyst formation was evident 7 days post-injury, so no quantification of infarct volume was performed. The control (Figure 2e) and sham (Figure 2f) pups showed no significant histological changes in their brains on light microscopy. Of note, all of the subjects with histologically detectable kidney injury also showed pyknotic nuclei in the cerebral cortex.
As a measure of health condition, morbidity, and an indicator of toxicity, body weight was recorded starting at 7 days of age, the time of HIE insult, sham surgery, or control handling (experimental day 0), and continued daily for the duration of the experiment ( Figure 3, Table 2). No significant differences were observed in starting weight between sexes. Male and female rat pups weighed on average 14.9 g and 15.3 g, respectively, and average weights of pups in HIE, sham surgery, and control groups were 13.4 g, 15.8 g, and 16.5 g, respectively, differences that were not statistically significant (Figure 3a, Table 2). Both absolute weight and weight gain were significantly reduced in HIE pups compared to control and sham pups, starting at day 1 and continuing throughout the course of the experiment until sacrifice 7 days later. Seven days post-start of the experiment, controls were 81.2% above their starting weight compared to 52.1% in HIE pups (Figure 3b, Table 2). The effect of HIE on weight gain did not differ between sexes in this study with gains at D7 averaging 51.3% and 53.0% for male and female pups, respectively. No significant reduction in weight gain was observed in pups receiving sham surgery.
Biomarkers of AKI were measured in urine collected from pups, starting immediately prior to the start of the experiment and at days 1, 2, 3, and 7 (Figure 4, Table 3). Day 0 results are presented in Table S1 and did not significantly differ from control values. Pups receiving HIE had significantly elevated urinary levels of NGAL and albumin compared to sham or controls on days 1-3 post-insult, returning to control levels by day 7 post-insult (Figure 4a,b, Table 3). KIM-1 was significantly increased compared to the control and sham groups for 2 days post-HIE; by day 3, elevations in KIM-1 in HIE pups were statistically different from sham but not control levels (Figure 4c, Table 3). The HIE insult did not significantly elevate osteopontin levels above the control and sham groups at any time measured in the experiment (Figure 4d, Table 3). Hypoxia exposure alone without carotid artery ligation resulted in a nonsignificant elevation in NGAL on day 1, which occurred to a lesser extent than with HIE injury, and did not impact the other biomarkers examined (Figure S1). To determine whether urinary biomarkers of AKI correlated with health status for an individual, each analyte was correlated with the corresponding body weight recorded at the time of urine collection for each subject. Days 1-3 were included in the analysis, as significant changes in analyte abundance occurred during this time frame. Body weight was significantly inversely correlated with the abundance of NGAL and albumin in animals receiving HIE injury but not in non-injured control animals (Figure 5). No significant correlations were observed with KIM-1 or OPN in either HIE or non-injured control animals.

FIGURE 2. Photomicrographs of kidneys (a-c) and brains (d-f) from rat pups on day 7 following insult: (a) kidney sections after HIE insult showed damage to the proximal tubules of the kidney, including tubular luminal dilatation (green arrowheads), a simplified epithelial lining, and brush border loss, without obvious loss of nuclei. No histological changes were observed in the control (b) or sham (c) groups. The cerebral cortex from pups receiving HIE insult (d) contained pyknotic neurons (green arrowheads), while the brains from the control (e) or sham (f) groups showed no histological changes (H&E stain, original magnification ×200). Following the HIE procedure, pyknotic neurons are visible in the cerebral cortex (g) and hippocampus (h) of the ligated hemisphere, while the contralateral hemisphere showed no ischemic change in the cerebral cortex or hippocampus (original magnification ×12.5; insets ×100 and ×200). n = 10-13 pups/treatment group.
DISCUSSION
The Rice-Vannucci model is a widely utilized experimental approach for studying perinatal HIE and has been used extensively to identify mechanisms of brain injury related to birth asphyxia (Millar et al., 2017; Rice et al., 1981; Vannucci & Vannucci, 1997). Less well studied, but of equal importance, are the renal pathologies associated with this model, since it potentially replicates a component of the kidney injury that is observed in up to 40% of neonates that have experienced perinatal asphyxia. Additionally, the development of AKI strongly contributes to poorer short-term outcomes as well as negative long-term consequences in those affected (Harer et al., 2017; Tanigasalam et al., 2016). Despite this clinical evidence, few animal models of these short- and long-term outcomes currently exist. Xu et al. recently published a histological characterization of the pathological processes occurring in the kidney as a result of the HIE model, reporting significant swelling of tubular epithelial cells, interstitial edema, and necrotic changes in the renal cortex, as well as disruption of glomerular filtration barriers (Xu et al., 2017). Their histological characterization showed that this occurred between 3 and 72 h following the Rice-Vannucci HIE procedure. Our findings showing that AKI changes had largely resolved by day 7 are not unexpected, given the brush border loss/regeneration cycle that occurs after ischemic injury (Venkatachalam et al., 1978). Moreover, the absence of severe ischemia in the brain is also not unexpected, given that even after bilateral carotid artery ligation in rats, other investigators have found an absence of severe ischemia owing to anastomotic channels that effectively perfuse the forebrain (Brown, 1966). Of note, although we were able to detect short-term kidney injury with this model, an important next step will be to determine whether kidney dysfunction persists into adulthood in these animals.
Here we further expand the utility of this model by presenting a noninvasive approach to longitudinally track the progression of AKI through urinary biomarkers, which can be used to monitor the efficacy of therapeutic interventions. Our histological outcomes showed evident kidney pathology that was preceded by elevation of established urinary biomarkers of AKI.

FIGURE 3. Body weight (a) and percent weight gain above D0 body weight (b) of pups receiving either HIE injury, sham surgery, or no-intervention control, at 0, 1, 2, 3, and 7 days from the start of the experiment. Symbols represent SEM of n = 6 pups/treatment group/time point. *Significantly different (p < 0.05) from time-point-matched controls. #Significantly different (p < 0.05) from time-point-matched shams.
The clinically established method for diagnosing AKI is via an elevation of serum creatinine, which is cleared through the glomerulus, and is therefore a proxy of glomerular filtration rate and renal functioning (Xu et al., 2018). It has served as an important indicator of injury or toxicity to the kidney that would disrupt or impair this process. However, injury to the kidney can occur before a decrease in filtration rate becomes detectable; therefore, the need exists to develop biomarkers for clinical use, not only in adults, but also in children and neonates, that are more sensitive in detecting renal injury and which are rapidly responsive (Edelstein, 2017;Sandokji & Greenberg, 2020). This is critical because early detection provides additional time and opportunity for intervention and enables treatment to occur at an earlier stage of pathogenesis (Edelstein, 2017). Several biomarkers have been examined in both humans and in preclinical animal studies and meet the criteria of being sensitive, specific, and predictive of AKI (Bolisetty & Agarwal, 2011;Edelstein, 2017;Mishra et al., 2003;Sandokji & Greenberg, 2020;Vaidya et al., 2006). These include the analytes presented in this manuscript.
We present an experimental approach that utilizes biomarkers to enable early and sensitive detection of AKI resulting from HIE. Urinary NGAL has been established as an early responding biomarker of AKI, not only in rats following early ischemic AKI and in mice with cisplatin toxicity (Mishra et al., 2003, 2004), but also in both adult and pediatric patients after cardiac surgery (Mishra et al., 2005; Wagener et al., 2006), and in critically ill children with heterogeneous illness (Zeid et al., 2019). In both preclinical experiments and in clinical studies of pediatric cohorts, NGAL elevation preceded rises in other early responding markers, and this is in line with our observations showing significant increases in NGAL as early as 1 day following injury, the earliest time point assayed in this study. Urinary albumin also becomes present in the urine in response to stress to renal tubules and after various glomerulopathies, and in mice has been detected as early as 4 h after AKI induction (Ware et al., 2011). Accordingly, we also observed early increases in urinary albumin, detected 1 day following HIE. In investigations of another biomarker of renal injury, KIM-1, studies in pediatric patients found that this marker was elevated after cardiac surgery; however, this elevation was delayed compared to NGAL detection (Devarajan, 2011; Dong et al., 2017). In line with this, the time-dependent increase in KIM-1 was affected by AKI, becoming elevated within the first 3 days and reaching its greatest values by 7 days following HIE induction, a time point at which NGAL and albumin levels had returned to baseline. This supports the use of a panel of biomarkers that includes not only those which can be used to rapidly identify the initial development of AKI but also those that are sensitive to lasting impairments. Interestingly, osteopontin did not respond to AKI in our HIE model, and recovery from sham surgery may have impacted biomarker levels, specifically at the 3-day time point. Determination of the specific mechanisms that relate the responsiveness of these biomarkers to the nature and extent of renal damage in this model is beyond the scope of this study, but provides potential opportunities for further investigation. These biomarkers differ from other measures of functional output of the kidney, such as serum creatinine, in that they are responsive to, and ideally should be able to differentiate between, a variety of factors including tubular injury, glomerulonephritis, and interstitial nephritis (Edelstein, 2017). The differential responses of the biomarkers in our study would enable these types of mechanistic investigations. Our findings of a correlative relationship between biomarker changes and weight gain (or lack thereof) also provide potential for their use in AKI prediction, offering the possibility of relating the extent of their elevation to the severity of AKI. This relationship between overall body condition and renal injury mirrors the increased morbidity and poorer outcomes observed in neonates that develop AKI as an aspect of HIE/perinatal asphyxia (Cavallin et al., 2020; Robertsson Grossmann et al., 2022). Importantly, biomarker sensitivity to injury severity could potentially serve as an indicator of efficacy for novel therapeutics in preventing or reversing AKI. The relationship between HIE and AKI is well documented in the clinic (Durkan & Alexander, 2011); however, mechanistic studies investigating the processes that lead to this outcome, such as oxidative stress, are less well understood and could be aided by the utilization of urinary biomarkers in this rat model. One limitation of this study is that, although we did not observe any differences in outcomes between sexes, larger group sizes are needed to assess the role of sex in sensitivity to AKI. Further investigations should consider sex as a biological variable, include more detailed comparisons of urinary biomarker changes with kidney function assessed by serum creatinine, and assess this model for any lasting long-term effects or susceptibility to kidney disease later in life.

FIGURE 5. Correlation between urinary biomarkers of AKI and body weight within individual pups receiving either HIE injury or no-intervention control, at 1, 2, and 3 days from the start of the experiment. Data points represent a single time point for an individual animal. Lines represent linear fits of data sets with associated p and R² values. n = 4-6 pups/treatment group/time point.
CONCLUSIONS
The Rice-Vannucci neonatal model of HIE produces AKI. This model provides a feasible experimental design for examining the renal injury resulting from HIE. Importantly, the use of a panel of biomarkers that is rapid and sensitive in detecting and monitoring AKI provides a valuable system for investigating the effectiveness of potential therapeutic interventions and their influence on morbidity and mortality.
"year": 2022,
"sha1": "a2486db69a8df9d0d6f7755695a74eda6b1d969d",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "bb6689c3e7b5730af8295eaa27a0b7823b3b9431",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": []
} |